Tuesday, December 7, 2010

Paths to Artificial General Intelligence: Ben Goertzel (optimistic) vs. Eliezer Yudkowsky (alarmist)

After reading these two posts, http://multiverseaccordingtoben.blogspot.com/2010/10/what-would-it-take-to-move-rapidly.html and http://lesswrong.com/lw/y3/value_is_fragile/ , I have made up my mind not to blindly accept the SIAI's approach.
I am now leaning more towards Ben's approach.


"Human values are fragile". Yes, so what?
It is not new;
The utility function optimizer that is the darwinian process of Selection of the Fittest, is not perfect either and yet it generated Humans, which are already literrally AGIs (artificially made by mother Nature) that are able of Morality and doing good, while being able to do bad (e.g. atomic bomb) but have overcome it;

I put more weight on the hard take-off argument: an AGI that suddenly takes off, grows explosively and exponentially, and wipes us all out.
But this could be prevented simply by testing it in Virtual Reality (e.g. Linden Lab, research) before implementing it in our reality.
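To make that concrete, here is a minimal toy sketch (my own illustration, not anything from Ben's or Eliezer's posts) of the "test in Virtual Reality first" idea, assuming a hypothetical agent object with act_in_simulation() and deploy_to_real_world() methods: the AGI only graduates to the real world if it behaves safely across many simulated episodes.

# Toy sketch of a sandbox-before-deployment gate (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class SafetyReport:
    harmful_actions: int   # actions flagged as harmful inside the simulation
    episodes: int          # number of simulated episodes run

def run_in_sandbox(agent, episodes=1000):
    """Run the agent in a simulated world and count flagged actions."""
    harmful = 0
    for _ in range(episodes):
        outcome = agent.act_in_simulation()   # hypothetical agent interface
        if outcome.get("harmful", False):
            harmful += 1
    return SafetyReport(harmful_actions=harmful, episodes=episodes)

def deploy_if_safe(agent, threshold=0.0):
    """Only let the agent out of the sandbox if its harm rate is acceptable."""
    report = run_in_sandbox(agent)
    if report.harmful_actions / report.episodes <= threshold:
        agent.deploy_to_real_world()           # hypothetical
    else:
        print("Not safe enough; keep it in the sandbox for more testing.")

Of course, whether such a gate would actually contain a hard take-off is exactly the point under debate; this only illustrates the procedure I have in mind.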

What Eliezer says is that if we don't make sure to get the AI right from the beginning, with the right values inside, then we'll end up with a worthless world.
Well, I would argue it's already started.
AI is not something that begins with intelligent machines.
AI began with PCs, software, the iPhone, calculators...
So the question now is rather: did we make sure to implement the right values inside, say, the Intel chips?
I don't know how to answer that.
The real question is how do we use that technology?
As Ben pointed out, technology can be used by good people or by evil people.
We just need to make sure that it is not used in a dramatic way by evil people.

To take a more extreme case, Hitler could kill all the Gypsies, albeit only in his own virtual world. Why not? From my perspective, I have no problem with that.
Otherwise, by extension, I could forbid you from killing someone in your dreams.
That would constitute an infringement of the Freedom of Dreaming Act... I think we cannot question that. :)

The thing with Eliezer is: isn't he trying to hide the fact that he has no clue about AGI by using overwhelmingly complicated expressions?
When you understand something well enough, you should, de facto, be able to put it into simple words, shouldn't you?





1 comment:

  1. I think one of the differences between the two thinkers is rational optimism.

    I think, by the way, that it should be a value uploaded into the electronic brains of artificial general intelligences :)
