Thursday, February 4, 2010

The Singularity won't happen if we choose I.A. over A.I.

On a direct recommendation from Michael Anissimov in an email conversation, I read for the first time the paper that is said to be the foundation of the Singularity movement: The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge (1993).
Read it; it's really simple and compelling.

What's interesting is that there are 2 ways to achieve Super-Human Intelligence:
1) Artificial Intelligence (AI), namely Humans building an intelligent machine that is external to the brain and a priori independent.
2) Intelligence Amplification (IA), namely Technology that you connect to your brain to enhance your intellectual abilities.

The quotes (taken out of context) I liked are:

- On why Technological Evolution is much faster than Natural Evolution
"We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection."

- On AI
"But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment."

"Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!"

"Good [11] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe."

- On IA
"But as I noted at the beginning of this paper, there are other paths to superhumanity. Computer networks and human-computer interfaces seem more mundane than AI, and yet they could lead to the Singularity. I call this contrasting approach Intelligence Amplification (IA). IA is something that is proceeding very naturally, in most cases not even recognized by its developers for what it is. But every time our ability to access information and to communicate it to others is improved, in some sense we have achieved an increase over natural intelligence."

"Instead of simply trying to model and understand biological life with computers, research could be directed toward the creation of composite systems that rely on biological life for guidance or for the providing features we don't understand well enough yet to implement in hardware."

So, as Vernor Vinge asks, "Which is the valid viewpoint?"
I personally think it's definitely IA that will happen, for the very reason that... we have actually already been choosing it!
When you type on your laptop, it's a collaboration between you and your laptop.
When you locate yourself with your iPhone, it's a collaboration between you and your iPhone.
When you ask a question to your followers on Twitter, it's a collaboration between you and the Twitter engine.

So our Intelligence is already composite, made of biological and non-biological parts.

We are already in the process of merging with machines.
And that's why I think it can be argued that the Singularity, as defined by the paper, will not happen.

Definition:

"It is a point where our models must be discarded and a new reality rules."

"One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." (Stanislaw Ulam, paraphrasing a conversation with John von Neumann)

"I think Freeman Dyson has it right when he says [8]: "God is what mind becomes when it has passed beyond the scale of our comprehension.""

So actually, the Singularity only means that the state of the world, and the intelligence that the average subject will have attained by, say, 2028, is simply beyond the comprehension of an average human subject of our current times.
Simply put, the Singularity is just relative to our poor human brains, with their current capacity.
But of course, as seen by an average subject of 2028 (i.e. a super-human relative to us, an evolved version of us), the notion of a Singularity doesn't exist.

2 comments:

  1. Nice post. I believe as well that the singularity won't happen as a sudden phenomenon. It will just correspond to the mathematical integration of all the progress that is being made in terms of collaboration between humans and machines. As you said, we are already merging with machines and machines are already merging with us. We just won't be aware of the singularity, that point when super-intelligence will be created. But that assumes we agree to merge with machines, which is not the case for everybody. So for people who don't want to be turned into cyborgs (that's a pity), the singularity might be kind of scary.
    In that case, I believe that we must make sure to create a Friendly AI. And this is already being studied by the Singularity Institute for AI.

  2. Your comment made me think of this:
    I am starting to wonder if we are not, actually, brainwashed by this Singularity thingy, maybe to serve the interests of its creators?

    Anyway. Still.
    What's interesting about the Singularity theory is not the Singularity itself, but its mathematical tools, which allow us to predict the evolution of Technology (a simple example is sketched below).
    I am an optimist, so I consider that Humans will merge with non-biological technology smoothly, almost without realizing it.
    After all, it's the very nature of Humanity to transcend itself.

    So, by definition, there's no more natural destiny for Humans than to merge with machines.
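
    (A minimal sketch of the kind of extrapolation I mean, assuming a simple Moore's-law-style doubling model; the symbols are my own illustration, not taken from Vinge's paper:

    C(t) = C0 * 2^((t - t0) / T)

    where C(t) is some measure of technological capability at time t, C0 is the capability at a reference year t0, and T is the doubling time, historically around two years for transistor counts. The Singularity argument extrapolates curves like this until C(t) passes human-level capacity.)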
