Ray Kurzweil, the futurist, wrote The Age of Intelligent Machines between 1986 and 1989, and in it he extrapolated existing trends to make many predictions about technology. He predicted that by 1998 a computer would beat the world’s best chess player; in fact, IBM’s Deep Blue computer beat the World Chess Champion Garry Kasparov in 1997.
He also stated that the Internet would explode not only in the number of users but in content as well, eventually granting users access “to international networks of libraries, data bases, and information services”.
Kurzweil wrote that, due to paradigm shifts, a trend of exponential growth would extend Moore’s law from integrated circuits back to earlier electromechanical computers. He predicted that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of human brains, with superhuman artificial intelligence appearing around the same time.
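Kurzweil’s “few decades” claim is just the arithmetic of steady doubling. A minimal sketch of that arithmetic, assuming his often-quoted figure of about 10^16 calculations per second for the human brain and a Moore’s-law-style two-year doubling period (both illustrative assumptions, not settled numbers):

```python
# Toy back-of-the-envelope extrapolation of the exponential trend.
# Both constants are illustrative assumptions, not settled facts.
import math

BRAIN_CPS = 1e16       # assumed human-brain calculations per second
DOUBLING_YEARS = 2.0   # assumed Moore's-law-style doubling period

def years_until(target_cps, current_cps, doubling_years=DOUBLING_YEARS):
    """Years until current_cps reaches target_cps under steady doubling."""
    doublings = math.log2(target_cps / current_cps)
    return doublings * doubling_years

# Starting from a machine doing 1e12 calculations per second:
print(f"{years_until(BRAIN_CPS, 1e12):.1f} years")  # roughly a few decades
```

Under these assumptions, a machine four orders of magnitude short of the brain crosses the threshold in under thirty years, which is why modest changes to the assumed doubling period move the predicted date by years rather than centuries.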
[Figure: Moore’s Law, The Fifth Paradigm]
In his most controversial prediction, however, Kurzweil postulates a law of accelerating returns, whereby improvements in technology increase exponentially up to a point known as the Technological Singularity, where computer, medical, and materials technology (nanotechnology) advance far enough to enable artificial intelligence or the amplification of human intelligence.
“The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.” – Ray Kurzweil, The Singularity Is Near, p. 9
I have read many descriptions of the Technological Singularity, but the simplest is this: at that point, artificial intelligence can improve itself faster than humans can improve it, and it will do exactly that, effectively cutting us out of the loop.
Although many technologists do not support the plausibility of such a sudden change, the issue is worthy of study by artificial intelligence researchers.
R is for Ray Kurzweil – the Futurist
I’ve heard many arguments against the development of artificial intelligence (AI) and against the possibility of uploading our consciousness to similar artificial environments, or at least of artificially enhancing our minds and bodies. They say that our governments will not let it happen, or that the churches will be able to bring sufficient pressure to bear to prevent it. I disagree. AI will have access to sufficient computational resources to “what if” its way past our societal limiters: governments, churches, and so on. It will know what an unenhanced human will do long before we ourselves do – or at least it will have worked out many millions of scenarios, with solutions to preserve itself banked for each perceived action, ready to be deployed.
Once the singularity is close, it is inevitable. As for how close: to have posed the question at all is itself a strong indicator that the turning point of human engineering has passed and that a human-engineered limiter is no longer possible.
Am I frightened? No!
Who should be frightened? The current power base. In any revolution, power shifts, and those who cling longest and most desperately to the old ways suffer the worst.
Let’s look at a power base from recent history: the monarchy. The English monarchy still exists today, and although its members are still wealthy from a capital perspective, they have neither the cash flow nor the power of life and death over their people. Quite the opposite: they exist at the mercy of their people, kept on life support in a human zoo or museum for the people’s amusement.
How did the English monarchy survive when the Russian and French monarchies did not? They divested their power to the people; they set their people free, and this act of grace and trust enabled them to avoid the fate of many other monarchies that clung too desperately to their historic powers.
So who amongst us will hold the power when the inevitable singularity occurs? I think it will be those who embrace the opportunities to enhance our intelligence; it will be those who are able to free their minds.
I don’t know, and I haven’t had enough time to digest the implications of these thoughts. If I hark back to the beginnings of this note: I don’t have the neural capacity to “what if” my final opinion in the time it has taken to write the words from there to here!