Will Spiritual Robots Replace Humanity By 2100?

From a classic TechNetCast
Image from sorayama.net

In 1999, two distinguished computer scientists, Ray Kurzweil and Hans Moravec, independently published serious books proclaiming that in the coming century our own computational technology, marching to the exponential drum of Moore's Law and the more general dynamics of bootstrapping, leapfrogging, positive-feedback progress, will outstrip us intellectually and spiritually, becoming not only deeply creative but deeply emotive, and thus usurping our self-appointed position as "the highest product of evolution". Reasonable forecast or complete fiction? An expert panel assembled by Doug Hofstadter explores the question.

Length: 4 hours

Doug Hofstadter: Some technologists now believe that by 2100 the unrelenting exponential progress of science and computing power will lead to artificial intelligence that will display characteristics usually only associated with human behavior. —[1]

Ray Kurzweil: The reason that progress in the 21st century is so surprising to people is because of the exponential nature of paradigm shifts... right now paradigm shifts are really doubling every ten years. Which means the next ten years will be like 20 years... Human-created technology is really the cutting edge of evolution on this planet. If we go out sufficiently into the 21st century and follow this progression of the acceleration of paradigm shifts, we see that the 21st century is actually about 20 thousand years long. We'll make 20 thousand years of progress in the 21st century at current rates of progress. That's why the future is startling. —[2]
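Kurzweil's "20 thousand years" figure follows from compounding a rate of progress that doubles each decade. The sketch below is an illustrative assumption on our part, not Kurzweil's own accounting: it sums progress decade by decade, measuring each decade in "year-2000 years", and shows that the total lands in the tens-of-thousands range rather than the literal 100 calendar years.

```python
# Illustrative sketch (our assumption, not Kurzweil's exact model):
# if the rate of progress doubles every decade, total 21st-century
# progress measured in years-at-the-year-2000-rate is a geometric sum.

def equivalent_years(decades: int = 10) -> int:
    """Progress over `decades` decades, in year-2000-equivalent years,
    assuming the rate doubles once per decade."""
    return sum(10 * 2 ** k for k in range(decades))

print(equivalent_years())  # 10230
```

A discrete decade-by-decade sum gives roughly 10,000 equivalent years; integrating the doubling continuously, or choosing a different starting rate, pushes the figure toward Kurzweil's 20,000. The point is the order of magnitude, not the exact number.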

Bill Joy: The replicating and evolving processes that have been confined to the natural world are about to become, as Bacon dreamed, realms of human endeavor. So the real risk we face, in my judgment... is that we're democratizing the ability for individuals to cause great harm. Information technology is a great democratizer. The risk of harm is extremely great because of self-replication. —[3]

Hans Moravec: People can use technologies to do ridiculous things. But on the other hand, the same technologies provide the means to build immune systems against these things. —[4]

John Holland: I don't know of a single 30-year prediction that I've seen, by scientists or novelists, that has ever come anywhere close to the actual situation, except in those cases where there is some decent theory. —[5]

Kevin Kelly: Unlike most people on this panel, I am not a scientist or technologist. My role is mostly as a social observer. I was actually going to try—maybe unlike everyone else—and answer the question. —[6]

Frank Drake: Moore's Law's doubling time is 18 months; for SETI it's 235 days. That's because SETI exploits [not only] Moore's Law, but also increases in antenna size and all the rest. —[7]
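The gap between an 18-month and a 235-day doubling time compounds dramatically. A minimal sketch of the arithmetic (the day-count conversions are our assumptions):

```python
# Capability growth over a fixed span, given a doubling time.
# Assumed conversions: 18 months ~ 548 days, a decade ~ 3652.5 days.

def growth_factor(span_days: float, doubling_days: float) -> float:
    """How many times capability multiplies over span_days."""
    return 2 ** (span_days / doubling_days)

DECADE = 3652.5
moore = growth_factor(DECADE, 548)   # Moore's Law: roughly a 100x decade
seti = growth_factor(DECADE, 235)    # SETI's faster doubling

print(f"Moore's Law over a decade: ~{moore:,.0f}x")
print(f"SETI capability over a decade: ~{seti:,.0f}x")
```

With these assumptions, a decade of Moore's Law yields roughly a hundredfold improvement, while a 235-day doubling time yields tens of thousands of times.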

Ralph Merkle: What are the potential dangers posed by artificial self-replicating manufacturing systems? The only self-replicating systems we are familiar with are living systems, and we unconsciously assume that artificial self-replicating systems will be similar. But the machines people make bear little resemblance to living systems. Wild birds showed us that heavier-than-air flight was possible. Airplanes are very different from birds. The image of a 747 going feral, swooping out of the sky to clutch an unsuspecting horse in its landing gear, seems incongruous. Machines don't behave in that way. They lack the wonderful adaptability of living systems. —[8]

John Koza: Over the years our group has done a lot of work and this number keeps coming up—10 to the 15th. There are 10 to the 12th neurons in the brain, and they operate at millisecond speed. So we have the notion of a brain second—1 BS. And 1 BS, 1 brain second, seems to be a number that a number of researchers around the world, including people in our own group here [at] Stanford, have culled out and discovered that you can do certain things [at]. —[9]

Panel Discussion: Simulating Biological Complexity through MIPS, Building Intelligent Behavior, Nanotechnology as Offensive or Defensive Weapon, Containing Self-Replication, Requirements for Emulating Human Intelligence, Commercial Interest and the Greater Good, Inherent Dangers of Self-Replication, Stopping Research and Development. —[10]

Audience Q&A: Self-Accelerating Technology, Artificial Intelligence Considered Dangerous, Ethics and Morality, Artificial Wisdom, Machine Self-Preservation, Recognizing Superior Intelligence, Regulating Machines and Scientific Advances, Will Intelligent Robots be Human?, The Body, The Spiritual and the Material, Computation and the Divine, Sharing the Benefits of New Technologies, Will Spiritual Robots Replace Cyborgs by 2100? —[11]