By now the loyal Ray Kurzweiliacs among us have accepted that within the next 30 years technology will advance at such a rate that it will produce an intelligence explosion, cause a world war, and create new variations of the human species. According to Hugo de Garis, a professor at Xiamen University in China and a leading researcher in the field of artificial intelligence, this process will kill me, you, Hugo’s grandchildren, and pretty much everyone else on earth. Of course, this sounds pretty interesting to us, and we wanted to know more. So we had Bruno Bayley give him a call.
Motherboard: OK, tell us all about these next few decades as you see them. Bad things are going to happen, eh?
Hugo de Garis: An explosion in electronic capacity. We will have a growing marriage between neuroscience and neuro-engineering. We will work out how the biological brain works, put that into a machine, then speed it up a million times with no limit on memory.
You say that like it’s a bad thing!
The IQ gap between robots and humans will close. I see a major war over the issue. Imagine a grain of sand: If it could be fully “nano-teched” you could put one bit of information on a single atom. And that bit can switch a trillion times a second. Imagine the number of atoms in a grain of sand, all switching—there is more computing ability in that grain of sand than the whole human brain by a factor of a billion or something. So imagine adding that grain of sand to a newborn baby’s brain, then that baby is no longer human, it is an “artilect.” It’s terrifying.
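De Garis only gestures at the arithmetic here ("a factor of a billion or something"), but the estimate is easy to sketch. The following is a rough back-of-envelope calculation, not his figures: it assumes a ~1 mg grain of silicon dioxide, one bit per atom, the trillion-switches-per-second rate he mentions, and a conventional ballpark for the brain (~10^11 neurons, ~10^4 synapses each, firing at ~100 Hz). All four inputs are order-of-magnitude guesses.

```python
# Back-of-envelope version of the grain-of-sand estimate.
# Every input below is an assumption for illustration, not a measured value.

AVOGADRO = 6.022e23
grain_mass_g = 1e-3        # ~1 mg grain of sand (assumed)
sio2_molar_mass = 60.1     # g/mol for silicon dioxide
atoms_per_molecule = 3     # 1 Si + 2 O

atoms = grain_mass_g / sio2_molar_mass * AVOGADRO * atoms_per_molecule

switch_rate_hz = 1e12      # "a trillion times a second" per atom (from the interview)
grain_ops = atoms * switch_rate_hz       # bit-flips per second in the grain

# Human brain: ~1e11 neurons x ~1e4 synapses x ~1e2 Hz (textbook-style ballpark)
brain_ops = 1e11 * 1e4 * 1e2             # synaptic events per second

print(f"atoms in grain: {atoms:.1e}")
print(f"grain ops/sec:  {grain_ops:.1e}")
print(f"brain ops/sec:  {brain_ops:.1e}")
print(f"ratio:          {grain_ops / brain_ops:.1e}")
```

With these assumptions the grain comes out ahead by a factor on the order of 10^14, comfortably past the "factor of a billion" de Garis cites, though the result swings by orders of magnitude depending on how you count a brain "operation."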
I wanted to ask you about these “artificial intellects.” How will these artilects cause a war?
I see humanity dividing into two, maybe three, major ideological camps. I call the people who will be opposed to building artilects “terrans”—they think humans are the number one priority. The second group, the “cosmists,” want to build artilects. They think it’s human destiny to progress to that next level. They think normal human life is pathetic.
This seems a macabre recasting of the standard AI—or even Singularity—approach.
We have racism now, but wait till you see speciesism. So you’ll have at least two murderously opposed ideologies, and they will be the new capitalism vs. communism for the 21st century.
So the threat comes from the way the technology divides people?
I reject the Terminator scenario, where it’s humans versus machines. That problem can be anticipated: people will become alarmed before the machines get really smart. I see political parties being formed on either side of the debate, and the more extreme individuals and groups on both sides taking matters into their own hands—there will be acts of sabotage, assassinations, and very possibly world war. A major war in the late 21st century could result in billions, not millions, of deaths. Take into account 21st century weaponry, and the passion involved in a fight for human survival. It’s a very gloomy scenario I call “Gigadeath.”
But aren’t you making an artificial brain?
Yes. The decision about whether to build these artilects or not is binary: We either build them or we don’t. I think it would be tragic for humanity to freeze our development at the puny present level.
And the flip side of the coin? If we do proceed?
If we do build these artilects, humanity runs the risk of extinction—either by war or in the event that the artilects decide, for example, “Oh, this oxygen is bad for our circuitry, let’s get rid of it.” They could inadvertently wipe us out. If I walk over the carpet I am probably killing billions of bacteria with every step, but I don’t give a damn.