Hugo De Garis Falls Out With The Transhumanists

Tuesday, August 21, 2012

Hugo de Garis
 
Artificial Intelligence
Hugo de Garis has posted a controversial essay on the Humanity Plus magazine website, claiming that the Singularity Institute specifically, and transhumanists in general, are delusional when it comes to the rise of greater-than-human artificial intelligence. His essay tries to explain why he feels this way, and why he has lost patience with the Transhumanists.

Professor de Garis bases his argument on five key points:

1) "The Tail Wagging The Dog" Argument

According to de Garis, the idea that artificial intelligence systems and massively intelligent machines can be made human-friendly in such a way that any future modification they make of themselves will remain human-friendly is “ridiculous, utterly human oriented, naïve and intellectually contemptible”. Such an argument, he says, assumes that human beings are smart enough to anticipate the motivations of a creature trillions of trillions of times above human mental capacities, a view he finds naive and incorrect.

b) The “Unpredictable Complexity” Argument

Future artificial intelligences (or, to use de Garis' own wording, 'artilects') will not use the traditional von Neumann computer architecture, with its determinism and rigid input-output predictability. According to de Garis, such systems, once they approach human-level intelligence, will be too complex to understand. The only way to know how they function is to run them, but if they then perform in a human-unfriendly way, it is too late. Future artificial intelligence will very probably consist of massively complex neural networks based on computational neuroscience models and will, as a consequence, be highly unpredictable. de Garis argues that this unpredictability makes it impossible to guarantee human-friendliness as a programmed outcome.
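To make the "run it to know it" point concrete, here is a minimal sketch; the network size, weights, and seed are arbitrary illustrative choices, not anything from de Garis' essay. Even a toy recurrent network with random weights admits no closed-form description of its trajectory, so its state after many steps can only be discovered by simulation.

```python
import numpy as np

# Toy illustration of unpredictable complexity: a small random
# recurrent network. Nothing about its trajectory is written down
# in advance; the state after 50 steps is only found by running it.
rng = np.random.default_rng(seed=0)
n = 100                                        # neurons (real brains: ~10^11)
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # random recurrent weights

state = rng.normal(size=n)
for _ in range(50):
    state = np.tanh(W @ state)                 # one update step

print(state[:5])   # inspected after the fact, not derived analytically
```

Scaled up from a hundred units to billions, this opacity is exactly what de Garis' argument turns on.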

c) The “Terran Politician Rejection” Argument

In de Garis' prediction, outlined in The Artilect War, the Terran (anti-artilect) politicians will not accept anything the Singularity Institute people say, because the stakes are too high. He writes that the politicians will outlaw the creation of artilect-level AIs before they can be created. "Given this likelihood on the part of the Terran politicians, what is the point of funding the SingInst? It is pointless. Their efforts are wasted, because politically, it doesn’t matter what the SingInst says. To a Terran politician, artilects are never to be built, period!" This is by far the weakest of de Garis' arguments. It is about as valid as saying now that no human will ever be cloned because cloning will be outlawed. Given the capability and the will, someone, somewhere will perform the act.

d) The “Unsafe Mutations” Argument

Undoubtedly, nanotechnology will be used to create smaller and smaller microprocessors, and such hardware will power the eventual greater-than-human artificial intelligence systems. De Garis fears that cosmic rays will wreak havoc on the molecular-scale circuits inside any future “human friendly” AI that is built. Hence the risk that a mutated artilect might start behaving in bizarre ways that are not human-friendly. Since it will be hugely smarter than humans, its mutated goals may conflict with human interests. This is essentially a continuation of the previous argument.
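As a quick illustration of how little a "mutation" takes (the parameter name and the choice of bits below are hypothetical), a single flipped bit in the IEEE-754 encoding of a stored value can change it beyond recognition:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its 64-bit IEEE-754 encoding flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

goal_weight = 1.0                   # stand-in for a "human friendly" parameter
print(flip_bit(goal_weight, 52))    # low exponent bit flipped:  1.0 -> 0.5
print(flip_bit(goal_weight, 62))    # high exponent bit flipped: 1.0 -> inf
```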

e) “The Evolutionary Engineering Inevitability” Argument

Neuroscience is now influencing the construction of artificial intelligence systems to a great extent, including the use of an “evolutionary engineering” approach, i.e. genetic algorithms. According to de Garis, the inherent unpredictability of genetic algorithms in powerful AI systems is another existential threat.
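For readers unfamiliar with the technique, the sketch below shows the essence of a genetic algorithm (the target string and parameters are arbitrary illustrative choices): the designer specifies only a fitness measure and lets random mutation plus selection do the rest, so the solution that emerges is nowhere written down in advance.

```python
import random

TARGET = "friendly"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s: str) -> int:
    """Count positions where the candidate matches the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str) -> str:
    """Randomly replace one character."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# Start from random strings; each generation, keep the fittest half
# and refill the population with mutated copies of the survivors.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(50)]

print(generation, population[0])   # reaches "friendly" after some generations
```

The designer chooses the fitness function; evolution chooses everything else. De Garis' worry is that at artilect scale, "everything else" includes behavior no one specified or foresaw.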

Summing up, de Garis writes,
it’s not surprising that I have fallen out with the transhumanist community. My basic problems with their general views are succinctly outlined as follows: 
“Humanity Won’t Be Augmented, It Will Be Drowned”

The Transhumanists, as their label suggests, want to augment humanity, to extend humanity to a superior form, with extra capacities beyond (trans) human limits, e.g. greater intelligence, longer life, healthier life, etc. This is fine so far as it goes, but the problem is that it does not go anywhere near far enough. My main objection to the Transhumanists is that they seem not to see that future technologies will not just be able to “augment humanity”, but veritably to “drown humanity”, dwarfing human capacities by a factor of trillions of trillions. For example, a single cubic millimeter of sand has more computing capacity than the human brain by a factor of a quintillion (a million trillion). This number can be found readily enough. One can estimate the number of atoms in a cubic millimeter. Assume that each atom is manipulating one bit of information, switching in femtoseconds. The estimated bit processing rate of the human brain is about 10^16 bits a second, which works out to be a quintillion times smaller.
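De Garis's arithmetic holds up, at least as an order-of-magnitude exercise. Here is a quick sanity check; the atom count is a rough textbook figure for solid matter, assumed here rather than taken from the essay:

```python
# Order-of-magnitude check of de Garis's "quintillion" claim.
atoms_per_mm3 = 1e19          # ~10^19 atoms in 1 mm^3 of solid matter (assumed)
switches_per_sec = 1e15       # one bit flip per femtosecond, per atom
brain_bits_per_sec = 1e16     # de Garis's figure for the human brain

matter_rate = atoms_per_mm3 * switches_per_sec           # ~1e34 bits/s
print(f"ratio: {matter_rate / brain_bits_per_sec:.0e}")  # 1e+18, a quintillion
```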


De Garis's argument resonates strongly. The aim of creating "friendly AGI" may well be impossible, but that does not necessarily make it pointless. In Daniel Wilson's Robopocalypse, when the AI called 'Archon' is initiated, it immediately sets about eliminating humans. De Garis essentially suggests that the creation of such systems would be outlawed by politicians, yet there are countless examples of technology never being totally contained by political will over the long term. Look at stem cell science or education for women; artificial intelligence will be no different. Denying that greater-than-human artificial intelligence will be created on these grounds is preposterous.

The motivations of intelligent and super-intelligent systems are the real question. Will AI uplift and augment lesser human creatures? Will it ignore us, as we might ignore the bacteria in our own bodies? Or will it stamp us out altogether, as a parasite or a threat to its own existence?

Assuming that artilects will wipe us out is one opinion.  Ray Kurzweil and many Transhumanists suggest that we will merge with our technology and, by the time of the Singularity, be carried to new capabilities and possibilities along with super-intelligent AI.

Only the future will tell if the Singularity Institute's goals are delusional; however, blindly marching towards the creation of artilects without at least trying to ensure they follow some sort of Three Laws core programming would be foolish too, even if such programming ultimately fails.


SOURCE  Humanity Plus Magazine


By 33rd Square


1 comment:

  1. Wow, De Garis makes some good points. I'm pretty sure we will handle things OK when computers start to get really interesting. Cautionary thinking like this is one of the main ways to avoid unpleasant outcomes; that and a kill switch.
