The Singularity
With Moore's Law continuing to expand computing power while artificial intelligence algorithms lag behind, will the eventual loading of an AGI system onto a future supercomputer pose too grave a risk to humanity?
Moore's Law operates on the hardware side of computer development, driving an exponential rise in computing power. Specifically, the law refers to the number of transistors on integrated circuits doubling approximately every two years.
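As a rough illustration of how quickly that doubling compounds, the sketch below projects a transistor count forward in time. It assumes an idealised two-year doubling period and a purely hypothetical starting figure; real-world growth is less regular.

```python
# A minimal sketch of the doubling arithmetic behind Moore's Law, assuming
# a strict two-year doubling period (real-world growth is less regular).

def projected_count(base_count: float, years_elapsed: float,
                    doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under exponential doubling."""
    return base_count * 2 ** (years_elapsed / doubling_period_years)

# Hypothetical starting figure, chosen only to show the shape of the curve.
base = 1e9  # one billion transistors
for years in (0, 2, 10, 20):
    print(f"after {years:2d} years: ~{projected_count(base, years):.1e} transistors")
```

Under this simplified model, two decades of doubling multiplies the starting count by roughly a thousand, which is the compounding the rest of the article leans on.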
This doubling has now brought us to the stage of exascale computing and to the cusp of machines that may match the raw processing ability of the human brain. The power consumption and architecture of these machines are in no way human-like yet, but their raw throughput may meet the projected memory and processing requirements for duplicating human brain activity.
Many supercomputers may already be at human levels in these raw terms; however, the algorithms they run do not yet come close to the human brain's. Projects like the Human Brain Project and the US BRAIN Initiative aim to close that software gap.
In the meantime, Moore's Law continues, so if and when software becomes capable of mimicking or actually duplicating human intelligence, it may be loaded onto machines with far more processing power and memory than a human brain.
Jaan Tallinn commented a few years ago at a Humanity+ UK event (from which the top image is taken):
It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang. The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence. Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.

Such a hardware, or computing, overhang refers to a situation in which new algorithms can exploit existing computing power far more efficiently than before. This can happen when previously used algorithms have been suboptimal.
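Tallinn's point can be made concrete with a toy calculation: if the compute needed for human-level intelligence is treated as fixed and available hardware keeps doubling every two years, the surplus available to the first working algorithm grows exponentially with every year the software problem remains unsolved. The sketch below assumes exactly that simplified model.

```python
# Toy model of a hardware overhang: the compute required for human-level
# intelligence is normalised to 1, and available hardware doubles every
# two years from the moment that threshold is first reached.

DOUBLING_PERIOD_YEARS = 2.0  # Moore's Law assumption used in the article

def overhang_factor(years_of_software_lag: float) -> float:
    """Surplus compute available once the algorithm is finally solved."""
    return 2 ** (years_of_software_lag / DOUBLING_PERIOD_YEARS)

for lag in (2, 6, 10, 20):
    print(f"{lag:2d} years of lag -> roughly {overhang_factor(lag):,.0f}x more "
          "compute than human-level intelligence strictly needs")
```

Two decades of lag already yields a surplus of three orders of magnitude under this model, which is the kind of gap Tallinn describes.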
Anders Sandberg writes, "when you run an AI, its effective intelligence will be proportional to how much fast hardware you can give it (e.g. it might run faster, have greater intellectual power or just be able to exist in more separate copies doing intellectual work). More effective intelligence, bigger and faster intelligence explosion."
As some would argue, such a hard take-off scenario could make newly created AGIs far more powerful than anticipated, and present an existential risk.
As futurist David Wood points out, an example of such an overhang occurred when thermonuclear bombs were being tested at Bikini Atoll in the Marshall Islands. The projected explosive yield was four to six megatons, but when the device was detonated, the yield was 15 megatons, two and a half times the expected maximum. Had the scientists' estimates been off by an even greater margin, the consequences could have been far worse.
With the risk of AGI, are we likewise looking at a development that may threaten millions of people if it is unleashed? Will putting an AGI onto a zettascale or yottascale computer in the coming years produce a Singularity like the one in Wood's graph below? Moreover, would such a technology spell the end of humanity?
Source: David Wood
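To put those scales in perspective, the sketch below compares exascale, zettascale, and yottascale machines against a commonly quoted, and heavily contested, estimate of the brain's equivalent processing rate. The figure of 10^18 operations per second is an assumption used purely for illustration, not a sourced datum.

```python
# Orders of magnitude only. The brain-equivalent figure below is a widely
# debated assumption (published estimates span many orders of magnitude),
# used here purely to illustrate the gap the article is asking about.

ASSUMED_BRAIN_OPS_PER_SECOND = 1e18

SCALES = {
    "exascale":   1e18,  # operations per second
    "zettascale": 1e21,
    "yottascale": 1e24,
}

for name, ops in SCALES.items():
    ratio = ops / ASSUMED_BRAIN_OPS_PER_SECOND
    print(f"{name:10s}: ~{ratio:,.0f}x the assumed brain-equivalent rate")
```

On these assumptions, a zettascale machine would offer around a thousand times the assumed brain-equivalent rate, and a yottascale machine around a million times, which is why the question above is worth asking well before such machines exist.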