Ben Goertzel Artificial General Intelligence and Its Broader Implications

Thursday, January 28, 2016



Artificial Intelligence

Artificial intelligence researcher and transhumanist Ben Goertzel recently gave a lengthy talk on the potential future of artificial intelligence, and his specialty, artificial general intelligence (AGI).


According to artificial intelligence researcher Ben Goertzel, the creator of the first AGI will have a tremendous economic advantage. The technology will grow rapidly and spread around the globe, but those who understand the AGI's system and the underlying ideas of its construction will be best positioned to lead the expansion. As the AGI advances, it will drive new scientific and technological breakthroughs, accelerating every area of commerce.

"The benefit of having AIs do the work that people now do is just going to be too great for people profiting from the means of production," he states. "It's unstoppable, and you are going to see this advance happen faster and faster, and I think it's a good thing."

Goertzel projects that an AGI will soon be created that will do for the field what Sputnik did for the space program. Once a system has a basic understanding of itself and its environment, unlike today's Siri or Cortana, "every major company, every government in the world is going to see the era of AGI is dawning," he says. "You are going to see many, many billions of dollars going into AGI like you see now with semiconductors and neuroscience research."

Artificial General Intelligence

Part of Goertzel's conviction in the benefit of developing AGIs stems from his transhumanist beliefs. A long-time transhumanist, Goertzel believes that digital immortality is possible through the development of AGIs. "The path from human-level to transhuman AGI will be achieved mainly by AGI improving and re-engineering itself," he notes.

"Will a human-level AGI sharing human values evolve into the same sort of superintelligence as an iteratively self-improving human would?"
Goertzel also cautions his audience on the risks of creating superintelligent AGIs.  "Will a human-level AGI sharing human values evolve into the same sort of superintelligence as an iteratively self-improving human would?" he asks.

His thesis is that the probability distribution of future minds arising from AGIs that share human value systems, embrace substrate independence, and carry out relatively conservative self-improvement will closely resemble the probability distribution of future minds arising from a population of humans who share roughly the same value system and carry out the same relatively conservative self-improvement.

Check out the talk below. Please note there is a very brief audio problem at the beginning of the lecture, and the segment in which Goertzel shows video of David Hanson's robots is much louder than the rest of the recording.



SOURCE Singularity Videos


By 33rd Square

