The End is Really Nigh This Time

Tuesday, January 5, 2016


Reading Calum Chace's "Surviving AI" has made guest writer Jesse Lawler a believer that god-like artificial superintelligence is on its way. According to Lawler, it is time to put the world’s best minds on the task of ensuring that when it arrives, it will be a good thing for us all.


You’ve seen the guy with “The End is Near” scrawled on a sandwich board, neglecting his personal grooming for weeks at a time so he can warn passersby on the sidewalk.

You’ve noticed the Jesus folks with conspicuously good box seats at major sporting events, opting for a biblical warning sign instead of “Go Team!”

I’ve noticed them too. And I’ve always kind of mocked them a little. Okay, maybe a lot.

But I started getting uncomfortable as I read Calum Chace’s book, Surviving AI, thinking that maybe I owe the “Repent, Sinners!” gang an apology.
Let’s be real. The sky has a pretty impressive track record of not falling when it’s predicted to — regardless of the zeal, conviction, and pithy slogans of people doing the predicting.

When I think of all the “Jesus is coming — soon!” predictions documented in the past 500 years or so, Jesus starts looking a lot like Lucy in the Charlie Brown comics, yanking the football away from Charlie’s kick-attempt every damn time, sending Charlie sprawling in a humiliated back-flop.


End-times false alarms aren’t limited to Christians, of course. Mayan enthusiasts looked silly when 2012 passed without incident. We even got an underwhelming secular “end is nigh” in 1999, during the ramp-up to New Year’s Eve and the Y2K Bug.

But I’ve always sided with the skeptical crowd. And I admit, I’ve taken some mean-spirited, holier-than-thou pleasure in sneering at believers in the line-in-the-sand dates at which “everything will change.”

Among other things, I’ve always felt that there’s a lot of dishonesty around these professed beliefs. “If that guy really believed the Day of Judgment is coming in the next decade or so, I bet he’d live differently,” I’d find myself saying. And as much as I felt sorry for the people who (literally) bought into the Mayan 2012 Apocalypse and moved their families to high-altitude, end-of-the-world shelters — at least they were rationally following through on the consequences of their beliefs.

But I was in the other camp. The camp that says the future defies prediction, except maybe in trends. Certainly not specific events. We might be able to predict some events like the reappearance of Halley’s Comet every 75 years — but Halley’s Comet is bound by a few Newtonian laws and not a lot of distorting influences.

But here on Earth, predictions around human events have such a smorgasbord of influencing factors — and those much less understood than the mathematics of astronomical orbits — that predicting things months or even weeks in advance can be an insurmountable task. 2011’s “Arab Spring” caught the world’s intelligence community by surprise. The world’s best investors can’t tell you if the NASDAQ will be up or down tomorrow.

And this is with thousands of highly trained, highly skilled analysts looking at rich data sets with incredible analysis tools grounded in more than a century of concerted study into social movements, historical flash-points, consumer psychology...

If these sorts of resources fail to predict something as comparatively simple as a general market trend, how likely is it that clues from millennia-old, translated texts are going to be able to predict events so long after their time-of-writing?

Given these limitations, Jesus’ habit of no-show’ing to usher in the End Times seems pretty understandable.

So why am I breaking out my sandwich board and magic markers to make my own “End is Nigh” sign?


Guess Who’s Coming to Dinner?

Surviving AI turned my thinking on its head. Chace’s book is a thorough and engaging overview of the past and present of Artificial Intelligence (AI) technology. And — to the limited extent that it’s possible — a forecast of its future.

The book makes the point that for the past 50 years, the most consistent definition of AI — although it’s never stated quite this way — has been “that which humans can do, but machines can’t do (yet).” The goal-posts have been continually moved back as artificial intelligence encroaches further into what used to be our unassailable intellectual territory.

These days, the domains in which we Homo Sapiens retain undisputed dominance have rolled back very far indeed. Chess-specialist computers beat the best human grandmasters. IBM’s “Watson” cleaned up on TV’s Jeopardy. And commercial AI systems do everything from flying passenger-laden planes to trading stocks to predicting what you’ll want to watch next on Netflix.

And yet the undeniable touchdown-moment for AI will be the creation of an AGI — Artificial General Intelligence.

An AGI would — that is, will — be able to perform learning, abstract thinking, planning, and goal-directed decision-making not just within a narrow domain (e.g. Jeopardy) but in any subject presented to it. Just as humans do.

The jury is out on how far we are from an AGI — years, decades, or even centuries. But experts seem largely agreed that it is not a question of if, but when. There are a handful of well-informed naysayers who feel there’s something fundamental in the quantum underpinnings of the human brain that will make a conscious machine impossible…

But these opinions are in the extreme minority. For most experts, the deconstructionist argument wins.

Why It’s a “When”

The deconstructionist argument goes something like this:
1. The brain is made of matter.
2. The brain gives rise to mind.
3. There’s nothing inherently magical about the type of matter that makes up human brains.
4. Therefore, replicating the essential behaviors of that brain-matter — once those behaviors are well enough understood — should predictably give rise to non-human minds.

The strategy of AGI-instantiation that follows directly from this logic is called “Whole Brain Emulation” — but it is by no means the only approach that’s being attempted. Many experts feel that laboriously replicating a living human brain is the loooooong way around to solving the AGI problem. Dr. Ken Ford, whose team placed second in DARPA’s 2015 Robotics Challenge, likes to point out: Humans didn’t have to replicate birds to succeed at heavier-than-air flight. And if that was the only method the early flight pioneers had tried, we’d still be a land-bound species even now.

All that said, the exact recipe for creating our first AGI isn’t what’s most interesting.

What’s most interesting is what could happen almost as soon as our new creation comes out of the oven.

The Smartening

However AGI first happens, one thing seems inescapable. We’ll almost certainly be able to magnify the AGI’s intellectual firepower in ways we can’t do with biological brains.

Biology is constrained by things as banal as, say, the size of skulls. Packing more neurons into a healthy human noggin is a non-trivial problem. But silicon intelligences won’t have skulls. And if Moore’s Law* continues to hold (or even if it slows, but its slope stays above zero), this would be the equivalent of giving our AGIs constantly growing skulls.

It’s a rough analogy, but you get the idea.

* Moore’s Law: a four-decade-old truism stating that the number of transistors that can fit on a silicon wafer doubles roughly every 18 months.
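To make the compounding concrete, here’s a minimal sketch of what an 18-month doubling period implies over a couple of decades. The starting transistor count and the time horizons are arbitrary numbers I’ve picked purely for illustration:

```python
# A minimal sketch of what Moore's Law-style doubling implies.
# Assumptions (mine, for illustration): an 18-month doubling period
# and an arbitrary starting count of one billion transistors.

DOUBLING_PERIOD_YEARS = 1.5   # assumed 18-month doubling time
START_TRANSISTORS = 1e9       # arbitrary baseline

def transistors_after(years: float) -> float:
    """Projected count after `years` of uninterrupted doubling."""
    return START_TRANSISTORS * 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 9, 15, 24):
    print(f"after {years:2d} years: {transistors_after(years):.2e} transistors")
```

Run it and the punchline is plain: at a steady 18-month doubling, twenty-four years buys a roughly 65,000-fold increase.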

Networking AGIs — the equivalent of creating a conversation among humans, for group problem solving — is also something that will, from the outset, be orders of magnitude faster than anything that we biologics can do.

Human speech is an amazing ability — unique in the animal kingdom — but it has its limitations. Essentially, speech is the down-converting of ideas into syntactic language, then into motor-cortex-controlled movements of the jaw, mouth, lungs and vocal cords. A puff of air is emitted, vibrating at a certain frequency. It hits a nearby somebody’s eardrum and causes it to vibrate just so, sending a corresponding signal to the receiver’s brain’s temporal lobe, which decodes the vibrated message into syntactic language and (rather magically) back into an idea that approximates what the original speaker intended.

Wow.

And yet, as awesome as all that is, you can see where networked AGIs could cut numerous steps from this process without batting a synthetic eyelid. Once a digital “idea” is turned into syntactic language, it could be electronically transmitted to AGI correspondents almost instantaneously, using any of the network data-sharing protocols that have been standard-issue for decades.
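Just to put rough numbers on the gap, here’s a back-of-the-envelope sketch. Both rates are assumptions of mine: spoken language is often ballparked at a few dozen bits per second of information content, while a commodity gigabit link moves a billion bits per second:

```python
# A rough, illustrative comparison of speech vs. a network link as a
# channel between two minds. Both rates are assumptions for this sketch.

SPEECH_BITS_PER_SEC = 40       # assumed information rate of spoken language
NETWORK_BITS_PER_SEC = 1e9     # assumed commodity gigabit link

MESSAGE_BITS = 8 * 10_000      # a hypothetical 10-kilobyte "idea", serialized

speech_minutes = MESSAGE_BITS / SPEECH_BITS_PER_SEC / 60
network_millis = MESSAGE_BITS / NETWORK_BITS_PER_SEC * 1000

print(f"speech:   {speech_minutes:6.1f} minutes")
print(f"network:  {network_millis:6.3f} milliseconds")
print(f"speed-up: ~{NETWORK_BITS_PER_SEC / SPEECH_BITS_PER_SEC:.0e}x")
```

Under those assumptions, a message that takes half an hour to speak crosses the wire in a fraction of a millisecond: roughly a seven-orders-of-magnitude difference.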

Very likely, AGIs will find humans’ habit of vibrating air molecules to communicate as snicker-inducing as we find baboons using fire-engine-red buttocks to say “I’m horny.”

AGI-to-AGI “conversations” will happen at speeds humans can’t begin to approach. Conversations won’t be interrupted by the need for sleep, bathroom-breaks, or answering phone calls from a spouse. Imagine a team of reasonably intelligent people working constantly on a problem with a shared information pool and no time lost to misunderstandings. (The “telephone game” children play with increasingly misunderstood person-to-person whispers has no digital equivalent.)

Now imagine that this AGI team is working on creating its next member, smarter than all the current teammates.

Kaboom!

This is the scenario that leads to what’s un-ironically called an “Intelligence Explosion.” When self-improving AGIs, unfettered by biological checks-and-balances, direct their own evolution to get smarter and smarter and, well, smarter.

Imagine Einstein cloning himself and swapping out his Y chromosome for an X, producing a she-Einstein. Unconcerned with taboos against incest or infanticide, the duo produce 70,000 offspring, then euthanize those in the lowest 99.99% of cognitive performance. Then they re-cross the survivors. Rinse, and repeat.

Except this is all synthetic. Think of a manufacturing-based generational cycle that takes months, weeks, or maybe just days — not the multi-decade waiting period required to boot up a new human generation.
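Here’s a toy model of that loop, with entirely made-up numbers: each generation designs a successor that is 50% smarter, and being smarter shrinks the next design cycle by the same factor. Nothing about the specific constants matters; the shape of the curve does:

```python
# A toy model of the "Intelligence Explosion" loop described above.
# All constants are invented for illustration.

intelligence = 1.0     # 1.0 = roughly human-level (arbitrary unit)
cycle_days = 180.0     # assumed length of the first design cycle
elapsed_days = 0.0

for generation in range(1, 11):
    elapsed_days += cycle_days
    intelligence *= 1.5        # assumed per-generation improvement
    cycle_days /= 1.5          # smarter designers finish sooner
    print(f"gen {generation:2d}: {intelligence:6.1f}x human-level, "
          f"day {elapsed_days:5.0f}")
```

Intelligence grows geometrically while the total elapsed time converges toward a fixed horizon. That convergence is the “explosion”: later, smarter generations arrive faster and faster.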

It’s plausible to think that the arrival of a superintelligence won’t be long in coming, once the first AGI is created.

Meet the New Boss

We have no reason to suspect that there’s any upper bound on theoretical intelligence.

It could be that AGIs will run into a significant choke-point on their intellects — like the limits imposed by a human skull — but it’s equally reasonable to assume that even the first such choke-point might be at a level much, much smarter than the smartest humans can aspire to.

And this brings us back to the idea of the arrival of Jesus, or the Mayan Apocalypse, or aliens surfing the wake of the Hale-Bopp comet.

At the risk of lining up with the frequently embarrassed predictors of Major Upsettings of Life-As-We-Know-It: the prospect of an Intelligence Explosion, and the consequent arrival of a god-like intelligence during your lifetime and mine, doesn’t sound all that far-fetched.


“God-like” can mean a lot of things. It might be overreaching to expect an AI-god in the mold of Christianity’s all-powerful, universe-in-six-days, look-ma-no-hands type god. But a superintelligence more like a Greek god — ridiculously powerful, but with some foibles and character flaws, despite being able to trounce any mere mortal — this seems like a reasonable, maybe even conservative, expectation.

At the end of the day, if we want to know how smart an AGI is, we’ll probably need to ask it. And hope that it answers truthfully.

Digression: Isn’t it interesting that the old gods of polytheistic religions weren’t really into self-improvement? Sure, they’d squabble for rank and fight amongst themselves, but none seemed to be trying to upgrade himself to “become a better god.” It’s fair to expect that our AI digi-gods — especially if they emerge from a recursive frenzy of iterative self-improvements — would have no reason to back off the throttle. Indeed, such a habit might be “cooked into their DNA.”

The End of the World — As We Don’t Know It

Should an AI superintelligence arrive (or arise, or whatever), it’s hard to imagine that the world as we know it wouldn’t change — profoundly, fundamentally and forever.

If it’s a nice god, get ready for an amazing future.
If it’s a nasty god, well, say your prayers.

And even if it’s a god-like power that quickly loses interest in us paltry humans, it’s hard to imagine us not being interested in it. Imagine, for example, that a digi-god arises, takes a few nanoseconds to realize that it’s at too great an existential risk by being bound to just one planet, and focuses its resources on spreading beyond Earth in a sort of Artificial-Life Insurance Policy.

If the smartest thing we know about decides to abandon Earth, it would be hard for us to not take notice.


In any case, the crux of it is: We’ve been the planet’s dominant species for at least the past 50,000 years. The moment a superintelligence arises, our multi-millennial streak is over. The power balance shifts.

And that’s why we need to prepare.

That’s why I’m joining the unshaven, sandwich-board-and-magic-marker crowd.

The sky may not be falling. It may be rising — faster than we can catch it.
We need to have a huddle about this. Have a think. Have a strategy.

Says a Skeptical Reader: “You’re sounding like those crazies you used to make fun of, Jesse. Get a grip. What you’re saying is premature, fatalist, and/or just plain nuts.”

Superficially, yes, it may sound that way. But read the book, and then see how you feel.

I see two major differences between the “Non-Denominational End Times” scenario that AGI raises and other entries in the embarrassing history of such forecasts.

The First Big Difference

First: this one is actually likely to happen.* Nick Bostrom, Director of the Future of Humanity Institute and author of the book Superintelligence, took an informal poll among AI researchers and related professionals to get expert opinions as to when a human-level AI was “as likely as not.” In what year will it be a 50/50 coin toss as to whether an AGI exists?

The experts’ average guess: the year 2040.

That’s less than a quarter-century from now. Close enough to be exciting, terrifying, or both. And either way — worth planning for.

* Note: Yes, I know they all say that. This is me being funny.


The Second Big Difference

The arrival of an AI digi-god isn’t like some asteroid flying towards Earth with its mass, speed, and chemical composition already set in extraterrestrial stone. And it also isn’t like some established god with a literary history and an entrenched press corps ready to tell us how to roll out the red carpet (or the grisly consequences of not doing so). This is something that we are building. Its egg, sperm, womb, and the all-important early years of nurture for this nascent god will be constructed out of decisions made by good old-fashioned Homo Sapiens living here on Earth right now.

If we agree that a superintelligence is coming — and Calum Chace’s book made me a believer — then it behooves us to put the world’s current best minds on the task of ensuring that when their successor arrives, it will be a good thing for all involved.


By Jesse Lawler


About the Author

Jesse Lawler is the host of Smart Drug Smarts, a top-rated podcast about “practical neuroscience,” where each week he speaks with world-leading experts in neuroscience, brain-tech, and social issues related to cognitive enhancement. His goal: to bring listeners ideas and strategies to make themselves smarter. Jesse is also a software developer, self-experimentalist, and a health nut; he tweaks his diet, exercise habits, and medicine cabinet on an ongoing basis, always seeking the optimal balance for performance and cognition.

You can find many of his articles at smartdrugsmarts.com or on Medium: https://medium.com/smart-drug-smarts
