0003: Skeptics

This is stuff we thought about or read in preparation for the episode. We didn’t end up talking about all of it.

There are two different senses of “skeptic”:

  • Enthusiasts vs. skeptics (it will be great vs. it will suck)
    • e.g., Ray Kurzweil and Peter Diamandis vs. Bill Joy, Jaron Lanier, Elon Musk, Stephen Hawking, etc.
  • Massive-explosion believers vs. skeptics (soon and big vs. not soon and/or not big)
    • Never going to happen
    • Maybe, but it will take almost forever, 1000 years
    • We have a while, 70+ years
    • It won’t matter; AI won’t be the big impact, since there are bigger things … climate vs. AI

Terms

  • hard take-off … takes events out of our control so quickly that we’ll want to have done our homework beforehand
  • slow take-off
  • probabilistic vs. explanatory models
  • connectionism, computationalism, and robotics

Readings:

Jaron Lanier … “It’s true, in some far future situation, we’re going to transition. . . . I think it’s very hard to predict a year.” Approximately when? “I think we’re in pretty safe territory if we say it’s within this century.”

Paul Allen … wrote “The Singularity Isn’t Near” article … “By the end of the century,” he wrote, “we will still be wondering if the singularity is near.”

“I think we are going to start to interconnect as a human species in a fashion that is intimate and magical,” Diamandis told me. “What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It’s not stoppable. It doesn’t matter what they want. It doesn’t matter how they feel.”

Charlie Stross … This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

More Charlie Stross … Any way you cut these three ideas, they don’t provide much in the way of referent points for building a good life, especially if they turn out to be untrue or impossible (the null hypothesis). Therefore I conclude that, while not ruling them out, it’s unwise to live on the assumption that they’re coming down the pipeline within my lifetime.

John Horgan … neuroscience appears to be making “antiprogress” — the more information we acquire, the less we seem to know.

Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+ post back in 2013: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.”

The challenge is trying to balance the need to start some preliminary work on a threat that looms way off in the distance against the risk of engendering so much hype that it starts a giant backlash.

John McCarthy once told me that when people ask him when human-level AI will be achieved, he says between five and five hundred years from now.

When I read the articles about skeptics, I see them making two points over and over again. First, we are nowhere near human-level intelligence right now, let alone superintelligence, and there’s no obvious path to get there from here. Second, if you start demanding bans on AI research, then you are an idiot.

The difference between skeptics and believers isn’t about when human-level AI will arrive; it’s about when we should start preparing.
