-3: Running Toward or Running Away


T: Hello, this is Ted

B: and this is Brandon

T: and this is the podcast that may be called the friendly AI podcast

B: maybe

T: In the last episode Brandon and I were talking about the possibility of an existential threat, that artificial intelligence could be evil. Maybe not on purpose, but it could be evil and it could wipe out humanity. And I've really been thinking a lot this week about the difference between running away from something that's terrifying and running toward something that's really compelling and inspiring. So I thought we could talk about that this episode.

B: Yeah, it's interesting. I find that fear is what has galvanized me to take this on more recently. But if you look at my earlier drive, I would characterize it as more in the positive camp: increasing the diversity and complexity of life has been my main driver before, but that didn't necessarily lead me toward AI.

T: Yeah, oh boy. There is a lot we could dig into on what that motivation is. For me, all the years that I've been thinking about AI and the singularity … the idea that the exponential can take us to a place we can't predict … I have thought about it as evolving humanity toward immortality. And I've really thought that sounded pretty awesome.

B:  Immortality huh?

T: (laughs) Yeah, it sounds crazy, but it's the logical endgame of lots of different steps. One step would be life extension: we're adding to our life expectancy now, and every year we're adding more to it than we did the year before. And before too long we're going to get to the place where we add more than one year to our life expectancy every year, just from medical advances. Once you're gaining more than a year per year, you're effectively staying ahead of death.

B: Yeah, about ten years ago the idea of immortality was the main thing motivating me for a while. I would call it partly my own death anxiety. I would say I'm still interested in that.

T: Yeah, people just brush that off because immortality sounds crazy … “well that’s never been … everybody dies.”  Death and taxes, that’s the thing.

B: Well, one of my favorite lines from the Carl Sagan version of Cosmos is, "the secret of evolution is time and death." And so there is a part of me that questions whether immortality is a wonderful goal or not. I don't know. But I'm not convinced that the creativity and diversity of what evolution has created couldn't happen through a different mechanism than natural selection via sexual reproduction and death. I think conscious evolution could maybe be that alternative.

T: Right, and I also think that there are always things we believe are not possible. When you look back through human history, there were tons of things we believed were not possible, and over time many of them have disappeared as being "not possible." Human flight was not possible for a long time. People wanted it, but it wasn't possible. And then it was. Human spaceflight was not possible. And now it is. There are all kinds of things like that. So immortality definitely fits in that realm of "not possible." And yet we've seen ourselves overcome things that were not possible.

B: Well Ted, you're talking to the wrong guy here when you talk about what's possible or not possible, because I think lots and lots of things are possible that have never been possible before. But you're probably right. Maybe I'm not giving my parents enough credit for exploring these cutting-edge topics, but I certainly know there are people who don't think about it and who dismiss it as crazy talk. Probably, maybe, the majority.

T: Yeah, I think that's probably right. So that brings me back to the premise of this episode: is it more compelling, does it create more possibility, to be inspired rather than fearful as we think about AI, or about any other existential crisis? We posited last episode that we believe this one is more important, or bigger, or potentially more catastrophic than the others. And we were coming from that fear place last episode.

B: It's interesting. I think the way you frame something can definitely shape your behavior toward it. Self-fulfilling prophecies and self-negating prophecies are examples of that: the framing and the expectation shape the behavior in a positive feedback loop that creates something different from what would otherwise have occurred. So if we frame it in an "AI is dangerous, ahhhh!" sort of way, does that create a self-fulfilling or a self-negating prophecy?

T: Yeah, think about that. That feels like a really important question. And I feel ill-equipped to even think it through.

B: Well, I think about Ray Kurzweil and that music video we looked at, the one with him and some fake general and Alex Jones … a spoof, a rap video. I thought it was a nice characterization of Ray, even though it's kind of a joke. He's all about "let's get this thing happening already. Every day that we delay is more people dying unnecessarily. More of these terrible things happening, and not nirvana, not the wonderful things that are going to happen when minds transcend." So I would say he's already coming from the place of "it's going to be a good thing," the positive framing. And it seems to me that there are very few brakes, very little "wait, let's stop and think about this, let's go slow, let's consider," in the way he thinks about things.

T: Well right.  Okay so he is a person who is not driven by fear.  He’s driven toward something that he sees as possible …

B: And likely. He believes the outcome is going to be positive. So his prophecy is "it's going to be wonderful," and that's driven some people to call the way he sees the world … what is it? … "the rapture of the nerds," or something like that? Some kind of heaven for atheist nerds.

T: Yeah, I've read his stuff for years and I find him very compelling. He addresses some of the criticisms and the possible dangers. And yet I feel like the way he addresses them is not complete. Now, in my new way of thinking, my next step of thinking, I worry that he is missing something.

B: Yeah, well we could consider why he thinks the way he thinks and decide whether that seems credible to us personally.  We could also look at some of the motivations, the things that might drive us personally toward a positive AI future.  What do you think?

T: Yeah, you brought up our families on the last episode, thinking about the singularity happening in our lifetime, or in the lifetime of our children. Having children makes this conversation quite different. If it's just me, I can blow people's minds by saying "I don't plan on dying, I plan on living forever," and that's kind of a fun persona to put on. But now that I have kids, it's not so fun to think about the potential dangers and the existential threat, about what their lives could be like if I'm just playing this role of "I'm the nerd guy who thinks he's going to be immortal, isn't that funny."

B: It's interesting, you know? The positive things that drive me toward the future have definitely receded in importance, in how compelling they are, as I think about my kids. I still want positive things for my kids, but I think I'm more afraid of the negative outcomes for them. If they just lived a normal human life, and then their kids had a normal human life, I could accept that. But if that were destroyed, I would be really upset. Whereas if they didn't achieve some beautiful future that I can envision in their lifetime, that doesn't hold the same upset for me.

T: Yeah, this makes me want to look at the research on loss aversion versus whatever the opposite of that is. I think there is some psychological research showing that human beings in general are more motivated by the fear of losing something than by the prospect of gaining something.

B: Yeah, I think so too.  I agree with that.  I don’t have it at the tip of my tongue or my fingertips.

T: That’s a question we could maybe address in a future episode … thinking about that.  I wonder if our listeners might want to think through some of these questions with us?

B: Yeah, I bet somebody could send us a good study or something like that that could really articulate it.

T: Yeah, we have some other questions that we were thinking about before we started recording. One of them was, "what are some things that we could run toward?" If we do believe it's better to run toward something than to run away in fear, what could we run toward? Immortality is one; we talked about that here. But I'd love to hear from people about what they think we could be running toward.

B: I can’t help myself Ted.  I want to say what I’m excited about.

T: What are you excited about?

B: I'm excited about running toward the diversification of life through the expansion into space. We mammals, and hominids in particular, and land-based animals in general, can look at the Devonian "move onto the land" as a huge, amazing moment in the evolution of life on Earth, when all kinds of diversification and possibilities opened up. And so I see the move into space, with all kinds of opportunities opening up, as at least as awesome as that Devonian expansion was.

T: Well, that sounds like a whole episode there. I'd love to talk more about that and hear your thoughts about what that could be like. Maybe that's what we do on this podcast: have some episodes devoted to these things we can run toward. That sounds exciting to me. Do you have any other questions for listeners?

B: Well, we kind of hinted at it, or moved around it. But how does our framing of the AI question influence our behavior? If I think "it will probably be evil" versus if I think "it will probably be benevolent," how do things look different? How do we behave differently?

T: Yep, that’s a great question.  I’d love to hear from people. And we’ll give you some way to get in contact with us once we know what that is.

B: Perfect.

T: Well I’m going to press stop unless you’ve got anything else to say.

B: Great.