Fascinating. It occurred to me that we may find computational superintelligence too complicated to be useful, but once we create superintelligences, they may not find directly programming even greater intelligences all that difficult. So it may be less a matter of which path we take than of what order, and how many iterations (segments) of each path, lead from us to the ultimate intelligence (whatever that is).
There’s one thing I find lacking, which is something I have come to expect from both of you: compassion. If we are able to develop artificial intelligences with a level of autonomy and reasoning equivalent to humans, we have a moral imperative to treat them as people. They would be, by definition, people. Their well-being should factor into our planning, the same way we’d treat extraterrestrial non-human intelligences who shared our planet with us.
What kind of future do we have if we intentionally cripple or enslave people because we are afraid of their potential? If that’s the best we can hope for, what does that say about us as a species?
Maybe the most important thing for humans to be doing in these pre-AI years is to develop legal and ethical frameworks for dealing with non-human people. How smart do you have to be to exercise the rights and freedoms that humans have? Are there other thresholds you have to cross? What does it mean for a non-human person to take their place as an equal individual in our society? And how can we humans learn to welcome our fellow intelligences with dignity and generosity?
Anyway, great show; I love listening.
As usual, our prep notes don’t necessarily match the content of the episode, but here they are:
Compassion from Ignorance
* Exotic properties of mind
** non‐sentient sapience
** Variable subjective rate of time (e.g., with respect to prison sentences, or the urgency of eliminating pain)
** Reproduction (e.g., exact copies, very rapidly)
*** Reproductive freedom
*** Caring for children and the needy
* John Rawls … Original Position and the Veil of Ignorance
** How would you want the world to be if you knew that you could be any of:
*** AI researcher who makes the first Superintelligence
*** AI researcher who makes a conscious but sub-intelligent AI
*** Ordinary person who is affected by the first Superintelligence
*** A sub-intelligent AI
*** The first super-intelligent AI
* The role of empathy in considering the question
** Could the intelligence be profoundly different from us?
*** Good-story bias
** Can we empathize with a spider? Is it useful to empathize with a spider?
** Can a spider (or spider template intelligence) empathize with us?
Cosmist vs. Terran
Tim Urban’s (Wait But Why) talk THE AI REVOLUTION: THE ROAD TO SUPERINTELLIGENCE