0049: After On by Rob Reid
Read After On by Rob Reid, before you listen or because you listen.
We need better language to talk about these difficult technical topics. See https://concerning.ai/2017/03/31/0039-we-need-more-sparrow-fables/ for notes.
Do we need to do philosophy on a deadline? Can AI help make us better humans?
Nick Bostrom’s Superintelligence
Fiction from Liu Cixin: The Three-Body Problem, The Dark Forest, Death’s End
We’re a lot more beautiful if we try. (5:41)
The Upward Spiral (9:45)
Are we getting any wiser? (12:43)
What are we trying for? To continue an aesthetic lineage. (13:55)
Kurzweil. When the machines tell us they are human, […]
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0026-2016-09-18.mp3
These notes won’t be much use without listening to the episode, but if you do listen, one or two of you might find one or two of them helpful. Lyle Cantor’s comment (excerpt) in the Concerning AI Facebook group: Regarding the OpenAI strategy of making sure we have more than one superhuman AI, this is […]
Fiction is fun. And we can’t rely on it to help us figure out what’s going to happen.
Ben’s frustrated with us. Let’s see if we can figure out why.
How might we get to superintelligence? This episode explores some possible paths, or maybe simply directions.
Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.