0067: The OpenAI Charter (and Assassination Squads)
We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?
Timeline For Artificial Intelligence Risks
Peter’s Superintelligence Year predictions (5%, 50%, and 95% chance): 2032 / 2044 / 2059
You can get in touch with Peter at HumanCusp.com and Peter@HumanCusp.com
For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3
There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be.
Wikipedia’s list of cognitive biases
AlphaZero
Virtual Reality
Recorded January 7, 2018; originally posted to Concerning.AI
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3
Great voice memos from listeners led to interesting conversations.
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0026-2016-09-18.mp3
These notes won’t be much use without listening to the episode, but if you do listen, one or two of you might find one or two of them helpful.
Lyle Cantor‘s comment (excerpt) in the Concerning AI Facebook group: Regarding the OpenAI strategy of making sure we have more than one superhuman AI, this is […]
Some things we talked about:
Companies developing narrow AI without giving a thought to AI safety, because just getting the thing to work at all is really hard
Self-driving cars and how fast they’re progressing
The difference between OpenAI and MIRI in approach
Weaponized offense and defense
Eliezer’s thought about why that misses the point
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0025-2016-09-04.mp3
Existential risk – one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential