Listener Feedback this episode
Mostly a listener feedback episode. Lots of great stuff here!
We need better language to talk about these difficult technical topics. See https://concerning.ai/2017/03/31/0039-we-need-more-sparrow-fables/ for notes.
Listener Voicemail & Comments
- Eric’s voicemail
- Evan’s comment (our interviews with Evan: ep 0011: Evan Prodromou, AI practitioner (part 1); ep 0012: Evan Prodromou, AI practitioner (part 2))
- John’s comment
- Ted got the author’s name wrong: Predictably Irrational by Dan Ariely
(25:12) Moving on from feedback into what’s going to get us from here to there: Instantaneous Judgement (Stimulus-Response), Reinforcement […]
Wind up your propeller hats! This one is a doozy. Hopefully someone can explain it to me (Ted).
Too time-constrained for show notes this time. If you want to send us notes to be added here, please do! The best place to reach us is the Concerning AI group on Facebook. All of the listener feedback in this episode comes from that group. Thank you all! Subscribe in Overcast, iTunes, or through […]
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0026-2016-09-18.mp3 These notes won’t be much use without listening to the episode, but if you do listen, one or two of you might find one or two of them helpful. Lyle Cantor’s comment (excerpt) in the Concerning AI Facebook group: Regarding the OpenAI strategy of making sure we have more than one superhuman AI, this is […]
Human augmentation may be a way for humans to advance on par with non-biological beings (AIs), but do ethical guidelines make that less likely to happen?
Ben’s frustrated with us. Let’s see if we can figure out why.