0004: Prioritization of Risks

Our notes, compiled in preparation for recording this episode. Much of this we didn't end up talking about.

Outline
– Define existential threat
– Diagram
– Examples … the simulation being shut down, dark forest
– Climate Change
– Scale, Tractability, Neglectedness, Fit
– Is Concerning AI on track?
– Next episode:
– Paths from here to AGI … Connectionist (Neural nets), Computationalist (Symbolic), Robotics (embodied) … Uploading … big data … other paths

– Talk about expected value of future generations
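A back-of-the-envelope sketch of the expected-value argument (the numbers here are our illustrative assumptions, not figures from the episode): if the number of potential future lives is astronomically large, then even a tiny reduction in extinction probability carries enormous expected value.

```python
# Toy expected-value calculation for reducing existential risk.
# Both numbers below are illustrative assumptions, not established estimates.

future_lives = 10**16    # assumed potential future lives if humanity survives
risk_reduction = 1e-8    # assumed cut in extinction probability (one in 100 million)

# Expected lives saved = (change in survival probability) * (lives at stake)
expected_lives_saved = risk_reduction * future_lives

print(f"{expected_lives_saved:,.0f} expected lives saved")  # → 100,000,000
```

The point of the toy math: the conclusion is dominated by the size of the future, so the argument is sensitive to how seriously you take very large, very uncertain numbers.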

Ep 0004
– Prioritization of Risk
– Climate Change vs. AI explosion
– worst case scenarios (sudden catastrophic)
– inertia of the system, how perturbable by small pushes
– like worrying about overpopulation on Mars? (Andrew Ng)

The Cambridge Centre for the Study of Existential Risk (http://cser.org) says the “greatest threats” to the human species are man-made: artificial intelligence, global warming, nuclear war, and rogue biotechnology.

The Future of Humanity Institute's survey gave a median 19% chance of human extinction before 2100, including:
– 5% molecular nanotechnology weapons
– 5% superintelligent AI
– 4% wars

Nick Bostrom, paraphrased … how many other people are already studying it? The more people already studying a problem, the less difference a few extra minds make.
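One way to make the neglectedness point concrete is a toy diminishing-returns model. This sketch assumes total research progress grows like log(1 + n) in the number of researchers — a common but debatable modeling assumption, not anything Bostrom specifies:

```python
import math

def marginal_value(n_researchers, total_value=1.0):
    """Toy model: if cumulative progress ~ log(1 + n), the marginal
    value of one additional researcher shrinks as n grows."""
    return total_value * (math.log(2 + n_researchers) - math.log(1 + n_researchers))

# The 10th extra mind matters far more than the 10,000th:
print(marginal_value(10))      # ~0.087
print(marginal_value(10_000))  # ~0.0001
```

Under this assumption, a field with ten researchers offers roughly a thousand times the marginal impact of a field with ten thousand — which is the intuition behind favoring neglected causes.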

From 80,000 Hours … in brief, assess causes using the following four factors:
– Scale – If we made significant progress on this problem, would the world become a little better, or much better?
– Tractability – How easy is it to make progress on this problem?
– Neglectedness – How many resources are already being dedicated to tackling this problem?
– Personal fit – Given your skills, resources, knowledge, connections and passions, how likely are you to excel in this area?
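The four factors can be folded into a rough comparative score. A minimal sketch — the additive scoring scheme and every rating below are our illustrative assumptions, not 80,000 Hours' published rubric or actual assessments:

```python
def cause_score(scale, tractability, neglectedness, fit):
    """Toy additive score: each factor rated 0-10, higher is better.
    Assumes ratings are roughly log-scaled, so adding them approximates
    multiplying the underlying quantities."""
    return scale + tractability + neglectedness + fit

# Made-up ratings purely for illustration.
causes = {
    "AI risk":        cause_score(scale=9, tractability=3, neglectedness=8, fit=5),
    "Climate change": cause_score(scale=7, tractability=5, neglectedness=2, fit=5),
}

ranked = sorted(causes, key=causes.get, reverse=True)
print(ranked)
```

The interesting property of any scheme like this is that neglectedness can outweigh scale: a smaller problem nobody is working on can outrank a bigger, crowded one.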

Low-hanging fruit.

Examples of well-studied topics:
– Peace vs War
– Climate change

http://www.existential-risk.org/faq.html
– If the field of existential risks mitigation has suffered from neglect and apathy, it is probably not because the topic is gloomy. Rather, part of the explanation might be because the topic can seem silly and/or impersonal. The topic can seem silly because the fact that there has never been an existential catastrophe makes the possibility of one seem far-fetched, because the biggest existential risks are all rather speculative and futuristic, because the topic has been besieged by doom-mongers and crackpots, and because there is as yet no significant tradition of serious scholars and prestigious institutions doing careful high-quality work in this area. The topic can seem impersonal because there are no specific identifiable victims — no heart-rending images of child casualties, for example. The main dangers seem to be abstract, hypothetical, and non-imminent, and to be the responsibility of nobody in particular.

Bostrom … would you have warning https://www.youtube.com/watch?v=h9LaSfq64E8

Bostrom … expert survey … median estimate: 50% chance of human-level AI by 2040 or 2050 (depending on the group surveyed) … https://www.youtube.com/watch?v=aheywElK_U8

2013 Martin Rees
Denial of Catastrophic Risks
– We’ve survived earthquakes for thousands of years, but we don’t have a long time base for evaluating these human-generated threats.
– We’re entering an era when a few individuals could, via error or terror, trigger societal breakdown.
– True, it is hard to quantify the potential “existential” threats from (for instance) bio- or cybertechnology, from artificial intelligence, or from runaway climatic catastrophes.
– (Paraphrased) We don’t know, because we haven’t done the studies yet: bio, cyber, nano, artificial intelligence, and runaway climate and environmental risks.

2002 Nick Bostrom
Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards
Faculty of Philosophy, Oxford University
http://www.nickbostrom.com/existential/risks.pdf
– scope, intensity, and probability
– Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential

– nanotechnology
– nuclear holocaust
– we are a simulation that can be shut down http://m.space.com/30124-is-our-universe-a-fake.html
– badly programmed super intelligence
– doomsday virus
– grey goo
– something unforeseen
– physics disasters
– naturally occurring diseases (e.g., a pathogen as lethal as HIV but as contagious as the common cold)
– asteroid or comet
– runaway global warming
– Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence. The main possible exception to this is if we think that it is important that we get to superintelligence via uploading rather than through artificial intelligence. Nanotechnology would greatly facilitate uploading [39].

2008 Eliezer Yudkowsky
Artificial Intelligence as a Positive and Negative Factor in Global Risk
http://intelligence.org/files/AIPosNegFactor.pdf

Hawking’s AMA Answers (wrt AI)
– You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

– It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

– The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

– You’re right that we need to avoid the temptation to anthropomorphize and assume that AIs will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

audio at the end about terrifying AI:
http://www.thetalkingmachines.com/blog/2015/7/2/solving-intelligence-and-machine-learning-fundamentals
