We haven’t found strong arguments on the “Don’t be worried. Here’s why …” side of things. We know the arguments must exist, but we can’t find them (send them to us!). So, what to do? Make some arguments up, that’s what!
- Size matters (an AI can’t get too big and still be a “self,” so there’s no reason for it to wipe us out to use our atoms)
- There is no “they.” Through augmentation there is no AI outside of us; humanity itself becomes the AI.
Random notes we used to organize our thoughts, not all of which made it into the episode:
Jerry Kaplan … we are framing AI wrong. The main reason people build AI software is to automate tasks, not to create general intelligence, so the real disruptions will be economic (see his book Humans Need Not Apply). He asserts that worrying about the robot apocalypse misses the point of what AI is about. “A reminder that AI systems don’t need red laser eyes to be dangerous.”
Moore’s law is over. All exponentials die. We’ve never yet seen one that doesn’t come to an end.
All progress comes from the human population explosion, and that explosion is over, so the rate of progress is actually slowing down.
There will be many AIs, and they will keep each other in check. This one only helps us if we assume there are other intelligences in the universe and that the AI from here won’t be able to take over all of them.
They could but they won’t want to:
* There is no they because it will be us.
* It will be better than us, and something better than us wouldn’t kill us.
* As long as ants aren’t bothering us, we tend to leave them alone. This might also be true for how superhuman AIs treat us.
They fragment when they get too big, so they have less incentive to take over everything.
They could use all of our matter without disorganizing us.
Ted: It will be us.
Brandon: They fragment when they get too big.