The AI Does Not Hate You – A story about Rationalists

The first thing that’s useful to know about Tom Chivers’ book The AI Does Not Hate You is that most of the book is not about AI. It is a book about the Rationalist movement and its figureheads.

Let me save you some time by summarizing the AI narrative of the book.
The Rationalists believe that within the next 50 to 100 years we might be able to create an AI that is much smarter than the smartest human. When this happens, they believe there is a significant chance that the AI will either make us immortal or wipe us all off the face of the earth. Immortality in this context does not mean keeping our bodies going for all eternity, but rather something like extracting your brain and storing it in an external system that is much easier to preserve than a body.

It’s unlikely that the AI’s ultimate goal would be to make humans extinct. More likely we would simply be in the way of the actual goal it is trying to achieve: we might be using up resources that it feels could be put towards its assigned task in a bigger, better, more efficient way. The recurring comparison for an overly focused AI causing problems, and perhaps even extinction, is the bewitched broom that fills the cauldron in Disney’s The Sorcerer’s Apprentice. The broom fills up the cauldron but doesn’t stop when it’s full, completely flooding the place. When Mickey chops up the broom to try to stop it, the little bits of broom all turn into complete individual brooms, flooding the place even quicker than the one broom did before.

Eliezer Yudkowsky, one of the key figures of the Rationalist movement, came up with the summary that inspired the title of the book: “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else”. In the early stages of the movement, Yudkowsky set out to explain what AI is and why he feels it poses an existential threat. While writing he felt he had to explain a lot of underlying or slightly related concepts first. Whatever we think of him, he certainly wasn’t work-shy: he simply started at the beginning and wrote a series of blog posts, now called the Sequences. Their total volume is significantly bigger than the combined books of the Lord of the Rings trilogy.

Rationalists believe that trying to mitigate the risk of an AI killing us all is worth a lot of time and money. They come to this conclusion by evaluating the risk in a very rational way (this won’t surprise you), but they make a few assumptions that I personally wouldn’t make. The most important one is that they assign the same value to a potential future human life as to the life of someone living today. They argue that the number of potential future lives that could be “lost” if we go extinct in the next century or so is huge, and that because of this, even if there is only a tiny chance that AI might kill us all, it is worth investing a lot in steps towards preventing extinction.
To make the number of potential future lives that could be lost as large as possible, they assume that we will eventually live not just on earth, but also on many other planets and spaceships.
I must admit that this was enough for me to mentally file the Rationalist movement under “slightly out of touch with reality”. I’d rather invest in other things, climate change being one that is top of mind.

Chivers spends a lot of time discussing whether the Rationalists are a cult. Personally, I don’t care. Whether or not the Rationalists are a cult has nothing to do with the risk that AI poses to humanity, and they don’t seem to be forcing their views onto anyone. In fact, based on this book I can only conclude that they feel most of the world isn’t smart enough to understand them, so there’s no point trying to convince it.
In many places in the book Chivers comes across as a bit of a Rationalist fanboy who gets to play along with people in the movement, while in other places he positions himself as the less nerdy, more streetwise outsider.

Many Rationalists are polyamorous, and as long as it’s consensual that is entirely up to them. I don’t even want to know, especially not while reading a book about AI. Chivers, however, also discusses some cases where people in the movement were accused of sexual abuse and abuse of power, only to dismiss these cases very quickly as being “no worse than in other communities”.
This was almost enough to make me stop reading the book.

Leaving a book half-read annoys me even more than Chivers’ dismissal of the abuse accusations or the discrepancy between the book’s title and its contents, so I did finish it in the end. I learned a bit about AI and more than I bargained for about Rationalists. There are some interesting bits in the book about human brains and biases, and an interesting explanation of the fact that many arguments are disagreements about labels rather than disagreements about content.
However, if you are interested in AI I would recommend that you pick a different book. If you want to learn about the Rationalists from someone who loves the movement and its ideas, I can highly recommend this one.
