"Rationality: From AI to Zombies" by Eliezer Yudkowsky

Review by Borodutch

This is the second Eliezer Yudkowsky book I've read, and it took me more than fifty hours to finish. The previous one was the famous/infamous HPMoR; hopefully, I'll also translate my review of that one for this blog. For now, though, let's focus on the book in question.

Many know Eliezer for his contributions to the world of rationality at LessWrong. Some know him as an AI researcher. I now know him as someone who lost his way in sophistry a while ago (or perhaps never found the way to simplicity in the first place). After studying his primary manuscripts, I now know precisely why the general population seems to hate the LessWrong community, Eliezer himself, and the rationality crowd.

At first, I blamed the hate on the haters; after all, it's easy to despise something for no reason, or because you don't understand it. Now, however, I see the main reason people reject the rationality club: it comes across as elitist in a terrible way.

An average person can't decipher the articles published on LessWrong. People don't understand Eliezer because he doesn't put enough effort into explaining concepts in simple, concise terms. And he surely can do this!

For instance, the three main concepts I've taken away after spending over a hundred hours studying the author's materials are:

  1. The planning fallacy matters for far more than just estimating timeframes for tasks.
  2. Two rationalists can never "agree to disagree": given the same inputs, ideal reasoners always produce the same outputs (Aumann's agreement theorem; see the sketch after this list).
  3. The Eastern fiction theme of first having someone to protect, as opposed to the Western one of first gaining power and then "saving the world."
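Point two is essentially Aumann's agreement theorem, which Eliezer covers at length. Here's a minimal Python sketch of the underlying intuition (my own illustration, not the author's): two ideal Bayesians who share a prior and observe the same evidence necessarily compute the same posterior.

```python
# A minimal sketch of the "same inputs, same outputs" intuition behind
# Aumann's agreement theorem: two ideal Bayesians who share a prior and
# see the same evidence must end up with the same posterior belief.

def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Both "rationalists" start from the same prior and observe the same evidence...
alice = posterior(prior=0.3, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
bob = posterior(prior=0.3, p_evidence_if_true=0.8, p_evidence_if_false=0.2)

# ...so their posteriors are identical: no room left to "agree to disagree".
assert alice == bob
print(alice)  # ~0.632
```

Of course, real people keep disagreeing because they hold different priors and see different evidence; the theorem only bites for idealized reasoners.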

All three concepts occupy very little space in the book, yet I remember them vividly and even use them in everyday life. The rest of my time, unfortunately, was probably spent in vain.

But, again, I'm pretty biased here: I've written a book on rationality myself and learnt most of the concepts the author explains a while ago. It felt a bit boring, especially given how many times Eliezer repeats himself, making the same mistake of over-complicating the explanations each time.

And this is my main criticism of Eliezer, LessWrong, and the rationality community: stop over-complicating stuff. People don't have time to read your six-thousand-word essays on how you should wait five minutes before proposing solutions to a non-trivial problem.

If you can explain a concept in less than a tweet, you should explain it in less than a tweet. There is no need to expand an explanation further than necessary, and no need to pick complicated examples over simple ones.

Just as the truth doesn't make anything worse by being revealed, a fact doesn't become any less true when explained more briefly. It only becomes more accessible and easier for the general public to accept.

Overall, I firmly believe the author could cut the whole book by 95% and still deliver the same value to the public. The number of distinct concepts presented simply isn't that large!

Would I recommend anyone read "Rationality: From AI to Zombies"? Nope. Actually, at this point I wouldn't even recommend reading LessWrong. I now see how this style of "explaining things better" has hurt the cause of rationality in the world.

Instead of trying to sound wise (funnily enough, Eliezer himself points this out in one of the sections) by overcomplicating the lingo and drawing out the explanations, the author should take a page from the people he respects the most: the bright minds of cognitive psychology.

Just think of how well Kahneman or Baumeister explain stuff to their audiences. Do this. Don't do "Rationality: From AI to Zombies" again, kids.