The Best Books I’ve Read This Year


Maybe 15 years ago I read William Gibson’s Neuromancer and hated it. At some point after that my brain started confusing Gibson with Neal Stephenson, with the result that I would not touch any of Stephenson’s books, until this year, when an overwhelming number of recommendations had finally accumulated. I relented and picked up “Anathem”, expecting confused and unfinished sentences; instead, I was hooked right from the start.

“Anathem” is set on a world that is home to an ancient system and culture of secular[1] monasteries, dedicated to science and philosophy, and carefully sealed from the rest of civilization. Most of the plot plays out in those monasteries, slowly revealing their weird culture, such as how even the monasteries are strictly separated into parts that open every ten years, every hundred years, and every thousand years, respectively.

The civilizations outside the monasteries are volatile, sometimes reverting, it seems, from a modern culture to medieval barbarism within decades, something that’s never explained. The book starts off with a tenner monk interviewing somebody from the outside world, in preparation for the opening of the tenner part of the monastery, leading off with a question from a script for making contact: “Do your neighbors burn one another alive?”

During the time the monastery is open, a significant and shocking discovery is made, the nature of which would be a spoiler if you haven’t read the book. It took me from appreciating the world Stephenson had built, and being curious about its many intricacies, to being totally engrossed in the plot and longing for explanations, which Stephenson, through his characters’ philosophical discussions, does provide.

I proceeded to read “Cryptonomicon” and “Seveneves”, but out of those three “Anathem” was by far the strongest book for me.

Zero to One

This is a book about entrepreneurship and innovation, by Peter Thiel, one of PayPal’s founders. He puts out a lot of ideas, not all of which I agree with, but many of which I found good food for thought.

One of Thiel’s central theses is that only monopolies can innovate. Businesses in competition have too much on their hands just to survive and cannot do long-term development:

In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money.

A monopoly is not necessarily a huge business. Most successful start-ups begin very small but they’re the only ones providing a particular service. In their domains, they are monopolies.

To get there, according to Thiel, you need a definite outlook on the future—having an idea of where things are going. He believes that most of Western society today is indefinite, explaining the prevalence of finance:

Finance epitomizes indefinite thinking because it’s the only way to make money when you have no idea how to create it.

But in an indefinite world, people actually prefer unlimited optionality; money is more valuable than anything you could possibly do with it.

We place far too much importance on “well-rounded” education:

a definite person determines the one best thing to do and then does it. Instead of working tirelessly to make herself indistinguishable, she strives to be great at something substantive—to be a monopoly of one.

This resonates with me, having had to waste time at school studying parochial subjects such as Austrian history and geography or Latin, when I would have been better off just doing math and English, and having the rest of my time for playing with computers.

Another central theme in the book is secrets. A secret is something you know but nobody else yet knows, or believes. It’s an essential ingredient for a start-up. Discovering secrets might seem daunting, but Thiel is optimistic:

There’s plenty more to learn: we know more about the physics of faraway stars than we know about human nutrition.

Thing Explainer

Bill Gates wrote a wonderful review of Randall Munroe’s “Thing Explainer”, a book that uses illustrations and only the one thousand most used English words to explain difficult concepts and products.

The language is not just a gimmick. Avoiding difficult terminology forces Munroe to reduce the inferential distance to his readers, making even difficult concepts easy to grasp. His explanation of turbofan jet engines—sky boat pushers—is much better than the one on the Wikipedia page.

Sometimes the reduction of vocabulary makes banal statements utterly hilarious, like when, describing “bags of stuff inside you”, he writes about the mouth:

It’s where air goes in and out, food goes in, and words come out. Note: Some people don’t like it when you make words come out while you’re putting food in.

Aside from being entertained, I learned quite a few things. For example, in nuclear reactors, there is a room below the core:

If there are problems and everything is on fire, the special metal can get so hot that it starts moving like water. Sometimes, it can get hot enough to burn a hole through the floor. If that happens, this room is here so the watery metal can fall down and then spread out over the floor.

It’s good if the metal can spread out, since when it’s all close together, it keeps making itself hotter. If this room ever gets used, it means everything has gone very, very wrong.

I read the Kindle edition on my iPad, which worked very well, except that the page explaining operating rooms—rooms for helping people—was broken. I hope that’ll get fixed in an update soon.

Rationality: From AI to Zombies

This book is a compilation of Eliezer Yudkowsky’s writings on the topics of rationality and artificial intelligence. It contains a lot of wisdom. As far as I know, all of it can be found online, but the book provides a nice structure.

I have highlighted and annotated more passages in this book than in any other by far. This is but a small selection.

Yudkowsky is big on separating the map, your beliefs, from the territory, reality. A lot of “philosophy” and deep-sounding “wisdom” vanishes when you do, like some people’s love of mystery:

But ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious is to worship your own ignorance.

The ontological argument is just one weird application of this confusion, treating the map (your idea of God) like territory that just lacks existence. It’s as if the shape I drew into the Pacific Ocean part of my globe were a full-fledged continent whose only drawback was that it doesn’t exist.

This carries over to other topics, like in his discussion of quantum mechanics (he’s a firm proponent of the Many-worlds “interpretation” of QM, as he should be):

Quantum physics is not “weird.” You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality’s, and you are the one who needs to change.

If we want our map to reflect the territory as best we can, we had better use words whose boundaries are boundaries in the real world, too. Hence we should be careful about how we define words:

Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity. Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories.

Many chapters deal with cognitive biases and how to combat them. For example, when confronted with a problem, hold off on proposing solutions, because

Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.

Some beliefs are very important to us, and yet we might be wrong about them. How can we even begin to evaluate those beliefs dispassionately?

You would advise a religious person to try to visualize fully and deeply the world in which there is no God, and to, without excuses, come to the full understanding that if there is no God then they will be better off believing there is no God. If one cannot come to accept this on a deep emotional level, one will not be able to have a crisis of faith. So you should put in a sincere effort to visualize the alternative to your belief, the way that the best and highest skeptic would want you to visualize it.


it is wisest to regard our past selves as fools beyond redemption— to see the people we once were as idiots entire. […] As long as we are making excuses for the past, trying to make it look better, respecting it, we cannot make a clean break.

This obviously applies to non-religious beliefs, too.

A lot of discussion about politics:

Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back—providing aid and comfort to the enemy.

This is why I have a hard time watching documentaries on politicized issues. They are usually done by a proponent of one side of the issue, and they will lie through their teeth to make their side look good. In reality, it’s rarely the case that one proposed solution has no downsides:

If you defend yourself, you may have to kill. If you kill someone who could, in another world, have been your friend, that is a tragedy. And it is a tragedy. The other option, lying down and dying, is also a tragedy. Why must there be a non-tragic option? Who says that the best policy available must have no downside?

Somewhat related to this is a post by Scott Alexander, whose blog you should definitely read.

Something that I worry about, too, in the chapter Superstimuli and the Collapse of Western Civilization:

If people have the right to be tempted—and that’s what free will is all about—the market is going to respond by supplying as much temptation as can be sold.

Paul Graham also wrote about this.

A big thing Yudkowsky is wrong about is Bayesianism and positivism. The central idea of positivism:

Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry.

As David Deutsch points out, this very idea, that only beliefs about physical observables are in any way relevant, is itself a belief that is not about physical observables, shooting itself in the foot.

The Bayesian in Yudkowsky comes through here:

Rationality is not for winning debates, it is for deciding which side to join.

Sometimes, all sides are wrong, and only a new idea will help, yet Bayesianism is very quiet on the topic of the generation of new hypotheses. Again, see David Deutsch for more arguments why science is not Bayesian.

Becoming Steve Jobs

Walter Isaacson’s biography was badly received by the cognoscenti, so I didn’t bother reading it and can’t draw any comparisons to Brent Schlender and Rick Tetzeli’s “Becoming Steve Jobs”, but I can definitely recommend the latter.

This is not a collection of anecdotes about Jobs, although it does have its fair share. It’s the story of a person who learned and changed.

The story starts with the visionary product creator who led the development of the Apple II and the Macintosh but failed to command the then much larger company Apple, ultimately being fired by the board. With NeXT he created yet another technological masterpiece but failed to bring the company to success, for a variety of reasons, while at the same time helping Pixar become what it is today. He emerged having learned from his failures, taking the helm at Apple again, but this time succeeding spectacularly, having become the Steve Jobs he’s remembered as now.

What stayed most with me from this book was Jobs’ ability to completely change his mind, very quickly, and damn the sunk costs:

On the car ride over to the prototype hangar, Johnson told Steve that he thought they’d gotten it all wrong. “Do you know how big a change this is,” Steve roared. “I don’t have time for this. I don’t want you to say a word to anyone about this. I don’t know what I think of this.” They sat for the rest of the short ride in silence.

When they arrived at the hangar, Steve spoke to the assembled group: “Well,” he said, “Ron thinks we’ve designed our stores all wrong.” Johnson waited to hear where this line of thought would go. “And he’s right,” said Steve, “so I’m going to leave now and you should just do what he’s going to tell you to do.” And Jobs turned around and left.

  1. Stephenson doesn’t help by calling the world outside the monasteries “saecular”, when it clearly isn’t secular. It takes a while to get used to his invented words here, but it’s worth it.  ↩

Polish History

Often, when working on private branches with long histories, it so happens that a commit somewhere in between breaks the build, or some tests don’t pass anymore. If you know which commit it is you can fix it with git-rebase. But maybe you don’t know. Maybe you aren’t even sure whether all the commits in your history build and pass the tests.

git-polish-history to the rescue!

Let’s say you are on a private feature branch with quite a few commits that you are unsure about. The last one that’s been pushed is last-pushed-commit. To check whether your code compiles and passes all the tests you can run make check, but you don’t want to do that manually for each commit, so let’s automate it:

git polish-history --test="make check" start last-pushed-commit

git-polish-history will now go through all commits since last-pushed-commit and run make check on them. On the first one that fails it will stop and tell you to fix the problem. You can do this any way you want as long as you commit your changes. Typically you’d amend the last commit. Once you’re done, you do
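The “fix and amend” step is just ordinary git. Here is a minimal, self-contained sketch of it in a throwaway repository (the file name and commit message are made up for illustration; in a real session you would be inside your own branch, at the commit git-polish-history stopped on):

```shell
set -e
# Create a throwaway repo with one "broken" commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@example.com -c user.name=Example \
    commit -q --allow-empty -m "broken commit"

# Make the fix in the working tree, stage it...
echo "fix" > file.txt
git add file.txt

# ...and fold it into the existing commit instead of creating a new one.
git -c user.email=a@example.com -c user.name=Example \
    commit -q --amend --no-edit

git log --oneline   # history still shows a single, now-fixed commit
```

Any other way of committing the fix works too; amending is just the most common one, since it keeps the history clean.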

git polish-history continue

and the process continues. It will stop again whenever a commit fails the test, when a merge conflict occurs, or when it’s done. If you want it to do the committing of your changes automatically, use

git polish-history continue --automatic

Here’s a video with a demonstration: