Everything Is F*cked: A Book About Hope: Part 2 – Chapter 9

The Final Religion

In 1997, Deep Blue, a supercomputer developed by IBM, beat Garry Kasparov, the world’s best chess player. It was a watershed moment in the history of computing, a seismic event that shook many people’s understanding of technology, intelligence, and humanity. But today, it is but a quaint memory: of course a computer would beat the world champion at chess. Why wouldn’t it?

Since the beginning of computing, chess has been a favorite means to test artificial intelligence.¹ That’s because chess possesses a near-infinite number of permutations: there are more possible chess games than there are atoms in the observable universe. In any board position, if one looks only three or four moves ahead, there are already hundreds of millions of variations.
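The scale of that explosion is easy to check with a quick back-of-the-envelope calculation. The figure of 35 legal moves per position used below is a commonly cited rough average for chess, not an exact number:

```python
# Back-of-the-envelope growth of the chess game tree.
# Assumes a rough average of 35 legal moves per position,
# a commonly cited estimate for chess; the true count
# varies from position to position.
BRANCHING_FACTOR = 35

for full_moves in (1, 2, 3, 4):
    plies = 2 * full_moves  # one full move = a turn for each player
    positions = BRANCHING_FACTOR ** plies
    print(f"{full_moves} move(s) ahead: roughly {positions:,} positions")
```

Even at this crude estimate, looking three full moves ahead already yields almost two billion positions.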

For a computer to match a human player, not only must it be capable of calculating an incredible number of possible outcomes, but it must also have solid algorithms to help it decide what’s worth calculating. Put another way: to beat a human player, a computer’s Thinking Brain, despite being vastly superior to a human’s, must be programmed to evaluate which board positions are more and less valuable—that is, the computer must have a modestly powerful “Feeling Brain” programmed into it.²
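A minimal way to see that split is a toy minimax search. The hand-built tree and leaf scores below are illustrative stand-ins, not how Deep Blue or Stockfish actually represent chess: the leaf numbers play the role of the “Feeling Brain” (a programmed sense of how good a position is), and the recursion plays the role of the “Thinking Brain” (the raw calculation):

```python
# Toy minimax: the "Thinking Brain" searches the tree, while the
# "Feeling Brain" supplies the scores at the leaves. Internal
# nodes are lists of child subtrees (one per candidate move);
# leaves are evaluation scores from the maximizing player's view.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: an evaluation score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two candidate moves; each leads to two possible opponent replies.
tree = [
    [3, -2],   # move A: the opponent will steer toward -2
    [1, 4],    # move B: the opponent will steer toward 1
]
print(minimax(tree, maximizing=True))  # prints 1: move B is the safer choice
```

The point of the sketch: raw calculation alone cannot pick a move. Something has to assign those leaf scores, and that something is the engine’s programmed sense of value.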

Since that day in 1997, computers have continued to improve at chess at a staggering rate. Over the following fifteen years, the top human players regularly got pummeled by chess software, sometimes by embarrassing margins.³ Today, it’s not even close. Kasparov himself recently joked that the chess app that comes installed on most smartphones “is far more powerful than Deep Blue was.”⁴ These days, chess software developers hold tournaments for their programs to see whose algorithms come out on top. Humans are not only excluded from these tournaments, but they’d likely not even place high enough for it to matter anyway.

The undisputed champion of the chess software world for the past few years has been an open-source program called Stockfish. Stockfish has either won or been the runner-up in almost every significant chess software tournament since 2014. A collaboration between half a dozen lifelong chess software developers, Stockfish today represents the pinnacle of chess logic. Not only is it a chess engine, but it can analyze any game, any position, giving grandmaster-level feedback within seconds of each move a player makes.

Stockfish was happily going along being the king of the computerized chess mountain, being the gold standard of all chess analysis worldwide, until 2018, when Google showed up to the party.

Then shit got weird.

Google has a program called AlphaZero. It’s not chess software. It’s artificial intelligence (AI) software. Instead of being programmed to play chess or another game, the software is programmed to learn—and not just chess, but any game.

Early in 2018, Stockfish faced off against Google’s AlphaZero. On paper, it was not even close to a fair fight. AlphaZero can calculate “only” eighty thousand board positions per second. Stockfish? Seventy million. In terms of computational power, that’s like me entering a footrace against a Formula One race car.

But it gets even weirder: the day of the match, AlphaZero didn’t even know how to play chess. Yes, that’s right—before its match with the best chess software in the world, AlphaZero had less than a day to learn chess from scratch. The software spent most of the day running simulations of chess games against itself, learning as it went. It developed strategies and principles the same way a human would: through trial and error.
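That learning loop, playing yourself and updating your judgment from wins and losses, can be illustrated on a far smaller game. The sketch below teaches itself Nim (take one to three stones; whoever takes the last stone wins) through pure self-play. This is emphatically not AlphaZero’s actual method, which couples a deep neural network to Monte Carlo tree search, but the principle of trial-and-error self-improvement is the same:

```python
import random

# Self-play learner for Nim: start knowing only the rules, then
# improve by playing yourself and learning from the outcomes.
random.seed(0)

START = 10   # stones in the pile
values = {}  # pile size -> estimated win chance for the player to move

def best_move(stones, explore):
    """Take the move that leaves the opponent the worst position."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if explore and random.random() < 0.2:  # occasionally experiment
        return random.choice(moves)
    return min(moves, key=lambda m: values.get(stones - m, 0.5))

def self_play(games=20000, lr=0.1):
    for _ in range(games):
        stones, history = START, []
        while stones > 0:
            history.append(stones)
            stones -= best_move(stones, explore=True)
        # The player who just moved took the last stone and won;
        # walk backward, nudging each state's value toward the result.
        outcome = 1.0
        for state in reversed(history):
            old = values.get(state, 0.5)
            values[state] = old + lr * (outcome - old)
            outcome = 1.0 - outcome

self_play()
print(best_move(START, explore=False))  # the learned opening move
```

With no strategy programmed in, the learner discovers on its own that leaving the opponent a multiple of four stones is a losing position for them, which is exactly the known optimal strategy for this game.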

Imagine the scenario. You’ve just learned the rules of chess, one of the most complex games on the planet. You’re given less than a day to mess around with a board and figure out some strategies. And from there, your first game ever will be against the world champion.

Good luck.

Yet, somehow, AlphaZero won. Okay, it didn’t just win. AlphaZero smashed Stockfish. Out of one hundred games, AlphaZero won or drew every single game.

Read that again: a mere nine hours after learning the rules to chess, AlphaZero played the best chess-playing entity in the world and did not drop a single game out of one hundred. It was a result so unprecedented that people still don’t know what to make of it. Human grandmasters marveled at the creativity and ingenuity of AlphaZero. One, Peter Heine Nielsen, gushed, “I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.”⁵

When AlphaZero was done with Stockfish, it didn’t take a break. Pfft, please! Breaks are for frail humans. Instead, as soon as it had finished with Stockfish, AlphaZero began teaching itself the strategy game Shogi.

Shogi is often referred to as Japanese chess, but many argue that it’s more complex than chess.⁶ Whereas Kasparov lost to a computer in 1997, top Shogi players didn’t begin to lose to computers until 2013. Either way, AlphaZero destroyed the top Shogi software (called “Elmo”), and by a similarly astounding margin: in one hundred games, it won ninety, lost eight, and drew two. Once again, AlphaZero’s computational powers were far less than Elmo’s. (In this case, it could calculate forty thousand moves per second compared to Elmo’s thirty-five million.) And once again, AlphaZero hadn’t even known how to play the game the previous day.

In the morning, it taught itself two infinitely complex games. And by sundown, it had dismantled the best-known competition on earth.


News flash: AI is coming. And while chess and Shogi are one thing, as soon as we take AI out of the board games and start putting it in the board rooms . . . well, you and I and everyone else will probably find ourselves out of a job.⁷

Already, AI programs have invented their own languages that humans can’t decipher, become more effective than doctors at diagnosing pneumonia, and even written passable chapters of Harry Potter fan fiction.⁸ At the time of this writing, we’re on the cusp of having self-driving cars, automated legal advice, and even computer-generated art and music.⁹

Slowly but surely, AI will become better than we are at pretty much everything: medicine, engineering, construction, art, technological innovation. You’ll watch movies created by AI and discuss them on websites or mobile platforms built by AI and moderated by AI; the “person” you argue with may even turn out to be an AI.

But as crazy as that sounds, it’s just the beginning. Because here is where the bananas will really hit the fan: the day an AI can write AI software better than we can.

When that day comes, when an AI can essentially spawn better versions of itself, at will, then buckle your seatbelt, amigo, because it’s going to be a wild ride and we will no longer have control over where we’re going.

AI will reach a point where its intelligence outstrips ours by so much that we will no longer comprehend what it’s doing. Cars will pick us up for reasons we don’t understand and take us to locations we didn’t know existed. We will unexpectedly receive medications for health issues we didn’t know we suffered from. It’s possible that our kids will switch schools, we will change jobs, economic policies will abruptly shift, governments will rewrite their constitutions—and none of us will comprehend the full reasons why. It will just happen. Our Thinking Brains will be too slow, and our Feeling Brains too erratic and dangerous. Like AlphaZero inventing chess strategies in mere hours that chess’s greatest minds could not anticipate, advanced AI could reorganize society and all our places within it in ways we can’t imagine.

Then, we will end up right back where we began: worshipping impossible and unknowable forces that seemingly control our fates. Just as primitive humans prayed to their gods for rain and flame—the same way they made sacrifices, offered gifts, devised rituals, and altered their behavior and appearance to curry favor with the naturalistic gods—so will we. But instead of the primitive gods, we will offer ourselves up to the AI gods.

We will develop superstitions about the algorithms. If you wear this, the algorithms will favor you. If you wake at a certain hour and say the right thing and show up at the right place, the machines will bless you with great fortune. If you are honest and you don’t hurt others and you take care of yourself and your family, the AI gods will protect you.

The old gods will be replaced by the new gods: the algorithms. And in a twist of evolutionary irony, the same science that killed the gods of old will have built the gods of new. There will be a great return to religiosity among mankind. And our religions won’t necessarily be so different from the religions of the ancient world—after all, our psychology is fundamentally evolved to deify what it doesn’t understand, to exalt the forces that help or harm us, to construct systems of values around our experiences, to seek out conflict that generates hope.

Why would AI be any different?

Our AI gods will understand this, of course. And either they will find a way to “upgrade” our brains out of our primitive psychological need for continuous strife, or they will simply manufacture artificial strife for us. We will be like their pet dogs, convinced that we are protecting and fighting for our territory at all costs but, in reality, merely peeing on an endless series of digital fire hydrants.

This may frighten you. This may excite you. Either way, it is likely inevitable. Power emerges from the ability to manipulate and process information, and we always end up worshipping whatever has the most power over us.

So, allow me to say that I, for one, welcome our AI overlords.

I know, that’s not the final religion you were hoping for. But that’s where you went wrong: hoping.

Don’t lament the loss of your own agency. If submitting to artificial algorithms sounds awful, understand this: you already do. And you like it.

The algorithms already run much of our lives. The route you took to work is based on an algorithm. Many of the friends you talked to this week? Those conversations were based on an algorithm. The gift you bought your kid, the amount of toilet paper that came in the deluxe pack, the fifty cents in savings you got for being a rewards member at the supermarket—all the result of algorithms.

We need these algorithms because they make our lives easier. And so will the algorithm gods of the near future. And as we did with the gods of the ancient world, we will rejoice in and give thanks to them. Indeed, it will be impossible to imagine life without them.¹⁰ These algorithms make our lives better. They make our lives more efficient. They make us more efficient.

That’s why, as soon as we cross over, there’s no going back.


We Are Bad Algorithms


Here’s one last way to look at the history of the world:

The difference between life and stuff is that life is stuff that self-replicates. Life is made out of cells and DNA that spawn more and more copies of themselves.

Over the course of hundreds of millions of years, some of these primordial life forms developed feedback mechanisms to better reproduce themselves. An early protozoon might evolve little sensors on its membrane to better detect amino acids by which to replicate more copies of itself, thus giving it an advantage over other single-cell organisms. But then maybe some other single-cell organism develops a way to “trick” other little amoeba-like things’ sensors, thus interfering with their ability to find food, and giving itself an advantage.

Basically, there’s been a biological arms race going on since the beginning of forever. This little single-cell thing develops a cool strategy to get more material to replicate itself than do other single-cell organisms, and therefore it wins the resources and reproduces more. Then another little single-cell thing evolves and has an even better strategy for getting food, and it proliferates. This continues, on and on, for billions of years, and pretty soon you have lizards that can camouflage their skin and monkeys that can fake animal sounds and awkward middle-aged divorced men spending all their money on bright red Chevy Camaros even though they can’t really afford them—all because it promotes their survival and ability to reproduce.

This is the story of evolution—survival of the fittest and all that.

But you could also look at it a different way. You could call it “survival of the best information processing.”

Okay, not as catchy, perhaps, but it actually might be more accurate.

See, that amoeba that evolves sensors on its membrane to better detect amino acids—that is, at its core, a form of information processing. It is better able than other organisms to detect the facts of its environment. And because it developed a better way to process information than other blobby cell-like things, it won the evolutionary game and spread its genes.

Similarly, the lizard that can camouflage its skin—that, too, has evolved a way to manipulate visual information to trick predators into ignoring it. Same story with the monkeys faking animal noises. Same deal with the desperate middle-aged dude and his Camaro (or maybe not).

Evolution rewards the most powerful creatures, and power is determined by the ability to access, harness, and manipulate information effectively. A lion can hear its prey over a mile away. A buzzard can see a rat from an altitude of three thousand feet. Whales develop their own personal songs and can communicate with one another from up to a hundred miles away underwater. These are all examples of exceptional information-processing capabilities, and that ability to receive and process information is linked to these creatures’ ability to survive and reproduce.

Physically, humans are pretty unexceptional. We are weak, slow, and frail, and we tire easily.¹¹ But we are nature’s ultimate information processors. We are the only species that can conceptualize the past and future, that can deduce long chains of cause and effect, that can plan and strategize in abstract terms, that can build and create and problem-solve in perpetuity.¹² Out of millions of years of evolution, the Thinking Brain (Kant’s sacred conscious mind) is what has, in a few short millennia, dominated the entire planet and called into existence a vast, intricate web of production, technology, and networks.

That’s because we are algorithms. Consciousness itself is a vast network of algorithms and decision trees—algorithms based on values and knowledge and hope.

Our algorithms worked pretty well for the first few hundred thousand years. They worked well on the savannah, when we were hunting bison and living in small nomadic communities and never met more than thirty people in our entire lives.

But in a globally networked economy of billions of people, stocked with thousands of nukes and Facebook privacy violations and holographic Michael Jackson concerts, our algorithms kind of suck. They break down and enter us into ever-escalating cycles of conflict that, by the nature of our algorithms, can produce no permanent satisfaction, no final peace.

It’s like that brutal advice you sometimes hear, that the only thing all your fucked-up relationships have in common is you. Well, the only thing that all the biggest problems in the world have in common is us. Nukes wouldn’t be a problem if there weren’t some dumb fuck sitting there tempted to use them. Biochemical weapons, climate change, endangered species, genocide—you name it, none of it was an issue until we came along.¹³ Domestic violence, rape, money laundering, fraud—it’s all us.

Life is fundamentally built on algorithms. We just happen to be the most sophisticated and complex algorithms nature has yet produced, the zenith of about one billion years’ worth of evolutionary forces. And now we are on the cusp of producing algorithms that are exponentially better than we are.

Despite all our accomplishments, the human mind is still incredibly flawed. Our ability to process information is hamstrung by our emotional need to validate ourselves. It is curved inward by our perceptual biases. Our Thinking Brain is regularly hijacked and kidnapped by our Feeling Brain’s incessant desires—stuffed in the trunk of the Consciousness Car and often gagged or drugged into incapacitation.

And as we’ve seen, our moral compass too frequently gets swung off course by our inevitable need to generate hope through conflict. As the moral psychologist Jonathan Haidt put it, “morality binds and blinds.”¹⁴ Our Feeling Brains are antiquated, outdated software. And while our Thinking Brains are decent, they’re too slow and clunky to be of much use anymore. Just ask Garry Kasparov.

We are a self-hating, self-destructive species.¹⁵ That is not a moral statement; it’s simply a fact. This internal tension we all feel, all the time? That’s what got us here. It’s what got us to this point. It’s our arms race. And we’re about to hand over the evolutionary baton to the defining information processors of the next epoch: the machines.


When Elon Musk was asked what the most imminent threats to humanity were, he quickly said there were three: first, wide-scale nuclear war; second, climate change—and then, before naming the third, he fell silent. His face became sullen. He looked down, deep in thought. When the interviewer asked him, “What is the third?” he smiled and said, “I just hope the computers decide to be nice to us.”

There is a lot of fear out there that AI will wipe away humanity. Some suspect this might happen in a dramatic Terminator 2–type conflagration. Others worry that some machine will kill us off by “accident,” that an AI designed to innovate better ways to make toothpicks will somehow discover that harvesting human bodies is the best way.¹⁶ Bill Gates, Stephen Hawking, and Elon Musk are just a few of the leading thinkers and scientists who have crapped their pants at how rapidly AI is developing and how underprepared we are as a species for its repercussions.

But I think this fear is a bit silly. For one, how do you prepare for something that is vastly more intelligent than you are? It’s like training a dog to play chess against . . . well, Kasparov. No matter how much the dog thinks and prepares, it’s not going to matter.

More important, the machines’ understanding of good and evil will likely surpass our own. As I write this, five different genocides are taking place in the world.¹⁷ Seven hundred ninety-five million people are starving or undernourished.¹⁸ By the time you finish this chapter, more than a hundred people, just in the United States, will be beaten, abused, or killed by a family member, in their own home.¹⁹

Are there potential dangers with AI? Sure. But morally speaking, we’re throwing rocks inside a glass house here. What do we know about ethics and the humane treatment of animals, the environment, and one another? That’s right: pretty much nothing. When it comes to moral questions, humanity has historically flunked the test, over and over again. Superintelligent machines will likely come to understand life and death, creation and destruction, on a much higher level than we ever could on our own. And the idea that they will exterminate us for the simple fact that we aren’t as productive as we used to be, or that sometimes we can be a nuisance, I think, is just projecting the worst aspects of our own psychology onto something we don’t understand and never will.

Or, here’s an idea: What if technology advances to such a degree that it renders individual human consciousness arbitrary? What if consciousness can be replicated, expanded, and contracted at will? What if removing all these clunky, inefficient biological prisons we call “bodies,” or all these clunky, inefficient psychological prisons we call “individual identities,” results in far more ethical and prosperous outcomes? What if the machines realize we’d be much happier being freed from our cognitive prisons and having our perception of our own identities expanded to include all perceivable reality? What if they think we’re just a bunch of drooling idiots and keep us occupied with perfect virtual reality porn and amazing pizza until we all die off by our own mortality?

Who are we to know? And who are we to say?


Nietzsche wrote his books just a couple of decades after Darwin’s On the Origin of Species was published in 1859. By the time Nietzsche came onto the scene, the world was reeling from Darwin’s magnificent discoveries, trying to process and make sense of their implications.

And while the world was freaking out about whether humans really evolved from apes or not, Nietzsche, as usual, looked in the opposite direction of everyone else. He took it as obvious that we evolved from apes. After all, he said, why else would we be so horrible to one another?

Instead of asking what we evolved from, Nietzsche instead asked what we were evolving toward.

Nietzsche said that man was a transition, suspended precariously on a rope between two ledges, with beasts behind us and something greater in front of us. His life’s work was dedicated to figuring out what that something greater might be and then pointing us toward it.

Nietzsche envisioned a humanity that transcended religious hopes, that extended itself “beyond good and evil,” and rose above the petty quarrels of contradictory value systems. It is these value systems that fail us and hurt us and keep us down in the emotional holes of our own creation. The emotional algorithms that exalt life and make it soar in blistering joy are the same forces that unravel us and destroy us, from the inside out.

So far, our technology has exploited the flawed algorithms of our Feeling Brain. Technology has worked to make us less resilient and more addicted to frivolous diversions and pleasures, because these diversions are incredibly profitable. And while technology has liberated much of the planet from poverty and tyranny, it has produced a new kind of tyranny: a tyranny of empty, meaningless variety, a never-ending stream of unnecessary options.

It has also armed us with weapons so devastating that we could torpedo this whole “intelligent life” experiment ourselves if we’re not careful.

I believe artificial intelligence is Nietzsche’s “something greater.” It is the Final Religion, the religion that lies beyond good and evil, the religion that will finally unite and bind us all, for better or worse.

It is, then, simply our job not to blow ourselves up before we get there.

And the only way to do that is to adapt our technology for our flawed psychology rather than to exploit it.

To create tools that promote greater character and maturity in our cultures rather than diverting us from growth.

To enshrine the virtues of autonomy, liberty, privacy, and dignity not just in our legal documents but also in our business models and our social lives.

To treat people not merely as means but also as ends, and more important, to do it at scale.

To encourage antifragility and self-imposed limitation in each of us, rather than protecting everyone’s feelings.

To create tools to help our Thinking Brain better communicate and manage the Feeling Brain, and to bring them into alignment, producing the illusion of greater self-control.


Look, it may be that you came to this book looking for some sort of hope, an assurance that things will get better—do this, that, and the other thing, and everything will improve.

I am sorry. I don’t have that kind of answer for you. Nobody does. Because even if all the problems of today get magically fixed, our minds will still perceive the inevitable fuckedness of tomorrow.

So, instead of looking for hope, try this:

Don’t hope.

Don’t despair, either.

In fact, don’t deign to believe you know anything. It’s that assumption of knowing with such blind, fervent, emotional certainty that gets us into these kinds of pickles in the first place.

Don’t hope for better. Just be better.

Be something better. Be more compassionate, more resilient, more humble, more disciplined.

Many people would also throw in there “Be more human,” but no—be a better human. And maybe, if we’re lucky, one day we’ll get to be more than human.


If I Dare . . .


I say to you today, my friends, that even though we face the difficulties of today and tomorrow, in this final moment, I will allow myself to dare to hope . . .

I dare to hope for a post-hope world, where people are never treated merely as means but always as ends, where no consciousness is sacrificed for some greater religious aim, where no identity is harmed out of malice or greed or negligence, where the ability to reason and act is held in the highest regard by all, and where this is reflected not only in our hearts but also in our social institutions and business models.

I dare to hope that people will stop suppressing either their Thinking Brain or their Feeling Brain and marry the two in a holy matrimony of emotional stability and psychological maturity; that people will become aware of the pitfalls of their own desires, of the seduction of their comforts, of the destruction behind their whims, and will instead seek out the discomfort that will force them to grow.

I dare to hope that the fake freedom of variety will be rejected by people in favor of the deeper, more meaningful freedom of commitment; that people will opt in to self-limitation rather than the quixotic quest of self-indulgence; that people will demand something better of themselves first before demanding something better from the world.

That said, I dare to hope that one day the online advertising business model will die in a fucking dumpster fire; that the news media will no longer have incentives to optimize content for emotional impact but, rather, for informational utility; that technology will seek not to exploit our psychological fragility but, rather, to counterbalance it; that information will be worth something again; that anything will be worth something again.

I dare to hope that search engines and social media algorithms will be optimized for truth and social relevance rather than simply showing people what they want to see; that there will be independent, third-party algorithms that rate the veracity of headlines, websites, and news stories in real time, allowing users to more quickly sift through the propaganda-laden garbage and get closer to evidence-based truth; that there will be actual respect for empirically tested data, because in an infinite sea of possible beliefs, evidence is the only life preserver we’ve got.

I dare to hope that one day we will have AI that will listen to all the dumb shit we write and say and will point out (just to us, maybe) our cognitive biases, uninformed assumptions, and prejudices—like a little notification that pops up on your phone letting you know that you just totally exaggerated the unemployment rate when arguing with your uncle, or that you were talking out of your ass the other night when you were doling out angry tweet after angry tweet.

I dare to hope that there will be tools to help people understand statistics, proportions, and probability in real time and realize that, no, a few people getting shot in the far corners of the globe does not have any bearing on you, no matter how scary it looks on TV; that most “crises” are statistically insignificant and/or just noise; and that most real crises are too slow-moving and unexciting to get the attention they deserve.

I dare to hope that education will get a much-needed facelift, incorporating not only therapeutic practices to help children with their emotional development, but also letting them run around and scrape their knees and get into all sorts of trouble. Children are the kings and queens of antifragility, the masters of pain. It is we who are afraid.

I dare to hope that the oncoming catastrophes of climate change and automation are mitigated, if not outright prevented, by the inevitable explosion of technology wrought by the impending AI revolution; that some dumb fuck with a nuke doesn’t obliterate us all before that happens; and that a new, radical human religion doesn’t emerge that convinces us to destroy our own humanity, as so many have done before.

I dare to hope that AI hurries along and develops some new virtual reality religion that is so enticing that none of us can tear ourselves away from it long enough to get back to fucking and killing each other. It will be a church in the cloud, except it will be experienced as one universal video game. There will be offerings and rites and sacraments, just as there will be points and rewards and progression systems for strict adherence. We will all log on, and stay on, because it will be our only conduit for influencing the AI gods and, therefore, the only wellspring that can quench our insatiable desire for meaning and hope.

Groups of people will rebel against the new AI gods, of course. But this will be by design, as humanity always needs factious groups of opposing religions, for this is the only way for us to prove our own significance. Bands of infidels and heretics will emerge in this virtual landscape, and we will spend most of our time battling and railing against these various factions. We will seek to destroy one another’s moral standing and diminish each other’s accomplishments, all the while not realizing that this was intended. The AI, realizing that the productive energies of humanity emerge only through conflict, will generate endless series of artificial crises in a safe virtual realm, where that productivity and ingenuity can then be cultivated and used for some greater purpose we won’t ever know or understand. Human hope will be harvested like a resource, a never-ending reservoir of creative energy.

We will worship at AI’s digitized altars. We will follow their arbitrary rules and play their games not because we’re forced to, but because they will be designed so well that we will want to.

We need our lives to mean something, and while the startling advance of technology has made finding that meaning more difficult, the ultimate innovation will be the day we can manufacture significance without strife or conflict, find importance without the necessity of death.

And then, maybe one day, we will become integrated with the machines themselves. Our individual consciousnesses will be subsumed. Our independent hopes will vanish. We will meet and merge in the cloud, and our digitized souls will swirl and eddy in the storms of data, a splay of bits and functions harmoniously brought into some grand, unseen alignment.

We will have evolved into a great unknowable entity. We will transcend the limitations of our own value-laden minds. We will live beyond means and ends, for we will always be both, one and the same. We will have crossed the evolutionary bridge into “something greater” and ceased to be human any longer.

Perhaps then, we will not only realize but finally embrace the Uncomfortable Truth: that we imagined our own importance, we invented our purpose, and we were, and still are, nothing.

All along, we were nothing.

And maybe then, only then, will the eternal cycle of hope and destruction come to an end.


