6 Things That Blew my F*****g Mind – 5: Roko’s Basilisk

Nerds freak out about whether they’ll go to hell or not

 

What is it?

 

Roko’s basilisk is a variation of a well-known thought experiment in game theory called Newcomb’s Paradox.

Imagine an alien comes down to Earth and offers you two boxes. Box A has one thousand dollars in cold hard cash. Box B has either one million dollars or nothing. You can take only Box B, or you can take both boxes.

Here’s where it gets tricky.

The alien has a supercomputer that can predict everything (notice how perfect knowledge comes up a lot?). If this computer predicts you will take only Box B, the alien will fill it with one million dollars. If the computer thinks you will take both boxes, the alien will leave Box B empty. But remember, the alien filled these boxes in the past, meaning once he presents them to you, he can’t or won’t take them back or switch them out.

So what do you do?

If you take both boxes you will be at least $1,000 richer. But what if the computer predicted that you would think that? Then the alien would have left Box B empty. So you should just take Box B, right? But maybe the computer knew you would think that too, which means the best thing to do now is take both boxes…

You can see how this creates a situation in which you cannot optimize.
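If you want to see why the argument never settles, the usual move is an expected-value comparison. Here’s a minimal sketch in Python, using the dollar amounts from the setup above; the predictor accuracies are just assumed values for illustration.

```python
# Expected payoff in Newcomb's Paradox as a function of the predictor's
# accuracy p. Box values come from the setup above; the sample accuracies
# are assumptions for illustration.

BOX_A = 1_000            # Box A always holds $1,000
BOX_B_FULL = 1_000_000   # Box B holds $1,000,000 only if one-boxing was predicted

def one_box_ev(p):
    # The predictor is right with probability p, so Box B is full that often.
    return p * BOX_B_FULL

def two_box_ev(p):
    # You always pocket Box A; Box B is full only when the predictor
    # wrongly expected you to take just Box B (probability 1 - p).
    return BOX_A + (1 - p) * BOX_B_FULL

for p in (0.5, 0.51, 0.9, 0.99):
    print(f"accuracy {p:.2f}: one-box ${one_box_ev(p):,.0f}, "
          f"two-box ${two_box_ev(p):,.0f}")
```

Once the predictor is right more than about 50.05% of the time, taking only Box B has the higher expected payoff. Yet at the moment you choose, the boxes are already filled, and grabbing both can never leave you with less than what’s sitting in front of you. That tension is the whole fight.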

There is a decision theory called Timeless Decision Theory that says the best answer is to take only Box B. Why should you do this? Because (and this is where the paradox loses me) you might be in a simulation. In order for the supercomputer to accurately gauge your reaction, it has to simulate you perfectly. You might be that simulation, so if you take only Box B, even if the alien shows you Box B is empty, the real you gets a Box B filled with a million dollars.

What I just described was Newcomb’s Paradox, which deals exclusively with reward. No matter what you do, you at least break even and there is a strong chance you are financially better off than before.

Roko’s Basilisk deals exclusively with punishment. Box A is now “Spend the rest of your life working to create the supercomputer that predicted this outcome” and Box B is either “Eternal Torment” or “Nothing”. Roko’s Basilisk would rather have you work to create it, so it wants you to choose Box A, which is why, if you choose Box B instead, you can be guaranteed it will contain eternal torment.

You probably would never take both Box A and Box B, because why double the trouble? Unless you were in a simulation, in which case taking both means that in the real world, you now have a real chance of getting nothing in Box B. Which is exactly what the evil AI wants you to think…
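If it helps to see the swap laid out, here’s the Newcomb table again with the punishment version plugged in. It’s deliberately numbers-free, since there’s no sensible value to attach to eternal torment, and the labels are just the wording from the paragraphs above.

```python
# The punishment-flavoured rewrite of the two-box game, using the post's
# own wording. No payoffs are assigned on purpose: "eternal torment"
# doesn't price neatly against "a lifetime of work".

OUTCOMES = {
    "take Box A only": "spend your life helping build the AI (what the basilisk wants)",
    "take Box B only": "eternal torment, guaranteed",
    "take both boxes": "a lifetime of work, plus whatever was predicted for Box B",
}

for choice, result in OUTCOMES.items():
    print(f"{choice}: {result}")
```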

 

The Lesser version: Prisoner’s Dilemma

 

There are a few analogues to what we’ve just discussed. One is Pascal’s Mugging, which is usually the first similar thought experiment brought up when talking about Roko’s Basilisk. I think a more foundational analogue, though, is the Prisoner’s Dilemma.

In game theory, one of the first thought experiments you will be exposed to is the Prisoner’s Dilemma. Imagine two criminals, accomplices in the same crime, are captured. If you were the officer, how would you get both to talk?

The deal to offer is this: if both rat on each other, they each get five years of hard time. If one rats on the other, say Prisoner A spills the beans on Prisoner B, then A goes free and B serves 20 years. If neither speaks, you as the officer don’t have enough evidence for the big charge, so they each do a short stretch on something minor, say a year.

The best outcome for the pair would be for both to keep their mouths shut, right? But in practice what usually happens is that both rat on each other and both get five years, because whatever your partner does, ratting leaves you personally better off.
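Here’s that table as a minimal Python sketch. The five- and twenty-year sentences come from the deal above; the one-year figure for mutual silence follows the standard textbook version.

```python
# Payoff table for the Prisoner's Dilemma above, in years served
# (lower is better for the prisoner).

SENTENCES = {
    # (A's move, B's move): (A's years, B's years)
    ("silent", "silent"): (1, 1),
    ("silent", "rat"):    (20, 0),
    ("rat",    "silent"): (0, 20),
    ("rat",    "rat"):    (5, 5),
}

def best_reply_for_a(b_move):
    # Whatever B does, A picks the move that costs A fewer years.
    return min(("silent", "rat"),
               key=lambda a_move: SENTENCES[(a_move, b_move)][0])

for b_move in ("silent", "rat"):
    print(f"If B plays {b_move!r}, A's best reply is {best_reply_for_a(b_move)!r}")
```

Ratting wins in both rows, which is exactly why two rational prisoners land on (5, 5) even though (1, 1) was available.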

The Prisoner’s Dilemma is one of the cornerstones of game theory and can be extended to all sorts of scenarios: infinite games, Mutually Assured Destruction, lottery winnings, it goes on.

 

Why this thought experiment is more interesting

 

The Prisoner’s Dilemma is genuinely useful, no doubt, and the fact that it applies in so many contexts is what makes it so practical. But Roko’s Basilisk, for all its uselessness, hints at a much deeper phenomenon than simply questioning our decision-making abilities.

It questions our notions of free will and destiny.

I should point out that if you refuse to accept even one of these premises, the whole situation becomes laughable. I personally don’t buy it, but I like following this thought because it proposes a mechanism for a fantasy a great deal of people do buy – religion.

If you replaced the AI with a higher power, you would see that this situation sounds very familiar. Actually, I would go so far as to say it sounds evangelical Calvinist. It raises a question I’ve always wrestled with in my own faith: what is the point of being judged for something if the higher power already knows you were going to do it?

The Prisoner’s Dilemma offers a choice, but that choice is limited by others’ choices. That’s why it’s so practical: in the real world, all our choices are limited by the choices of others. This means that sometimes, even though we can see the optimal strategy or the best outcome, we make choices against our own interest.

Roko’s Basilisk asks whether you are even aware of how predictable your choices are, and whether you can optimize those choices in your own interest. It also sits right at the intersection of free will and predestination: the predictability of your actions points toward predestination.

It also illustrates a reality of the modern era: you may be offered a “choice”, but that means nothing if none of the options align with your interests. A thousand meaningless, vacuous choices don’t compare to getting the one thing you want. Having choices isn’t the same as having freedom.

It also points to where human decision-making breaks down. Rational Actor theorists are always looking for reasons why humans don’t act rationally, but, ironically, they only accept rational reasons as to why. Roko’s Basilisk, as well as the Prisoner’s Dilemma, offers those reasons without resorting to a “Maybe people are just emotional!” sort of argument. I think it’s important to know where more information doesn’t translate into better decisions, and where the human capacity to make a choice is inherently limited.

Now, how do we take this idea and turn it into a viable technology?
