6 Things that Blew My F*****g Mind – 6: Gödel’s Incompleteness Theorems

Obviously. I’ve written about this a few times.

What is it?

Gödel’s theorem is a mathematical theorem showing that self-referential statements inevitably arise in any sufficiently powerful system of axioms, forcing such a system to be either incomplete but consistent, or complete but inconsistent.

This is, of course, an informal introduction.
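For the rigor-minded, the standard textbook statement of the first theorem (my own schematic rendering, not part of this informal gloss) goes roughly like this, for any effectively axiomatized theory T containing basic arithmetic:

```latex
% First Incompleteness Theorem, schematic textbook form:
% a consistent T of this kind can neither prove nor refute its Gödel sentence.
\[
  \mathrm{Con}(T) \;\Longrightarrow\; \exists\, G_T :\;
  T \nvdash G_T \;\text{ and }\; T \nvdash \neg G_T
\]
```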

I like this explanation, from Rudy Rucker’s Infinity and the Mind:

The proof of Gödel’s Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows:

  1. Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.
  2. Gödel asks for the program and the circuit design of the UTM. The program may be complicated, but it can only be finitely long. Call the program P(UTM) for Program of the Universal Truth Machine.
  3. Smiling a little, Gödel writes out the following sentence: “The machine constructed on the basis of the program P(UTM) will never say that this sentence is true.” Call this sentence G for Gödel. Note that G is equivalent to: “UTM will never say G is true.”
  4. Now Gödel laughs his high laugh and asks UTM whether G is true or not.
  5. If UTM says G is true, then “UTM will never say G is true” is false. If “UTM will never say G is true” is false, then the sentence G is false (since G = “UTM will never say G is true”). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.
  6. We have established that UTM will never say G is true. So “UTM will never say G is true” is in fact a true statement. So G is true (since G = “UTM will never say G is true”).
  7. “I know a truth that UTM can never utter,” Gödel says. “I know that G is true. UTM is not truly universal.”

Think about it; it grows on you …

With his great mathematical and logical genius, Gödel was able to find a way (for any given P(UTM)) actually to write down a complicated polynomial equation that has a solution if and only if G is true. So G is not at all some vague or non-mathematical sentence. G is a specific mathematical problem that we know the answer to, even though UTM does not! So UTM does not, and cannot, embody a best and final theory of mathematics …

Although this theorem can be stated and proved in a rigorously mathematical way, what it seems to say is that rational thought can never penetrate to the final ultimate truth …
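The polynomial claim two paragraphs up can be put in symbols. This is my paraphrase of the quote, with the polynomial p and the number of unknowns n left schematic (both depend on the particular P(UTM)):

```latex
% Schematic rendering of the quote's claim: solvability of a concrete
% Diophantine equation tracks the truth of the Gödel sentence G.
\[
  \exists\, x_1, \ldots, x_n \in \mathbb{N} :\; p(x_1, \ldots, x_n) = 0
  \quad \Longleftrightarrow \quad G \text{ is true}
\]
```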

This is a deep theorem, and it takes a while to understand. Or at least, it did for me. But once you grasp the underlying logic, it opens so many doors to epistemological thought.
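One way to internalize the underlying logic: the diagonal move is mechanical enough to mimic in a few lines of Python. This is a toy sketch, not a proof; the `utm` callable and the sentence-as-function encoding are my own illustrative stand-ins:

```python
# Toy sketch of Gödel's diagonal trick. A "sentence" here is a zero-argument
# function whose return value is its truth, and `utm` is any hypothetical
# Universal Truth Machine that affirms (returns True for) sentences.

def make_godel_sentence(utm):
    """Build G = 'utm will never say this sentence is true'."""
    def G():
        # G is true exactly when the machine does NOT affirm G.
        return not utm(G)
    return G

def credulous_utm(sentence):
    # A stand-in "truth machine" that affirms everything.
    return True

G = make_godel_sentence(credulous_utm)
print(G())  # False: the machine affirmed G, so G ("UTM never affirms G") is
            # false -- the machine affirmed a falsehood. A machine that stays
            # silent instead leaves G true but unutterable. That is the bite.
```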


The lesser version: the Singularity


Generally, when the topic of computers, AI, and the limits of knowledge comes up, the Singularity inevitably gets mentioned.

For the uninitiated, the Singularity is a popular future scenario posited by futurologist Ray Kurzweil, who claims that technology progresses exponentially. Kurzweil points specifically to the number of transistors on an integrated circuit doubling every two years (Moore’s law) and claims that at some point technology will advance so rapidly that we will not be able to fathom what occurs afterward.

This point is referred to as the Singularity.
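The doubling claim is easy to put in numbers. A minimal sketch, taking the 1971 Intel 4004’s roughly 2,300 transistors as a starting point (the horizon and doubling period are illustrative, not a forecast):

```python
# Back-of-the-envelope Moore's law: a quantity that doubles every two years.

def transistor_count(initial, years, doubling_period=2):
    """Projected count after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

print(f"{transistor_count(2_300, 50):.2e}")  # ~7.72e+10 after 50 years of doubling
```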

What happens after this point? Do we become immortal, transhumanist gods once we merge with our robot overlords? Are we subjected to endless torture à la I Have No Mouth, and I Must Scream, as Elon Musk and Stephen Hawking fear? Does the artificial general intelligence opt out of a raw deal and end itself, denying us that knowledge forever? Can it do a backflip?

Who knows? And honestly, this is a futile line of thought to follow. Let’s not forget that predicted exponential curves often end up looking like s-curves. Unfortunately, reality is bound by, I don’t know, physical laws.

[Figure: an exponential curve flattening into an s-curve. Courtesy of the Technium.]
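The figure’s point is easy to reproduce numerically. Below is a minimal sketch with invented parameters: a pure exponential against a logistic (s-curve) that grows at the same rate early on but saturates at a ceiling:

```python
import math

def exponential(t, rate=0.5):
    # Unbounded exponential growth.
    return math.exp(rate * t)

def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    # S-curve: exponential-looking early on, flat near the ceiling.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):11.1f}  logistic={logistic(t):5.1f}")
# The logistic column stalls near 100 while the exponential keeps exploding --
# which is what "bound by physical laws" looks like on a graph.
```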

There is also the fact that we human beings are notoriously bad at predicting anything. Read The Black Swan or The Signal and the Noise if you disagree.

The other reason the Singularity is utterly uninteresting to me is that it is entirely empirical: it does not explain why anything like an artificial general intelligence would arise. It is descriptive.

Descriptions can be useful, but not as useful as explanations. Gödel’s theorem is explanatory because it identifies self-reference as the reason for a limit on knowledge; the Singularity merely extrapolates from empirical data and past trends to postulate a point at which we may hit a limit.


Why this thought experiment is more interesting


If you are reading this with any sense of self-awareness, you might’ve asked yourself: why is he talking about logic in one section and artificial intelligence in another?

One reason I think Gödel’s theorems are so much more important than a limited concept like the Singularity is their relevance to so many other fields.

For example, consider the Church–Turing thesis. On a strong reading, it implies that any Turing-complete computer (the only type of computer we have) is computationally equivalent to a human mind with infinite resources. This is contentious and controversial, but let’s follow the thought.

What this would imply is that the human mind is subject to Gödelian constraints just as a computer is, as shown in the previous example: it might be consistent, but it would be incomplete. Yet the human mind can grasp its own consistency (self-awareness), and thus seems to circumvent Gödelian constraints.
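The computational face of this same constraint is the halting problem, and the diagonal trick from the UTM story carries over directly. A minimal sketch: assume someone hands us a hypothetical `halts(f)` oracle, and watch it fail on a program built against it:

```python
# Diagonalization against a claimed halting oracle. `halts(f)` is supposed to
# return True iff calling f() eventually terminates.

def make_spoiler(halts):
    def spoiler():
        if halts(spoiler):
            while True:   # predicted to halt -> loop forever
                pass
        # predicted to loop -> return (halt) immediately
    return spoiler

def naive_halts(f):
    # A stand-in oracle that claims nothing ever halts.
    return False

spoiler = make_spoiler(naive_halts)
spoiler()  # returns immediately, so naive_halts was wrong about spoiler
print("spoiler halted; the oracle misjudged the very program built against it")
# The same construction defeats ANY halts you supply -- no Turing-complete
# machine escapes this Godelian constraint.
```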

Notice that this seems to imply constraints on artificial intelligence. Programming in Gödelian constraints would allow human minds, capable of understanding the self-referential nature of these statements, to control and handle our supposed AI overlords.

This is why I think Gödel’s theorems are more interesting than the so-called Singularity: they describe similar phenomena, but one offers a mechanism, and thus a possible way to contain ill-intended consequences.

I expounded on my first contribution to this idea when I discussed the failings of science. I noticed that Gödel does not provide any method for showing when such self-referential statements will arise, so I proposed a method of study to discover moments of emergent self-reference and the lengths of time between such moments.

My second contribution here is to dispute that the human mind is consistent. What the mind does seem to be is consistent or complete within the universe. This is one idea Gödel does not follow through on: how does the completeness or consistency of a subsystem affect the greater system?

Are we subject to Gödelian constraints without realizing it? Do we not have unanswerable questions like “Is there a God?” or “Does free will exist?” Could these be questions imposed by higher powers as Gödelian constraints?

Is the best way to circumvent a Gödelian constraint to take a leap of faith?
