Long Bets – Passing the Turing test by 2029

Long Bets, Bet 1: “By 2029 no computer - or ‘machine intelligence’ - will have passed the Turing Test.”

Interesting bet… and I think we could build the software to pass the test today!

Back when I was in college I took an interest in Artificial Intelligence (AI). The then-current body of knowledge that went under the label “AI” was clearly a very long way from simulating intelligence. In thinking about the problem, I came up with three main obstacles:

  1. The amount of compute power needed.
  2. The amount of storage needed.
  3. The amount of accessible information needed for learning.

First, we can estimate (very roughly) the amount of compute power needed to achieve “intelligence” simply by looking at the human brain. The estimate has a high side and a low side, with a very large range between them. The low side goes something like this:

  • There are ~1 billion effective neurons (there are ~10 billion neurons in total, but most appear to act as wiring).
  • Each neuron has an effective “fan out” of ~10 connections (there can be thousands of connections, but given that the connections were made almost by chance, my guess is that most are not useful).
  • Each neuron can in effect make ~10 “decisions” per second (a neuron can fire up to something like a thousand times per second, but since a neuron is an analog device, a fair number of pulses are needed to add up to one “decision”).
  • The human mind is really nature’s first try at intelligence. It is reasonably likely that the human mind is not built very efficiently, and that intelligence could be achieved with perhaps 1/10 the processing power of the human brain.

Taken together the above gives a (very!) rough estimate of the minimum compute needed for intelligence:

( 10 decisions/second * 10 fan-out * 1 billion neurons / 10 ) = 10 billion “decisions” per second

These “decisions” are essentially binary in nature, and are (very) roughly equivalent to the bits flowing through a computer CPU. So the equivalent rate for a CPU would be something like the number of bits in a machine word multiplied by the rate at which the CPU processes instructions.
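For concreteness, here is a minimal sketch of that arithmetic in Python. The 3 GHz, 64-bit CPU is purely an illustrative assumption, not part of the original estimate:

    # Low-side brain estimate, using the rough guesses above.
    effective_neurons = 1e9       # ~1 billion effective neurons
    fan_out = 10                  # ~10 useful connections per neuron
    decisions_per_sec = 10        # ~10 "decisions" per neuron per second
    efficiency_discount = 10      # assume 1/10 the brain's power may suffice

    brain_rate = effective_neurons * fan_out * decisions_per_sec / efficiency_discount
    print(f"brain estimate: {brain_rate:.0e} decisions/sec")  # 1e+10

    # Rough CPU equivalent: machine-word bits times instruction rate.
    # A hypothetical 3 GHz CPU with a 64-bit word (assumed for illustration).
    word_bits = 64
    instructions_per_sec = 3e9
    cpu_rate = word_bits * instructions_per_sec
    print(f"CPU equivalent:  {cpu_rate:.0e} bits/sec")        # ~2e+11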

In the late 1970s, even the largest machines were vastly slower than the low-side estimate of the needed processing power.

Storage was another problem. If each fan-out connection from a neuron was roughly equivalent to a bit of storage, then we needed a computer with perhaps 1 to 100 billion bits of storage.
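The same rough conversion in Python, assuming one bit of storage per useful connection:

    # Storage estimate: one bit per useful fan-out connection.
    effective_neurons = 1e9
    bits_low = effective_neurons * 1       # 1 billion bits (low end)
    bits_high = effective_neurons * 100    # 100 billion bits (high end)
    print(f"{bits_low / 8 / 1e9:.3f} to {bits_high / 8 / 1e9:.1f} gigabytes")
    # -> 0.125 to 12.5 gigabytes

The ~10 fan-out above puts the middle of that range at ~10 billion bits, or only about 1.25 gigabytes.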

In the late 1970s, the then-current machines had vastly less storage.

Learning was another problem. Humans take decades to achieve full intelligence, and during that time are exposed to quite a lot of experiences. We don’t want to take decades educating an experimental AI, only to find that we got the design wrong and have to start over. We need a huge amount of available information to feed to our AI for each successive experiment.

In the late 1970s … well, you get the picture. At this point I dropped my interest in AI, figuring it would be decades before we had even the raw computing power needed.

A few years ago, I remembered this old exercise, and was slightly shocked to realize that just the generic PCs in my study were near or above that old estimate.

I strongly suspect that we already have what we need to support AI. We have multi-gigahertz CPUs and multi-gigabyte memories. We have a vast pool of information accessible via the Internet.

Of course we don’t have a clue about how to write the software….