
It’s Evolution Baby!

Transcript of podcast below.

I’d like to throw some large numbers at you. How many transistors are there in Intel’s latest 18-core Xeon chip? The answer is 5.5 billion. How many neurons are there in the human brain? There are a staggering 86 billion of them. It’s worth noting that the part of our brain that does most of the thinking, the cerebral cortex, contains about 16 billion neurons; the rest handle specialist tasks such as hearing, vision and speech.

Now, comparing an Intel Xeon chip to the human brain is a little strange, given that there’s no direct equivalence between a transistor and a neuron. On the other hand, 5.5 billion transistors on a tiny CPU is quite a feat of engineering, although it’s not a lot compared to the 86 billion neurons in the human brain.

Now consider this. It took Intel 44 years of design evolution to produce a 5.5-billion-transistor chip, starting back in 1971 with the 2,300-transistor Intel 4004. It took humans about 30 million years to evolve an 86-billion-neuron brain. Computer chip evolution is proceeding at a pace that living organisms should be quite jealous of.
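To put a number on that pace, here’s a quick back-of-the-envelope sketch in Python, using only the transistor counts and dates quoted above:

```python
import math

# Implied doubling rate of Intel's transistor counts, 1971-2015,
# using the figures quoted in the paragraph above.
transistors_1971 = 2_300          # Intel 4004
transistors_2015 = 5_500_000_000  # 18-core Xeon
years = 2015 - 1971               # 44 years

doublings = math.log2(transistors_2015 / transistors_1971)
print(f"{doublings:.1f} doublings in {years} years "
      f"(one every {years / doublings:.1f} years)")
# -> 21.2 doublings in 44 years (one every 2.1 years)
```

One doubling roughly every two years, a pace with a famous name, as we’ll see in a moment.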

Computer chips will undoubtedly become much more sophisticated in the coming decades, but what of the human brain? Well, there’s not much evidence that we’re going to get a lot more intelligent unless some dramatic evolutionary change occurs. In fact, there’s some evidence that the human brain is actually shrinking.

If we accept that human beings might not get brainier, what about the computer chip? Intel’s co-founder, Gordon Moore, published a paper in 1965 observing that the number of components on a chip was doubling every year; his prediction, later revised to a doubling every two years, became known as Moore’s law. Whilst Moore’s law isn’t an accurate prediction of transistor count these days, if it still held, a doubling every two years from 2015’s 5.5 billion transistors would reach 86 billion in about four doublings, so someone would design a computer chip with more transistors than the human brain has neurons by about 2023.
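That 2023 figure is easy to check with the same sort of arithmetic (a sketch, assuming a clean two-year doubling from 2015’s 5.5 billion):

```python
import math

# How many doublings to get from 5.5 billion transistors to the
# brain's 86 billion neurons, at one doubling every two years?
transistors = 5_500_000_000
neurons = 86_000_000_000

doublings_needed = math.log2(neurons / transistors)  # ~3.97
years_needed = 2 * doublings_needed                  # ~7.9
print(f"about {years_needed:.0f} years, i.e. around {2015 + years_needed:.0f}")
# -> about 8 years, i.e. around 2023
```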

Raw transistor counts don’t mean much without software that does something, and whilst artificial intelligence software can do some quite clever things, no one is close to creating a virtual human brain capable of the vast number of simultaneous tasks that the human brain has to cope with.

Whilst I don’t think human beings will be using computer chips to augment their intelligence in the very near future, I suspect it will become inevitable, given the pace of change in computer sophistication and complexity.

When that day arrives, posthuman evolution will begin, and given the rate of evolution of computer components, posthuman evolution may be exponentially faster than anything we’ve seen before.

The Turing Test

Transcript of podcast below.

In 1950 Alan Turing published a paper, “Computing Machinery and Intelligence”, that examined whether a computer might be able to think, or at the very least convince a human observer that it was capable of human responses. His question of whether a computer could fool humans into thinking it was human became known as the Turing Test.

Since the publication of that paper, various attempts have been made to design programmes that emulate human speech and general communication patterns, to convince an observer that the computer was, in fact, human.

At an event organised by the University of Reading in 2014, a computer programme called Eugene Goostman, posing as a 13-year-old boy, convinced a third of a panel of judges that it was human and not a computer. It’s worth noting that whilst those judges were convinced, the wider world was not, leaving various Turing-type competitions and tests to continue.

So if we’re getting closer to mimicking human expression, can any computer think? I don’t think we’re there yet, but there are signs that computers can perform some thought-like functions very efficiently indeed. As an example, let’s take chess.

Garry Kasparov was ranked world number one in chess for all but three months from 1986 until his retirement in 2005. There was a point where he was considered undefeatable, such was his prodigious grasp of the tactics and grand strategy of the game. Undefeatable? IBM decided to put that idea to the test.

IBM built a computer called Deep Blue and challenged Garry Kasparov to two chess matches under tournament conditions. Deep Blue lost the first match, in 1996, but won the rematch in 1997. It was the first time a computer had beaten a reigning world champion under those conditions. Deep Blue was no ordinary computer, though. It was a highly specialised chess machine, capable of evaluating around 200 million chess positions per second.
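Deep Blue’s exact evaluation function and custom hardware were never fully made public, but the brute-force principle behind it, searching the tree of future moves and scoring the positions they lead to, is easy to sketch. Here’s a minimal, hypothetical illustration in Python, using noughts and crosses rather than chess so the whole game tree fits in a few lines (this is the textbook alpha-beta algorithm, not IBM’s code):

```python
# Minimal alpha-beta game-tree search, the brute-force principle behind
# engines like Deep Blue, shown at toy scale on noughts and crosses.
# Scores are from X's point of view: +1 win, 0 draw, -1 loss.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def alphabeta(board, player, alpha=-2, beta=2):
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, square in enumerate(board) if square == ' ']
    if not moves:
        return 0  # board full: a draw
    for i in moves:
        board[i] = player
        score = alphabeta(board, 'O' if player == 'X' else 'X', alpha, beta)
        board[i] = ' '
        if player == 'X':
            alpha = max(alpha, score)  # X keeps the highest score
        else:
            beta = min(beta, score)    # O keeps the lowest
        if alpha >= beta:
            break  # prune: this subtree can no longer affect the result
    return alpha if player == 'X' else beta

def best_move(board, player):
    """Try every legal move and keep the one whose subtree scores best."""
    def score(i):
        board[i] = player
        s = alphabeta(board, 'O' if player == 'X' else 'X')
        board[i] = ' '
        return s
    moves = [i for i, square in enumerate(board) if square == ' ']
    return (max if player == 'X' else min)(moves, key=score)

print(best_move([' '] * 9, 'X'))  # perfect play from an empty board
```

Deep Blue ran essentially this idea at vastly greater scale, with custom chips, deep opening books and a hand-tuned evaluation function, but with no understanding of the game in any human sense.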

Now here’s the strange thing. After the loss to Deep Blue, Garry Kasparov accused the machine of thinking, and went further, proposing that real humans were feeding chess moves to the computer. IBM’s stance was that a computer bug had led Deep Blue to make a number of creative chess moves that Kasparov was not expecting, thus convincing Kasparov that he was competing against a human, not a computer.

All in all, it’s worth considering whether Deep Blue was the computer that came closest to passing the Turing Test, by convincing one of the most brilliant chess players of all time that humans, not a computer, had beaten him, and all because the programme that ran on Deep Blue was flawed.

It’s worth dwelling on that point: perhaps it’s flaws, not perfection, that differentiate us from computers, and it may be the replication of those flaws that proves the key to a successful “human-like” computer, capable of passing the Turing Test.