The term “artificial intelligence” was coined in 1955 by John McCarthy as the title of a workshop held at Dartmouth College in 1956. But AI as we know it had already been around for a few years by then. McCulloch and Pitts published their original paper on neural nets in 1943. Wiener’s Cybernetics appeared in 1948. Turing’s seminal Mind paper, in which he defined the Turing Test, was published in 1950, and he was giving talks on the topic on the BBC around the same time. Ashby invented the homeostat in the late 1940s and published Design for a Brain in 1952.
There was something in the air.
But we often don’t notice that a thing is a thing until it has a name.
The aim of AI was pretty obvious in those early days: to create a synthetic brain. But AI turned out to be a harder problem than people had hoped.
This was due partly to the tendency to remove topics from the domain of AI once they are solved. After all, a “computer” was originally a person who carried out long mathematical calculations. Now electronic computers are very good at arithmetic, but no one has considered that capability to be AI for decades.
Hubert Dreyfus, a prominent critic of AI, famously opined in the 1960s that a computer would never play a decent game of chess. In the event, it took another three decades before Garry Kasparov lost a match to Deep Blue in 1997. Kasparov said that though we can construct chess programs stronger than any human player, the programs don’t play chess the way humans do.
Deep Blue was very good at playing chess – and useless at anything else. It didn’t particularly suggest any useful practical ways forward outside chess-playing programs, and was certainly a long way from the general intelligence that GOFAI had promised.
The dominant paradigm of artificial intelligence research that Dreyfus was attacking in the 1960s was symbolic AI. Of course, the pioneers were hoping that symbolic AI could “produce general, human-like intelligence in a machine”, as Wikipedia puts it. In his 1985 book, Artificial Intelligence: The Very Idea, John Haugeland, a student of Dreyfus, coined the term GOFAI (“Good Old-Fashioned Artificial Intelligence”) for symbolic artificial intelligence.
There was often the sense that AI, by which people meant GOFAI, was always 20 years away. It might even have seemed as though there was no progress at all in AI. Nevertheless, there were some who kept the dream alive.
I first came across the term “Artificial General Intelligence” (AGI) around the turn of the century via the SL4 and AGI email lists. Most of the discussion was at little more than the hobbyist level and tended to deal largely in hypotheticals. But there were some people, such as Ben Goertzel, Peter Voss and the odd academic like Marcus Hutter, who were trying to do more substantive work.
I remember doing an assignment about AGI during my MBA in 2005. Though the idea of AGI is very exciting (ideas are easy), there just wasn’t much there in reality. AGI definitely seemed 20 years off, even in the mid-2000s.
So what changed with artificial intelligence?
It never stopped being a good idea, so it was bound to come back into fashion sooner or later.
Goertzel and his collaborators have organized an AGI conference every year since 2008. It has no doubt helped to increase awareness of the idea outside its original niche. And AGI does now appear in mainstream sources.
There was also Moore’s Law. GPUs. Improved tools. Better algorithms. Bigger datasets. The deep learning renaissance. Potential solutions to the grounding problem and the frame problem. Google becoming one of the most successful companies in history on the back of AI. More and more time, money and resources being poured into A(G)I research, creating a virtuous circle: the more research dollars a company spends on A(G)I, the more successful it becomes, and the more money it then has to spend on A(G)I research.
IBM Watson won Jeopardy! in 2011, 6 years ago now. It doesn’t seem to have done much since except come up with some exotic, but tasty, fusion recipes. But while Watson at heart might be little more than a clever answer engine that uses smart search technology, it’s easier to see how the underlying principles of the system might be applied to domains outside Jeopardy! than was the case for Deep Blue.
A much greater breakthrough was AlphaGo. DeepMind had already achieved a coup by using deep reinforcement learning to master a variety of Atari games. Chess might have fallen to AI in 1997, but the ancient game of Go was far subtler and more complex. Go seemed to capture some ineffable quality of the human imagination.
Back in 2015, the best Go programs struggled to compete with even an average club player. (In truth, it takes years to become even an averagely competent club player.) The consensus was that it would take decades before a program could challenge the game’s masters – if a program ever could.
Just as Dreyfus was wrong when he said that a program would never play a good game of chess, so people were wrong when they said a program would never play a good game of Go. And AlphaGo beat the best players in the world not by having clever heuristics and algorithms programmed into it along with an opening book and endgame knowledge, but by learning from expert human games and then playing millions of games against itself, learning from its mistakes.
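To make the self-play idea concrete, here is a deliberately toy sketch: a tabular agent that gets good at single-pile Nim purely by playing games against itself and updating its value estimates from the results. Everything here – the game, the hyperparameters, the Monte Carlo-style update – is my own simplification, chosen so the whole loop fits in a few lines; AlphaGo itself combined deep neural networks with Monte Carlo tree search and trained at vastly greater scale.

```python
# Toy self-play learning on single-pile Nim: remove 1-3 stones per turn,
# whoever takes the last stone wins. A conceptual sketch only -- not how
# AlphaGo works, just the same "learn by playing yourself" flavour.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)              # stones you may remove on a turn
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # arbitrary learning settings

Q = defaultdict(float)           # Q[(stones_left, action)] -> estimated value

def choose(stones, explore=True):
    """Epsilon-greedy choice over the legal moves for the player to act."""
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode(start=21):
    """Play one game against itself, then update Q from the final outcome."""
    stones = start
    history = []                 # (state, action) for whoever was to move
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who took the last stone wins (+1); working backwards,
    # each earlier move belongs to the other player, so the sign flips.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -GAMMA * reward

for _ in range(50_000):
    self_play_episode()

# Inspect the greedy policy after training.
for s in range(1, 12):
    print(s, choose(s, explore=False))
```

Run for enough episodes, the greedy policy should rediscover the classic Nim strategy – leave your opponent a multiple of four stones – without ever being told it. Same idea as learning from your own mistakes, just at a microscopic scale.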
DeepMind, apart from having the bally Union Jack on it, is an explicitly AGI company. Shane Legg did his PhD on superintelligence with Marcus Hutter, and then went to work at the Gatsby Unit at UCL, out of which DeepMind was born.
They are really, and explicitly, trying to achieve AGI. As are the Google Brain group and Ray Kurzweil’s team at Google. (DeepMind was later bought by Google, reportedly for £640M, or £8M for each employee.)
It’s still easy to look at deep learning and say it’s a long way from anything much like real AGI. And that’s true enough for a lot of traditional applications that use massive labelled datasets. But it’s not as though we could do that stuff 10 years ago, or even last year. It’s a very dynamic field.
So much work is going to get done in the next 5 years, in every aspect of A(G)I, including lots of areas that have barely been looked at over the years. There are going to be a lot more people having ideas about A(G)I in the next 3 years than in the previous 3 (maybe even 30). Those heuristics and algorithms are going to keep getting better.
But AGI is hard.
It feels as though we are somewhere we weren’t 20 years ago. But I’ll make a prediction: we will have something that looks like AGI by 2025. It might be domain-limited: it might know a lot about, say, APIs (there’s a Part 2 of this article coming up!), but not so much about other things. But it is still going to have a big impact on our economy and society.
As for when we might see something like HAL from Kubrick’s 2001: A Space Odyssey, in the movie he tells us he was activated not just in 1992, but on 12 January 1992. So I’m going to go out on a limb here and predict that a system equivalent to HAL will be activated at 03:14:06 UTC on Tuesday, 19 January 2038.
After that, all bets are off.