It is easy to be impressed by all the buzz surrounding artificial intelligence these days, especially the hot new field of deep learning. Not a week goes by without a breakthrough announcement from one AI lab or another. A few days ago, Google's DeepMind announced the creation of an AI program that can learn to play Atari video games from the 1980s as well as or better than a professional human player. We are left with the impression that great advances are being made. But, as I explain in this article, nothing could be further from the truth.
Same Old AI Dressed in a New Suit
The problem with all the hoopla surrounding deep learning is that it is not really a new science. It has been around for decades. As others have noted, the reason it has not made the news before is that training deep neural networks requires access to a huge number of labeled samples. Large repositories of labeled data did not become available until the advent of social networks like Facebook and Twitter and search giants like Google and Baidu. In addition, the cheap and powerful computer hardware needed to process this enormous amount of data was not built until fairly recently. But the main reason that deep learning is old is that, in spite of claims to the contrary, it is not a new paradigm intended to replace symbolic AI, the bankrupt, "baby boomer" AI model of the last century. On the contrary. Deep learning is just GOFAI with lipstick on. Here is why.
The kind of deep machine learning that has been making the news lately is called supervised learning because it requires that the neural network trainer identify each chunk of data, or sample, by attaching a label (i.e., a symbol) to it. Notice right off the bat that the intelligence is not in the neural network but in the trainer. If presented with thousands of pictures of cats, the machine automatically learns to map certain images to the cat symbol. This works even though the machine has no idea what a cat is. Of course, we humans do not need labels in order to learn to recognize patterns and objects. So if biological plausibility is a requirement for true AI (highly probable), it is a sure bet that true AI is not going to come from the mainstream anytime soon. Some in the business may want to argue that there is work being done on unsupervised deep neural networks but rest assured that, for all intents and purposes, unsupervised learning is nonexistent.
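To make the point concrete, here is a minimal sketch of supervised learning. It uses a toy perceptron rather than a deep network, and the "cat" data is invented for illustration, but the key feature is the same: every training sample arrives with a label chosen by the trainer, and the model merely learns the mapping from inputs to that symbol.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {0, 1}.
    The trainer supplies the label; the model only fits the mapping."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # zero when the model already agrees with the label
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy "cat vs. not-cat" feature vectors. The machine never knows what a
# cat is; it only learns to activate the symbol the trainer attached.
data = [([1, 1], 1), ([1, 0], 1), ([0, 1], 0), ([0, 0], 0)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Strip away the labels and this procedure has nothing to fit, which is the sense in which the intelligence resides in the trainer rather than the network.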
The point I am making here is that deep neural networks are really symbol generators. A DNN is just a huge, hierarchical collection of old-fashioned if-then rules. Hundreds or even thousands of tiny little rule processors work together to contribute to the activation of a label. What AI researchers have done is create a machine that can generate these rules automatically by looking at labeled pictures. Paradigms die hard, don't they?
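The "tiny rule processor" view of a neuron can be sketched in a few lines. This is a deliberate caricature (real units use continuous activations and learned weights), but it shows how a single thresholded unit behaves like a weighted if-then rule, and how stacking such units yields a hierarchy whose top-level activation is the label:

```python
def neuron_rule(inputs, weights, threshold):
    """IF the weighted evidence meets the threshold THEN fire (output 1),
    ELSE stay silent (output 0) -- an if-then rule in disguise."""
    evidence = sum(w * x for w, x in zip(weights, inputs))
    return 1 if evidence >= threshold else 0

# Two low-level "rules" feeding one high-level "rule" whose firing
# corresponds to emitting the label -- a miniature rule hierarchy.
def cat_detector(pixels):
    ears = neuron_rule(pixels, [0.5, 0.5, 0.0], 0.5)
    whiskers = neuron_rule(pixels, [0.0, 0.5, 0.5], 0.5)
    return neuron_rule([ears, whiskers], [1.0, 1.0], 2.0)  # both must fire
```

Replace the hard threshold with a smooth function and add a learning rule, and you have the modern artificial neuron; the rule-like structure of the computation is unchanged.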
In the Nature paper describing their game-playing neural network, celebrity AI scientist Demis Hassabis and his colleagues at Google's DeepMind offices in London declared:
"The work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."

This sounds a bit like chest beating, and at least one deep learning expert has already complained. The question is, did Google really achieve a breakthrough in AI or is all this just hype? What did Google really accomplish? As amazing as this sounds, all Google did was find an automatic way to create an old-fashioned rule-based expert system. Some of the positional brittleness was removed with the use of convolutional neural networks but, after training, all they have left is a purely reactive system, i.e., a dumb, one-track-minded automaton wearing blinders and executing rules in the form: if you detect X, do Y.
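The "if you detect X, do Y" claim amounts to saying that, after training, the controller reduces to a stimulus-response table. The sketch below is a caricature of that idea; the observation and action names are made up for illustration and do not come from the DeepMind paper:

```python
# A purely reactive controller: no memory, no planning, no understanding.
# Each entry is one trained rule of the form "if you detect X, do Y".
policy = {
    "ball_left": "move_left",
    "ball_right": "move_right",
    "ball_center": "stay",
}

def act(observation):
    """React to the current observation alone. A situation with no
    matching rule leaves the automaton blind: nothing fires."""
    return policy.get(observation, "no_rule_fires")
```

A real trained network generalizes between nearby inputs rather than using an exact lookup, but the structural point stands: the mapping from observation to action is fixed at training time and consulted one frame at a time.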
Is that intelligence, I hear you ask? In a sense, yes, of course. But it is a rather limited and brittle form of intelligence. The human brain also has a similar type of automaton that performs simple or routine (but important) tasks for us whenever our attention is focused on something else. It is called the cerebellum. It handles such things as walking, maintaining posture, and balance. But this is not the kind of intelligence you would trust to drive you to work every morning. It would not know what to do in a new situation for which it has received no training. In fact, it is completely blind to new situations. Even worse, it has no understanding whatsoever of what it is doing or why. Certainly, this technology can and will be useful for many applications such as factory automation and surveillance but, in the end, it is really a glorified expert system, a dumb intelligence.
Another Red Herring
It would have been more impressive if Google had announced that they had found a solution to the age-old credit assignment problem. Essentially, it is hard for a reinforcement learning program to determine which of its preceding actions caused it to receive a reward or a punishment. Deep neural networks do not offer a solution. Google's program gets around the problem by playing video games where the cause is immediately followed by the reinforcement signal. It did poorly at Ms. Pac-Man for this reason. Another problem with this kind of rule-following neural network is that it has no inherent ability to change its focus or attention. All the rules are active all the time and are always waiting for their chance to fire. As a result, if the system is trained to perform multiple tasks, those tasks must not have patterns in common because that would create a conflict of attention, which could then cause a motor conflict.
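The delayed-reward difficulty can be seen in the standard discounted-return calculation used in reinforcement learning. With discount factor gamma, a reward arriving k steps after an action contributes only gamma**k to that action's estimated return, so credit for a long-delayed reward is heavily diluted compared to an immediate one. The numbers below are a generic illustration, not figures from the DeepMind paper:

```python
def discounted_returns(rewards, gamma=0.9):
    """Compute G_t = sum over k >= 0 of gamma**k * r_{t+k} for each step t,
    working backwards from the final reward."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# Reward one step after the action: the action gets credit 0.9.
immediate = discounted_returns([0, 1], gamma=0.9)
# Reward ten steps after the action: credit shrinks to 0.9**10 ~ 0.35.
delayed = discounted_returns([0] * 10 + [1], gamma=0.9)
```

A game like Breakout delivers its reinforcement signal almost immediately after the relevant paddle movement; in Ms. Pac-Man the consequences of a choice may surface many steps later, which is exactly where this exponential dilution bites.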
In conclusion, let me say that I am impressed with the ability of Google's DeepMind algorithm to learn relatively complex tasks using only reinforcement signals. I am impressed because it is a useful algorithm and it is amazing that it works as well as it does. It is a sign that machines will one day be able to perform much more complex tasks as well as or better than humans. But I think that deep learning is yet another red herring on the road to true AI. It is going to be a costly success in the end because it is leading the AI community in the wrong direction. Mainstream AI has reached a point where its tricks are too good for its own good. But fortunately (or unfortunately, depending on one's perspective) for the world, mainstream AI is not the be-all of AI research.
Google's DeepMind Masters Atari Games
From Pixels to Actions: Human-level control through Deep Reinforcement Learning
No, a Deep Learning Machine Did Not Solve the Cocktail Party Problem