In this article, I argue that mainstream artificial intelligence is about to enter a new AI winter because, in spite of claims to the contrary, researchers are still using a representational approach to intelligence, also known as symbolic AI or GOFAI. This is a criticism that Hubert Dreyfus has been making for half a century, to no avail. I further argue that the best way to get rid of the representationalist baggage is to abandon the observer-centric approach to understanding intelligence and adopt a brain-centric approach. On this basis, I conclude that timing is the key to unlocking the secrets of intelligence.
The World Is its Own Model
Hubert Dreyfus is a professor of philosophy at the University of California, Berkeley. He has been the foremost critic of artificial intelligence research (What Computers Still Can't Do) since its early days, and the AI community hates him for it. Here we are, many decades later, and Dreyfus is still right. Drawing from the work of the famed German philosopher Martin Heidegger and the French existentialist philosopher Maurice Merleau-Ponty, Dreyfus's argument has not changed after all these years. Using Heidegger as a starting point, he argues that the brain does not create internal representations of objects in the world. The brain simply learns how to see the world directly, something that Heidegger discussed in terms of presence-at-hand and readiness-to-hand. Dreyfus gave a great example of this in his paper Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian (pdf). He explained how roboticist Rodney Brooks solved the frame problem by moving away from the traditional but slow model-based approach to a non-representational one:
The year of my talk, Rodney Brooks, who had moved from Stanford to MIT, published a paper criticizing the GOFAI robots that used representations of the world and problem solving techniques to plan their movements. He reported that, based on the idea that "the best model of the world is the world itself," he had "developed a different approach in which a mobile robot uses the world itself as its own representation – continually referring to its sensors rather than to an internal world model." Looking back at the frame problem, he writes:

And why could my simulated robot handle it? Because it was using the world as its own model. It never referred to an internal description of the world that would quickly get out of date if anything in the real world moved.

Deep Learning's GOFAI Problem
By and large, the mainstream AI community continues to ignore Dreyfus and his favorite philosophers. Indeed, they ignore everyone else, including psychologists and neurobiologists who are more than qualified to know a thing or two about intelligence and the brain. AI's biggest success, deep learning, is just GOFAI redux. A deep neural network is actually a rule-based expert system. AI programmers just found a way (gradient descent, fast computers and lots of labeled, pre-categorized data) to create the rules automatically. The rules are of the form "if A, then B," where A is a pattern and B is a label or symbol representing a category.
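The "if A, then B" claim can be illustrated with a toy sketch. The patterns, labels, and weights below are invented for the illustration and are not taken from any real trained model; the point is only that a classifier's input-to-label mapping can behave like a lookup table of rules, and that both forms fail oddly on inputs no rule covers.

```python
import numpy as np

# Explicit expert-system style: a rule table mapping patterns to labels.
patterns = {
    (1, 0, 1): "cat",
    (0, 1, 1): "dog",
}

def rule_based(x):
    # Brittle by design: no matching rule means outright failure.
    return patterns.get(tuple(x), "unknown")

# A one-layer "network" whose weights were fit (here, hand-set for the
# sketch) so that its argmax reproduces the same pattern-to-label rules.
W = np.array([[ 1.0, -1.0, 0.5],   # "cat" unit
              [-1.0,  1.0, 0.5]])  # "dog" unit
labels = ["cat", "dog"]

def network(x):
    scores = W @ np.array(x, dtype=float)
    return labels[int(np.argmax(scores))]

# On the patterns it was "trained" for, the network matches the rule table.
for x in [(1, 0, 1), (0, 1, 1)]:
    assert rule_based(x) == network(x)

# On a pattern with no rule, the table says "unknown" while the network
# still confidently emits one of its known labels.
print(rule_based((1, 1, 0)), network((1, 1, 0)))
```

Note that the network has no way to say "I don't know": argmax always picks some label, which is one way of reading the brittleness discussed below.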
The problem with expert systems is that they are brittle. Presented with a situation for which there is no rule, they fail catastrophically. This is what happened back in May to one of Tesla Motors' cars while on autopilot. The neural network failed to recognize a situation and caused a fatal accident. This is not to say that deep neural nets are bad per se. They are excellent in controlled environments, such as the factory floor, where all possible conditions are known in advance and humans are kept at a safe distance. But letting them loose in the real world is asking for trouble.
As I explain below, the AI community will never solve these problems until they abandon their GOFAI roots and their love affair with representations.
The Powerful Illusion of Representations
The hardest thing for AI experts to grasp is that the brain does not model the world. They have all sorts of arguments to justify their claim that the brain creates representations of objects in the world. They point out that fMRI scans can pinpoint areas in the brain that light up when a subject is thinking about a word or a specific object. They argue that imagination and dreams are proof that the brain creates representations. These are powerful arguments and, in hindsight, one cannot fault the AI community too much for believing in the illusion of representations. But then again, it is not as if knowledgeable thinkers, such as Hubert Dreyfus, have not pointed out the fallacy of their approach. Unfortunately, mainstream AI is allergic to criticism.
Why the Brain Does Not Model the World
There are many reasons. Here are a few.
- The brain has to continually sense the world in real time in order to interact with it. The perceptions only last a short time and are mostly forgotten afterwards. If the brain had a stored (long-term) model of the world, it would only need to update the model occasionally. There are not enough neurons in the brain to store a model of the world. Besides, the brain's neurons are too slow to engage in any complex computations that an internal model would require.
- It takes the brain a long time (years) to build a universal sensory framework that can instantly perceive an arbitrary pattern. However, when presented with a new pattern (which happens almost all the time, since we rarely see exactly the same thing twice), the cortex instantly adapts its existing memory structures to see the new pattern; no new structures are learned. A neural network, by contrast, must be trained on many samples of the new pattern. It follows that the brain does not learn to create models of objects in the world. Rather, it learns how to sense the world by figuring out how the world works.
- The brain should be understood as a complex sensory organ. Saying that the brain models the world is like saying that a sensor models what it senses. The brain builds a huge collection of specialized sensors that sense all sorts of phenomena in the world. The sensors are organized hierarchically, but they are still just sensors (detectors) that respond directly to specific sensory phenomena in the world. For example, we may have a high-level sensor that fires when grandma comes into view, but it is not a model of grandma. Our brain cannot model anything outside itself because our eyes do not see grandma; they just sense changes in illumination. To model something, one must have access to both the subject and the model: an artist can model a subject by looking at both the subject and the painting. The brain has no such access. It only has the signals from its senses to work with.
The most crippling mistake that most AI researchers make is that they try to understand intelligence from the point of view of an outside observer. Rather, they should try to understand it from the point of view of the intelligence itself. They need to adopt a brain-centric approach to AI as opposed to an observer-centric approach. They should ask themselves, what does the brain have to work with? How can the brain create a model of something that it cannot see until it learns how to see it?
Once we put ourselves in the brain's shoes, so to speak, representations no longer exist because they make no sense. They simply disappear.
Timing is the Key to Unsupervised Learning
The reason that people like Yann LeCun, Quoc Le and others in the machine learning community are having such a hard time with unsupervised learning (the kind of learning that people do) is that they do not try to "see" what the brain sees. The cortex only has discrete sensory spikes to work with. It does not know or care where they come from. It just has to make sense of the spikes by figuring out how they are ordered. Here is the clincher. The only order that can be found in multiple sensory streams of discrete signals is temporal order: they are either concurrent or sequential. Timing is thus the key to unsupervised learning and everything else in intelligence.
One only has to take a look at the center-surround design of the human retina to realize that the brain is primarily a complex timing mechanism. It may come as a surprise to some that we cannot see anything unless there is motion in the visual field. This is the reason that the human eye is continually moving in tiny movements called microsaccades. Movements in the visual field generate precisely timed spikes that depend on the direction and speed of the movements. The way the brain sees is completely different from the way computer vision systems work. They are not even close.
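A minimal sketch of the change-detection idea: units fire only where luminance changes between two "frames", so a perfectly still image produces no spikes at all, while a one-pixel shift (standing in for a microsaccade) produces many. The image size, threshold, and shift are arbitrary assumptions for the sketch; real retinal center-surround processing is far more intricate.

```python
import numpy as np

# Units fire where the absolute change in luminance exceeds a threshold.
def spikes_from_change(prev, curr, threshold=0.05):
    return np.abs(curr - prev) > threshold

rng = np.random.default_rng(0)
frame = rng.random((8, 8))            # a random "image" of luminance values

static = spikes_from_change(frame, frame)   # identical frames: no motion
shifted = np.roll(frame, 1, axis=1)         # one-pixel shift, a stand-in
moved = spikes_from_change(frame, shifted)  # for a microsaccade

print(int(static.sum()))  # 0: a perfectly still image generates no spikes
print(int(moved.sum()))   # many units fire once there is motion
```

The sketch also hints at why spike timing carries information: how soon and where each unit fires depends on the direction and speed of the shift.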
New AI Winter in the Making
Discrete signal timing should be the main focus of AI research, in my opinion. It is very precise in the brain, on the order of milliseconds. This is something that neurobiologists and psychologists have known about for decades. But the AI community thinks they know better. They don't. They are lost in a world of their own making. Is it any wonder that their field goes from one AI winter to the next? Artificial intelligence research is entering a new winter as I write, but most AI researchers are not aware of it.
See also:
- Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI
- Why Deep Learning Is a Hindrance to Progress Toward True AI
- In Spite of the Successes, Mainstream AI is Still Stuck in a Rut