Tuesday, December 31, 2013

Secrets of the Holy Grail, Part V

Part I, II, III, IV, V

Abstract

Previously in this article, I explained how the brain's perceptual learning and memory storage system is organized. In today's post, I reveal the surprising source of my understanding. Here goes.

Liars and Thieves

In Part I, I wrote:
It turns out that the brain performs at least two essential functions while we are asleep: it purges liars (bad predictors) from sequence memory and eliminates thieves (redundant connections) from pattern memory.

Note: I will explain my choice of the liars and thieves metaphors in an upcoming post.
I am not going to go through a detailed explanation of how I arrived at my understanding because I would have to write a book. I probably will, eventually, but not today. I'm too busy. :-) Instead, let me offer just one example to give you an idea of what I'm talking about. Here's where I got the idea for the liars and thieves metaphors:
Then said he unto me, This is the curse that goeth forth over the face of the whole earth: for every one that stealeth shall be cut off as on this side according to it; and every one that sweareth shall be cut off as on that side according to it.

I will bring it forth, saith the Lord of hosts, and it shall enter into the house of the thief, and into the house of him that sweareth falsely by my name: and it shall remain in the midst of his house, and shall consume it with the timber thereof and the stones thereof.

Source: Zechariah 5
After years of agonizing and thinking about it, I finally figured out, in the light of other passages, what the liars and thieves metaphors were all about. One day, some neurons in my cortex just made the right connections, and then, without warning, I understood what it all meant. Just like that.

A Little History on How It All Started

In December 1999, I made an amazing discovery, one that would transform my life. As a Christian, I had long suspected that some of the metaphorical passages in several Old and New Testament books were scientific in nature. I knew that, one day, their true meaning would be deciphered to reveal world-changing scientific knowledge. To me, it was inconceivable that Yahweh would go to great lengths to hide certain information for future advanced generations just for grins and giggles. I also felt that the various interpretations of the metaphors that I had seen in the Christian literature were not only stale and nonsensical; they did something even more reprehensible: they insulted Yahweh's intelligence with their banality.

At the time, I was researching artificial intelligence and, after many years of study, I had developed the beginning of a theory of intelligence. Things were starting to make a little sense, and then I came face to face with the brick wall of my own ignorance. Progress just stopped. One evening, while reading the book of Revelation, it suddenly occurred to me that chapters 2 and 3 (known commonly as the letters to the seven churches of Asia) were a detailed metaphorical description of the organization and working of the left hemisphere of the brain. Color me crazy (I don't care), but I could see a clear correspondence between a couple of the metaphors and objects in my own AI model. I trembled with excitement.

Where others saw strangely worded admonitions to a handful of first-century Christian churches, I saw sensors, effectors, signals, hierarchies, patterns and sequences, perceptual learning, motor learning, success and failure, fitness criteria, etc. I knew nobody would believe it but, to me, the general meaning of the text was unmistakable. It just needed to be fleshed out, that's all. My investigation of the message to the Church of Sardis soon led me to discover the book of Zechariah (The Lord Remembers), an amazing fountain of knowledge about memory organization, attention and pattern recognition. I was overwhelmed.

I remember the time I finally figured out that the two olive trees on both sides of the golden lampstand did not represent the trees of knowledge in the left and right hemispheres of the brain, as I had originally assumed. I had no real reason to make that assumption other than the observation that the brain consisted of two hemispheres. The analogy seemed to fit at the time but it was wrong. It took me years to realize I was hopelessly lost. I was forced to retrace my steps to that fork in the road and take the other path, the one that I had previously dismissed. It was like finding an opening out of the forest.

I could see clearly for the first time. I understood that the brain's memory consisted of two hierarchies, one for concurrent patterns (fig trees) and one for sequences (vine).
In that day, saith the Lord of hosts, shall ye call every man his neighbour under the vine and under the fig tree. Zechariah 3:10
I understood the importance and meaning of the Branch metaphor in Zechariah's text with respect to knowledge construction, attention and invariant recognition. I understood the metaphors of the flying scroll, the thieves and the liars. I understood why sequence memory was a hierarchy of seven-node chunks or building blocks. I understood what the brain was up to during sleep and why. I understood that the mainstream Bayesian approach to AI was deeply and fundamentally flawed, in spite of its successes. I knew why the brain went to great lengths to eliminate uncertainty.

I understood a lot of things. It felt sort of like what Mr. Anderson might have felt in the Matrix movie, when he wakes up from a virtual reality session and announces that he knows kung fu.

Not long after that fateful evening in December 1999, my elation quickly faded when I realized that it was not going to be that easy to extract the full meaning of those ancient metaphors. I noticed that almost every word used in the metaphorical texts was pregnant with powerful and subtle meanings that were easily overlooked. I was forced to commit myself to carefully analyze everything in detail. Progress came in fits and starts over the years, but I kept at it, and I think I have come a long way. I don't yet understand it all, but it's just a matter of time before I do. Almost everything I wrote in this article regarding the brain came from my interpretation of the ancient metaphors.

The Future

Let me conclude by saying that I am not asking anybody to believe me. Whether you take it or leave it is up to you. But do keep your ears and eyes open. It will get a lot more interesting in the not-too-distant future.

See Also

The Holy Grail of Robotics
Raiders of the Holy Grail
Jeff Hawkins Is Close to Something Big

Monday, December 30, 2013

Secrets of the Holy Grail, Part IV

Part I, II, III, IV, V

Abstract

In Part III, I described the hierarchical structure of sequence memory and explained why patterns are the key to sequence learning. In this post, I explain invariant object recognition, the difference between short- and long-term memory, and how to catch a liar. But first, a word about remembering and forgetting.

Remembering and Forgetting

Unlike patterns, which, once learned, remain permanently in memory, learned sequences are slowly forgotten, i.e., disconnected, unless they are repeated often. Repetition strengthens the connections between nodes in a sequence. If the strength of a connection reaches a predetermined threshold, it becomes permanent. There are two ways the connections in a sequence can be repeated: via sensory stimulation or via internal playback. So even sequences that receive little sensory stimulation can become permanent if they are played back internally. The latter happens each time the brain focuses on a particular branch in the memory hierarchy.
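
To make this more concrete, here is a minimal sketch, in Python, of the strengthen-and-decay idea described above. The constants (reinforcement step, decay rate, permanence threshold) are illustrative placeholders of my own choosing, not values taken from the theory.

```python
# Minimal sketch of the remembering/forgetting rule described above.
# The constants are illustrative placeholders, not claimed values.

REINFORCEMENT = 0.1          # strength gained on each repetition
DECAY_RATE = 0.01            # strength lost per unit of idle time
PERMANENCE_THRESHOLD = 1.0   # past this point, the connection never decays

class Connection:
    """A connection between two nodes in a learned sequence."""

    def __init__(self):
        self.strength = REINFORCEMENT   # a new connection starts out weak
        self.permanent = False

    def reinforce(self):
        # Called on each repetition, whether the repetition comes from
        # sensory stimulation or from internal playback.
        if self.permanent:
            return
        self.strength += REINFORCEMENT
        if self.strength >= PERMANENCE_THRESHOLD:
            self.permanent = True

    def decay(self, elapsed_time):
        # Called periodically; connections that are not reinforced are
        # slowly forgotten, i.e., disconnected.
        if not self.permanent:
            self.strength -= DECAY_RATE * elapsed_time

    def is_forgotten(self):
        return not self.permanent and self.strength <= 0.0
```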

In Part III, I wrote that the sequence learner starts with short intervals before moving on to progressively longer intervals. That is the first of the three fitness criteria used in sequence learning. Forgetting is the second. A node in a sequence under construction is often presented with multiple successor candidates. Initially, the learning system has no way of knowing which of the candidates are legit, if any. One way to eliminate bad candidates is to slowly forget them: a node cannot survive unless it is frequently reinforced via sequence repetition. But this raises a serious question: what happens to infrequent sequences or to sequences that never repeat? The answer is that they must be repeated (replayed) internally in order to be retained. However, the only way to be really sure whether they are good or bad is to use the third fitness criterion: find out whether they lead to a contradiction (see Catching Liars below).

Invariant Object Recognition

Hold your hand in front of your eyes and slowly rotate your wrist. As you do so, your visual cortex is presented with a sequence of images. Even though each successive image is different from the others, your brain does not think of each image as representing a different object. Somewhere, in your cortex, you still know that you are looking at your hand regardless of its orientation or distance from your eyes. This is called invariant object recognition, probably the most important perceptual capability of the brain. It holds the key to understanding several other aspects of perception such as attention and short-term memory.

Catching Liars

When you rotate your hand, your brain sees the successive patterns as one object because they are linked together within a single package called a branch. The branch is a bundle of linked pattern sequences. But how does the brain know that one sequence should be tied to another in order to form a bundle? The answer turns out to be rather simple: if two sequences have two or more nodes in common, the branch mechanism automatically links them together. The problem is that any of the shared nodes may have been acquired in error. How can we tell? The answer lies in the timing between shared nodes. If two sequences belong to the same branch, their predictions must match. If there is a mismatch, one of the nodes is a liar and must be discarded. Consider the two sequences in the diagram below.
The arrows represent the direction of pattern activations. The red circles are normal sequence nodes and the green circles symbolize shared nodes. The total recorded interval between the two shared nodes must be the same in both sequences. If not, there is a contradiction and the culprit is eliminated. If there is agreement, it means the two sequences represent different facets of the same object and thus belong to the same invariant branch.
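
The consistency test itself is easy to express in code. Below is a minimal sketch, assuming each sequence is recorded as a list of (node, timestamp) pairs; the tolerance parameter is my own addition, since the text does not say how exact the match must be.

```python
# Sketch of the liar-catching test: the recorded interval between two shared
# nodes must be the same in both sequences. A mismatch is a contradiction and
# the offending node must be discarded.

def interval_between(sequence, node_a, node_b):
    """Recorded time between two shared nodes, measured within one sequence."""
    times = {node: t for node, t in sequence}   # sequence = [(node, timestamp), ...]
    return abs(times[node_b] - times[node_a])

def sequences_agree(seq1, seq2, shared_a, shared_b, tolerance=0.0):
    """True if both sequences report the same interval between the shared nodes."""
    d1 = interval_between(seq1, shared_a, shared_b)
    d2 = interval_between(seq2, shared_a, shared_b)
    return abs(d1 - d2) <= tolerance

# Example: two sequences share nodes 'P' and 'Q'.
seq1 = [("A", 0.0), ("P", 1.0), ("B", 2.0), ("Q", 3.0)]   # interval P->Q is 2.0
seq2 = [("C", 0.0), ("P", 0.5), ("Q", 3.5), ("D", 4.0)]   # interval P->Q is 3.0
print(sequences_agree(seq1, seq2, "P", "Q"))  # False: a contradiction, one node lies
```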

In human and animal brains, testing for sequence contradictions occurs during sleep. The reason is that the sequences must be replayed internally during the test, and doing that while awake would disrupt the brain's normal activity. I believe that catching thieves (see Part II) and liars is what is happening during so-called REM sleep.

Short-Term Memory

In the past, psychologists and neurologists have maintained that memory is divided into two separate areas, one for long-term memories and one for short-term memories. My position is that there is only one memory structure for both short- and long-term memories. Short-term memory is merely the currently activated branch, i.e., the one under attention. The brain can focus on only one thing at a time; that is to say, only one branch in the sequence hierarchy can be active at a time, a phenomenon that magicians and pickpockets have exploited over the years. Furthermore, a branch can only remain active for up to about 12 seconds at a time, after which attention must switch to another branch.
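
As a toy illustration of the single-active-branch idea, here is a small sketch in which short-term memory is nothing more than a pointer to the branch currently under attention. The 12-second figure comes from the paragraph above; everything else (names, structure) is my own assumption.

```python
import time

# Toy sketch: short-term memory as the single currently active branch.

MAX_FOCUS_SECONDS = 12.0   # approximate cap mentioned in the text

class Attention:
    def __init__(self):
        self.active_branch = None    # only one branch can be active at a time
        self.focused_since = None

    def focus(self, branch):
        # Switching attention deactivates whatever branch was active before.
        self.active_branch = branch
        self.focused_since = time.monotonic()

    def must_switch(self):
        # After roughly 12 seconds, attention has to move to another branch.
        if self.active_branch is None:
            return False
        return time.monotonic() - self.focused_since >= MAX_FOCUS_SECONDS
```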

Coming Up

As I promised in Part I, in my next post, I will reveal where I got my knowledge of the brain's organization.

See Also

The Holy Grail of Robotics
Raiders of the Holy Grail
Jeff Hawkins Is Close to Something Big

Tuesday, December 24, 2013

Secrets of the Holy Grail, Part III

Part I, II, III, IV, V

Sorry for the long hiatus. I posted the last installment of this multi-part article way back in March. I have been meaning to write more on the topic, but a series of unfortunate events in my life has slowed me down a bit.

Abstract

In Part II, I explained how to do pattern learning and how to prevent patterns in the hierarchy from getting bigger than they need to be. In today's post, I explain sequence learning and the organization of sequence memory. However, be advised that there are a couple of things about sequence memory that I want to keep secret for the time being.

A Few Observations

A sequence is a string of consecutive nodes representing successive pattern detections. Sequence memory is organized hierarchically, like a tree. A sequence is divided into seven-node sections, although the sections at either end of a sequence may have fewer than seven nodes. These are the building blocks of memory. Why seven nodes per section? It's because this is the capacity of short-term memory. But regardless of its level in the hierarchy, a block is ultimately a sequence of patterns.
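
To illustrate the chunking, here is a minimal sketch of how a long string of pattern detections might be divided into seven-node building blocks. It is only an approximation of the scheme described above: in this sketch only the trailing block can be shorter than seven nodes.

```python
# Sketch of splitting a sequence of pattern detections into seven-node blocks.

BLOCK_SIZE = 7   # the presumed capacity of short-term memory

def split_into_blocks(sequence, block_size=BLOCK_SIZE):
    """Return the sequence as a list of building blocks of at most seven nodes."""
    return [sequence[i:i + block_size] for i in range(0, len(sequence), block_size)]

# Example: a 17-node sequence becomes blocks of 7, 7 and 3 nodes.
nodes = list(range(17))
print([len(block) for block in split_into_blocks(nodes)])   # [7, 7, 3]
```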

Sequences have many functions, and any sequence-learning mechanism should take the following into consideration (see the data-structure sketch after the list):
  • A sequence is used to detect a unique transformation in the sensory space which is manifested as a number of consecutive pattern detections.
  • A sequence is a recording mechanism. It records a memory trace, that is, the precise timing of its last activation.
  • A sequence is a predictive mechanism. The firing of a single node in a sequence is enough to predict the firing of subsequent nodes.
  • A sequence can be used for pattern completion and fault tolerance. The firing of a node in the sequence is enough to compensate for missing signals. This is important when working with noisy and imperfect sensory streams.
  • Sequences, together with the branch mechanism (see Part IV), are part of the invariant recognition mechanism of an intelligent system.
  • A sequence is a sensory motor unit, i.e., an integral part of the goal-oriented behavior mechanism of an intelligent system.
  • The temporal interval between any two consecutive pattern signals can vary.
  • Some sequences repeat far less often than others. Indeed, many sequences will occur only once or twice.
  • Several sequences can share one or more patterns. This is used to join otherwise unrelated sequences together and is part of the mechanism of invariant recognition.
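
To make the list above a little more concrete, here is one possible representation of a sequence node that covers its recording and predictive roles. The field names and types are hypothetical; the text does not specify any particular data structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# One possible (hypothetical) representation of a sequence node.

@dataclass
class SequenceNode:
    pattern_id: int                              # the pattern this node detects
    last_activation: Optional[float] = None      # memory trace: timing of last firing
    predecessors: Dict[int, float] = field(default_factory=dict)  # pattern -> interval
    successors: Dict[int, float] = field(default_factory=dict)    # pattern -> interval

    def fire(self, timestamp: float) -> List[int]:
        """Record the memory trace and return the predicted successor patterns."""
        self.last_activation = timestamp
        return list(self.successors)
```
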
The Power of Patterns

This may sound counterintuitive, but patterns (see Part II for a description of patterns) are the key to sequence learning. This is because patterns are inherently predictive. Patterns are so unique that they normally have just a few predecessors and successors. Most patterns will have only one predecessor and one successor. This is important because it dictates a crucial aspect of sequence learning. Unlike pattern learning, which requires many repetitions, the learning of a sequence (predecessor-successor) needs only one example. In other words, sequence learning can be extremely fast.

Dealing with Imperfection

How can an intelligent system learn a sequence if pattern signals do not always arrive on time? In my opinion, it does not matter that the signals are imperfect as long as they arrive on time some of the time. A single instance of two consecutive signals is sufficient to learn a new sequence. Sequences that lead to a contradiction (I'll explain this in Part IV) or that are not reinforced over time are simply discarded.
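
Here is a minimal sketch of that one-shot rule, assuming the sensory stream delivers (pattern, timestamp) pairs. The example pattern names are made up for illustration.

```python
# Sketch of one-shot sequence learning: a single instance of two consecutive
# pattern detections is enough to record a predecessor-successor link.
# Links that later lead to a contradiction, or are never reinforced, are dropped.

links = {}   # (predecessor, successor) -> recorded interval

def observe(prev_detection, next_detection):
    """prev/next are (pattern_id, timestamp) pairs from the sensory stream."""
    (prev_id, prev_t), (next_id, next_t) = prev_detection, next_detection
    key = (prev_id, next_id)
    if key not in links:             # one example is enough to learn the link
        links[key] = next_t - prev_t

def discard(prev_id, next_id):
    """Called when a link is contradicted or is never reinforced over time."""
    links.pop((prev_id, next_id), None)

# Example: a single observation creates the link immediately.
observe(("door_slam", 0.0), ("dog_bark", 0.4))
print(links)   # {('door_slam', 'dog_bark'): 0.4}
```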

Small Things First

One of the main problems we face in sequence learning is that the interval between any two consecutive pattern signals is variable. It can change with circumstances. For example, the notes of a song can be played fast or slow, but it's still the same song. This is a problem because it makes it almost impossible to determine which pattern precedes which. The solution turns out to be rather simple: the learning system should start with the smallest intervals before slowly moving on to progressively longer intervals.
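
A minimal sketch of the small-things-first idea follows: only pairs of consecutive detections whose separation falls inside the current window are linked, and the window is then widened so that progressively longer intervals get their turn. The window schedule is an illustrative assumption.

```python
# Sketch of 'small things first': learn links with the shortest intervals
# before slowly admitting longer ones. The window values are made up.

def learn_in_stages(detections, windows=(0.1, 0.5, 2.0, 10.0)):
    """detections: list of (pattern_id, timestamp) pairs, sorted by timestamp."""
    learned = set()
    for window in windows:                                   # shortest first
        for (a_id, a_t), (b_id, b_t) in zip(detections, detections[1:]):
            if 0 < b_t - a_t <= window:
                learned.add((a_id, b_id))                    # link learned at this stage
    return learned

# Example: ('A', 'B') is learned in the first stage, ('B', 'C') only later,
# once the window has grown enough to cover the longer interval.
stream = [("A", 0.0), ("B", 0.05), ("C", 1.0)]
print(sorted(learn_in_stages(stream)))   # [('A', 'B'), ('B', 'C')]
```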

Coming Up

In Part IV, I will go over the mechanisms of invariant recognition and short-term memory. I will also explain how to catch a liar, i.e., how to detect contradictions in sequence memory.

See Also

The Holy Grail of Robotics
Raiders of the Holy Grail
Jeff Hawkins Is Close to Something Big