Tuesday, September 27, 2011

Physicists Don't Know Shit

Was it Just a Guess?

The story about the faster-than-light neutrino experiment conducted by CERN physicists is starting to really get under my skin. What this fiasco really shows is that physicists do not really know what they're talking about. They need to answer the following questions:

- Is there a speed limit, yes or no?
- If yes, why?
- If no, why not?
- If yes, why is c the limit?

Obviously, physicists are either not sure or they don't really understand why there is a speed limit in Einstein's relativity. Or maybe Einstein did not provide a good enough explanation to convince them one way or another. Was it just a guess all along, with no actual understanding as to why? It sure seems that way, doesn't it? Otherwise, this neutrino experiment would never have made the news. Physicists would know whether or not it was a mistake. Why all the fuss?

Deep Ignorance

From my vantage point, there is no question that physicists don't really understand motion. If you ask any physicist to explain why a body in inertial motion remains in motion, you'll come face to face with abject ignorance, if not outright superstition. Is this what we pay our physicists for? Ignorance? Aren't they supposed to understand these things? Their ignorance is deep and in your face, and yet they feel free to conjure up all sorts of Star Trek voodoo physics whenever they feel like it: time travel, parallel universes, wormholes, cats that are both dead and alive, and all that other nonsense.

The Truth about Motion

Now, if the CERN neutrino physicists really understood motion, they would know that, not only is c the fastest possible speed in the universe, it is also the slowest possible speed. Nothing can move faster or slower than c. It is the only possible speed. Why? Because the universe is discrete. Read Physics: The Problem with Motion if you're interested in having a real understanding of motion. You don't understand motion even if you think you do.

See Also:

Why Einstein's Physics Is Crap
Nothing Can Move in Spacetime
Physics: The Problem with Motion
Why Space (Distance) Is an Illusion
How Einstein Shot Physics in the Foot
Sitting on Mountain of Crap, Wasting Time

Saturday, September 24, 2011

CERN Shenanigans

The Con Artists Are at it Again

Unless you've been living under a rock the last few days, you've probably already heard the news that faster-than-light neutrinos are threatening to disprove Einstein's theory of relativity. I have a very cynical view of this crap. I say crap because that's precisely what it is. I could be wrong, but my view is that the crackpots and con artists at CERN and elsewhere are facing the real prospect of seeing their multi-billion-dollar budgets slashed due to the current worldwide financial meltdown. This is their way of saying, "look folks, we're doing really exciting physics right now. So keep the money flowing in." Later, they'll hold some press conference in which they'll announce that it was all a false alarm and that Einstein's physics is still the best thing that ever happened to humanity. Yeah, right. There is plenty wrong with Einstein's physics, but not being able to go faster than the speed of light is not it. You can bet the farm on this one.

No field of science gets away with more in-your-face crackpottery and outright deception than the physics community. Those jackasses are making more money selling crap to an unsuspecting public than any other group in the history of science. If those clowns really understood what they were talking about, they would know that, not only is it impossible to travel faster than the speed of light in a vacuum, it is also impossible to move slower. That's right, there is only one speed in the universe and that speed is c. I just wanted to chime in and let you know what I think.

See Also:

Why Einstein's Physics Is Crap
Nasty Little Truth About Spacetime Physics
Nothing Can Move in Spacetime
Physics: The Problem with Motion
Why Space (Distance) Is an Illusion
How Einstein Shot Physics in the Foot
Sitting on Mountain of Crap, Wasting Time

Sunday, September 18, 2011

The Amazing Power of Concurrent Pattern Recognition

The Key to Sequence Learning

In view of what I wrote in my previous post, some of my readers may be wondering what it is exactly that I discovered. Without giving away the secret, let me say that the key to learning sequences of sensory events is to come up with the right method for concurrent pattern recognition. Once you've got that figured out, then sequence learning becomes a breeze. In fact, sequence learning becomes just a recording process in which each recorded event is a recognized pattern of concurrent signals. And as with any recording process, precise and correct timing is paramount.

That being said, this is not what makes sequence learning easy. What makes it easy is that, with the right concurrent pattern recognizer, there is no need to search the sequence space for good or valid sequences. In other words, every sequence is a good sequence. Just record them all and let the Branch mechanism sort them out.
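
To make the recording idea concrete, here is a toy sketch in Python. This is my own illustration, not the actual Rebel Cortex code; the `recognize` function is a stand-in for the concurrent pattern recognizer whose details I'm keeping to myself.

```python
# Toy illustration only: sequence learning as a recording process.
# The recognizer and all names here are placeholders, not the actual
# Rebel Cortex design.

def recognize(signals, patterns):
    """Return the id of the first pattern whose inputs all fired together."""
    active = frozenset(signals)
    for pid, inputs in patterns.items():
        if inputs <= active:  # every input signal arrived concurrently
            return pid
    return None

def record_sequence(signal_stream, patterns):
    """Record every recognized pattern along with its tick. There is no
    search for 'good' sequences: every sequence is recorded as-is."""
    sequence = []
    for tick, signals in enumerate(signal_stream):
        pid = recognize(signals, patterns)
        if pid is not None:
            sequence.append((tick, pid))  # precise timing is preserved
    return sequence
```

Note that the recorder does no filtering at all; sorting out the useful sequences is left to the Branch mechanism.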

This is all I can divulge at this time. Later.

See Also:

I Was Wrong About Pattern and Sequence Learning

Wednesday, September 14, 2011

I Was Wrong About Pattern and Sequence Learning

Major Revision

In view of my recent decision to use my understanding of the brain to raise funds for my research, forgive me for not revealing too much about my newly formulated theory of sensory learning and memory formation. It turns out that three of my main assumptions about sensory learning were incorrect. I am revising Rebel Cortex to reflect my new understanding. Here's what I was wrong about.

Pattern Learning

Previously, I claimed that pattern learning should be done in conjunction with sequence learning. Well, I was wrong about that. Pattern learning occurs independently of sequence learning. However, there is a trick to it. It's an amazingly simple trick, once you know what it is. But it is not an easy one to figure out: a needle-in-a-haystack sort of thing. I still maintain that Numenta's approach, the one that calls for using a hierarchy of patterns starting with small areas of the visual field, is wrong. Heck, it's not even in the ballpark of being right. Again, as I've explained in the past, patterns only exist at the bottom or entry level of the memory hierarchy. The upper levels consist only of sequences and sequences of sequences.
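
To illustrate the claim about the hierarchy, here is a toy sketch in Python. This is my own illustration (not Numenta's HTM and not the actual Rebel Cortex design): patterns exist only at the entry level, and every level above holds sequences, whether of patterns or of lower-level sequences.

```python
# Toy illustration of the memory hierarchy described above. The class
# names and the depth convention are mine, chosen for clarity only.

from dataclasses import dataclass, field

@dataclass
class Pattern:                # bottom (entry) level only
    inputs: frozenset         # concurrent sensory signals

@dataclass
class Sequence:               # every upper level
    items: list = field(default_factory=list)  # patterns or sub-sequences

def depth(node):
    """Levels above the entry level: a pattern is 0, a sequence of
    patterns is 1, a sequence of sequences is 2, and so on."""
    if isinstance(node, Pattern):
        return 0
    return 1 + max((depth(i) for i in node.items), default=0)
```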

Quick prediction: All of Jeff Hawkins' money, entrepreneurial skills and atheistic convictions will not prevent Numenta's eventual demise. Bummer.

Signal Separation

I claimed many times that sensory signals should first go through a 10-stage signal separation layer that uses fixed-time-scale correlations to separate signals from their streams. I was only partially right about that. There is only a need for a single, fixed-time-scale separation stage.

Sequence Learning

I wrote on several occasions that the primary fitness criterion for sequence learning was frequency. I based this on the observation that sequences are often repeated; therefore, the nodes in a sequence will often have the same frequency. However, this is only partially true. It turns out that the human brain can learn to remember a sequence of events even if it occurred only once. While frequency is important to the long-term retention of memorized sequences, it is not how we learn them in the first place. Hint: it's simpler than you think, much simpler.

However, this does not vindicate the use of Bayesian or Markovian statistics in sequence learning. While these approaches do result in useful applications, they erect an insurmountable barrier to achieving the ultimate goal of AI research: building a machine with human-like intelligence and behavior.

Coming Up

Well, that's all I'm going to say about this topic, for now. Like I said, I intend to reveal all at a future date, but not before I make some cash for my research. I'm still busy writing code for the Rebel Speech Recognition project. I may be ready to release a demo sooner than I anticipated. Stay tuned.

See Also:

Rebel Speech Update, September 11, 2011
Rebel Speech Recognition
Rebel Cortex
Invariant Visual Recognition, Patterns and Sequences
Rebel Cortex: Temporal Learning in the Tree of Knowledge

Sunday, September 11, 2011

Rebel Speech Update, September 11, 2011

Breakthrough

All right. This is going to be short because I don't want to tip my hand. Right now, I'm angry and excited at the same time. I'm angry at myself for not being true to my own convictions regarding the underlying simplicity of the brain. I have always strongly believed that the principles that govern the functioning of the brain are orders of magnitude simpler than almost everybody in the AI business would be willing to believe. Yesterday, while writing code for Rebel Speech, it occurred to me that my approach was way too complicated to be the correct one. I reasoned that one or more of my assumptions had to be wrong. I decided to review my understanding of the sensory cortex and something amazing happened. It was almost as if the scales fell from my eyes and I was able to see clearly for the first time. I suddenly realized that sensory learning in the tree of knowledge (TOK) is simple almost to the point of absurdity. I am laughing with excitement as I write this post.

Two things triggered this epiphany in my mind. The first was my growing realization that the ancient Biblical metaphorical texts about the brain (yes, I am a Christian and I believe that the Bible uses clever metaphors to explain the organization and working of the brain) did not support part of my current design for Rebel Cortex. I was forcing some of the metaphors to fit my own erroneous understanding of certain brain functions rather than the other way around. I could kick myself in the ass for having been so dumb, I swear. The second was my inability to explain a couple of bewildering results (see previous post) I was getting from my recent experiments with the Rebel Speech program.

Coming Up

I cannot reveal the full nature of my breakthrough at this time. All I can say is that you can expect a working Rebel Speech demo program within a month or two, time permitting. Hang in there.

See Also:

Rebel Speech Recognition
Rebel Cortex
Invariant Visual Recognition, Patterns and Sequences
Rebel Cortex: Temporal Learning in the Tree of Knowledge

Thursday, September 8, 2011

Rebel Speech Update, September 8, 2011

Signal Separation Blues

Well, it seems that I may have to revisit my signal separation hypothesis. I can't get it to work properly. I get the best results by completely bypassing the separation layer and feeding the sensory signals directly to the bottom level of the tree of knowledge (TOK), aka hierarchical memory. If you remember, I wrote a while back that sensory signals must first go through a fixed time scale separation layer before they are fed to the TOK where they are organized according to a variable time scale. At this point, I'm thinking that the problem may be due to some bug in my code (believe me, this shit is complicated) that is preventing the separation layer from doing what it's supposed to do.
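
For the record, here is roughly what I mean by a fixed-time-scale separation stage, sketched in Python. This is a simplified guess at my own hypothesis, not the actual (possibly buggy) code: incoming spikes are bucketed into fixed-width time windows, and each window's signals are handed to the TOK as concurrent.

```python
# Simplified sketch of a fixed-time-scale separation stage. The
# window-bucketing scheme is an illustration of the hypothesis, not
# the actual Rebel Speech implementation.

def separate(spikes, window):
    """Group (time, signal_id) spikes into fixed-width time windows."""
    buckets = {}
    for t, sid in spikes:
        buckets.setdefault(int(t // window), set()).add(sid)
    # return windows in time order, ready to feed to the bottom of the TOK
    return [buckets[k] for k in sorted(buckets)]
```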

Patterns and Sequences

For now, I am putting signal separation on the back burner and moving full speed ahead on coding the TOK. Learning sequences of patterns, while easy in principle, is a pain in the ass to program, and precisely timing events in a von Neumann serial computer is a daunting task. And I am not even talking about the task of implementing a spiking neural network framework and all the classes required for a variety of neurons and synapses. And let's not forget the workhorse thread that runs the parallel simulation underneath and the management module that is required for making and severing connections without causing a fatal runtime exception. Luckily, I pretty much had most of the neural stuff done from my work on Animal.

Surprisingly enough, it turns out that, during learning, the system must rely on deterministic timing. In other words, a pattern is considered recognized only if all of its input signals arrive concurrently. Contrast this with the probabilistic Bayesian learning approach used in Numenta's HTM and the probabilistic Hidden Markov Model used in most commercial speech recognition programs. As I have written previously, in the Rebel Cortex intelligence hypothesis, probability only comes into play during actual recognition, when multiple sequences and branches of the hierarchy compete in a winner-take-all struggle to become active.

Deterministic timing does not mean, however, that two concurrent signals must always occur concurrently. It means that they must occur concurrently often enough to be captured by the learning mechanism and recognized whenever they reoccur. Only the learning mechanism needs to be deterministic, not the signals and certainly not the act of recognition.
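
Here is the learning rule above in toy Python form. The counting scheme and the threshold are simplifications for illustration, not the actual implementation: a candidate pattern is captured only when its signals arrive in the exact same tick, and it is kept once that coincidence has occurred often enough.

```python
from collections import Counter

# Sketch of the deterministic capture rule. The threshold value and
# the frozenset bookkeeping are illustrative assumptions.

def learn_patterns(signal_stream, threshold):
    """Count exact co-occurrences of signal sets, one set per tick.
    Keep any set seen at least `threshold` times. The signals need not
    always fire together -- just often enough to be captured."""
    counts = Counter()
    for signals in signal_stream:
        if signals:
            counts[frozenset(signals)] += 1
    return {s for s, n in counts.items() if n >= threshold}
```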

One of the problems that I am wrestling with has to do with determining how many sequences should be created in memory. I am considering limiting the number of branches that a sensory input can have to a fixed preset value. I don't think this is the right way to do it, but it will have to do for the time being. I've got more pressing issues to deal with.
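
The stopgap I have in mind looks something like this in Python. The cap value is a made-up placeholder, not a number I have settled on:

```python
# Stopgap branch cap, sketched for illustration. MAX_BRANCHES is an
# arbitrary placeholder value, not a tuned parameter.

MAX_BRANCHES = 8

def add_branch(branches, input_id, new_sequence):
    """Attach a new sequence branch to an input unless it's at the cap."""
    existing = branches.setdefault(input_id, [])
    if len(existing) < MAX_BRANCHES:
        existing.append(new_sequence)
        return True
    return False  # at capacity; drop the sequence for now
```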

Results

Nothing lifts one's spirits more than obtaining good results. Although I have only worked on coding the bottom level of the TOK (it's the hardest part), I am already getting tantalizing glimpses of glorious things to come. Patterns form quickly when I repeatedly speak the numbers 1 to 10 into the microphone. This is encouraging because I am using only 4 audio amplitude levels and only the lower 24 frequencies out of a spectrum of 512 (11 kHz sampling rate). One peculiar thing that I noticed is that the learned sequences rapidly taper off. What I mean is that one of the patterns in a sequence will have, say, 25 concurrent inputs, but the number quickly decreases to 3 or 4 in the other patterns. Another thing is that the learned sequences contain at most 5 or 6 patterns. I am not entirely sure, but I think that the learning mechanism may be automatically limiting the pattern sequences to short phonemes. It's kind of scary when you don't fully understand what your own program is doing. I seriously need to write some code to graphically represent the sequences in real time. That would give me a visual understanding of what's going on.
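
For readers curious what a front end with those figures might look like, here is a rough Python sketch. This is a guess built from the numbers quoted above, not the actual Rebel Speech code: a 512-bin spectrum, keeping only the lowest 24 frequencies and quantizing each to one of 4 amplitude levels.

```python
import numpy as np

# Illustrative front end using the figures quoted above (512-bin
# spectrum, lowest 24 frequencies, 4 amplitude levels). Not the actual
# Rebel Speech code; the normalization scheme is an assumption.

N_BINS, N_KEPT, N_LEVELS = 512, 24, 4

def frame_to_signals(frame):
    """Turn one 512-sample audio frame into (bin, level) sensory signals."""
    spectrum = np.abs(np.fft.rfft(frame, n=N_BINS))[:N_KEPT]
    peak = spectrum.max()
    if peak == 0:
        return set()  # silence: no signals fire
    levels = np.minimum((spectrum / peak * N_LEVELS).astype(int),
                        N_LEVELS - 1)
    return {(b, int(l)) for b, l in enumerate(levels) if l > 0}
```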

Coming Up

The next task on my agenda is to implement the upper levels of the memory hierarchy and the branch mechanism proper. This will be the really fun part because, once that is working properly, I will know whether or not Rebel Speech is correctly recognizing my utterances. It's exciting. Stay tuned.

See Also:

Rebel Speech Recognition
Rebel Cortex
Invariant Visual Recognition, Patterns and Sequences
Rebel Cortex: Temporal Learning in the Tree of Knowledge