Sunday, November 19, 2017

I Refuse to Work With Materialists And Atheists On AGI

I need neither their money nor their expertise. What I got, no one can take away from me. I have received something special and it is not for sale. I know who my source of knowledge is. If you are a materialist, don't even read my blog. I just thought I'd come right out and say this for the record.

The Curse of Materialism or Why People Like Jeff Hawkins and Geoffrey Hinton Will Never Figure Out AGI

Neurons
I Just Realized Something Funny

Last night, while trying to understand the reasoning that led Jeff Hawkins, the founder of Numenta, to his location hypothesis (which he erroneously believes is the secret to strong AI or AGI), it occurred to me that materialists like Hawkins will never figure out AGI. Mainstream AI researchers all work under the assumption that the physical brain is all there is to the mind and consciousness. It is a crippling delusion that forces them to conflate conscious/spiritual experiences with physical, cause-effect intelligence. I burst out laughing. I find the whole thing irresistibly hilarious.

The Curse of Materialism

Jeff Hawkins is an avowed materialist and atheist. So are probably over 99% of professional AI researchers. They are extremely proud of it and will denigrate and blacklist everyone (including yours truly) who does not believe as they do. It is a particularly dangerous religion because these are people who are hoping to create superintelligent machines to worship. What is funny to me is that they don't realize that they shot mainstream AGI research in the foot with their own gun. As a Christian, I think the irony is exquisite. I'm still laughing as I write.

Hawkins came up with his location hypothesis because he is convinced that the 3D vista that he sees in front of his eyes is somehow represented physically in the circuitry of the brain. In other words, Hawkins believes that the brain models the world. He is a GOFAI scientist whether or not he realizes it. This is precisely the type of AI that the late Professor Hubert Dreyfus railed against; as Dreyfus insisted, the world is its own model. Unfortunately, his words fell on deaf ears. The entire mainstream AI community is laboring under the curse of materialism.

There Is No 3-Dimensional Model of the World in the Brain

The materialist sees a 3D world in front of him. But since he is convinced that the mind is the brain and that everything he experiences is also in the brain, he is forced to conclude that the brain maintains a 3D model of the world. It is a powerful illusion. This is why Hawkins believes that the brain must have special circuitry to generate a location signal.

It is true that we see a 3D vista but there is no 3D vista in the brain or anywhere else. It is supernatural. The brain only works with neuronal spikes. We consciously experience distance but there is no distance in the brain. Distance, space and volume are not physical properties. They are abstract entities that are part of the spirit or soul or whatever you want to call the non-physical entity that allows us to be conscious of certain physical processes in the brain. No, they are not magic. They are part of the Yin-Yang reality that we exist in. If you think distance, space and volume are physical entities, just ask yourself, what are they made of? What are their constituents?

But don't tell any of this to the likes of Jeff Hawkins and Geoffrey Hinton. They are liable to have an apoplectic fit. I will not belabor the point. This is something that requires deep thinking in order to undo the damage done to your minds by your upbringing in a world of lies and deception. Please read Why We Have a Supernatural Soul if you are interested.

PS. I am still laughing, hahaha...HAHAHA...hahaha... Sorry.

See Also:

A Critique of Numenta’s Location Hypothesis
The World Is Its Own Model or Why Hubert Dreyfus Is Still Right About AI
Why We Have a Supernatural Soul
Ex-Google Executive Registers First Church of AI With IRS

Thursday, November 16, 2017

A Critique of Numenta's Location Hypothesis

Why I Respect Numenta

I have always had respect for Numenta. Over the years, under the leadership of their maverick founder and chief architect, Jeff Hawkins, they have steadfastly maintained that deep learning was not the way to achieve artificial general intelligence (AGI). They insisted that imitating the brain was the right way forward, that intelligence was based on the timing of sensory signals and that learning in the brain consisted mainly of making new synaptic connections, not modifying connection weights. They did it while the deep learning hype was in full swing. They never flinched even in the face of overt hostility from the mainstream AI community. They had a healthy, think-outside-the-box attitude. As a rebel, I admired that. Lately, however, and apparently reacting to pressure from the AI community to show some serious results, the folks at Numenta seem to have lost their way. Their latest offering, the so-called location hypothesis, misses the mark. Worse, there is no demo program to support the theory.

The Universal Invariant Recognition Problem

One of the most difficult problems in AI is universal invariant recognition. The human brain has the seemingly magical ability to recognize an object regardless of its position and orientation in the field of view. Deep learning experts tried to solve the problem by using brute force. That is, they train the network with millions of images in the hope of covering every possible situation. However, this approach invariably leaves holes that can lead to spectacular failures. So they (Yann LeCun and company) came up with a partial solution, a technique called convolution, that gave the network a degree of translation invariance. Even then, deep neural nets can still be fooled by adversarial examples. It turns out that they can fail catastrophically if a previously learned pattern is modified by an imperceptibly small number of pixels. In other words, deep neural nets are not universally invariant. Some in the AI community (e.g., DeepMind) have been promoting deep learning as a stepping stone toward AGI. They are sorely mistaken. Others (e.g., Geoffrey Hinton and Yann LeCun) seem to be more aware of its limitations.

The Location Hypothesis

Jeff Hawkins and his team at Numenta believe they may have found the secret of universal invariance. They are proposing that the brain somehow generates a special signal that specifies the location of an object under observation and the location of its features relative to the object. The idea seems to be that, by knowing the position of an object relative to its features, the brain can compensate for positional differences and solve the problem of invariant recognition. They write:
We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data.
...
A key component of our theory is the presence in each column of a signal representing location. The location signal represents an “allocentric” location, meaning it is a location relative to the object being sensed. In our theory, the input layer receives both a sensory signal and the location signal. Thus, the input layer knows both what feature it is sensing and where the sensory feature is on the object being sensed. The output layer learns complete models of objects as a set of features at locations. This is analogous to how computer-aided-design programs represent multi-dimensional objects.
This article by Hawkins explains Numenta's approach in an easy-to-read style. While I admire Numenta's courage and willingness to attack a hard problem head-on, I must say that I am disappointed with this hypothesis.

Why Is the Location Hypothesis Flawed?

There are several reasons.
  • As I have argued on many occasions, neurons are slow and there is very little time and energy in the brain for fancy calculations. Maintaining a location reference for visual objects is a particularly complex task, especially if it is a 3-dimensional reference, which it would have to be since the sensed object sits in a 3-dimensional world. The system would have to determine not only the location of the object relative to the viewer but also the location of a reference point relative to the object itself. Is it in the middle of the object or somewhere else? This is not an easy task. And this is not even taking into account the fact that the brain must somehow detect the boundaries of the object under observation while excluding all the other objects in the scene.
  • A location signal is necessarily encoded with spikes (discrete pulses). A spike, by itself, carries no information other than its time of arrival. How many spikes would it take to encode a continuously changing location vector in 3D space? The answer is: a lot. Again, there is no time for this in the brain. The highest spiking frequency is about 1000 Hz and the brain only has about a 10-millisecond window in which to process each sensory input. There is not enough time to encode even a 1-dimensional location for each input signal (see the back-of-the-envelope sketch below).
  • Let us suppose, for argument's sake, that the brain uses a single connection for each possible location. This would require millions of connections per feature. This is clearly out of the question.
I have other objections but these three should suffice to show that Numenta's location hypothesis is not biologically plausible.
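
To make the second objection concrete, here is a back-of-the-envelope calculation. The numbers (a 1000 Hz peak rate, a 10-millisecond window, 8 bits per coordinate) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope estimate of the spike budget available for
# encoding a location signal. All numbers are illustrative assumptions.

MAX_SPIKE_RATE_HZ = 1000.0   # highest sustained spiking frequency
WINDOW_S = 0.010             # ~10 ms window per sensory input

spikes_per_window = MAX_SPIKE_RATE_HZ * WINDOW_S  # = 10 spikes

BITS_PER_COORDINATE = 8      # a modest 256-step resolution per axis
DIMENSIONS = 3
bits_needed = BITS_PER_COORDINATE * DIMENSIONS    # = 24 bits

# With a naive one-bit-per-spike temporal code, a single neuron can
# deliver at most ~10 bits in the window -- far short of the 24 bits
# needed for even a coarse 3D location.
print(f"spikes available per window: {spikes_per_window:.0f}")
print(f"bits needed for one 3D location: {bits_needed}")
```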

A New Memory Model

I am proposing a new memory model based on spike timing. The model assumes that the brain perceives and learns by detecting many minute changes in its sensory space. I hypothesize that the brain uses branches in its hierarchical sequence memory to detect complex objects in the world regardless of their locations or orientations. A branch is a top-level node in the sequence hierarchy that is activated when it receives enough signals from lower level nodes to trigger a recognition. This memory model has the ability to instantly sense and understand complex objects in the environment, even objects that it has never encountered before.


There are two hierarchies, one for pattern detection (not shown) and one for sequences. Sequence memory is where actual object recognition happens. It receives discrete signals from pattern memory. Pattern neurons learn to detect a huge number of small elementary patterns such as lines, edges, dots, etc. Signals from pattern neurons are fed directly to the bottom or entry level of the sequence hierarchy. Pattern signals are stitched together in sequence memory to form any complex object.


As an example of sequence processing, consider the horizontal motion of a short vertical line or edge across the retina. This would result in multiple pattern neurons generating a series of spikes (one at a time) separated by a short interval. This series of events can be captured by an indefinitely long structure of connected nodes at the bottom level of the sequence hierarchy. I call these long structures "vines" to distinguish them from the shorter "sequences". The nodes in a vine would fire in succession as the line/edge moves horizontally in a given direction. There are many such sequence structures in sequence memory that capture various movements or other forms of change in the environment. The important thing to note here is that the interval between nodes in a vine is not fixed but can vary over time.
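
Here is a minimal sketch of the vine idea in code. The class name and the firing rule are placeholders chosen for illustration; this is not my actual implementation:

```python
# Minimal sketch of a "vine": a chain of nodes that must fire in a fixed
# order, but with no fixed interval between firings.

class Vine:
    def __init__(self, node_ids):
        self.node_ids = node_ids   # expected firing order
        self.position = 0          # next node expected to fire

    def on_spike(self, node_id):
        """Advance along the vine when the expected node fires.
        The elapsed time between firings is deliberately ignored:
        only the order matters, so the same vine tracks a line
        moving slowly or quickly across the retina."""
        if self.position < len(self.node_ids) and node_id == self.node_ids[self.position]:
            self.position += 1
        return self.position == len(self.node_ids)  # True when complete

# A vine tracking a short vertical edge moving left to right across
# five adjacent pattern neurons:
vine = Vine(["edge@x0", "edge@x1", "edge@x2", "edge@x3", "edge@x4"])
for spike in ["edge@x0", "edge@x1", "edge@x2", "edge@x3", "edge@x4"]:
    done = vine.on_spike(spike)
print("vine completed:", done)
```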

How the Brain Does Invariant Object Recognition

Obviously, the brain must have a simple and energy efficient solution that does not require lengthy calculations. Recognition must happen quickly and accurately using uncertain sensory information. How does the brain do it? I propose that the brain has a way to pool multiple concurrent sequences together to form branches that can detect any arbitrarily complex moving object. Recognition is based on a competitive, winner-take-all process. Only the branches that receive enough signals will trigger a recognition event.

Like almost everybody who has attempted to design a sequence hierarchy for AI, I used to think that a higher-level sequence was just a mechanism that served to join two or more non-overlapping sequences at a lower level. It took me years to figure out that I was wrong. It turned out that the main function of the sequence hierarchy is not to manage sequence storage but to find as many fixed temporal correlations between multiple co-occurring sequences as possible. Here is how it works.

It would be too inefficient to test every node in a vine against every other node in sequence memory. The brain uses a divide-and-conquer approach. Every vine is divided into multiple seven-node sequences. Why seven? It is a compromise. Fewer than seven would consume too much energy while more than seven would result in sluggish performance.


Let me go out on a limb and claim that these short sequences are implemented in the brain as cortical columns. In addition to serving as a mechanism for ordering pattern activations, they can also record their activities by retaining a trace (both time and speed) of their last activation in their minicolumns. The seventh node of every sequence can be connected to nodes in an upper level to form higher-level vines. These are, likewise, divided into sequences which, in turn, can send connections to an upper level. I happen to know the sequence hierarchy has 20 levels. How I know this and how vines are constructed are topics for a future article. The important thing to notice here is that upper sequences are just mechanisms that connect lower-level sequences that are temporally related. They essentially bind a number of patterns together to form a single complex object.

A top level sequence is what I call a branch in the sequence hierarchy. It is a complex object detector. It is also the brain's mechanism of attention: only one branch can be "awake" at a time. During recognition, signals from pattern memory quickly travel (via the seventh nodes of many sequences) all the way up the sequence tree as far as they can go. A top level sequence will trigger a recognition event as soon as it receives enough signals from lower levels to account for the overall activation of only two of its nodes. This recognition event is invariant to the actual activation states at the lower level sequences. What matters is that enough signals reach the top.

Partial activation of more than two nodes is acceptable as long as the required overall amount is reached. This is how the brain handles uncertainty. It means that it takes relatively few sensory signals to trigger a recognition. Even partial occlusions can trigger a recognition. This, combined with the variable intervals of the sequences, is the reason that we can recognize faces and animals in the clouds, different handwritings or fonts, highly stylized art, etc. When a top level sequence is triggered, it sends a recognition signal via feedback pathways all the way back down to pattern memory where pattern neurons are also triggered, thus correcting any incomplete or corrupt pattern information.
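
Here is a toy sketch of this winner-take-all rule. The evidence amounts and the two-node threshold are illustrative; this is not my actual implementation:

```python
# Sketch of the winner-take-all recognition rule: a branch fires when
# the evidence reaching it from below amounts to the full activation of
# two of its nodes. Thresholds and names are illustrative assumptions.

class Branch:
    def __init__(self, name, num_nodes=7, node_threshold=1.0):
        self.name = name
        self.evidence = [0.0] * num_nodes   # partial activation per node
        self.node_threshold = node_threshold

    def add_evidence(self, node_index, amount):
        self.evidence[node_index] += amount

    def recognized(self):
        # Partial activation of many nodes counts, as long as the total
        # reaches the equivalent of two fully activated nodes.
        return sum(self.evidence) >= 2 * self.node_threshold

def winner(branches):
    """Winner-take-all: only the branch with the most evidence wins,
    and only if it crosses the recognition threshold."""
    best = max(branches, key=lambda b: sum(b.evidence))
    return best if best.recognized() else None

cow = Branch("cow-face")
dog = Branch("dog-face")
for i, amount in [(0, 0.5), (2, 0.7), (4, 0.9)]:   # noisy, partial input
    cow.add_evidence(i, amount)
dog.add_evidence(1, 0.4)
result = winner([cow, dog])
print("recognized:", result.name if result else None)   # cow-face
```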


Note: In a future article, I will explain how sequence learning is done using spike timing, among other interesting things. I may also have a demo program (one never knows) to support my claims. Stay tuned and be patient.

See Also:

Invariant Recognition of Visual Objects (Frontiers Media)
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World (Frontiers Media)
Unsupervised Machine Learning: What Will Replace Backpropagation
Fast Unsupervised Pattern Learning Using Spike Timing
Fast Unsupervised Sequence Learning Using Spike Timing

Sunday, November 5, 2017

Occult Physics Will Blow Your Mind (Repost)

Note: I am reposting this because it keeps materialists, atheists and other undesirables off my back. Enjoy.

Abstract

According to ancient occult physics, the electron is not elementary but consists of four subparticles. We exist in an immense 4-dimensional sea of energy arranged like a crystal lattice. This means unlimited clean energy, free for the taking once we learn how to tap into the lattice. The entire history of the universe is being recorded in the lattice. Ancient megalithic societies may have used this knowledge to transport huge quarried stones weighing 1000 tons or more. This is the first in a series of articles that I am writing on occult physics. I cannot promise that I will ever publish them all but, if or when I do, I can guarantee that they will blow everyone's mind.

Sacred Scientific Knowledge Hidden in Plain Sight

Many years ago, I stumbled on an amazing discovery. It occurred to me that a few ancient occult texts contained revolutionary scientific secrets about the fundamental principles of the physical universe. The secrets can be found in the books of Isaiah, Ezekiel and Revelation. They are written in an obscure metaphorical language that sounds nothing like science. However, once one understands the meaning of some of the metaphors, things begin to fall into place. At one point in my research, I became frightened and stopped thinking about it for a long time. I had concluded that the potential harm to humanity that this knowledge could unleash if it fell into the wrong hands was just too great.
Assyrian Lamassu or Human-Headed Winged Bull - Southern Iraq
Most ancient societies recorded their sacred wisdom in precisely chosen metaphors that only the initiates understood. The Sumerians, Babylonians, Assyrians and Egyptians thought that certain occult sciences were so powerful that they erected huge symbolic stone monuments to preserve them for posterity while keeping their true meaning hidden from the masses.
Two Human-Headed Winged Bulls - Iran
Although the Biblical symbols are not identical to the ones found in Mesopotamia, the many similarities are striking. Both use images of wings, discs (wheels), bulls, lions, eagles, hands, feet and faces to symbolize various aspects of the sacred knowledge.

Sumerian Anunnaki Winged God and Disc
For whatever reason, historians and archaeologists love to associate ancient occult symbology with mythology and religious superstition but they could not be more wrong. It is almost as if some hidden power is hell-bent on preventing mankind from learning about its glorious past. None other than Isaac Newton, the father of modern physics, was convinced that there was secret knowledge encrypted in the Bible and in other ancient mythological writings. (Sources: What Was Isaac Newton's Occult Research All About? and Top 10 Crazy Secrets of Isaac Newton).

In my opinion, the Biblical seraphim and cherubim are occult descriptions of fundamental particles of matter and their properties (Sir Isaac would have jumped for joy if he had known about this). I believe this knowledge was known to ancient megalithic societies in Mesopotamia, Egypt, South America and elsewhere because it was the basis of the technology that they used to lift and transport huge cut stones weighing 1000 tons or more. I believe that a mastery of this knowledge will unleash an era of free unlimited clean energy and super fast transportation.

Stone of the Pregnant Woman - Baalbek, Lebanon
What follows is a short summary of the strange "living creatures" mentioned in the Bible and my interpretations. Note: I will not go into what I believe to be potentially dangerous aspects of this research.

Seraphim - Photons

Seraphim (singular, seraph) is a plural Hebrew word that means the shining or burning ones. They are mentioned in the books of Revelation and Isaiah. They symbolize pure energetic particles and their properties. I have identified them as photons. There are 4 types of seraphim and each one has a different face property: man, lion, bull or eagle. One of the seraphim (the one with the bull's face) is responsible for electric phenomena and the other three for magnetic phenomena. The face of each seraph is associated with one of the 4 spatial dimensions (degrees of freedom) of the cosmos. Each face has 2 possible states or orientations, forward or backward. It is more or less equivalent to what quantum physicists call the "spin angular momentum" of a particle, except that there really is no spin.

In all, the seraphim can have 8 possible orientations or spin states, 2 for each face. Two of the orientations, the ones associated with the face of a bull, determine whether or not the particle is involved with a positive or negative electric field. The other 6 states are responsible for magnetic phenomena.

Every seraph has energy properties which are symbolized by 6 wings. Unlike cherubim (explanation below), seraphim have no bodies or mass. Two of the wings of a seraph are used for motion, two are associated with its face and two with its feet. Yes, all matter particles have a property called feet (bull or calf hooves) which allow them to move in one direction of the 4th dimension at the speed of light. Wings, feet and hands are powerful metaphors the meaning of which I cannot expand on at this time. I will explain them further in future articles.

The Sea of Crystal - Zero-Point Energy

The most amazing thing about seraphim is that they are the constituents of an enormous 4-dimensional "sea of crystal" or "sea of glass" in which the normal matter of the universe exists and moves. It is a sea of wall-to-wall energetic particles (photons), lots of it, arranged as a stationary 4-dimensional lattice. We are totally immersed in it like fish in water and nothing could move without it. In fact, the entire visible universe is continually moving in the lattice in one dimension (bull) at the speed of light. As matter moves in the lattice, it leaves traces in it. In other words, the entire history of the universe is continually being recorded in the lattice down to the minutest details. Ancient Hindu and Buddhist societies were aware of this recording medium which was called the Akasha. Modern theosophists call it the Akashic records.

The closest analog to the lattice in modern physics is the so-called zero-point energy field that physicists believe permeates space, though they have no idea what it is made of or what its purpose is. Physicist Richard Feynman is reported to have said that "there is enough energy inside the space in an empty cup to boil all the oceans of the world." Gravitational, electric and magnetic phenomena are caused by the motion of matter in the lattice. Again, one day, in the not-too-distant future, society will learn how to tap into the lattice for unlimited clean energy production and super fast transportation. Current forms of transportation and energy production will become obsolete.

Cherubim - Quarter Electrons

Cherubim (singular, cherub) are symbolic winged creatures that modern theologians wrongly associate with angelic beings that fly around and do God's will. The Hebrew word cherubim is derived from the Assyrian term chiribu or kirubi which was the mystical name given to the representation of a winged bull or lion with a man's head. Various types of cherubim are mentioned in the Bible but my research is concerned strictly with the 4 cherubim (living creatures) in chapters 1 and 10 of the Book of Ezekiel. In chapter 10, verse 14, Ezekiel clearly equates the Hebrew word cherub with the face of a bull. He said nothing about angels.

Each living creature or cherub has 4 faces and 4 wings. Each also has a human body, 4 human hands and the feet of a bull. Having 4 faces means that a cherub has both electric and magnetic properties. All four cherubim move together in unison without turning, in any of the 4 dimensions.

My interpretation will come as a surprise. In my view, the cherubim are the 4 particles that comprise the electron or the positron. Yes, the electron is not an elementary particle as the Standard Model of particle physics would have us believe. Each cherub has 1/4 the charge of the electron. But this is not as surprising as it sounds. Physicists have known for some time that the electron is not truly elementary but they are a conservative and highly political bunch. Rather than come out and acknowledge the composite nature of the electron, they have taken to calling its constituent particles quasiparticles instead. They also use the term quarter electron when they are feeling more liberal.

The 4 human hands of a cherub are special properties that confine them to stay and move together as one particle: they hold onto each other. The body of a cherub is a special kind of energy that physicists call mass. Each cherub also has a wheel or disc associated with it. The 4 wheels act as one wheel and move precisely with the 4 cherubim. In my interpretation, the wheel represents the electric field of the electron.

Coming Soon

In future blog articles, I will explain how particles move in the lattice and how the electric field of a charged particle works.

See Also:

Ezekiel 1: The Four Living Creatures, the Four Wheels and the Crystal Firmament
Ezekiel 10: The Four Cherubim and the Four Wheels
Isaiah 6: The Four Seraphim
Revelation 4: The Four Beasts and the Sea of Crystal
Physics: The Problem With Motion
There Is Only One Speed in the Universe, the Speed of Light. Nothing Can Move Faster or Slower

Saturday, October 28, 2017

The Biggest Crime of the Plutocracy Is that They Stole the Capital

The Crime

The biggest crime of the plutocracy is that they stole the capital (which represents the wealth of the earth) from the people and have gotten away with it for centuries. Almost all of the ills of society, from homelessness to rampant crime, can be attributed to this institutionalized thievery.

The Fear

The biggest fear of the plutocracy is that, one day, the masses will realize that they have been robbed and that they are slaves in a slave system. The ruling elite is fighting tooth and nail to prevent that from happening. They use censorship, surveillance, violence, wars and divisive propaganda to weaken us, confuse us and keep us in the dark.

Give Us What Is Ours

No, I do not believe in any form of socialism because socialism destroys the free market. I just believe that we, the people, should be given what is ours, the capital. Time is running out. We must wake up before artificial intelligence and automation eliminate all jobs and the world is turned into a welfare society condemned to survive on handouts from a criminal minority.

Monday, October 16, 2017

The Gravitational Wave Fraudsters Strike Again

The Speed of Gravity Scam

Today, the LIGO scammers announced that they have detected the first gravitational waves from the collision of two neutron stars that occurred 130 million years ago. They are pushing it as a big event in astronomy because this is the first time, they claim, that a gravitational wave detection was accompanied by the observation of light by both space-based and land-based observatories. A weaker gravitational wave was also detected by another interferometer in Italy called VIRGO. Although none of these experiments is designed to falsify the gravitational wave prediction, as good science would dictate, the dual detection would corroborate (not prove) the Einsteinian prediction that gravity propagates at the speed of light. The problem with this pseudoscientific prediction is that the planets and the galaxies behave as if gravity were instantaneous, just as Newton assumed. Why is it pseudoscientific? Only because the speed of gravity cannot be measured: the Einsteinian results are identical to the Newtonian results. This much was acknowledged by one of their own, relativity physicist Steven Carlip (see links below). In other words, we are swimming in an ocean of super expensive pseudoscience.

The Dual Detection and Triangulation Scam

But how do they know that both the EM signals and the so-called "gravitational wave" signals originated from the same collision? They don't, of course. They are lying through their teeth. They assume that both the LIGO and VIRGO signals were from the same collision and they use this assumption to calculate the part of the sky where the collision would have occurred if the assumption was true. Here is how Jennifer Chu of MIT news put it (emphasis added):
Though the LIGO detectors first picked up the gravitational wave in the United States, Virgo, in Italy, played a key role in the story. Due to its orientation with respect to the source at the time of detection, Virgo recovered a small signal; combined with the signal sizes and timing in the LIGO detectors, this allowed scientists to precisely triangulate the position in the sky. After performing a thorough vetting to make sure the signals were not an artifact of instrumentation, scientists concluded that a gravitational wave came from a relatively small patch in the southern sky.
Once they get a fix, they can look to see if light telescopes made an observation in the same area. Any match is pure luck. If there were no EM signal, they would ascribe the supposed collision to small black holes instead. This is not science. This is full-blown crackpottery and scientific fraud to the tune of billions of dollars of our money.
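
For readers unfamiliar with the technique the quote describes, here is a toy sketch of direction-finding from arrival-time differences. It is a 2D cartoon with made-up detector positions and has nothing to do with LIGO's actual pipeline:

```python
import math

# Toy illustration of locating a source direction from the relative
# arrival times of one signal at several detectors. All numbers are
# made up for illustration.

C = 299792458.0                                          # signal speed, m/s
DETECTORS = [(0.0, 0.0), (3.0e6, 0.0), (1.5e6, 2.5e6)]   # positions, m

def arrival_delays(angle_rad):
    """Relative arrival times at each detector for a plane wave coming
    from direction angle_rad, taking detector 0 as the reference."""
    ux, uy = math.cos(angle_rad), math.sin(angle_rad)
    times = [-(x * ux + y * uy) / C for x, y in DETECTORS]
    return [t - times[0] for t in times]

# Simulate a source at 40 degrees, then recover it by grid search over
# candidate directions, picking the best match to the observed delays.
observed = arrival_delays(math.radians(40.0))
best = min(range(360), key=lambda a: sum(
    (d - o) ** 2 for d, o in zip(arrival_delays(math.radians(a)), observed)))
print("recovered source direction:", best, "degrees")   # 40
```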

We Must Put a Stop to the Lies and the Thievery of Big Science

This crap has been going on for many decades. We, the people, are paying for this and we must rise up and put a stop to it. The fraudsters must be debunked and made to recant their fraudulent claims. But even if the people are currently helpless to do anything about it, there is no doubt in my mind that justice will prevail soon, sooner than most suspect.

PS. I corrected a mistake in the article that Andreas brought to my attention.

See Also:

Aberration and the Speed of Gravity (Steven Carlip)
Does Gravity Travel at the Speed of Light? (Steven Carlip)
Why LIGO Is a Scam
Why Steven Carlip Is Mistaken about the Speed of Gravity or Why LIGO Is Still a Scam
LIGO Is a Billion-Dollar Scam Based on Bullshit Physics
Physics Nobel Prize Awarded to Crackpots and Frauds for Detection of Gravity Waves

Saturday, October 7, 2017

They Are Running Scared

Fear Is in the Air

UN Opens New Office to Monitor AI Development and Predict Possible Threats

The evil powers that control the world are really worried about what will happen after the arrival of advanced artificial intelligence. They should be. One of the first consequences of AGI is that the public will quickly realize that they have been robbed for centuries by a bunch of thieves in high places. Our economic systems are really slave systems. This is the reason that people are afraid that machines will take their jobs. If we had a just economic system, they would welcome intelligent robots and they would be delighted to have them do their work for them.

There is no question that we need a more equitable system and we need it fast in order to avoid disaster. But the power-hungry psychos would rather have global wars and widespread violence than give up their ill-gotten riches. They know they have a big problem on their hands and they are already pushing bullshit solutions such as "eco-economies" based on climate change alarmism and universal basic income (UBI). These "solutions" are, of course, designed to allow the jackasses to hold on to most of the wealth of the planet. Not all of us are deceived, however. After all, why should the unemployed masses get a subsistence handout from the government while the equally unemployed Mark Zuckerbergs of the world continue to live in decadent luxury? What makes them so special? Many, including yours truly, will rebel.

Of course, the greatest fear of the ruling elite is that an unknown enemy might take early control of AGI and use it to hurt their livelihoods and threaten their personal safety. This is the reason that modern societies have been turned into surveillance police states. We are being watched.

Revolution is in the air. Get ready to live in interesting times.

Tuesday, October 3, 2017

Physics Nobel Prize Awarded to Crackpots and Frauds for Detection of Gravity Waves

This Pisses Me Off

Nobel Prize for frauds. These three jackasses should be in jail instead.
Rainer Weiss, Barry Barish, and Kip Thorne have been awarded the 2017 Nobel Physics Prize for their gravitational wave work with the LIGO project. This bothers the hell out of me because gravitational waves are based on bullshit science. They are based on the pseudoscientific Einsteinian prediction that gravity propagates at the speed of light. The problem is that the prediction cannot be tested by definition because the GR results are identical to the Newtonian results, which assume that gravity is instantaneous.

General Relativists did some fancy mathematical work to pretend that they are doing science. And they get away with it. They tell us with a straight face that gravity propagates at c even though it acts as if it were instantaneous, just as Newton assumed. This kind of crackpottery from mainstream science is so pathetic and so blatantly fraudulent, it is not funny. The assholes are spending billions of dollars of our money on total crap and we have no say in the matter.

Time Travel Crackpot

Let us not forget that Kip Thorne (a good friend of Stephen Hawking, the crackpot in the wheelchair) is the Star Trek voodoo physicist who is convinced that time travel is possible even though every physicist worth their diploma knows that a time dimension makes motion impossible. This is the reason that nothing can move in spacetime and that Sir Karl Popper called spacetime, "Einstein's block universe in which nothing happens" (source: Conjectures and Refutations, Karl Popper).

We, the people, must wake up and demand accountability from mainstream science. We are being taken to the cleaners by charlatans.

See Also:

LIGO Is a Billion-Dollar Scam Based on Bullshit Physics

Yoshua Bengio Is a Backpropagation Crackpot

Backpropagation in the Brain?

Yoshua Bengio gives an entire presentation (YouTube) in which he defends the hypothesis that backpropagation, the learning method used in deep neural networks, is also used by the brain. This is truly embarrassing, especially since he makes multiple references to deep learning luminary Geoffrey Hinton who just recently admitted that we should abandon backpropagation and start over.

Oh, the humanity! Oh, the mainstream crackpottery! And remember. These are people who get paid millions of dollars to know better.

See Also:

Samsung Electronics Launches AI Lab Headed by Joshua Bengio in Montreal, Canada
Unsupervised Machine Learning: What Will Replace Backpropagation?
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years

I Know the Hidden Secrets of the Brain

For the Record

I do understand how the brain works, from perception to motivation, motor learning and adaptation. I am not saying this to boast because I am not smart enough to figure out how the brain works on my own, not in a thousand years. I certainly did not learn the brain's secrets from mainstream AI or the neuroscience community. I learned them by deciphering ancient occult books that are thousands of years old. I am saying this for the record only. Take it or leave it.

PS. I am a crackpot and a kook. I will not deny it because it is true. LOL.

See Also:

Occult Physics Will Blow Your Mind

Monday, October 2, 2017

Neurons, Synapses, Plasticity, Etc.

A Funny Thing Happened

Has anyone noticed that I wrote two articles (see links below) on unsupervised learning in spiking neural networks and not once mentioned the properties of neurons and their synapses? I never said a word about spike-timing-dependent plasticity, synaptic strength, signal integration or anything of the sort. Guess what? It does not matter how neurons are implemented, biologically or artificially. What matters is what they do, i.e., what function they perform as a whole and how they acquire their connections. Pattern neurons, for example, do one thing while they are learning and a different thing when they are no longer learning.

Biologically speaking, pattern neurons receive precisely timed synchronization signals from the hippocampus. There is also complex neural circuitry that comes into play during pattern pruning. It gets much more complicated in sequence memory. But this is all irrelevant to the principles we need to understand.

I always smile when I see people agonizing over what properties neurons should have. This is not what is important. The bigger principles are what we should be focusing on. Implementation is just engineering. It can be done any number of ways, in hardware or software. Some of you may be surprised to learn that I don't use synaptic strength properties in my own software experiments.
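
To illustrate the point, here is a toy neuron defined purely by its function: binary connections, a concurrency test, and no synaptic weights anywhere. All names and the one-millisecond window are placeholders, not my actual implementation:

```python
# A "neuron" defined purely by what it does -- fire when its inputs fire
# together -- with no synaptic strength properties anywhere.

class PatternNeuron:
    def __init__(self, inputs, window=0.001):
        self.inputs = set(inputs)  # binary connections: present or absent
        self.window = window       # concurrency window in seconds

    def process(self, spikes):
        """spikes: dict mapping input id -> spike time (seconds).
        Fires iff every connected input spiked within the window."""
        times = [spikes[i] for i in self.inputs if i in spikes]
        if len(times) < len(self.inputs):
            return False
        return max(times) - min(times) <= self.window

n = PatternNeuron(["s1", "s2"])
print(n.process({"s1": 0.0100, "s2": 0.0104}))  # True: concurrent
print(n.process({"s1": 0.0100, "s2": 0.0200}))  # False: too far apart
```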

Something to think about.

See Also:

Fast Unsupervised Pattern Learning Using Spike Timing
Fast Unsupervised Sequence Learning Using Spike Timing

Las Vegas Mass Shooting

I woke up in a relatively good mood this morning only to have it shattered by the news of the worst mass shooting in US history. Humanity is on the road to global self-destruction. Having smart machines running around can only accelerate our demise. Our problem is spiritual. It cannot be fixed by science and technology. This is depressing as hell.

Sunday, October 1, 2017

The Mainstream AI Community Can Kiss My Ass

The AI community Does Not Put Food on My Table

This blog has received a lot of traffic since I started writing on unsupervised learning using spike timing. The response I'm getting from some in the AI community is that they already knew all this stuff about spiking neural networks and timing. They already knew that the brain expects a perfect world and that it does not perform probabilistic computations on its sensory inputs. My question is, why are you jackasses reading my blog if you already knew it all? And why have you spent all your energy and tens of billions of dollars on supervised learning and backpropagation for the last 30 years?

Let me make something perfectly clear. I don't write for the scientific community. They are a bunch of elitist assholes by nature and I abhor elitism. The scientific community, including mainstream AI, does not put food on my table and, even if they did, I would still tell them to kiss my ass. I did all my research on my own time and on my own dime. And I had to unlearn most of the crap I learned from them. I owe them nothing and I don't need their goddam money. I know who my savior and provider is. They are not it.

Mainstream AI: You Will Be Disrupted

Your expertise in artificial intelligence is just a castle in the air. When the real McCoy is revealed, your little straw house will go up in flames and its ashes will be trampled into oblivion. You will become obsolete overnight. I'll be watching the whole thing unravel with a bag of Cheetos in one hand, a can of beer in the other and a smirk on my face. You will be disrupted.

One More Thing, Godammit!

I am not one of you. I am a Christian, not a brain-dead superstitious materialist who believes that machines are conscious, that the universe created itself and that life emerged from dirt on its own. In other words, my blog is not meant for you. I don't write for you. Go read somebody else's blog.

OK. I feel better now. Thank you.

Wednesday, September 27, 2017

I Need Time to Think

I have spent many years researching the brain and intelligence. I use unconventional methods that most would consider crazy but I have made tremendous progress. I have arrived at an understanding sharply different from that of the mainstream. The problem is that I don't really know what I should do with it. A week ago, I decided that the time had come for me to publish at least part of what I have discovered. Perception is the meat and potatoes of intelligence. Once you figure it out, the rest just falls into place.

Then I take a look at the miserable conditions of the world and the precarious state of international relations and I get cold feet. Humanity has a death wish. There is a terrible feeling that takes over me and paralyzes me. It's all deja vu but I can't shake it. I need a couple of days to think things through. Hang in there.

Monday, September 25, 2017

Fast Unsupervised Sequence Learning Using Spike Timing (1)

Novel Memory Architecture

Previously in this series on unsupervised learning, I explained how to implement a fast unsupervised pattern learning network based on spike timing. In this article, I introduce a novel architecture for sequence memory. Its purpose is to emulate the brain's flexible perceptual abilities.

Note: I originally intended this to be a single blog post but the subject is too vast for one post to do it justice. Expect one or more installments after this one.

The Magic of Sequence Memory



Sequence memory is the seat of knowledge and cognition. This is where most of the magic of perception happens. It is the part of the brain that gives us a commonsense, cause-effect understanding of the world in all of its 3-dimensional grandeur. Equally impressive is its ability to make highly accurate guesses when presented with incomplete or noisy sensory information. This ability is the reason that we have no difficulty recognizing highly stylized art or seeing faces and other objects in the clouds. Take a look at the image below. Those of us who are familiar with farm animals will instantly recognize the face of a cow even if we have never seen the picture before. Don't worry. Some of us never see the cow.


Font designers rely on the brain's ability to almost instantly classify objects according to their similarity to other known objects. Without it, we would have a hard time recognizing words written in unfamiliar fonts. It can also be used to play tricks on the brain. Cognitive scientist Douglas Hofstadter and others have written about this. Consider the ambigram below. We can read the bottom word as either 'WAVe' or 'particle'. How is that possible?
This magical flexibility is the gift of sequence memory. The brain can quickly recognize sequences at various levels of abstraction based on very little or even faulty information. My point here is that, unless we can design and build our neural networks to exhibit the same capabilities as the human brain, we will have failed. I am proposing a novel architecture for sequence memory that, I believe, will solve these problems and open up the field of AGI to a glorious future.

Note: Sequence memory is also the source of all voluntary motor signals and is essential to motor learning. I will cover this topic in a future article.

Math Is Not the Answer

At this point, some of you may be wondering why I use no math in my articles on AI. The reason is that the brain does not use it. Why? Only because its neurons are too slow and there is no time for lengthy calculations. Not that I have anything against math, mind you, but if you hear anyone claiming that AGI cannot be achieved without doing some fancy math (which is just about everybody in mainstream AGI research), you can rest assured that he or she hasn't a clue as to what intelligence is really about.

The Brain Assumes a Perfect World

One of the most specious yet ubiquitous myths in mainstream AI research is the notion that the world is uncertain and that, therefore, our intelligent machines should use probabilistic methods to make sense of it. It is a powerful myth that has severely retarded progress in AI research. I am not the first to argue this point. "People are not probability thinkers but cause-effect thinkers." These words were spoken by none other than famed computer scientist Dr. Judea Pearl during a 2012 Cambridge University Press interview. Pearl, an early champion of the Bayesian approach to AI, apparently had a complete change of heart. Unfortunately, the AI community is completely oblivious to any truth that contradicts their paradigm.

As I have said elsewhere, we can forget about computing probabilities because the brain's neurons are not fast enough. There is very little time for computation in the brain. The surprising truth is that the brain is rather lazy and does not compute anything while it is perceiving the world. It assumes that the world is perfectly deterministic and that it performs its own computations. The laws of classical physics and geometry are precise, universal and permanent. Any uncertainty comes from the limitations of our sensors. The brain learns how the world behaves and expects that this behavior is perfect and will not deviate. The perceptual process is comparable to a coin-sorting machine: the machine assumes that the sizes of the coins automatically determine which slots they belong in.

We cannot hope to solve the AGI problem unless we emulate the brain. But how can the brain capture the perfection that is in the world if it cannot rely on its sensors? It turns out that sensory signals are not always imperfect. Every once in a while, even if for a brief interval, they are indeed perfect. The brain is ready to capture this perfection in both pattern and sequence memories. None of the magic of perception I spoke of earlier would be possible without this capability.

Sequence Memory

Sequence memory is organized hierarchically and receives input signals from pattern memory. These signals arrive at the bottom level of the hierarchy and a few percolate upward to the top level. The number of levels depends on design requirements. I happen to know that the brain's cortical hierarchy has 20 levels. This is much more than is necessary for most purposes in my opinion. It is a sign that we can think at very high levels of abstraction. I estimate that most of our intelligent machines, at least in the beginning, will require less than half that number. In a future article on motor learning and behavior, I will explain how the bottom level of the sequence hierarchy serves as the source of all motor signals.
In the diagram above, we see three sequence detectors A, B and C (red filled circles) on two levels. Sequences A and C on level 1 receive inputs directly from 7 pattern neurons (blue filled circles). Unfinished sequence B on level 2 has only two inputs, arriving from sequences A and C. The red lines represent connections to the output nodes (see below), which are the only pathways up the hierarchy.

The sequence is the building block of sequence memory. It is a 7-node segment within a longer series that I call the vine. The 7th node is the output node of the sequence. Every node in a sequence receives signals from either a pattern neuron or another sequence. Vines and sequences receive signals in a specific order separated by an interval. The interesting thing about a sequence is that it does not have a fixed duration. That is to say, the interval between nodes can vary. This is extremely important because, without it, we would not be able to make sense of the 3D geometry of the world or to understand events when their rates of occurrence change over time.
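
Here is a minimal sketch of such a sequence in code. The callback and the reset rule are placeholders for illustration, not my actual implementation:

```python
# Sketch of the building block of sequence memory: a 7-node sequence
# whose nodes must fire in order (with any spacing) and whose 7th node
# is the output node that signals the level above.

SEQUENCE_LENGTH = 7

class Sequence:
    def __init__(self, node_ids, on_output):
        assert len(node_ids) == SEQUENCE_LENGTH
        self.node_ids = node_ids
        self.position = 0
        self.on_output = on_output   # callback to the upper level

    def on_spike(self, node_id):
        if node_id == self.node_ids[self.position]:
            self.position += 1       # any inter-spike interval is fine
        else:
            # order broken: restart (the stray spike may begin a new pass)
            self.position = 1 if node_id == self.node_ids[0] else 0
        if self.position == SEQUENCE_LENGTH:
            self.on_output()          # 7th node: the only path upward
            self.position = 0

fired = []
seq = Sequence(list("abcdefg"), on_output=lambda: fired.append("seq-1"))
for spike in "abcdefg":
    seq.on_spike(spike)
print(fired)                          # ['seq-1']
```
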
In the early days of my quest to understand the brain and intelligence, I used to think that the sequence hierarchy was just a way to organize various combinations of long sequences. I had assumed that a sequence at the top of the hierarchy was just a container for other shorter sequences at the lower levels. I cannot go, at this time, into how I eventually changed my mind but I was completely wrong. It turns out that the brain builds all of its sequences/vines at the bottom level of the sequence hierarchy. The upper levels are used primarily for finding temporal correlations between sequences and for building special structures called branches which are used for the invariant detection of complex objects in the world.

Coming Soon

In my next article in this series, I will explain how to use spike timing to do fast unsupervised sequence learning. I will explain how sequence detection occurs with relatively few sensory signals. I will also introduce a model for the brain's cortical column based on this architecture. Stay tuned.

Note (11/16/2017): I will eventually publish the next article in this series. Now is not the time.

See Also:

Fast Unsupervised Pattern Learning Using Spike Timing
Unsupervised Machine Learning: What Will Replace Backpropagation?
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Friday, September 22, 2017

Fast Unsupervised Pattern Learning Using Spike Timing

Abstract

In my previous article on the problem with backpropagation, I made the case for using timing as the critic for unsupervised learning. In this article, I define what a sensory spike is, I explain the difference between pattern learning in the brain and in neural networks, and I reveal a simple and superfast 10-step method for learning concurrent patterns. Please note that this is all part of an ongoing project. I will have a demo program ready at some point in the future. Still, I will give out enough information in these articles for someone with adequate programming skills to implement their own unsupervised spiking neural network.



Sensors and Spikes

A sensor is an elementary mechanism that emits a discrete signal (a spike or pulse) when it detects a phenomenon, i.e., a change or transition in the environment. A spike is a discrete temporal marker that alerts an intelligent system that something just happened. The precise timing of spikes is extremely important because the brain cannot learn without it. There are two types of spikes, one for the onset of stimuli and the other for the offset. This calls for two types of sensors, positive and negative. A positive sensor detects the onset of a phenomenon while a negative sensor detects the offset.
For example, a positive audio sensor might detect when the amplitude of a sound rises above a certain level, and a complementary negative sensor would detect when the amplitude falls below that level. The diagram above depicts an amplitude waveform plotted over time. The horizontal line represents an amplitude level. The red circle A represents the firing of a positive sensor and B that of a negative sensor. In this example, sensor A fires twice as we follow the amplitude from left to right. To properly sense a variable phenomenon such as the amplitude of an audio signal, the system must have many sensors to handle many amplitude levels. A complex intelligent system such as the human brain has millions of elementary sensors that respond to different amplitude levels and different types of phenomena. Sensors send their signals directly to pattern memory where they are grouped into concurrent patterns. Every sensor can make multiple connections with neurons in pattern memory.
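
Here is a minimal sketch of a complementary sensor pair. The sample waveform and the level are made up for illustration:

```python
# Sketch of a positive/negative sensor pair: one spike when the
# amplitude rises above a level, one when it falls below it.

def sense(samples, level):
    """samples: list of (time, amplitude). Returns (time, kind) spikes,
    kind '+' for onset (crossing up) and '-' for offset (crossing down)."""
    spikes = []
    above = samples[0][1] > level
    for t, a in samples[1:]:
        if not above and a > level:
            spikes.append((t, "+"))   # positive sensor fires
            above = True
        elif above and a <= level:
            spikes.append((t, "-"))   # negative sensor fires
            above = False
    return spikes

wave = [(0, 0.1), (1, 0.6), (2, 0.4), (3, 0.8), (4, 0.2)]
print(sense(wave, level=0.5))
# [(1, '+'), (2, '-'), (3, '+'), (4, '-')]
```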

Pattern Learning: Brain Versus Neural Networks

To a spiking neural net, such as the brain's sensory cortex, a pattern is a set of spikes that often arrive concurrently. To a deep neural net, a pattern is a set of data values. Unlike neural networks, the brain's pattern memory does not learn to detect very complex patterns, such as a face, a car, an animal or a tree. Strangely enough, in the brain, the detection of complex objects is not the job of pattern memory but of sequence memory. Pattern memory only learns to detect small elementary patterns (e.g., lines, dots and edges) which are the building blocks of all objects. The brain's sequence memory combines or pools many small pattern signals together in order to instantly detect complex objects, even objects that it has never encountered before.

Note: I will explain the architecture and working of sequence memory in an upcoming article.

Pattern Memory

Knowledge in the brain is organized hierarchically like a tree. In my view (which is, unfortunately, not shared by Jeff Hawkins' team at Numenta), an unsupervised perceptual learning system must have two memory hierarchies, one for pattern detection and the other for sequence detection. As seen in the diagram below, the pattern hierarchy consists of multiple levels arranged like a binary tree. I predict, based on my research, that the brain's pattern hierarchy resides in the thalamus (there is no other place for it to be) and that it has 10 levels. This means that pattern complexity in the brain ranges from a minimum of 2 inputs at the bottom level to a maximum of 1024 inputs at the top level. I have my reasons for this but they are beyond the scope of this article.


Sensors are connected to the bottom level (level 1) of the hierarchy. A pattern neuron (small red filled circles) can have only two inputs. But like a sensor, it can send output signals to an indefinite number of target neurons. Connections are made only between adjacent layers in the hierarchy. This is known as a binary tree arrangement. Every pattern neuron in the hierarchy also makes reciprocal connections to a sequence neuron (not shown) at the bottom level of sequence memory (more on this later). The hierarchical structure of pattern memory makes it possible to learn as many different pattern combinations as possible while using as few connections as possible.

Fast Unsupervised Pattern Learning

To repeat, the goal of pattern learning is to discover non-random elementary patterns in the sensory stream. Pattern learning is fully unsupervised in the brain, as it should be. That is to say, it is a bottom-up process dictated solely by the environment and the signals emitted by the sensors. Every learning system is based on trial and error, and as such, must have a critic to correct it in case of error. In the brain, the critic is in the precise temporal correlations between the sensory spikes. The actual pattern learning process is rather simple. It is based on the observation that non-random patterns occur frequently. It works as follows:
  • Start with a fixed number of unconnected pattern neurons at every level of the hierarchy.
  • Make random connections between the sensors and the neurons at the bottom level.
  • If the input connections of a neuron fire concurrently 10 times in a row, the neuron is promoted and the connections become permanent.
  • If a connection fails the test even once, it is immediately disconnected. Failed inputs are quickly resurrected and retried randomly.
As soon as a neuron gets promoted, it can make connections with the sequence hierarchy (not shown) and with the level immediately above its own, if any. The same concurrency test is applied at every level but perfect pattern detection is a must during learning. Excellent results can be obtained even if some inputs are never connected. Pattern learning is fast, efficient and can be scaled to suit different applications. Just use as many or as few sensors and neurons as is necessary for a given task. Connections are sparse, which means that bandwidth requirements are low.
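
Here is a toy sketch of this learning rule applied to a single candidate neuron at the bottom level. The sensor stream and all names are illustrative assumptions, not my actual implementation:

```python
import random

# Sketch of the promotion rule: a candidate neuron keeps its random
# connections only if they fire concurrently 10 times in a row; a single
# failure disconnects it and a new random pairing is tried.

REQUIRED_STREAK = 10

class CandidateNeuron:
    def __init__(self, sensor_pool):
        self.sensor_pool = sensor_pool
        self.promoted = False
        self._rewire()

    def _rewire(self):
        # binary-tree arrangement: a pattern neuron has exactly 2 inputs
        self.inputs = tuple(random.sample(self.sensor_pool, 2))
        self.streak = 0

    def observe(self, active_sensors):
        """active_sensors: the set of sensors that spiked in this slot."""
        if self.promoted or not any(i in active_sensors for i in self.inputs):
            return                    # nothing to test in this slot
        if all(i in active_sensors for i in self.inputs):
            self.streak += 1
            if self.streak == REQUIRED_STREAK:
                self.promoted = True  # connections become permanent
        else:
            self._rewire()            # a single failure: disconnect, retry

# Toy world: sensors "a" and "b" always fire together; "c" fires at random.
random.seed(1)
neuron = CandidateNeuron(["a", "b", "c"])
for _ in range(2000):
    active = {"a", "b"} | ({"c"} if random.random() < 0.3 else set())
    neuron.observe(active)
print("promoted:", neuron.promoted, "inputs:", neuron.inputs)
```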

Given that sensory signals are not always reliable and that only perfect pattern detections are used during learning, the process slows down as one goes up the hierarchy. This limits the number of levels in the hierarchy and caps the complexity of learned patterns. This is why the number of levels in the pattern hierarchy is only 10. In a computer application, we can use fewer levels and get good overall results. The goal is to create enough elementary pattern detectors to enable object detection in the sequence hierarchy. Note that the system does not assume that the world is probabilistic. No probabilistic computations are required. The system assumes that the world is deterministic and perfect. Errors or missing information are attributed to accidents and the system will try to correct them if possible.

But why require 10 consecutive firings in a row? Why not 2, 5 or 20? Keep in mind that this is a search for concurrent patterns that occur often enough to rise above mere random noise. The choice of 10 is a compromise. Requiring fewer than 10 would run the risk of learning useless noise while requiring more than 10 would result in a slow learning process.

Pattern Pruning

The pattern hierarchy must be pruned periodically in order to remove redundancies. A redundancy is the result of a closed loop in the hierarchy.


Looking at the diagram above, we see a closed loop formed by sensor D and the pattern neurons A, B and C. This is forbidden because signals emitted by sensor D arrive at B via two pathways, D-A-B and D-C-B. One or the other must be eliminated. It does not matter which. Note that eliminating a pathway is not enough to prevent the closed loop from forming again. In the diagram above, either pattern neuron A or C should be barred permanently. That is to say, an offending pattern neuron should not be destroyed but simply prevented from forming output connections. This prevents the learning process from repeating the same mistake. In the brain, pattern pruning is done during REM sleep because doing it during waking hours would interfere with sensory perception. In a computer program, it can be done instantly, even during learning.
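
Here is a toy sketch of loop detection and barring. The graph representation is an illustrative assumption, not my actual implementation:

```python
# Find a pattern neuron that receives the same sensor's signal through
# two different pathways, and bar one intermediate neuron from forming
# output connections.

# Hierarchy as node -> list of inputs, matching the diagram: sensor D
# feeds A and C; A and C both feed B.
inputs = {"A": ["D"], "C": ["D"], "B": ["A", "C"]}

def sources(node):
    """All sensors reachable from `node`, with multiplicity."""
    found = []
    for i in inputs.get(node, []):
        if i in inputs:
            found.extend(sources(i))   # another pattern neuron: recurse
        else:
            found.append(i)            # a sensor: a leaf of the tree
    return found

barred = set()
for node in inputs:
    seen = sources(node)
    for sensor in set(seen):
        if seen.count(sensor) > 1:     # two pathways: a closed loop
            # permanently bar one offending upstream neuron from
            # forming output connections (barring the other would
            # work equally well)
            barred.add(inputs[node][0])
print("barred from forming outputs:", barred)   # {'A'} in this sketch
```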

Pattern Detection

Intuitively, one would expect a pattern neuron to recognize a pattern if all of its input signals arrive concurrently. But, strangely enough, this is not the way it works in the brain. The reason is that patterns are rarely perfect due to occlusions, noise pollution and other accidents. Uncertainty is a major problem that has dogged mainstream AI for decades. The customary solution in mainstream AI is to perform probabilistic computations on sensory inputs. However, this is out of the question as far as the brain is concerned because its neurons are too slow. The brain uses a completely different and rather clever solution and so should we.

Pattern recognition is a cooperative process between pattern memory and sequence memory. During detection, all sensory signals travel rapidly up the pattern hierarchy and continue all the way up to the top sequence detectors of sequence memory where actual recognition decisions are made. If enough signals reach a top sequence detector in the sequence hierarchy, they trigger a recognition event. The sequence detector immediately fires a recognition signal that travels all the way back down to the source pattern neurons which, in turn, trigger their own recognition events. Thus a pattern neuron recognizes its pattern, not when its input signals arrive, but upon receiving a feedback signal from sequence memory. This way, a pattern neuron can recognize a sensory pattern even if the pattern is imperfect.
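Here is a toy sketch of this cooperative loop, reusing the PatternNeuron objects from the learning sketch above; the single top detector and the 0.75 threshold are assumptions of mine for illustration:

    RECOGNITION_FRACTION = 0.75    # assumed fraction of expected signals

    def detect(active_sensors, pattern_neurons):
        # Feed-forward: each pattern neuron forwards whatever input signals it
        # receives without making any decision on the way up.
        forwarded = {n: len(n.inputs & active_sensors) for n in pattern_neurons}
        arriving = sum(forwarded.values())
        expected = sum(len(n.inputs) for n in pattern_neurons)
        # The recognition decision is made at the top of the sequence hierarchy.
        if expected and arriving >= RECOGNITION_FRACTION * expected:
            # Feedback: the recognition signal travels back down, and every
            # contributing pattern neuron fires its own recognition event,
            # even if its input pattern was imperfect.
            return {n for n in pattern_neurons if forwarded[n] > 0}
        return set()               # too few signals reached the top

The essential point the sketch preserves is that no pattern neuron decides anything on the way up; recognition flows back down from the top.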

Coming Soon

In the next article in this series, I will explain how to do unsupervised learning in sequence memory. This is where the really fun stuff happens. Hang in there.

See Also:

Unsupervised Machine Learning: What Will Replace Backpropagation?
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Wednesday, September 20, 2017

Unsupervised Machine Learning: What Will Replace Backpropagation?


The Great Awakening?

At long last, the AI research community is showing signs of waking up from its decades-old, self-induced stupor. Deep learning pioneer Geoffrey Hinton has finally acknowledged something that many of us with an interest in the field have known for years: AI cannot move forward unless we discard backpropagation and start over. What took him so long? Certainly, the deep learning community can continue on its own merry way, but there is no question that AI research must retrace its steps back to the beginning and choose a new path. In this article, I argue that the future of machine learning will be based on the precise timing of discrete sensory signals, aka spikes. Welcome to the new age of unsupervised spiking neural networks.

The Problem With Backpropagation

The problem with backpropagation, the learning mechanism used in deep neural nets, is that it is supervised. That is to say, the system must be told when it makes an error. Supervised neural nets do not learn to classify patterns on their own; a human or some other entity does the classification for them. The system merely creates algorithmic links between given patterns and given classes or categories. This type of learning (if we can call it that) is a big problem because we must manually attach a label (class) to every single pattern the system must classify, and every label can have hundreds if not thousands of possible patterns.
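To make the burden concrete, here is the shape of the data each kind of system is given; the numbers and labels are made up:

    # Supervised learning: every training pattern needs a hand-supplied label.
    labeled_data = [
        ([0.2, 0.7, 0.1], "cat"),   # someone attached "cat" to this pattern
        ([0.9, 0.3, 0.5], "dog"),   # ...and "dog" to this one, and so on
    ]
    # An unsupervised learner receives only the patterns, never the labels.
    unlabeled_data = [pattern for pattern, _ in labeled_data]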

Of course, anybody with a lick of sense knows that this is not how the brain learns. We do not need labels to learn to recognize anything. Backpropagation would require a little homunculus inside the brain that tells it when it activates a wrong output. This is absurd, of course. Reinforcement (pain and pleasure) signals cannot be used as labels since they cannot possibly teach the brain about the myriad intricacies of the world. The deep learning community has no idea how the brain does it. Strangely enough, some of their most famous experts (e.g., Demis Hassabis) still believe that the brain uses backpropagation.

The World Is Its Own Model

Loud denials notwithstanding, supervised deep learning is just the latest incarnation of symbolic AI, aka GOFAI. It is a continuation of the persistent but deeply flawed idea that an intelligent system must somehow model the world by creating internal representations of things in the world. As the late philosopher Hubert Dreyfus was fond of saying, the world is its own model. Unlike a neural net which cannot detect a pattern unless it has been trained to recognize it (it already has a representation of it in memory), the adult human brain can instantly see and understand an object it has never seen before. How is that possible?

This is where we must grok the difference between a pattern recognizer and a pattern sensor. The brain does not learn to recognize complex patterns; it learns how to sense complex patterns in the world directly. To repeat, it can do so instantly even if it has never encountered them before. Unless a sensed pattern is sufficiently rehearsed, the brain will not remember it. And if it does remember it, the memory is fuzzy and inaccurate, something that is well-known to criminal lawyers: eyewitness accounts are notoriously unreliable. But how does the brain do it? One thing is certain: we will not solve the perceptual learning problem unless we get rid of our representationalist baggage. Only then will the scales fall from our eyes so that we may see the brain for what it really is: a sensory organ connected to a motor organ and controlled by a motivation organ.

The Critic Is In the Data

How does the brain learn to see the world? Every learning system is based on trial and error. The trial part consists of making guesses and the error part is a mechanism that tells the system whether or not the guesses are correct. The error mechanism is what is known as a critic. Both supervised and unsupervised systems must have a critic. Since the critic cannot come from inside an unsupervised system (short of conjuring a homunculus), it can only come from the data itself. But where in the data? And what kind of data are we talking about? To answer these questions, we must rely on neurobiology.

How to Make Sense of the World: Timing

One of the amazing things about the cortex is that it does not process data in the programming sense. It does not receive numerical values from its sensors. The cortex only receives discrete signals or spikes. A spike is a discrete temporal marker that indicates that a change/event just occurred. It is not a binary value. It is a precisely timed signal. There is a difference. The brain must somehow find order in the spikes. Here is the clincher. The only order that can be found in multiple sensory streams of discrete signals is temporal order. And there can only be two kinds of temporal order: the signals can be either concurrent or sequential.

This is the key to unsupervised learning. In order to make sense of the world, the brain must have the ability to time its sensory inputs. In this light, the brain should be seen as a vast timing mechanism. It uses timing for everything, from perceptual learning to motor behavior and motivation.
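As a trivial illustration, classifying a pair of precisely timed spikes requires nothing more than a coincidence window; the 10-millisecond value below is an assumption of mine, not a figure from neurobiology:

    COINCIDENCE_WINDOW = 0.010     # seconds; assumed for illustration

    def temporal_order(t_a, t_b, window=COINCIDENCE_WINDOW):
        # The only order available in spike streams is temporal, and it is
        # either concurrent or sequential.
        if abs(t_a - t_b) <= window:
            return "concurrent"                # a candidate pattern
        return "a then b" if t_a < t_b else "b then a"   # a candidate sequence

    print(temporal_order(0.102, 0.104))        # concurrent
    print(temporal_order(0.102, 0.250))        # a then b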

Coming Soon

In my next article, I will explain how sensors generate spikes and how the brain uses timing as the critic for fast and effective unsupervised learning. I will also explain how it creates a fixed set of small elementary concurrent pattern detectors/sensors as the building blocks of all perception. It uses the same elementary pattern sensors to sense everything. It also uses cortical feedback to handle uncertainty in the sensory data. Hang in there.

See Also:

Fast Unsupervised Pattern Learning Using Spike Timing
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Saturday, September 16, 2017

AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years

This Bothers Me

This is just a short post to point out how progress in science and technology can be held back by those who set themselves up as its leaders. Artificial Intelligence pioneer Geoffrey Hinton now says that we should discard backpropagation, the deep learning technique used in deep neural nets, and start over. This bothers me because I and many others have been saying this for years. Some of us, including Jeff Hawkins, have known that this was not the way to go since the 1990s. Here is an article I wrote about this very topic back in 2015: Why Deep Learning Is a Hindrance to Progress Toward True AI.

Demis Hassabis, the Champion of Backpropagation

What is amazing about this is that Geoffrey Hinton is a famous Google employee (engineering fellow) and AI expert. He is now directly contradicting Demis Hassabis, another famous Google employee and co-founder of DeepMind, an AI company that has been acquired by Google. Hassabis and his team at DeepMind recently published a peer-reviewed paper in which they suggested that backpropagation is used by the brain and that their research may uncover biologically plausible models of backprop. I wrote an article about this recently: Why Google's DeepMind Is Clueless About How Best to Achieve AGI.

I find the whole thing rather annoying because these are people who are paid millions of dollars to know better. Oh, well.

See Also:

Unsupervised Machine Learning: What Will Replace Backpropagation?
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

Tuesday, August 29, 2017

Occult Physics Will Blow Your Mind

Abstract

According to ancient occult physics, the electron is not elementary but consists of four subparticles. We exist in an immense 4-dimensional sea of energy arranged like a crystal lattice. This means unlimited clean energy, free for the taking once we learn how to tap into the lattice. The entire history of the universe is being recorded in the lattice. Ancient megalithic societies may have used this knowledge to transport huge quarried stones weighing 1000 tons or more. This is the first in a series of articles that I am writing on occult physics. I cannot promise that I will ever publish them all but, if or when I do, I can guarantee that they will blow everyone's mind.

Sacred Scientific Knowledge Hidden in Plain Sight

Many years ago, I stumbled on an amazing discovery. It occurred to me that a few ancient occult texts contained revolutionary scientific secrets about the fundamental principles of the physical universe. The secrets can be found in the books of Isaiah, Ezekiel and Revelation. They are written in an obscure metaphorical language that sounds nothing like science. However, once one understands the meaning of some of the metaphors, things begin to fall into place. At one point in my research, I became frightened and stopped thinking about it for a long time. I had concluded that the potential harm this knowledge could unleash on humanity if it fell into the wrong hands was just too great.
Assyrian Lamassu or Human-Headed Winged Bull - Southern Iraq
Most ancient societies recorded their sacred wisdom in precisely chosen metaphors that only the initiates understood. The Sumerians, Babylonians, Assyrians and Egyptians thought that certain occult sciences were so powerful that they erected huge symbolic stone monuments to preserve them for posterity while keeping their true meaning hidden from the masses.
Two Human-Headed Winged Bulls - Iran
Although the Biblical symbols are not identical to the ones found in Mesopotamia, the many similarities are striking. Both use images of wings, discs (wheels), bulls, lions, eagles, hands, feet and faces to symbolize various aspects of the sacred knowledge.

Sumerian Anunnaki Winged God and Disc
For whatever reason, historians and archaeologists love to associate ancient occult symbology with mythology and religious superstition but they could not be more wrong. It is almost as if some hidden power is hellbent on preventing mankind from learning about its glorious past. None other than Isaac Newton, the father of modern physics, was convinced that there was secret knowledge encrypted in the Bible and in other ancient mythological writings. (Sources: What Was Isaac Newton's Occult Research All About? and Top 10 Crazy Secrets of Isaac Newton).

In my opinion, the Biblical seraphim and cherubim are occult descriptions of fundamental particles of matter and their properties (Sir Isaac would have jumped for joy if he had known about this). I believe this knowledge was known to ancient megalithic societies in Mesopotamia, Egypt, South America and elsewhere because it was the basis of the technology that they used to lift and transport huge cut stones weighing 1000 tons or more. I believe that a mastery of this knowledge will unleash an era of free unlimited clean energy and super fast transportation.

Stone of the Pregnant Woman - Baalbek, Lebanon
What follows is a short summary of the strange "living creatures" mentioned in the Bible and my interpretations. Note: I will not go into what I believe to be potentially dangerous aspects of this research.

Seraphim - Photons

Seraphim (singular, seraph) is a plural Hebrew word that means the shining or burning ones. They are mentioned in the books of Revelation and Isaiah. They symbolize pure energetic particles and their properties. I have identified them as photons. There are 4 types of seraphim and each one has a different face property: man, lion, bull or eagle. One of the seraphim (the one with the bull's face) is responsible for electric phenomena and the other three for magnetic phenomena. The face of each seraph is associated with one of the 4 spatial dimensions (degrees of freedom) of the cosmos. Each face has 2 possible states or orientations, forward or backward. It is more or less equivalent to what quantum physicists call the "spin angular momentum" of a particle, except that there really is no spin.

In all, the seraphim can have 8 possible orientations or spin states, 2 for each face. Two of the orientations, the ones associated with the face of a bull, determine whether or not the particle is involved with a positive or negative electric field. The other 6 states are responsible for magnetic phenomena.

Every seraph has energy properties which are symbolized by 6 wings. Unlike cherubim (explanation below), seraphim have no bodies or mass. Two of the wings of a seraph are used for motion, two are associated with its face and two with its feet. Yes, all matter particles have a property called feet (bull or calf hooves) which allow them to move in one direction of the 4th dimension at the speed of light. Wings, feet and hands are powerful metaphors the meaning of which I cannot expand on at this time. I will explain them further in future articles.

The Sea of Crystal - Zero-Point Energy

The most amazing thing about seraphim is that they are the constituents of an enormous 4-dimensional "sea of crystal" or "sea of glass" in which the normal matter of the universe exists and moves. It is a sea of wall-to-wall energetic particles (photons), lots of them, arranged as a stationary 4-dimensional lattice. We are totally immersed in it like fish in water and nothing could move without it. In fact, the entire visible universe is continually moving in the lattice in one dimension (bull) at the speed of light. As matter moves in the lattice, it leaves traces in it. In other words, the entire history of the universe is continually being recorded in the lattice down to the minutest details. Ancient Hindu and Buddhist societies were aware of this recording medium, which was called the Akasha. Modern theosophists call it the Akashic records.

The closest analog to the lattice in modern physics is the so-called zero-point energy field that physicists believe permeates space but have no idea what it is made of or what its purpose is. Physicist Richard Feynman is reported to have said that "there is enough energy inside the space in an empty cup to boil all the oceans of the world." Gravitational, electric and magnetic phenomena are caused by the motion of matter in the lattice. Again, in the not-too-distant future, society will learn how to tap into the lattice for unlimited clean energy production and super fast transportation. Current forms of transportation and energy production will become obsolete.

Cherubim - Quarter Electrons

Cherubim (singular, cherub) are symbolic winged creatures that modern theologians wrongly associate with angelic beings that fly around and do God's will. The Hebrew word cherubim is derived from the Assyrian term chiribu or kirubi, which was the mystical name given to the representation of a winged bull or lion with a man's head. Various types of cherubim are mentioned in the Bible but my research is concerned strictly with the 4 cherubim (living creatures) in chapters 1 and 10 of the Book of Ezekiel. In chapter 10, verse 14, Ezekiel clearly equates the Hebrew word cherub with the face of a bull. He said nothing about angels.

Each living creature or cherub has 4 faces and 4 wings. Each also has a human body, 4 human hands and the feet of a bull. Having 4 faces means that a cherub has both electric and magnetic properties. All four cherubim move together in unison without turning, in any of the 4 dimensions.

My interpretation will come as a surprise. In my view, the cherubim are the 4 particles that comprise the electron or the positron. Yes, the electron is not an elementary particle as the Standard Model of particle physics would have us believe. Each cherub has 1/4 the charge of the electron. But this is not as surprising as it sounds. Physicists have known for some time that the electron is not truly elementary but they are a conservative and highly political bunch. Rather than come out and acknowledge the composite nature of the electron, they have taken to calling its constituent particles quasiparticles instead. They also use the term quarter electron when they are feeling more liberal.

The 4 human hands of a cherub are special properties that confine them to stay and move together as one particle: they hold onto each other. The body of a cherub is a special kind of energy that physicists call mass. Each cherub also has a wheel or disc associated with it. The 4 wheels act as one wheel and move precisely with the 4 cherubim. In my interpretation, the wheel represents the electric field of the electron.

Coming Soon

In future blog articles, I will explain how particles move in the lattice and how the electric field of a charged particle works.

See Also:

Ezekiel 1: The Four Living Creatures, the Four Wheels and the Crystal Firmament
Ezekiel 10: The Four Cherubim and the Four Wheels
Isaiah 6: The Four Seraphim
Revelation 4: The Four Beasts and the Sea of Crystal
Physics: The Problem With Motion
There Is Only One Speed in the Universe, the Speed of Light. Nothing Can Move Faster or Slower