Monday, May 28, 2007
The beauty of automated transportation is that it could significantly reduce the number of cars on the road and help save energy. Consider that most vehicles are idle most of the time anyway. Big cities could reduce pollution and energy consumption by banning private vehicles altogether. City dwellers and visitors would be given a pager that they could use to summon transportation at the push of a button. The nearest parked vehicle would then drive itself to the passengers' location and take them to their destination.
There is only one catch, however. Such a system would involve highly complex software. Concern over things like safety, liability and development cost would kill the project before it even started. The only solution is to adopt a non-algorithmic, synchronous software model, which is what Project COSA is about.
Sunday, May 27, 2007
The industry is ripe for a revolution. The market is screaming for it. And what the market wants, the market will get. It is time for a non-algorithmic, synchronous approach. That's what Project COSA is about. Intel would not be complaining about software not being up to par with their soon-to-be-obsolete CPUs (ahahaha...) if they would only get off their asses, revolutionize the way we write software and provide revolutionary new CPUs for the new paradigm. Maybe AMD will get the message.
Saturday, May 26, 2007
Having seen firsthand the inertia and hostility of the western computer industry and computer science community toward any suggestion that there may be a better way of doing things, I have concluded that the new revolution cannot come from the West. They have placed their computer pioneers on a pedestal and nobody dares question the wisdom of their gods. India and China, on the other hand, don't have that problem. They have nothing to lose and everything to gain. They have been on the tail end of the first computer revolution from the beginning, but now they are in a position to leapfrog the western advantage and become the leader of the second revolution.
Friday, May 25, 2007
Note. Constraint discovery is part of my ongoing work in artificial intelligence. There is a way to combine inductive and categorical constraint discovery into a single mechanism that will enable its full automation. Unfortunately, I cannot divulge this method at this time. Sorry.
Thursday, May 24, 2007
I think the West has forced itself into a dangerous situation. The reason is that, while this is going on, the computer industry is suffering terribly from a chronic malady called unreliability. Their own scientists (e.g., Fred Brooks) are convinced that the problem is here to stay. As bad as it already is, the real cost of unreliability goes deeper than it appears on the surface. Consider that over 40,000 people die every year in the US alone as a result of traffic accidents. The solution is obvious: people should not be driving automobiles. That is to say, all vehicles should be self-driving. However, building driverless vehicles is out of the question because concerns over reliability, safety and cost have imposed an upper limit on the complexity of our current software systems. On the military and political front, there is a desperate need to automate the battlefield as much as possible in order to minimize human casualties and appease the voters back home.
The western world is thus stuck between a rock and a hard place. On the one hand, they have a really nasty problem sitting on their lap and it keeps getting worse. On the other hand, they have a bunch of aging gurus with a firm grip on the accepted paradigm, telling them that the problem cannot be fixed. This is where the East may want to capitalize on and profit from the West's self-imposed mental paralysis, in my opinion. What if there were another paradigm that solved the reliability problem at the cost of beheading some of the demi-gods of western computer science? Should the East care? I don't think so. Is it their gods that would be sacrificed? No. Does not the West look down on them as being mere copycats? Yes. Are they not the technological maids hired by the West to cook and do their laundry (outsourcing), so to speak? Yes.
The point of all this is that countries like China and India may have been late jumping on the wagon but there is no longer any reason nor necessity for them to continue riding in somebody else's wagon. They can now afford their own. They don't have to do other people's laundry anymore. This is why I advise the movers and the shakers of the East to take a good look at Project COSA. COSA is the solution to the nasty problem that everyone has been talking about. It's the one solution that the West cannot touch for fear of dirtying their "noble" hands and insulting their gods.
There is a revolution coming, no doubt about it. The market wants it and what the market wants the market will get, by whatever means possible. Who will come out unscathed? Who will seize the opportunity and lead the revolution? The East or the West? Can the West wake up out of its drunken stupor, realize the error of its ways and repent in time? Seriously, I don't think so. I have seen firsthand the power and inertia of conservatism. The old guard will not be replaced without a fight. There is too much at stake... unless, of course, the revolution happens in the East. Then they would have to stand up and take notice.
No other branch of science surpasses physics when it comes to promoting crackpottery. Most of us are already familiar with some of the more blatant instances of a science gone awry. Things like time travel, wormholes, black holes, parallel universes, etc... have already entered popular culture thanks to Hollywood's infatuation with physics myths. Physicists believe that they are above public scrutiny because they are convinced that the public is too stupid to understand what they do. And nowhere is this more apparent than in the new "science" of quantum computing.
Physicists pride themselves on the claim that theirs is a science based strictly on observation, but they pay only lip service to empiricism when it suits their political agenda. Consider that quantum computing (QC) is based on the concept of state superposition, a concept that was made famous in the last century by none other than quantum physics luminary and Nobel Prize winner, Erwin Schrödinger. Schrödinger asserted in his now famous Schrödinger's cat thought experiment that a subatomic particle can be in two states simultaneously, both decayed and not decayed. The problem is, this superposition of states can never be observed because (using the quantum physics lingo) observation causes the wave function to collapse. So much for empiricism.
This reminds me of a young kid who insisted that he could jump as high as a tall building but only when nobody was looking. It would be funny if it weren't so pathetic. Unobservable superposition of states has become part of quantum physics' credo, so much so that an entire research industry has mushroomed in recent years to take advantage of the hype surrounding QC. I don't know how much money has been spent on it, but it must be in the hundreds of millions of dollars. One of the most visible champions of QC is a physicist named David Deutsch. Deutsch believes in all sorts of voodoo science, including the physical possibility of time travel and the existence of an infinite number of parallel universes. Indeed, superposition is so blatantly false that some physicists felt it necessary (probably out of shame) to postulate the existence of parallel universes, one for every possible superposed quantum state. As Feyerabend wrote, "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence". The crackpottery never ends.
It is interesting to note that physicists have no clue as to why particle decay is probabilistic. And yet, in spite of this glaring lacuna in their understanding, combined with the lack of observation, they feel free to tell us mere mortals that we must believe in their crackpottery. QC fanatics will point out that QC is a proven fact that has been demonstrated in the laboratory. Don't you believe it! It is all (to borrow a favorite term from the vernacular) bullshit. Periodic QC announcements in the media are just part of the hype and a way for researchers to justify continued funding. QC is either a hoax or crackpottery, or both. If you are a QC physicist and you feel that I am defaming your profession, then by all means, let the courts decide.
Wednesday, May 23, 2007
I will illustrate the concept with an example. Let's say we have a COSA component (controller) in charge of controlling the temperature of a room using sensory readings from a thermostat. The controller turns on the air conditioning unit if the temperature goes above 80 degrees Fahrenheit and turns it off when it goes below 75. In addition, the heater is turned on when the temperature goes below 65 and turned off when it goes above 70. During testing, the CDM will learn a few things about the way temperature changes. If the temperature goes above 80, the AC is turned on and the CDM expects the temperature to then go down below 75. If the temperature goes below 65, the heat is turned on and the CDM expects the temperature to then go up above 70.
If, for whatever reason, temperature changes do not occur in their expected order, the CDM will sound an alarm. This is an instance where temporal constraints learned by the CDM during testing can be left in the program and used for normal error handling.
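The thermostat scenario above can be sketched in a few lines of ordinary code. This is only an illustration of the idea, not COSA itself (COSA programs are not written in a textual language); the class name, thresholds and alarm messages are my own, taken from the example above.

```python
# A minimal sketch of the CDM's learned expectations for the thermostat
# controller described above. After testing, the CDM knows that turning the
# AC on should eventually drive the temperature below 75, and turning the
# heat on should eventually drive it above 70. A contradicted expectation
# sounds an alarm.

class TemperatureCDM:
    """Monitors temperature readings against learned temporal expectations."""

    def __init__(self):
        self.expectation = None  # "cool", "warm", or None

    def observe(self, temp):
        # First, see whether a pending expectation has been satisfied.
        if self.expectation == "cool" and temp < 75:
            self.expectation = None   # AC brought the temperature down, as learned
        elif self.expectation == "warm" and temp > 70:
            self.expectation = None   # heater brought the temperature up, as learned

        # Mirror the controller's rules and set the corresponding expectation.
        if temp > 80:
            self.expectation = "cool"  # AC on: expect temperature to fall below 75
        elif temp < 65:
            self.expectation = "warm"  # heat on: expect temperature to rise above 70

    def check(self, temp):
        # Sound the alarm if an expectation is clearly contradicted, e.g. the
        # AC is on but the temperature keeps climbing.
        if self.expectation == "cool" and temp > 85:
            return "ALARM: temperature rising while AC is on"
        if self.expectation == "warm" and temp < 60:
            return "ALARM: temperature falling while heater is on"
        return "ok"
```

The point of the sketch is the last method: the learned constraint is not discarded after testing but left in place as a normal error handler.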
P.S. Remember that Blogger supports RSS (syndication). All you need is an RSS reader (there are a number of free readers out there) and you can receive Rebel Science News updates as they happen.
Monday, May 21, 2007
Constraint discovery is an inductive learning process that takes place while the application is running. As such, not all discovered constraints are necessarily valid. It is up to the application designer to validate or reject every constraint. The outputs of the cells may be connected to an alarm component or a report generator that compiles pertinent information into a file or database in case of violations. The inputs of the CDM cells are initially not connected. A special component called a searcher periodically sweeps through the program being tested and randomly connects the outputs of a few sensory cells within the program to the inputs of the CDM cells. There is a reason why only a few sensory cells are chosen during a pass of the searcher: learning can sometimes be CPU intensive and it makes sense not to slow down the application too much during testing, especially in real-time environments. Eventually, all sensory cells get connected. CDM cells use synaptic strengthening to establish a correlation. If the strength of a synapse reaches a predetermined value, the correlation is accepted. Learning can be extremely fast in deterministic systems because a single violation is enough to invalidate a synaptic connection.
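The learning rule just described can be sketched as follows. This is a toy illustration under my own assumptions, not the COSA mechanism itself: the class names, the acceptance threshold and the shape of the "searcher" are all made up for the example.

```python
import random

# A sketch of one CDM cell: a candidate correlation between two sensory
# signals is strengthened on every consistent observation and discarded on a
# single violation (fast learning in a deterministic system). The threshold
# value is an illustrative assumption.

ACCEPT_STRENGTH = 100  # predetermined strength at which a correlation is accepted

class CDMCell:
    def __init__(self, name):
        self.name = name
        self.strength = 0
        self.state = "learning"  # learning -> accepted | invalidated

    def observe(self, consistent):
        if self.state != "learning":
            return
        if consistent:
            self.strength += 1            # synaptic strengthening
            if self.strength >= ACCEPT_STRENGTH:
                self.state = "accepted"   # correlation accepted
        else:
            self.state = "invalidated"    # one violation is enough

# A toy "searcher" pass: randomly wire a few sensory outputs to new CDM
# cells, so the application is not slowed down by connecting everything at once.
def searcher_pass(sensor_names, cells, k=2):
    for name in random.sample(sensor_names, k):
        cells.setdefault(name, CDMCell(name))
```

Repeated searcher passes eventually cover all sensory cells, which matches the incremental coverage described above.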
During development, it is advisable to run the CDM as often as possible in order to prevent the introduction of inconsistencies. All violations should be corrected immediately. When it comes time to launch the application, the CDM must be removed from the release version for two reasons: it is no longer needed and the application runs faster without it.
P.S. In a future article, I will talk about a few COSA optimization techniques that should make a COSA program at least as fast as existing scripting languages.
Aside from its synchronous and concurrent nature, there are two other concepts in COSA that I consider essential to the model. One is the automatic elimination of blind code and the other is the automatic discovery of temporal constraints. While the former effectively solves the problem of event or data dependencies, the latter ensures that a system under development remains consistent and free of logical contradictions. This, in my opinion, is the most important aspect of the COSA model because it introduces the counterintuitive notion that design correctness is proportional to complexity. In other words, since additions and modifications are not allowed to break the existing design of a COSA application, and since the number of constraints is proportional to the complexity of the application, it follows that the more complex the application, the more correct it gets. This enables us to build applications of arbitrary complexity without the burden of unreliability, something that was impossible until now. This, in my opinion, is what will truly bring us into the golden age of computing and automation. I will explain the mechanism of constraint discovery and enforcement in my next article. Like everything else in COSA, it is neither magic nor rocket science. Its power lies in its simplicity.
Saturday, May 19, 2007
What will it take to convince the computer industry to change over to a new paradigm that will make it possible to automate all vehicles? What will it take to convince software developers that complexity no longer has to be an enemy but can and should be a trusted friend? What will it take to convince them that there is a way to build bug-free software of arbitrary complexity? What will it take? Are 43,000 dead men, women and children not enough?
In my opinion, most of the funds allocated for traffic research by the U.S. Department of Transportation should be used to find a solution to the software reliability crisis. Why? Because the solution would keep tens of thousands of human beings from dying needlessly every year. Are you listening, Secretary Mary E. Peters?
Friday, May 18, 2007
One of the nice things about COSA is that it can accommodate several types of business models that target specific niche markets. Below is a list of products and/or services for which COSA is ideally suited.
- Embedded COSA Operating System (ECOS). COSA would be perfect as the basis for a small embedded operating system for mission-critical applications and/or portable devices such as automotive control systems, avionics, cell (mobile) phones, set-top boxes, PDAs, etc...
- COSA Virtual Machine (CVM). Similar to the Java Virtual Machine (JVM), the CVM could serve as an application execution engine for use in existing legacy operating systems such as Windows, Linux, OSX, etc... CVM and ECOS would have largely compatible execution kernels. This means that the same software construction tools (see below) could be used to develop applications for both environments.
- COSA Development Studio (CDS). The CDS would consist of a set of graphical tools for designing and testing COSA applications. It could be used as a proprietary rapid application development (RAD) tool with which to create software for either CVM, ECOS or COS (see below). CDS could be hosted on any of a number of existing desktop OSs. It could also be sold to the public as a RAD tool for legacy systems (CVM), embedded systems (ECOS) or the COSA operating system (COS).
- COSA Operating System (COS). COS could be either an open or closed source OS depending on the business model. It is a full operating system in the sense that it would include all the usual service components and applications found in systems like Linux, MacOS and Windows. In addition, COS would, due to its very nature, automatically support cluster computing for high-performance applications such as weather forecasting and scientific/technical simulations. COS should be initially marketed to businesses and government agencies, especially for mission-critical environments.
- COSA-Optimized Processors (COP). These are RISC-like central processing units (CPU) designed and built especially for the COSA software model. COPs would process COSA cells directly and would replace most of the COSA execution kernel. The end result would be extremely fast processing and simulated parallelism implemented at the chip level. COP chips can be designed for various markets such as end-user products (desktop computers, cell (mobile) phones, set top boxes, game boxes, notebook computers, laptops, etc...) and mission-critical business systems.
- COSA Neural Processors (CNP). The COSA project was heavily influenced by my ongoing work in spiking (pulsed) neural networks or SNNs. Since COSA cells are similar to spiking neurons, it makes sense to extend the capabilities of COSA-optimized processors so as to add support for fast SNN processing. Neural network driven applications are bound to multiply in the near future. The nice thing about CNPs is that they would be ideal for large-scale distributed SNN applications that require hundreds of millions or even billions of neurons.
Thursday, May 17, 2007
Sooner or later, the COSA paradigm will hit critical mass. It will happen when a sufficiently large number of intelligent people in the business recognize, not only the wisdom of the approach, but also the golden age that the new software model will bring. However, I have a word of caution for any Fortune 500 technology company that is seriously interested in capitalizing on the coming COSA revolution: the money is in the hardware, not the software. COSA is an idea that is already in the public domain. You can't patent it. You can't say it's your idea either unless, of course, you want to be laughed at. Until a de facto COSA standard model emerges, good luck trying to make money selling a proprietary COSA OS or development tools that may or may not be compatible with the eventual standard. Everybody and their uncle will be working on a competing OS and the free software movement is certainly not going to flip on its back and die. This is not to say that there will not be any money in selling COSA tools but the days of DOS-begat-Microsoft are over.
It is best to take an indirect approach, in my opinion. I think it more advisable for you to form a strong alliance within the industry with the openly stated objective of focusing your collective financial, political and philanthropic muscle into establishing a completely open standard. In other words, the COSA OS and the necessary development tools should be completely free and open. And I mean 'free' both as in "free beer" and as in liberty. In the meantime, you would be hard at work on your new fully COSA-optimized, multi-core, green CPU design. By the time the standard is agreed upon, you would be way ahead of the pack by being the first to market a CPU compatible with the accepted model. This will give you the breather you need to improve on your initial design and maintain an iron grip on the market. By that time, you're no longer in the business of manufacturing and selling computer CPUs. You're in the tool business. Don't worry about public acceptance of the new OS. COSA software construction will be so easy that the market will be flooded with high quality applications from the get-go. Essentially, you'll be in the middle of a gold mining frenzy and you're the only supplier of picks and shovels in town.
Welcome to the true golden age of computing. Welcome to the COSA revolution.
Monday, May 14, 2007
Saturday, May 12, 2007
As I mentioned in my previous article, quantum computing is based on the belief that quantum states are superposed. The idea is that since both states (0 and 1) of a quantum bit (qubit) exist simultaneously, it should be possible to perform operations on both states at the same time. Why do quantum physicists believe in such an absurd concept? I suspect that it has to do with peer pressure. I think it all started when Erwin Schrödinger first proposed a now famous thought experiment known as Schrödinger's cat. While no one has ever observed multiple simultaneous states of a quantum property, quantum physicists accept it as a fact.
A great example of the probabilistic nature of quantum processes is what is known as the half life of subatomic particles. While it is not possible to predict exactly when a radioactive atom will decay, physicists can predict the decay time of half of a large group of identical atoms based on observation. The question is why does nature use probability? Physicists have no clue and yet, this nasty little lacuna in their understanding does not seem to have had an effect on their convictions.
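The half-life behavior described above is easy to reproduce numerically. The sketch below is a simple Monte Carlo illustration under the standard assumption that each atom decays independently with probability 1/2 per half-life interval; the function name and numbers are my own.

```python
import random

# Monte Carlo illustration of half-life: no individual decay time can be
# predicted, yet roughly half of a large population survives each half-life
# interval, which is exactly what physicists observe for real atoms.

def simulate_decay(n_atoms, n_halflives, rng):
    """Return the surviving population after each half-life interval."""
    remaining = n_atoms
    counts = [remaining]
    for _ in range(n_halflives):
        # Each surviving atom decays with probability 0.5 this interval.
        remaining = sum(1 for _ in range(remaining) if rng.random() >= 0.5)
        counts.append(remaining)
    return counts

counts = simulate_decay(100_000, 3, random.Random(42))
# Expect roughly 100000 -> 50000 -> 25000 -> 12500, within statistical noise.
```

The aggregate prediction is sharp even though every single decay is a coin flip, which is the contrast the paragraph above draws.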
The reason that quantum interactions are probabilistic is rather simple. Time is abstract and the universe is discrete. What this means is that the universe cannot calculate the exact duration of interactions. In other words, all interactions, regardless of the energies involved, have the exact same fundamental discrete duration, a very minute interval. The problem is that a fixed duration for every interaction, regardless of energy, would break conservation laws. Nature has no recourse but to use probability to decide when to allow interactions to happen. Over the long run, conservation laws are obeyed.
In no way does this mean that nature must somehow maintain both states (decayed and not decayed) of a particle. All it means is that nature knows how energetic a particle's interaction with another is and uses this value to determine the percentage of a group of similar particles which must undergo decay. There is no need to invoke quantum weirdness, superposition of states, infinite universes, voodoo or any other such magic. It is for these reasons that I maintain that quantum computing is voodoo science of the worst kind regardless of the incessant claims of its practitioners.
D-Wave's Quantum Computing Crackpottery
Thursday, May 10, 2007
Paul Feyerabend, the foremost science critic of the last century, once wrote in his book 'Against Method' that "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down in size, and to give them a more modest position in society." Feyerabend was speaking of scientists in general but he may as well have been talking about the new "science" of quantum computing. Quantum computing is based on the so-called Copenhagen interpretation of quantum mechanics. The idea is that the states of certain quantum properties, such as the spin of a particle, are superposed, meaning that a quantum property can have multiple states simultaneously.
The blatantly ridiculous nature of this belief has not stopped an entire research industry from sprouting everywhere in the academic community. QC researchers are making grandiose promises about magical computational powers being just around the corner in order to obtain grants and attract the attention of gullible investors, while having nothing of practical importance to show. Not a week goes by without some announcement about some "progress" or "advance" in QC. It's like a magician going through all sorts of contortions without ever pulling the rabbit out of the hat. Rather than retrace their steps, a few physicists have tried to explain away the contradictions by postulating the existence of an infinite number of universes, one for each quantum state. In so doing, the QC hole keeps getting deeper and deeper, and words like fraud, crackpottery and hoax come to mind.
The problem with QC is not so much its laughable absurdity, but the fact that quantum physicists have no clue as to why certain quantum processes are probabilistic in the first place. Physicists love to boast that theirs is an empirical science, but they have no qualms believing in things that have never been observed. To them, superposition is not an interpretation or a belief but a fact. However, from my vantage point, QC is now a full-blown organized religion. In my next article, I will explain the simple reason that quantum processes are probabilistic and why quantum computing is utter nonsense or, using one of my favorite putdowns, "chicken feather voodoo physics".
Monday, May 7, 2007
On the right side of this blog page is a panel labeled "Older News". In it you will find links to the old news pages. I plan to eventually transfer all the old news articles to the archive for this blog so that they can be searched by keywords.
Having said this, nothing is written in stone. I am always willing to see the error of my ways and repent if necessary. Please let me know what you think.
Anyway, I edited and made a few additions to the Seraphim page while I am debating whether or not this is the right time for this knowledge to emerge. There is a lot more stuff I want to write about but, frankly, I am afraid. Forgive my use of the vernacular but this is truly powerful shit I am meddling with here. This stuff is downright scary. The artificial intelligence stuff is scary too but the physics stuff is scarier, in my opinion, if only because I believe it can be implemented by almost anybody on a very short notice. In a world so divided and shaken by strife and violence, this is the sort of thing that would surely bring us face to face with catastrophe on a global scale. Unless we change our ways, of course. More to come...