Wednesday, May 28, 2008

The COSA Saga

Rockwell Aim 65

I had my initial idea for COSA back in 1980. I had just bought myself a 1 MHz Rockwell AIM 65 single board computer with 4k of RAM. At the time, I knew almost nothing about computers other than that they fascinated me to no end. On the same day my computer arrived, I dove into one of the assembly language programming books that came in the package. In the introductory chapter, the author (I can’t remember his name) covered the basics of how computers work and what a program consists of. He then explained that, unlike a biological brain, which has many slow processors (neurons) working in parallel, a computer has a single CPU that processes instructions one after the other. However, he continued, a CPU such as the 6502 is so fast that it can do the work of many slow parallel processors in a short interval.

Right away, even before I knew enough assembly language to write my first program, it was clear to me that computers were fundamentally flawed. I don’t know how but I immediately understood that a computer program should be more like a neural network. I knew that processors should be designed and optimized to emulate the reactive parallelism of the brain at the instruction level. I also realized that, for performance purposes, the emulation mechanism had to reside within the processor hardware itself.

Good Ideas Don’t Get Stolen

Soon afterward, I mastered assembly language and landed a job as a game programmer at a small company. I tried to explain my idea to other programmers but they either did not care or thought it was a dumb idea. I was stunned by their reaction. I just could not understand how seemingly intelligent people could be so blind to something that was so obvious. Surely, game programmers understood the concept of simulating parallelism with the help of two buffers and a loop. My idea essentially took the concept down to the lowest possible level. It is not rocket science. Someone even told me that, if my idea was any good, I should keep it a secret. I was then reminded of the following quote by computer pioneer Howard Aiken: “Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats.”

No Reason to Change

Over the ensuing decades, I developed a love-hate relationship with computers: I loved their potential but I hated the way they worked and the way they were programmed. I hated programming languages. I still do. I longed to do something about it but, even though COSA is dear to me, I could never find much time to devote to it. Like everyone else, I was busy making a living. Besides, I was preoccupied with another passion of mine, artificial intelligence, which is the real reason I became interested in computers in the first place. I watched in horror as computer scientists turned computer programming into a complete mess, a veritable Tower of Babel. The Turing machine was accepted as the de facto computing model. Hopelessly flawed computer languages and operating systems proliferated like mad. Needless to say, unreliability and low productivity wreaked havoc everywhere. A few people in the business began to complain, and the reliability industry mushroomed. Years later, the problem is still with us, worse than ever. Realizing that the COSA model is deterministic and inherently reliable, I got the idea of using the software reliability crisis as a springboard to promote Project COSA. That worked to a certain extent. A handful of people started paying attention, but not enough to motivate the leaders of the industry to switch to a better model. After all, with a captive market, the vendors were making money hand over fist, so there was no real reason to change. Besides, the computer science community (mostly a bunch of old computer geeks who don't know when it is time to retire) still has a firm grip on the business.

Hooked on Speed

Buggy software, high production costs and late product launches took their toll, but the industry was resigned to living with the pain. Even if everybody somehow became convinced that a solution to their problems was available, there was already too much time and effort invested in legacy applications to turn the ship around. But more trouble was lurking around the corner. As the complexity of software applications increased, the demand for higher performance went up with it. Moore’s law took care of that problem for decades. Eventually, however, the laws of physics got the upper hand and it became impossible to increase clock speeds while keeping the chips cool enough. Disaster loomed. The industry could see only one way out of its predicament: multicore parallel processing based on an old programming technique called multithreading. Problem is, programming with threads is hard as hell. The solution is obviously not working and, lately, there has been an odor of panic in the air.

A New Opportunity

The COSA software model is, of course, inherently parallel and has been from the start. In addition, it does not use threads and is thus free of all the nasty problems associated with multithreading. Furthermore, since COSA is signal-based just like hardware components, it is ideally suited to compositional software construction via plug-compatible modules. In other words, fast and easy parallel programming, precisely what the industry is searching for. In the past, I never saw the need to emphasize the parallel nature of COSA. It did not occur to me until about a year ago that the only thing that would convince the computer industry to abandon its evil ways would be the performance issue. I saw it as an excellent opportunity to promote COSA in earnest. I made a lot of noise and a lot of people in the business are beginning to take notice. Still, I am a realist. The computer industry is like the Titanic and nobody can expect it to turn on a dime. There is a lot of inertia that must be overcome, not only in terms of engineering, but also in terms of leadership. And by this I am not talking about leaders in academia. I had long ago given up on the computer science community since I am convinced that computer scientists are the ones who got us all into this mess in the first place. They are not about to accept anything like COSA because COSA will make them look stupid. Only the captains of the industry have the power to decide its future course. I am talking about people like Bill Gates, Steve Ballmer, Paul Otellini, Hector Ruiz, etc… Problem is, trying to get the attention of people like that is like trying to get an audience with the Pope. Oh well. I am doing the best I can and it’s not over until it’s over.

The story of COSA continues. Stay tuned.

See Also:

How to Solve the Parallel Programming Crisis
Half a Century of Crappy Computing
Why Software Is Bad and What We Can Do to Fix It

Thursday, May 22, 2008

Encouraging Mediocrity at the Multicore Association

Thread Monkey Society

The Multicore Association is an industry-sponsored group that aims to provide standard approaches to multicore programming. Although I am an avid proponent of standardization, it pains me to see an industry association actively promoting parallel computing standards and practices that are designed to favor only one group: multicore processor and programming-tool makers. In other words, the Multicore Association does not have the interest of customers in mind but that of its members, i.e., the vendors. On the association’s site, we read the following:
In no way, of course, does the effort to establish standard APIs intend to limit innovation in multicore architectures. APIs that reflect the intrinsic concurrency of an application are in no sense a restriction on the creativity and differentiation of any given embodiment.
This is pure BS, of course, because as soon as a given set of parallel programming standards is accepted and established, the industry becomes pretty much locked into one type of multicore architecture or another. As an example, take a look at their Multicore Programming Practices Group, whose goal is to see how the C and C++ programming languages can best be used to create code that is multicore-ready. How can anybody maintain that the use of last century’s programming languages does not limit innovation in multicore architectures? Who are they kidding? That is precisely what it does. It encourages vendors to continue to make and sell multicore processors that use the thread-based model of concurrency. How else are you going to use C or C++ to implement concurrency in a multicore processor without threads or something similar? There is no escaping the fact that the Multicore Association is really a society created for the benefit of thread monkeys. Why? Because the current crop of multicore chips being put out by the likes of Intel, AMD, IBM and the others is worthless without threads. These folks are desperate to find a way to future-proof their multicore technology and they figure that the Multicore Association can help. Now, if you object to being called a thread monkey, that is too bad. I really don’t want to hear about it.

What the Market Wants

You know, this is getting really tiresome. How many times must it be repeated to the industry that the only thing worse than multithreading is single threading? Is the Multicore Association what the computer industry really needs? I don’t think so. It may be what Intel or AMD or Freescale needs but this is not what the customers need. And by customers, I mean the multicore processor market, the people who buy and program multicore computers. The market wants super fast, fine-grain, self-balancing, parallel computers that are easy to program. People want to create parallel programs that scale automatically when more cores are added. They want a programming environment that is better than last century’s technology. They don't even want to think about cores other than as a technology that they can buy to increase performance. Does Intel, or AMD, or Freescale, or IBM or any of the other multicore vendors sell anything that even comes close to delivering what the market wants? I don’t think so. The only board member listed on the Multicore Association's site that can claim to be truly innovative is Plurality Ltd of Israel. Even so, Plurality’s programming model sucks (see my article on Plurality’s Hypercore Architecture) because its task-oriented model is just multithreading in disguise.

We Ain’t Buying this Crap

What is needed is an association that has the interests of multicore customers in mind. Multicore customers must make themselves heard and the only way to do this is with their pocketbooks. IT directors and IT sponsors should refuse to buy the current crop of multicore processors for the simple reason that they suck. Am I calling for a boycott? You bet I am. The market should refuse to buy into the mediocrity that is the multithreading programming model. And the only way the market is going to get what it wants and change the course of computing in its favor is when those beautiful multicore chips begin to pile up at the fab, all dressed up with nowhere to go. The vendors may have their evangelists, their trade organizations and their snake oil salesmen. The market has something better: the power to say, “We ain’t buying this crap!” That would be a message heard loud and clear.

My Message to Marcus

My message to Marcus Levy is this. I am not one to foment trouble just for the hell of it. It is just my nature to tell it like I see it. My main interest in multicore technology is that of a consumer and developer. My position is that it is not in the interest of the multicore industry to be a purveyor of mediocrity. In the end, this kind of attitude will come back to haunt you and the members of your association. But it does not have to be that way. The whole thing can be a win-win situation if the leaders of the multicore industry are willing to listen to wisdom and realize their folly. Their approach to parallel programming is crap. I know it, you know it, and they know it. They know it because, no matter how much time and money they spend on trying to make it all work, it is still a royal pain in the ass. They know it because their researchers have visited my blog countless times since I wrote my “Nightmare on Core Street” series. Now, I understand the not-invented-here syndrome perfectly well, but that is no excuse.

As the head of an influential and rapidly growing organization, you have two options, in my opinion. You can choose to take the cowardly route and play along with the mediocrity bunch or you can step up to the plate like a hero and let the industry know in no uncertain terms that it is full of shit. Zero or hero, take your pick. But then again, it may not matter in the grand scheme of things. If the current players refuse to change their ways and do the right thing, some unknown entity may just sneak behind them and steal the pot of gold.

Saturday, May 17, 2008

Parallel Computing: Why Legacy Is Not Such a Big Problem

Legacy’s Double-Edged Sword

The computer industry has wisely and correctly embraced parallelism as the only way to maintain the continuous performance increases of the previous decades. Unfortunately, the software legacy problem hangs menacingly like the sword of Damocles over the heads of industry executives, threatening to derail their careful plans for a smooth transition to parallel computing. It is a double-edged sword. On the one hand, if they choose to retain compatibility with the existing code base via multithreading with x86 cores, parallel programming will continue to be a pain in the ass. On the other hand, if they decide to adopt a universal computing model that makes parallel programming easy, they will lose compatibility with the huge installed base of legacy code. Damned if you do and damned if you don’t. It is not easy being a multicore executive.

Not the End of the World

The legacy problem may seem like a nightmare from hell with no solution in sight, but it is not the end of the world. Keep in mind that most computers are connected to either a local or a global network. Even if the industry changes over to a completely new kind of computer architecture, current servers, printers and databases will continue to work as they are for the foreseeable future. It will take time for all the nodes in the network to change over to the new architecture, but communication protocols will remain as they are. Standards like IP, HTTP, HTML, XML, SQL, PDF, etc… will remain viable. This way, new systems will have no trouble sharing the same network with the old stuff. Consider also that the embedded software industry will not hesitate to adopt more advanced processor and programming technologies, no matter how disruptive.

The End of Windows, Mac OS, Linux, etc…

The only real legacy problem has to do with mass-market operating systems (e.g., Windows, Mac OS, Linux, etc…) and the applications that run on them. These systems can continue to run on legacy hardware but they obviously cannot make the transition to a new incompatible computing model. It seems like a real losing proposition, but what if the computer industry introduced a multicore model that made parallel programming so easy and intuitive that practically anybody could use it to develop sophisticated, rock-solid software applications using a drag-and-drop, component-based (see below) software construction environment? What if the cost of reprogramming complex legacy applications from scratch using this new model were negligible compared to the advantages? What if the recreated applications were blindingly fast, scalable, bug-free, secure and better in terms of features and ease of use? What if the end user could increase the performance of his or her computer simply by replacing, say, its 8-core processor with a more powerful unit having 16, 32, 64 or more cores? What if multicore processors could handle both general purpose and data-intensive multimedia apps with equal ease? Would the world switch to this new model? You bet it would. Will it be the beginning of the end for MS Windows, Linux and the others? I think so. And not just operating systems: it would be the end of dedicated graphics processors as well.

Component-Based Programming

Componentizing is a time-honored and highly successful tradition in the hardware industry. Computer scientists have tried for decades to emulate this success in software with mixed results. In my thesis on the software reliability crisis, I argue, among other things, that the reason component-based programming never became widespread is that our current algorithmic software model is fundamentally flawed. I maintain that it is flawed primarily because, unlike the hardware model, it provides no mechanism for the deterministic control of timing, i.e., of the execution order (concurrent or sequential) of operations. In other words, software lacks the equivalent of the deterministic hardware signal. The parallel programming crisis affords us an unprecedented opportunity to do things right, the way they should have been done in the first place. I have argued for many years (long before multicore processors became the rage) that the correct computing model is one that is fundamentally synchronous, reactive (signal-based) and supports fine-grained parallelism at the instruction level. This is the basis of the COSA software model. COSA components are plug-compatible, that is to say, they can automatically and safely connect themselves to other compatible components using uniquely tagged male and female connectors.
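
To make the idea concrete, here is a minimal sketch in C++ of how automatic matching of uniquely tagged male and female connectors might work. This is my own illustration with made-up names (Connector, Component, autoConnect); COSA does not define any such API, and a real implementation would live in the processor and the composition environment, not in C++.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical illustration: a connector carries a unique message-type tag
// and a gender. Two connectors are compatible when their tags match and
// their genders differ (one "male" sender, one "female" receiver).
struct Connector {
    std::string tag;   // e.g. "save-request", "pixel-row"
    bool male;         // true = output (sender), false = input (receiver)
};

struct Component {
    std::string name;
    std::vector<Connector> connectors;
};

// Auto-connect: for every male connector on 'a', find a female connector
// on 'b' with the same tag. No manual wiring, no call interface.
void autoConnect(const Component& a, const Component& b) {
    for (const auto& out : a.connectors) {
        if (!out.male) continue;
        for (const auto& in : b.connectors) {
            if (!in.male && in.tag == out.tag)
                std::cout << a.name << "." << out.tag << " --> "
                          << b.name << "." << in.tag << "\n";
        }
    }
}

int main() {
    Component editor{"TextEditor", {{"key-press", false}, {"save-request", true}}};
    Component fileIO{"FileIO",     {{"save-request", false}, {"file-saved", true}}};
    autoConnect(editor, fileIO);   // prints: TextEditor.save-request --> FileIO.save-request
}

The point is simply that composition reduces to matching tags and genders, which is what makes safe, drag-and-drop construction plausible.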

The Age of the Do-It-Yourself Operating System

In the COSA software model, the operating system is componentized and extensible. In this light, applications are no longer considered stand-alone programs but high-level components that can be used to extend the OS. A company could use this technology to design and build a scalable multicore computer starting with a skeleton OS, super fast graphics, a set of easy-to-use software composition tools and just one initial end-user application, a powerful web browser. Extending or customizing the OS will be so easy (just drag a new component from a vendor’s web-based component library and drop it on the OS object, et voilà!) that it will be up to the customer/user to decide which features he or she wants to purchase. For example, if you don’t need mouse or file I/O support, just don’t buy those components. The same method will be used to construct and customize almost any application, such as the browser, word processor or paint program.

Conclusion

I am claiming that once processors (both single and multicore) are designed and optimized to support the COSA software model, rapid, drag-and-drop programming will become the norm. It will turn almost everybody into a computer programmer: programming for the masses. I believe that when COSA is adopted by the computer industry, it will usher in the next computer revolution, the true golden age of automation and supercomputing on the desktop. Notice that I wrote ‘when’ rather than ‘if’. This is how confident I am of the correctness of the COSA model. The computer industry has no alternative, in my opinion, because there is only one correct parallel computing model and COSA is it. The industry can retain the flawed multithreading model and continue to live in hell, or it can do the right thing and reap the profits. It's kind of like the Matrix movie; it's either the red pill or the blue pill. Take your pick.

In a future post, I will go over each and every advantage of adopting the COSA software model.

See also:

Nightmare on Core Street

Sunday, May 11, 2008

Parallel Computing: The Case for Universality

Bill McColl on Domain-Specific Parallel Programming

Bill McColl just posted a well-argued, albeit unconvincing, article on his blog to try to make the case that domain-specific parallel programming tools and languages will have the biggest impact on parallel computing in the short term. McColl writes:

The central unresolved question in this area is whether a single general-purpose parallel programming language can emerge that will achieve the kind of universality that languages such as C, C++ and Java have achieved in sequential computing. My own view, based on more than 20 years of research in this area, is that this is very unlikely, certainly in the short to medium term. Instead I expect that we will see a number of powerful new domain-specific parallel languages emerging that quickly capture a large user base by being high-level and easy-to-use, with all of the complex implementation aspects of mapping, scheduling, load balancing and fault tolerance being handled automatically.
It is hard to tell whether McColl is making a prediction about the near-term direction of parallel computing, based on his experience in the field and his familiarity with various ongoing research projects, or whether this is the direction that he is personally promoting. I think it is both, since McColl’s company, Parallel Machines, is also in the business of developing domain-specific programming tools for parallel computers. Although I agree that the industry seems to be moving in that direction (and I wish Bill McColl the best of luck in his business venture), I disagree that this is how it should spend its research money. Indeed, I am convinced that it would be a colossal mistake, the end result of which would be to create another huge legacy of soon-to-be-obsolete computer applications and tools.

The Need for Universality

Bill McColl is mistaken, in my opinion. If you have spent the last 20 years researching only domain-specific parallel programming tools, as opposed to a universal computing model, you can only see one side of the picture. You are therefore in no position to decide or advise others that universality is not the way to proceed, even in the short term. Domain-specific tools are designed to treat the symptoms of the malady, not to cure its cause once and for all. As I argued in my Nightmare on Core Street series, universality should be the primary objective of multicore research. The reason is that, once you have achieved universality, you know that you have solved the problem. Universality must not be limited to programming tools, however. It should be the direct and natural outcome of a universal computing model. The old sequential approach to computing is obviously not universal; otherwise the industry would not be in the mess that it is in. It is not as if people in the business do not already observe and understand the high costs of non-universality. Making the transition from sequential computing to massive parallelism is obviously turning out to be a very costly nightmare. Should the industry embark on another costly adventure in non-universality? Answer: Of course not. It would be the ultimate exercise in foolishness. As they say, a scalded cat is afraid of cold water. At least, it should be.

The COSA Model Is Universal Now

I have my biases and Bill McColl has his, but I can honestly claim that I understand both sides of this debate (universality vs. domain-specificity) as well as or better than anyone else in the business, and I am not saying this to boast. My primary area of interest is artificial intelligence and I see the need for a universal parallel computing model to support the massively parallel artificial brains of the future. I have spent the better part of the last two decades researching a universal computing model called the COSA software model. Certainly, it will require a radical change in the way we currently build and program our computers, but it is not rocket science. A COSA-compatible multicore processor can be made pin- and signal-compatible with existing motherboards. Given the right resources, it could be designed and implemented in as little as two years using current fabrication technology. The proposed COSA development environment offers many advantages over domain-specific tools besides universality. It is inherently deterministic, implicitly parallel and graphical. Determinism is icing on the parallel cake because it leads to secure and rock-solid applications that do not fail. In addition, it makes it possible to effectively implement plug-compatible components, an essential characteristic of drag-and-drop programming and massive code reusability. COSA will usher in the age of programming for the masses and the era of the custom operating system: drag'm and drop'm. In fact, I believe that COSA programming will be so easy that rewriting existing applications for the COSA environment will be a breeze.

See also:

Nightmare on Core Street
Parallel Programming, Math and the Curse of the Algorithm
Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic

Wednesday, May 7, 2008

Half a Century of Crappy Computing (Repost)

Decades of Deception and Disillusion

(Note: This article was originally posted in October 2007)

I remember being elated back in the early 80s when event-driven programming became popular. At the time, I took it as a hopeful sign that the computer industry was finally beginning to see the light and that it would not be long before pure event-driven, reactive programming was embraced as the universal programming model. Boy, was I wrong! I totally underestimated the capacity of computer geeks to deceive themselves and everyone else around them about their business. Instead of asynchronous events and signals, we got more synchronous function calls; and instead of elementary reactions, we got more functions and methods. The unified approach to software construction that I was eagerly hoping for never materialized. In its place, we got inundated with a flood of hopelessly flawed programming languages, operating systems and CPU architectures, a sure sign of an immature discipline.

The Geek Pantheon

Not once did anybody in academia stop to consider that the 150-year-old algorithmic approach to computing might be flawed. On the contrary, they loved it. Academics like Fred Brooks decreed to the world that the reliability problem is unsolvable and everybody worshipped the ground they walked on. Alan Turing was elevated to the status of a deity and the Turing machine became the de facto computing model. As a result, the true nature of computing has remained hidden from generations of programmers and CPU architects. Unreliable software was accepted as the norm. Needless to say, with all this crap going on, I quickly became disillusioned with computer science. I knew instinctively what had to be done, but the industry was, and still is, under the firm political control of a bunch of old computer geeks. And, as we all know, computer geeks believe, and have managed to convince everyone else, that they are the smartest human beings on earth. Their wisdom and knowledge must not be questioned. The price [pdf], of course, has been staggering.

In Their Faces

What really bothers me about computer scientists is that the solution to the parallel programming and reliability problems has been right in their faces from the beginning. We have been using it to emulate parallelism in such applications as neural networks, cellular automata, simulations, VHDL, Verilog, video games, etc… It is a change-based or event-driven model. Essentially, you have a global loop and two buffers (A and B) that contain the objects to be processed in parallel. While one buffer (A) is being processed, the other buffer (B) is filled with the objects that will be processed in the next cycle. As soon as all the objects in buffer A are processed, the two buffers are swapped and the cycle repeats. Two buffers are used in order to prevent the signal race conditions that would otherwise occur. Notice that there is no need for threads, which means that all the problems normally associated with thread-based programming are non-existent. What could be simpler? Unfortunately, all the brilliant computer savants in academia and industry were, and still are, collectively blind to it. How could they not be? They are all too busy studying the subtleties of Universal Turing Machines and comparing notes.
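
For anyone who has never written such a loop, here is a bare-bones sketch of the technique in C++. It is my own toy illustration with made-up names (Cell, react, run), not code from any COSA specification: two cells signal each other back and forth, and because a cell processed in the current cycle cannot fire again until the next one, there are no race conditions and there are no threads.

#include <cstdio>
#include <functional>
#include <utility>
#include <vector>

// A "cell" is anything that reacts when processed and may schedule other
// cells (its successors) to run in the NEXT cycle.
struct Cell {
    std::function<void(std::vector<Cell*>&)> react;
};

// The global loop with two buffers. While the cells in 'current' are being
// processed, the cells to run next cycle accumulate in 'next'; then the two
// buffers are swapped. Nothing processed this cycle can fire again until
// the next one, which is what prevents signal races.
void run(std::vector<Cell*> current, int cycles) {
    std::vector<Cell*> next;
    for (int c = 0; c < cycles; ++c) {
        next.clear();
        for (Cell* cell : current) cell->react(next);
        std::swap(current, next);
    }
}

int main() {
    Cell ping, pong;
    ping.react = [&](std::vector<Cell*>& nxt) { std::puts("ping"); nxt.push_back(&pong); };
    pong.react = [&](std::vector<Cell*>& nxt) { std::puts("pong"); nxt.push_back(&ping); };
    run({&ping}, 4);   // prints ping, pong, ping, pong
}

Game programmers, simulation writers and logic-circuit simulators have been writing variations of this loop for decades; the argument made here is simply that it belongs inside the processor, at the instruction level.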

We Must Reinvent the Computer

I am what you would call a purist when it comes to event-driven programming. In my opinion, everything that happens in a computer program should be event-driven, down to the instruction level. This is absolutely essential to reliability because it makes it possible to globally enforce temporal determinism. As seen above, simulating parallelism with a single-core CPU is not rocket science. What needs to be done is to apply this model down to the individual instruction level. Unfortunately, programs would be too slow at that level because current CPUs are designed for the algorithmic model. This means that we must reinvent the computer. We must design new single-core and multicore CPU architectures that directly emulate fine-grained parallelism. There is no getting around it.

Easy to Program and Understand

A pure event-driven software model lends itself well to fine-grain parallelism and graphical programming. The reason is that an event is really a signal that travels from one object to another. As every logic circuit designer knows, diagrams are ideally suited to the depiction of signal flow between objects. Diagrams are much easier to understand than textual code, especially when the code is spread across multiple pages. Here is a graphical example of a fine-grained parallel component:


Computer geeks often write to argue that it is easier and faster to type keywords like ‘while’, ‘+’, ‘-’, ‘=’, etc… than it is to click and drag an icon. To that I say, phooey! The real beauty of event-driven reactive programming is that it makes it easy to create and use plug-compatible components. Once you’ve built a comprehensive collection of low-level components, there is no longer a need to create new ones. Programming will quickly become entirely high-level and all programs will be built from existing components. Just drag’m and drop’m. This is the reason that I have been saying that Jeff Han’s multi-touch screen interface technology will play a major role in the future of parallel programming. Programming for the masses!

Too Many Ass Kissers

I have often wondered what it will take to put an end to decades of crappy computing. Reason and logic do not seem to be sufficient. I now realize that the answer is quite simple. Most people are followers or, to use the vernacular, ass kissers. They never question authority. They just want to belong to the group. What it will take to change computing, in my opinion, is for an intelligent and capable minority to stop kissing ass and do the right thing. That is all. In this light, I am reminded of the following quote attributed to Mark Twain:

“Whenever you find that you are on the side of the majority, it is time to pause and reflect.”

To that I would add that it is also time to ask oneself, why am I kissing somebody's ass just because everybody else is doing it? My point here is that there are just too many gutless ass kissers in the geek community. What the computer industry needs is a few people with backbones. As always, I tell it like I see it.

See Also:

How to Solve the Parallel Programming Crisis
Parallel Programming, Math and the Curse of the Algorithm
Parallel Computing: Why the Future Is Non-Algorithmic
The Age of Crappy Concurrency: Erlang, Tilera, Intel, AMD, IBM, Freescale, etc...

Sunday, May 4, 2008

Parallel Computing: Why the Future Is Non-Algorithmic

Single Threading Considered Harmful

There has been a lot of talk lately about how the use of multiple concurrent threads is considered harmful by a growing number of experts. I think the problem is much deeper than that. What many fail to realize is that multithreading is the direct evolutionary outcome of single threading. Whether running singly or concurrently with other threads, a thread is still a thread. In my writings on the software crisis, I argue that the thread concept is the root cause of every ill that ails computing, from the chronic problems of unreliability and low productivity to the current parallel programming crisis. Obviously, if a single thread is bad, multiple concurrent threads will make things worse. Fortunately, there is a way to design and program computers that does not involve the use of threads at all.

Algorithmic vs. Non-Algorithmic Computing Model

A thread is an algorithm, i.e., a one-dimensional sequence of operations to be executed one at a time. Even though the execution order of the operations is implicitly specified by their position in the sequence, it pays to view a program as a collection of communicating elements or objects. Immediately after performing its operation, an object sends a signal to its successor in the sequence saying, “I am done; now it’s your turn.” As seen in the figure below, an element in a thread can have only one predecessor and one successor. In other words, only one element can be executed at a time. The arrow represents the direction of signal flow.


In a non-algorithmic program, by contrast, there is no limit to the number of predecessors or successors that an element can have. A non-algorithmic program is inherently parallel. As seen below, signal flow is multi-dimensional and any number of elements can be processed at the same time.

Note the similarity to a neural network. The interactive nature of a neural network is obviously non-algorithmic since sensory (i.e., non-algorithmically obtained) signals can be inserted into the program while it is running. In other words, a non-algorithmic program is a reactive system.
Note also that all the elements (operations) in a stable non-algorithmic software system must have equal durations based on a virtual system-wide clock; otherwise, signal timing would quickly get out of step and result in failure. Deterministic execution order, also known as synchronous processing, is absolutely essential to reliability. The figure below is a graphical example of a small parallel program composed using COSA objects. The fact that a non-algorithmic program looks like a logic circuit is no accident, since logic circuits are themselves non-algorithmic, signal-driven systems.

No Two Ways About It

The non-algorithmic model of computing that I propose is inherently parallel, synchronous and reactive. I have argued in the past and I continue to argue that it is the solution to all the major problems that currently afflict the computer industry. There is only one way to implement this model in a von Neumann computer. As I have said repeatedly elsewhere, it is not rocket science. Essentially, it requires a collection of linked elements (or objects), two buffers and a loop mechanism. While the objects in one buffer are being processed, the other buffer is filled with objects to be processed during the next cycle. Two buffers are used in order to prevent signal race conditions. Programmers have been using this technique to simulate parallelism for ages. They use it in such well-known applications as neural networks, cellular automata, simulations, video games, and VHDL. And it is all done without threads, mind you. What is needed in order to turn this technique into a parallel programming model is to apply it at the instruction level. However, doing so in software would be too slow. This is the reason that the two buffers and the loop mechanism should ideally reside within the processor and be managed by on-chip circuitry. The underlying process should be transparent to the programmer, who should not have to care whether the processor is single-core or multicore. Below is a block diagram for a single-core non-algorithmic processor.
Adding more cores to the processor does not affect existing non-algorithmic programs; they should automatically run faster, depending on the number of objects that can be processed in parallel. Indeed, the application developer should not have to think about cores at all, other than as a way to increase performance. Using the non-algorithmic software model, it is possible to design an auto-scalable, self-balancing multicore processor that implements fine-grained deterministic parallelism and can handle anything you throw at it. There is no reason to have one type of processor for graphics and another for general-purpose programs. One processor should do everything with equal ease. For a more detailed description of the non-algorithmic software model, take a look at Project COSA.
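
To make the fan-in aspect concrete, here is a small software sketch of my own (C++, with illustrative names such as Cell, tick and inputsNeeded; it is not the block diagram referred to above and not COSA code): a cell can have any number of predecessors, it fires one cycle after all of its inputs have signaled, and the per-cycle buffer is the only shared structure.

#include <cstdio>
#include <vector>

// Illustration only: a cell with any number of predecessors (fan-in).
// It fires one cycle after ALL of its inputs have signaled, which is what
// gives the model its deterministic timing.
struct Cell {
    int inputsNeeded;
    int inputsSeen;
    std::vector<Cell*> successors;
    const char* name;
};

// One virtual clock tick: every cell firing this cycle signals its
// successors; a successor whose inputs are all present is scheduled for the
// NEXT cycle. Only the 'next' buffer is shared, so the 'firing' buffer
// could be split across cores without changing the result.
std::vector<Cell*> tick(const std::vector<Cell*>& firing) {
    std::vector<Cell*> next;
    for (Cell* cell : firing) {
        std::printf("fires: %s\n", cell->name);
        for (Cell* succ : cell->successors) {
            if (++succ->inputsSeen == succ->inputsNeeded) {
                succ->inputsSeen = 0;
                next.push_back(succ);   // fan-in satisfied, fires next cycle
            }
        }
    }
    return next;
}

int main() {
    Cell a{0, 0, {}, "A"}, b{0, 0, {}, "B"}, both{2, 0, {}, "A-AND-B"};
    a.successors = {&both};
    b.successors = {&both};
    std::vector<Cell*> buffer = {&a, &b};   // A and B fire in cycle 0
    while (!buffer.empty())
        buffer = tick(buffer);              // A-AND-B fires in cycle 1
}

How the firing buffer gets divided among however many cores are available is a load-balancing detail the developer never sees, which is what auto-scaling means here.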

Don’t Trust Your Dog to Guard Your Lunch

The recent flurry of activity among the big players in the multicore processor industry underscores the general feeling that parallel computing has hit a major snag. Several parallel computing research labs are being privately funded at major universities. What the industry fails to understand is that it is the academic community that got them into this mess in the first place. British mathematician Charles Babbage introduced algorithmic computing to the world with the design of the Analytical Engine more than 150 years ago. Sure, Babbage was a genius, but parallel programming was the furthest thing from his mind. One would think that after all this time, computer academics would have realized that there is something fundamentally wrong with basing software construction on the algorithm. On the contrary, the algorithm became the backbone of a new religion, with Alan Turing as the godhead and the Turing machine as the quintessential algorithmic computer. The problem is now firmly institutionalized and computer academics will not suffer an outsider such as myself to come onto their turf and teach them the correct way to do things. That’s too bad. It remains that throwing money at academia in the hope of finding a solution to the parallel programming problem is like trusting your dog to guard your lunch. Bad idea. Sooner or later, something will have to give.

Conclusion

The computer industry is facing an acute crisis. In the past, revenue growth has always been tied to performance increases. Unless the industry finds a quick solution to the parallel programming problem, performance increases will slow down to a crawl and so will revenue. However, parallel programming is just one symptom of a deeper malady. The real cancer is the thread. Get rid of the thread by adopting a non-algorithmic, synchronous, reactive computing model and all the other symptoms (unreliability and low productivity) will disappear as well. In an upcoming post, I will go over the reasons that the future of parallel programming is necessarily graphical.

[This article is part of my downloadable e-book on the parallel programming crisis.]

See Also:

How to Solve the Parallel Programming Crisis
Parallel Computing: The End of the Turing Madness
Parallel Computing: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Reactive
Why Parallel Programming Is So Hard
Why I Hate All Computer Programming Languages
Parallel Programming, Math and the Curse of the Algorithm
The COSA Saga

PS. Everyone should read the comments at the end of Parallel Computing: The End of the Turing Madness. Apparently, Peter Wegner and Dina Goldin of Brown University have been ringing the non-algorithmic/reactive bell for quite some time. Without much success, I might add, otherwise there would be no parallel programming crisis to speak of.