Saturday, September 27, 2008

COSA: A New Kind of Programming, Part II

Part I, II, III, IV, V, VI

Abstract

In Part I of this multi-part article, I wrote that the master/slave approach to elementary behavior control in the COSA programming model should be extended to high-level software construction. My thesis is that this control mechanism is so simple and intuitive that it will revolutionize computer programming by moving it out of the exclusive realm of computer nerds into that of the average computer user. Since COSA is inherently parallel, this will, in effect, solve the parallel programming problem. Below, I go over the elementary principles of control used in COSA.

Effector Control

Most of us are familiar with the controls that come with many of our electrical and electronic appliances. Some may be a little bit fancier than others but almost all come equipped with a minimum set of controls: the on/off (start/stop) buttons. To reuse the metaphor of the previous post, we are the masters and the appliances are the slaves. It turns out that motor command neurons in the brain’s basal ganglia use a similar method to control their target muscles: excitatory (start) and inhibitory (stop) signals. The way it works is that the neuron begins firing as soon as it receives an excitatory signal and stops firing when it receives an inhibitory signal. This is what gave me the original idea for COSA effectors. The addition effector shown below will repeat its operation over and over until it receives a stop command.
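
For readers who prefer to see the idea in code, here is a minimal sketch, in Python, of what an addition effector's start/stop behavior amounts to. COSA itself is graphical and has no textual syntax, so the class and method names below are my own illustrative inventions, not part of the model.

    class AdditionEffector:
        """Toy effector: repeats its addition on every cycle while active."""
        def __init__(self, increment):
            self.increment = increment
            self.total = 0
            self.active = False

        def start(self):    # excitatory (start) signal from a master
            self.active = True

        def stop(self):     # inhibitory (stop) signal from a master
            self.active = False

        def cycle(self):    # one virtual tick of the machine
            if self.active:
                self.total += self.increment

    adder = AdditionEffector(5)
    adder.start()           # master says go
    for _ in range(3):
        adder.cycle()       # the operation repeats on every tick
    adder.stop()            # master says halt
    adder.cycle()           # no effect once stopped
    print(adder.total)      # 15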

A single motor command neuron may receive excitatory and inhibitory signals from hundreds or even thousands of afferent synaptic connections. It goes without saying that the basal ganglia are using some kind of error detection mechanism in order to keep all those incoming control signals from conflicting with one another. COSA effectors, too, use a special mechanism that automatically detects command conflicts. It is applied during application development for debugging purposes and it is based on what I call the principle of motor coordination:
No action can be started if it has already started, or stopped if it is already stopped.
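
Here is the same idea with the motor-coordination check added, again as an illustrative Python sketch rather than actual COSA machinery: a start command received by an already-started effector, or a stop received by an already-stopped one, is treated as a command conflict.

    class GuardedEffector:
        """Toy effector whose start/stop commands are checked for conflicts."""
        def __init__(self):
            self.active = False

        def start(self):
            if self.active:
                raise RuntimeError("motor conflict: start while already started")
            self.active = True

        def stop(self):
            if not self.active:
                raise RuntimeError("motor conflict: stop while already stopped")
            self.active = False

    e = GuardedEffector()
    e.start()
    try:
        e.start()           # a second start violates the principle
    except RuntimeError as err:
        print(err)

In the COSA environment this check would be performed during application development, flagging the offending connections for the designer rather than raising a runtime exception.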

In sum, low-level behavior control in COSA is a very simple thing that even children can grasp. In Part III, I will explain how to control the behavior of high-level COSA components by applying the same principles used with elementary objects.

Related articles:

How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Compositional

Thursday, September 25, 2008

COSA: A New Kind of Programming, Part I

Part I, II, III, IV, V, VI

Abstract

A few exciting ideas related to the ongoing evolution of the COSA programming model have been percolating in my mind for quite some time. I wrote a little about them in my recent article, Parallel Computing: Command Hierarchy. These ideas form the basis of a radically new way of looking at software construction, one so intuitive that it promises (or threatens, as the case may be) to transform computer programming from a mostly geek-only occupation into something that the average computer user can partake in and enjoy. This multi-part article is an account of the reasoning that led to my current thinking.

Something Is Missing

There is no doubt that the graphical approach to parallel programming can do wonders for productivity and program comprehension. It is true that the use of plug-compatible components in COSA will facilitate drag-and-drop software composition, but the simple and intuitive feel that one gets from connecting a sensor to an effector is absent in the realm of high-level components.


Even though looking at a bunch of interconnected COSA components may give one a sense of the various functional parts of a program, the manner in which the parts cooperate to accomplish a task is not obvious. Something is missing.

Masters and Slaves

I realized that the only way to solve the problem is to return to COSA’s roots. The COSA philosophy is that a program is a behaving machine that senses and effects changes in its environment. Of course, there is more to the design of a behaving machine than realizing that programs behave. We, as designers, want the program to behave in a certain way, that is to say, we want control. At the lowest level, we can control the behavior of a program by connecting specific sensors to specific effectors. The applicable metaphor, in this example, is that the sensor is the master or controller and the effector is the slave, i.e., the object that is under control. I rather like the master/slave symbolism because it perfectly illustrates the principle of complementarity. Those of you who have followed my work over the years know that I have always maintained that complementarity is the most important of all the COSA principles because it is the basic underlying principle of every complex organism.
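
To make the master/slave relation concrete, here is a small illustrative Python sketch (my own naming, not COSA notation) of a sensor commanding an effector: the sensor fires only on a change in what it watches and sends start/stop signals to its slave.

    class Effector:
        """Slave: acts only when commanded."""
        def __init__(self):
            self.active = False
        def start(self):
            self.active = True
        def stop(self):
            self.active = False

    class Sensor:
        """Master: watches one value and commands its effector on transitions."""
        def __init__(self, effector, threshold):
            self.effector = effector
            self.threshold = threshold
            self.last = False
        def sense(self, value):
            now = value >= self.threshold
            if now and not self.last:
                self.effector.start()   # excitatory connection
            elif self.last and not now:
                self.effector.stop()    # inhibitory connection
            self.last = now

    fan = Effector()
    thermostat = Sensor(fan, threshold=30)
    thermostat.sense(25)    # below threshold, nothing happens
    thermostat.sense(35)    # transition detected, the fan is started
    print(fan.active)       # True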

In part II, I will describe how behavior control in COSA works at the elementary level.

Related articles:

How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Compositional

Friday, September 19, 2008

Parallel Computing: Both CPU and GPU Are Doomed

Tim Sweeney

A few weeks after I wrote Heralding the Impending Death of the CPU, Tim Sweeney, the renowned founder of Epic Games and pioneering game engine designer, predicted the impending fall of the GPU. In an interview titled Twilight of the GPU: an epic interview with Tim Sweeney, published by Ars Technica last Saturday, the same day hurricane Ike ripped Texas a new one, Sweeney does to the graphics processor what I did to the CPU. Here is something Sweeney said that caught my attention:
In the next console generation you could have consoles consist of a single non-commodity chip. It could be a general processor, whether it evolved from a past CPU architecture or GPU architecture, and it could potentially run everything—the graphics, the AI, sound, and all these systems in an entirely homogeneous manner. That's a very interesting prospect, because it could dramatically simplify the toolset and the processes for creating software.
This is exactly what I have been saying for a long time. Homogeneity and universality are the names of the new game. I may not agree with Sweeney on what development tools we will use in the future (he seems to be married to the old C/C++ linguistic approach), but he is absolutely correct about the future of parallel processors.

Nvidia

This brings me to thinking about Nvidia. Unlike Intel and AMD, Nvidia’s financial future is not tied to the CPU. The CPU will soon join the vacuum tube and the buggy whip in the heap of obsolete technologies. The future of parallel computing is in vector processing and, as we all know, Nvidia’s GPUs are vector processors. Sure, GPUs are not universal parallel processors because they use an SIMD (single instruction, multiple data) configuration. However, this is a problem that Nvidia will eventually correct by switching over to a pure MIMD (multiple instruction, multiple data) vector architecture. In my opinion, Nvidia is ideally positioned to dominate the processor industry in the decades to come. That is, assuming its leadership is shrewd enough to see and heed the writing on the wall.

PS. As an aside, a little over a month ago, Tim left a couple of comments on my article Larrabee: Intel's Hideous Heterogeneous Beast.

Related Articles:

Parallel Computing: The Fourth Crisis
Radical Future of Computing, Part II
Heralding the Impending Death of the CPU
Transforming the TILE64 into a Kick-Ass Parallel Machine
How to Solve the Parallel Programming Crisis

Wednesday, September 10, 2008

Parallel Computing: Command Hierarchy

Abstract

I had originally decided not to publish this article because it reveals a powerful aspect of the COSA software model that I could conceivably use to my advantage should I eventually form a company around COSA, but who cares? I have already given away tons of my ideas; so one more will not matter much. Long ago, I realized that the best ideas are the ones that can be explained via metaphors. The reason is that a metaphor uses a familiar and well-understood concept to elucidate one that is not so familiar but is structurally similar. Below, I describe how the metaphor of a command hierarchy can serve as a powerful organizational principle for composing parallel applications.

Command Hierarchy

In the real world, parallel entities normally go about their business without the need for a hierarchical command structure. They just interact via sensors and effectors according to their set behaviors. However, any time there is a need to perform a complex task or to reach a long-term goal that requires the cooperation of many, it pays to divide the subtasks among various individuals under the direction of a supervisor. If the task is very complex, the supervisors may in turn be placed under the supervision of an even higher authority, and so on. The use of a command hierarchy is a very powerful way to efficiently organize any complex, goal-oriented system consisting of multiple actors or individuals. It is for this reason that it is used by all large organizations such as corporations, churches, armies, etc. Since a COSA application is a collection of behaving entities, it makes sense to use the same method to organize and control their behavior.

The COSA Supervisor

Some of my readers may be familiar with object-oriented programming languages like C++, Smalltalk or Java. It is important not to confuse a class hierarchy in OOP with a command hierarchy. Unlike high-level C++ objects that have their own methods, a high-level command component in COSA does nothing other than supervise the manner in which its subordinate components perform their actions. This is not unlike the way a conductor conducts a musical orchestra. Remember that a COSA component can be either active or inactive. When activated, a component’s sensors immediately detect input changes, if any. If necessary, the component accomplishes its task and, when the task is done, it sends an output signal before going back to sleep. The job of a COSA supervisor component is to determine the right time to activate (or deactivate) various components under its control in order to accomplish a task. This does not mean that the tasks are necessarily consecutive, however. Any number of components may be running concurrently.
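
A rough Python sketch of the supervisor idea follows; the names and the strictly sequential schedule are my own simplifications for illustration (a real supervisor could just as well activate several subordinates at once).

    class Component:
        """Toy component: sleeps until activated, signals the supervisor when done."""
        def __init__(self, name, work):
            self.name = name
            self.work = work                  # stands in for the component's cells
            self.supervisor = None
        def activate(self):
            self.work()                       # perform the task
            self.supervisor.task_done(self)   # output signal, then back to sleep

    class Supervisor:
        """Decides when to activate its subordinates; knows nothing of their internals."""
        def __init__(self, components):
            self.pending = list(components)
            for c in self.pending:
                c.supervisor = self
        def run(self):
            if self.pending:
                self.pending[0].activate()
        def task_done(self, component):
            print(component.name, "done")
            self.pending.remove(component)
            self.run()                        # activate the next subordinate

    steps = [Component(n, lambda: None) for n in ("load", "sort", "save")]
    Supervisor(steps).run()                   # prints: load done, sort done, save done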

COSA QuickSort Revisited

One of the nice things about using a supervisor is that there is no longer a need for low-level components to send messages to each other. They just communicate with the supervisor at the start and end of their tasks. These low-level components will use what I call a data sensor, a simple connector that allows one component to see or sense data changes in another. If time permits, I will redesign the COSA QuickSort component to include the use of a COSA supervisor and a command hierarchy.
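
Here is a hedged Python sketch of what I mean by a data sensor; the polling interface is an artifact of the sketch, not of COSA, where change detection is built in at the cell level.

    class DataSensor:
        """Lets one component sense a data change inside another."""
        def __init__(self, source, key, on_change):
            self.source = source            # the data being observed
            self.key = key
            self.on_change = on_change      # signal sent when a change is sensed
            self.last = source[key]
        def poll(self):
            current = self.source[self.key]
            if current != self.last:        # a change is an event
                self.on_change(self.last, current)
                self.last = current

    partition = {"pivot": 7}
    watcher = DataSensor(partition, "pivot",
                         lambda old, new: print("pivot changed:", old, "->", new))
    watcher.poll()              # no change yet, nothing happens
    partition["pivot"] = 3      # another component updates the data
    watcher.poll()              # fires: pivot changed: 7 -> 3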

See Also:
COSA: A New Kind of Programming

Friday, September 5, 2008

The Radical Future of Computing, Part II

Part I, II

Abstract

This post is a continuation of my response to reader Marc’s interesting comments at the end of my recent article, Heralding the Impending Death of the CPU.

The Market Wants Speed and Low Energy Consumption

The microprocessor market is also highly fragmented between cheap low-end processor makers like Microchip and Atmel, and desktop makers. The desktop players have their own mindset that has made them successful in the past. The obviously-easily parallelizable tasks (sound, graphics...) are so common that custom parallel processors were designed for them. You might be able to get Microchip to squeeze in 20 16f84 microcontrollers on one piece of silicon and could easily use a bunch of cheap PICs to emulate a bunch of 20 vector processors with current technology at a chip cost of maybe $100. But then, the optimum bus design would vary on the application.

What application would be most compelling to investors? I don't know... But I think an FPGA or multi-PIC proof of concept would help your idea become implemented at low cost, and a "suggestion software on how to parallelize applications" for sequentially-thinking programmers, combined with a parallel processor emulator for conventional chip architectures would help programmers see parallel programming as an approachable solution instead of a venture capitalist buzzword.

Well, I am not so sure that this would attract the people with the money. I sense that, when it comes to processors, people are more impressed with proven performance than anything else. And, nowadays, people also want low energy usage to go with the speed. Sure, it would be cool if I could demonstrate a powerful parallel programming tool, but it would be an expensive thing to develop and it would not prove the superiority of the target processor. What I would like to deliver, as an introduction, is a low-wattage, general-purpose, single-core processor that is several times more powerful (measured in MIPS) than, say, an Intel or AMD processor with four or more cores. I think I can do it using vector processing. This, too, is not something that can be built cheaply, in my estimation. It must be designed from scratch.

SIMD Vector Processor: Who Ordered That?

At this point in the game, there should be no doubt in anyone’s mind that vector processing is the way to go. As GPUs have already amply demonstrated, vector processing delivers both high performance and fine-grain deterministic parallelism. Nothing else can come close. That multicore vendors would want to use anything other than a vector core is an indication of the general malaise and wrongheadedness that have gripped the computer industry. As everyone knows, multithreading and vector processing are incompatible approaches to parallelism. For some unfathomable reason that will keep future psycho-historians busy, the computer intelligentsia cannot see past multithreading as a solution to general purpose parallel computing. That's too bad because, unless they change their perspective, they will fall by the wayside.

When I found out that Intel was slapping x86 cores laced together with SIMD vector units into their upcoming Larrabee GPU, I could not help cringing. What a waste of good silicon! The truth is that the only reason that current vector processors (GPUs) are not suitable for general-purpose parallel applications is that they use an SIMD (single instruction, multiple data) configuration. This is absurd in the extreme, in my opinion. Why SIMD? Who ordered that? Is it not obvious that what is needed is an MIMD (multiple instruction, multiple data) vector core? And it is not just that fine-grain MIMD would be ideal for general-purpose parallel applications; it would do wonders for graphics processing as well. Why? Because (correct me if I’m wrong) it happens that, many times during processing, a bunch of SIMD vector units will sit idle because the program calls for only a few units (one instruction at a time) to be used on a single batch of data. The result is that the processor is underutilized. Wouldn't it be orders of magnitude better if other batches of data could be processed simultaneously using different instructions? Of course it would, if only because the parallel performance of a processor is directly dependent on the number of instructions that it can execute at any given time.
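
To put rough numbers on the underutilization argument (the figures below are invented for illustration, not measurements of any real GPU):

    # A 32-lane SIMD block executing one instruction on a batch that needs
    # only 6 lanes leaves the other 26 idle for that step.
    lanes = 32
    lanes_needed = 6
    print(f"SIMD utilization: {lanes_needed / lanes:.0%}")   # about 19%

    # An MIMD arrangement could, in principle, let other instruction streams
    # claim the idle lanes during the same step.
    other_batches = [10, 12, 4]          # lanes wanted by other ready instructions
    used = min(lanes, lanes_needed + sum(other_batches))
    print(f"MIMD utilization: {used / lanes:.0%}")           # 100% in this example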

MIMD Vector Processing Is the Way to Go

Most of my readers know that I absolutely abhor the multithreading approach to parallelism. I feel the same way about CPUs. A day will come soon when the CPU will be seen as the abomination that it always was (see Heralding the Impending Death of the CPU for more on this topic). However, SIMD vector processors are not the way to go either even if they have shown much higher performance than CPUs in limited domains. It is not just that they lack universality (an unforgivable sin, in my view) but the underutilization problem that is the bane of the SIMD model will only get worse when future vector processors are delivered with thousands or even millions of parallel vector units. The solution, obviously, is to design and build pure MIMD vector processors. As I explained in a previous article on Tilera’s TILE64, the best way to design an MIMD vector processor is to ensure that the proportion of vector units for every instruction reflects the overall usage statistics for that instruction. This would guarantee that a greater percentage of the units are used most of the time, which would, in turn, result in much lower power consumption and greater utilization of the die’s real estate for a given performance level. Of course, a pure MIMD vector core is useless unless you also have the correct parallel software model to use it with, which is what COSA is all about.
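
As a sketch of the allocation rule I have in mind, suppose we have profiled an instruction mix and want each instruction kind to receive a share of the vector units proportional to its frequency. The statistics and unit count below are made up for the example.

    # Invented usage statistics: fraction of executed operations of each kind.
    usage = {"add": 0.30, "mul": 0.25, "load": 0.20, "store": 0.15, "branch": 0.10}
    total_units = 1024                   # vector units available on the die

    # Each instruction kind receives units in proportion to how often it is used.
    allocation = {op: round(total_units * share) for op, share in usage.items()}
    for op, units in allocation.items():
        print(f"{op:6s} {units:4d} units")

The point is simply that the mix of units on the die should mirror the mix of instructions the software actually executes, so that as few units as possible sit idle at any given time.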

Have you looked at any Opencores designs?
No, I haven’t. The open source issue is very interesting but it opens a whole can of worms that I'd better leave for a future article.

Wednesday, September 3, 2008

The Radical Future of Computing, Part I

Part I, II

Abstract

A reader named Marc left an interesting comment at the end of my previous post, Heralding the Impending Death of the CPU. I hope Marc will forgive me for using his words as a vehicle on which to piggyback my message.

Linguistic Origin of Programming


I think that algorithmic programming is popular because it is similar to the way many of us write in western natural language; people plan whether a thought should be after or before a previous one in academic essays, which is inherently sequential in nature.
I agree. I believe that the modern computer evolved from the sequential/algorithmic needs of mathematicians like Charles Babbage and Ada Lovelace. As you know, linguistic or textual symbols are perfect for expressing mathematical algorithms. I have often wondered what kind of computers we would have if clockmakers or locomotive engineers had had a more direct influence on the development of early computer technology. Those folks are more accustomed to having multiple interacting mechanisms performing their functions concurrently.

Note also that the typewriter predated modern computers, served as a model for the input device (keyboard) of the first mass-market computers, and has profoundly affected the way we perceive them. Although the mouse was a major influence in changing human-computer interaction, the event-driven approach to programming that it engendered somehow failed to convince computer scientists that every action in programming should be a reaction to some other action (event-driven), down to the instruction level. Hopefully, the new multi-touch screen technologies will drastically change our idea of what a computer is or should be.

Petri Nets, Conditions, Metaphors and Simplicity


Native parallel programming requires that the programmer (or implementer if you'd rather call it that) decides what are the conditions that have to be met for each cell to trigger and what are the outputs that are produced based on those conditions so it requires skills that are part-user, part coder. Petri Nets are a great graphical symbolism for this. It actually requires that people focus on the problem instead of style.
I agree. Nobody should have to worry about syntax or have to learn the meaning of a token (in someone else’s native language and alphabet) in order to program a computer. Only a few graphical symbols (sensors, effectors, pathways, data and encapsulations) should be allowed. Labeling should be done in the designer’s preferred language. I believe that the main reason that graphical programming languages have not taken off is that their designers not only fail to appreciate the importance of encapsulation (information hiding) but also have a tendency to multiply symbols beyond necessity. I am a fanatic when it comes to simplicity.

One of my goals is to turn computer programming into something that the lay public will find approachable and enjoyable. In this regard, I think that even Petri Nets, in spite of their simplicity compared to other programming models, are still too complicated and too abstract, making them unpalatable to the masses or the casual developer. I rather like PNs and I am sorry that the concept never really became mainstream. However, I have a bone to pick with the notion of conditions (places?). Don’t get me wrong; I don’t disagree that there is a need for conditions. I just don’t think the token concept is intuitive or concrete enough to appeal to the layperson. In my opinion, everything should be driven by events (changes or transitions). What Petri calls a transition is what I call a sensor. A condition, to me, is just a past or current event and, as such, it should be used in conjunction with sensors (logic sensors, sequence detectors, etc.). This makes it easy to extend the idea of conditions to include that of temporal expectations, a must for reliability in COSA.
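
To show this vocabulary in something concrete, here is an illustrative Python sketch (my own construction, not a COSA or Petri-net implementation): a sensor fires on a transition, the remembered firing serves as a condition, and a temporal expectation raises an alarm if the expected change fails to arrive in time.

    class ChangeSensor:
        """Fires once on every 0 -> 1 transition of the signal it watches."""
        def __init__(self, name):
            self.name = name
            self.last = False
            self.fired_at = None          # a condition: the remembered past event
        def update(self, value, tick):
            if value and not self.last:
                self.fired_at = tick
                print(f"{self.name} fired at tick {tick}")
            self.last = value

    class Expectation:
        """Temporal expectation: the sensor must have fired by 'deadline'."""
        def __init__(self, sensor, deadline):
            self.sensor = sensor
            self.deadline = deadline
        def check(self, tick):
            if self.sensor.fired_at is None and tick > self.deadline:
                print(f"alarm: {self.sensor.name} missed its deadline")

    door = ChangeSensor("door_open")
    watchdog = Expectation(door, deadline=3)
    for tick, signal in enumerate([0, 0, 0, 0, 0]):   # the expected change never comes
        door.update(signal, tick)
        watchdog.check(tick)              # raises the alarm once tick 4 is reached

It is this last piece, the expectation that something should have happened by now, that makes conditions useful for reliability rather than mere bookkeeping.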

That being said, the ideal programming metaphors, in my opinion, are those taken from the behavioral sciences such as psychology and neuroscience: stimulus/response, sensor/effector, sequence memory, action/reaction, environment (variables), etc… The reason is that a computer program is really a behaving machine that acts and reacts to changes in its environment. A layperson would have little trouble understanding these metaphors. Words like ‘transitions’, ‘tokens’ and ‘places’ don’t ring familiar bells. Let me add that, even though I applaud the clean graphical look of PNs, my main criticism is that they are not deterministic. In my view, this is an unpardonable sin. (I confess that I need to take another close look at PNs because it seems that they have evolved much over the years).

New Leadership to Replace the Old


To me, starting with a software specification before implementing a solution seems obvious, but my son has mainly sold freelance projects to business types based on his suggested user interface first; when he tried to tell his potential customers what data sources he used and how he got to his results, the customers' eyes would often glaze over...
Yes. People love to see pretty pictures, which is understandable because business types tend to see technology from a user’s perspective. They want to see the big picture and they don’t care about what makes it work. You make an interesting point because I have pretty much given up on selling my ideas directly to techies. I am slowly coming to the conclusion that the next computer revolution will have to emerge out of some government-funded initiative or some industry-wide consortium under the leadership of an independent, strategy-minded think tank. The reason is that the industry is in a deep malaise caused by the aging baby boomers who drove computer innovation in the last half of the 20th century, but lately have run out of ideas simply because they are old and set in their ways. I don't want to generalize too much but I think this is a major part of the problem. Their training has taught them to have a certain perspective on computing that is obviously not what is needed to solve the parallel programming and software reliability crises. Otherwise, they would have been solved decades ago. In fact, it is their perspective on computing that got the industry and the world into this mess in the first place.

As a case in point, consider this recent article at HPCwire by Michael Wolfe. It pretty much sums up what the pundits are thinking. Michael believes that “the ONLY reason to consider parallelism is for better performance.” I don’t know how old Michael is but it is obvious to me that his thinking is old and in serious need of an update. The problem is that the older computer nerds are still in charge at the various labs/universities around the world and they hold the purse strings that fund research and development. These folks have titles like CTO or Project Director or Chief Science Officer. That does not bode well for the future of computing.

As I wrote somewhere recently, the computer industry is in dire need of a seismic paradigm shift and there is only one way to do it. The old computer nerds must be forced into retirement and new leadership must be brought in. The new mandate should be to reevaluate the computing paradigms and models of the last century and assess their continued adequacy to the pressing problems that the industry is currently facing, such as the parallel programming and software reliability crises. If they are found to be inadequate (no doubt about it from my perspective), then they should be replaced. These kinds of strategic decisions are not going to be made by the old techies but by the business leaders, both in the private sector and within the government. Sometimes, it pays not to be too married to the technology because you can’t see the forest for the trees.

Software Should Be More Like Hardware and Vice Versa


There is plenty of parallel processing already going around in graphics processors, Field-programmable Gate Arrays and other Programmable Logic chips. It's just that people with software-experience who are used to a certain type of tool are afraid to make the effort to acquire what they see as hardware-type electrical engineer thought-habits; I know my programmer son would have an issue. The US has developed a dichotomy between electrical engineers and computer scientists.
Which is rather unfortunate, in my opinion. In principle, there should be no functional distinction between hardware and software, other than that software is flexible. I foresee a time when the distinction will be gone completely. The processor core as we know it will no longer exist. Instead, every operator will be a tiny, super-fast, parallel processor that can randomly access its data directly at any time without memory bus contention problems. We will have a kind of soft, super-parallel hardware that can instantly morph into any type of parallel computing program.

Programming for the Masses


Also "Talking heads" have a vested interest in promoting certain products that are only incremental improvements over the existing tools, because otherwise they would need to educate the clients about the details of the new paradigm, which would require extended marketing campaigns which would only pay back over the long term.
Yeah, legacy can be a big problem but it doesn’t have to be. I wrote about this before but you bring out the important issue of client education, which is a major part of the paradigm shift that I am promoting. I think the time has come to move application design and development from the realm of computer geeks into that of the end user. The solution to the parallel programming problem gives us an unprecedented opportunity to transform computer programming from a tedious craft that only nerds can enjoy into something that almost anybody can play with, even children. Now that multi-touch screens are beginning to enter the consumer market, I envision people using trial-and-error methods together with their hands and fingers (and possibly spoken commands) to quickly manipulate parallel 3-D objects on the screen in order to create powerful and novel applications. I see this as the future of programming, kind of like putting Lego blocks together. In this regard, I don’t think we will need to reeducate traditional programmers to accept and use the new paradigm. They will have to get with the new paradigm or risk becoming obsolete.

PS. I’ll respond to the rest of your message in Part II.