Tuesday, July 29, 2008

How to Solve the Parallel Programming Crisis

Abstract

Solving the parallel computing problem will require a universal computing model that is easy to program and is equally at home in all types of computing environments. In addition, the applications must be rock-solid. Such a model must implement fine-grained parallelism within a deterministic processing environment. This, in essence, is what I am proposing.

No Threads

The solution to the parallel programming problem is to do away with threads altogether. Threads are evil. There is a way to design and program a parallel computer that is 100% threadless. It is based on a method that has been around for decades. Programmers have been using it to simulate parallelism in applications such as neural networks, cellular automata, simulations, video games and even VHDL simulators. Essentially, it requires two buffers and an endless loop. While the parallel objects in one buffer are being processed, the other buffer is filled with the objects to be processed in the next cycle. At the end of the cycle, the buffers are swapped and the cycle begins anew. Two buffers are used in order to prevent race conditions. This method guarantees rock-solid deterministic behavior and is thus free of all the problems associated with multithreading. Determinism is essential to mission- and safety-critical environments where unreliable software is not an option.
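To make the mechanism concrete, here is a minimal sketch in C of the two-buffer/endless-loop cycle. The Cell type, the react() function and the buffer size are hypothetical placeholders chosen for illustration only, not part of any particular implementation.

#include <stddef.h>

#define MAX_CELLS 1024

/* A hypothetical parallel object ("cell") carrying a signal to react to. */
typedef struct { int id; int signal; } Cell;

static Cell buf_a[MAX_CELLS], buf_b[MAX_CELLS];

/* Example reaction: a cell that still holds a signal schedules itself again
   for the next cycle. Real application logic would go here instead. */
static size_t react(const Cell *c, Cell *next, size_t next_count)
{
    if (c->signal > 0 && next_count < MAX_CELLS) {
        next[next_count] = *c;
        next[next_count].signal -= 1;
        next_count++;
    }
    return next_count;
}

/* current must point to buf_a or buf_b and hold the initial objects. */
void run(Cell *current, size_t count)
{
    Cell *next = (current == buf_a) ? buf_b : buf_a;

    for (;;) {                                   /* the endless loop */
        size_t next_count = 0;

        /* Every object in this cycle sees the same snapshot of the world,
           so no race conditions are possible. */
        for (size_t i = 0; i < count; i++)
            next_count = react(&current[i], next, next_count);

        /* Swap buffers: what was scheduled this cycle runs in the next one. */
        Cell *tmp = current;
        current = next;
        next = tmp;
        count = next_count;
    }
}

Note that the two loops (the endless cycle loop and the per-cycle object loop) are the entire scheduling machinery; there are no locks, no threads and no nondeterministic interleavings.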

Speed, Transparency and Universality

The two-buffer/loop mechanism described above works great in software, but only for coarse-grain objects such as neurons in a network or cells in a cellular automaton. For fine-grain parallelism, it must be applied at the instruction level. That is to say, the processor instructions themselves become the parallel objects. However, doing so in software would be much too slow. What is needed is to make the mechanism an inherent part of the processor itself by incorporating the two buffers on the chip and using internal circuitry for buffer swapping. Of course, this simple two-buffer system can be optimized for performance by adding one or more buffers for use with an instruction prefetch mechanism if so desired. Additionally, since the instructions in the buffer are independent, there is no need to process them sequentially with a traditional CPU. Ideally, the processor core should be a pure MIMD (multiple instruction, multiple data) vector core, which is not to be confused with a GPU core, which uses a SIMD (single instruction, multiple data) configuration.
The processor can be either single core or multicore. In a multicore processor, the cores would divide the instruction load in the buffers among themselves in a way that is completely transparent to the programmer. Adding more cores would simply increase processing power without having to modify the programs. Furthermore, since the model uses fine-grain parallelism and an MIMD configuration, the processor is universal, meaning that it can handle all types of applications. There is no need to have a separate processor for graphics and another for general purpose computing. A single homogeneous processor can do it all. This approach to parallelism will do wonders for productivity and make both GPUs and traditional CPUs obsolete.
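As a rough software analogy (not a hardware design), the sketch below shows how several cores could split the independent instructions in the current cycle buffer among themselves. The Instr type, the core count and the interleaved partition are assumptions made purely for illustration.

#include <stddef.h>

#define NUM_CORES 4

/* Hypothetical instruction format for the on-chip cycle buffer. */
typedef struct { int opcode; int src; int dst; } Instr;

void execute(const Instr *ins);   /* placeholder for a core's execution unit */

/* Each "core" takes an interleaved slice of the buffer. Because the
   instructions in a cycle are independent of one another, any partition
   yields the same result, and adding cores requires no program changes. */
void core_worker(int core_id, const Instr *buffer, size_t count)
{
    for (size_t i = (size_t)core_id; i < count; i += NUM_CORES)
        execute(&buffer[i]);
}

Changing NUM_CORES in this analogy changes only how the work is divided, never what the program computes, which is the sense in which load balancing is transparent to the programmer.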

Easy to Program

The main philosophy underlying this parallel processing model is that software should behave logically more like hardware. A program is thus a collection of elementary objects that use signals to communicate. This approach is ideal for graphical programming and the use of plug-compatible components. Just drag them and drop them, and they connect themselves automatically. This will open up programming to a huge number of people who were heretofore excluded.
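Below is a speculative sketch in C of what such signal-connected, plug-compatible components might look like. The Component structure and the connect() and fire() functions are names invented here for illustration; they do not describe any existing tool.

#include <stddef.h>

#define MAX_TARGETS 8

/* A hypothetical plug-compatible component: it is activated by a signal
   and forwards a signal to whatever components are wired to its output. */
typedef struct Component Component;
struct Component {
    const char *name;
    void      (*on_signal)(Component *self);   /* the component's behavior */
    Component  *targets[MAX_TARGETS];          /* output connections        */
    size_t      target_count;
};

/* "Dragging and dropping" a connection is nothing more than adding a wire:
   no argument lists, no calling conventions, only signals. */
void connect(Component *from, Component *to)
{
    if (from->target_count < MAX_TARGETS)
        from->targets[from->target_count++] = to;
}

/* Firing a component delivers a signal to everything wired to its output.
   In the full two-buffer model, delivery would be deferred to the next cycle. */
void fire(Component *from)
{
    for (size_t i = 0; i < from->target_count; i++)
        if (from->targets[i]->on_signal)
            from->targets[i]->on_signal(from->targets[i]);
}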

Conclusion

Admittedly, the solution I am proposing will require a reinvention of the computer and of software construction methodology, as we know them. But there is no stopping it. The sooner we get our heads out of the threaded sand and do the right thing, the better off we will be.

See Also:

Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic
Computer Scientists Created the Parallel Programming Crisis
UC Berkeley's Edward Lee: A Breath of Fresh Air
Why I Hate All Computer Programming Languages
Half a Century of Crappy Computing
How to Construct 100% Bug-Free Software
Why Software Is Bad and What We Can Do to Fix It
Parallel Computing: Both CPU and GPU Are Doomed

Saturday, July 26, 2008

Back on Line

All right. I guess that I and others had gotten so used to having rebelscience.org around that it was not a good idea to just let it die. Besides, sending out a COSA zip file to anybody who asked for it was getting tiresome. Thanks to everybody who wrote and asked for it. Please make full copies of the material on the site because I might change my mind again. One never knows.

Tuesday, July 22, 2008

My Business Plan Is Simple

COSA Inc.

Investment firms from around the world (including big companies like JP Morgan and others) frequently visit my blog. Look, folks, my plan is simple. I want to create a line of kick-ass, energy-efficient, single-core and multicore processors with a suite of free and easy-to-use graphical dev tools for easy parallel programming. My stuff will blow everybody out of the water. It will cost you about US$10 million and take about two years. That is all.

Related Articles:

How to Solve the Parallel Programming Crisis
Transforming the TILE64 into a Kick-Ass Parallel Machine

Monday, July 21, 2008

UC Berkeley’s Edward Lee: A Breath of Fresh Air

Not All Academics Are Ass Kissers

I don’t hide the fact that my general attitude vis-à-vis the computer science community is a hostile one. I just cannot stand the way most academics kiss each other’s ass. I guess it is a legacy of peer review, a mechanism whose main purpose is to stifle dissent within the community. However, this does not mean that I believe all academics are inveterate ass kissers. Some are more like the prophet Daniel in King Nebuchadrezzar’s court. I previously lauded Peter Wegner and Dina Goldin on this blog for their work on non-algorithmic interactive computing. I am sure there are many more like them. Today, I would like to draw your attention to the work of Professor Edward A. Lee at UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

Threads Are Bad and Precise Timing Is Crucial

Professor Lee made major news a couple of years ago with the publication of his paper The Problem with Threads (pdf), in which he methodically laid out the evils of multithreading. As my readers know, I have been waging a ferocious battle against multithreading for many years. What impresses me the most about Lee’s work is that he seems to have a deep understanding of what I believe to be two of the most important issues in computing: timing and implicit concurrency. Deterministic timing is essential to program reliability, and implicitly concurrent programming elements are essential to the design and composition of parallel programs. Although I do not agree with Lee’s apparent obsession with doing for software timing what has been done in hardware (real-time precision in complex software is a pipe dream, in my opinion), I recommend that everybody take a close look at Professor Lee’s work, especially the Ptolemy Project.

Thread Monkeys All Around

Professor Lee is, as of this writing, the chair of UC Berkeley’s Parallel Computing Lab, which is supported in part by Microsoft and Intel. Now, it is no secret that the people at Intel and Microsoft are a bunch of thread monkeys, not unlike the thread monkeys at Stanford's Pervasive Parallelism Lab. I don’t know about the views of the other members of Berkeley’s research team, but it is obvious that Intel and Microsoft’s addiction to multithreading is at odds with Lee’s position. I sure hope that Professor Lee stands his ground and absolutely refuses to go along with the brain-dead thread mentality. I hope, for the sake of the future of computing, that UC Berkeley and Professor Lee are willing to stand up to the Wintel idiocy and tell them in no uncertain terms that their thread-based approach to parallel programming and multicore architecture design is just crap.

Having said that, I am afraid that Lee’s style, in contrast to mine, is very low-key and he may lose this battle. Lee needs to be a lot more forceful, in my opinion, otherwise he does not stand a chance against the thread monkeys. At any rate, it’s going to be interesting to see what comes out of Berkeley’s Parallel Computing Lab by the end of this year.

2/15/2010 The Battle at Berkeley

OK. It has been more than a year and what do you know? UC Berkeley is nowhere near a solution to the parallel programming crisis. No big surprise here. I was right. Professor Lee has neither the political clout nor the personal strength to stand up to the nefarious army of thread monkeys. He's surrounded by them, not just within Berkeley's computer science department but also at Intel and Microsoft. Heck, it doesn't look like Professor Lee is even the chair of Berkeley’s Parallel Computing Lab any more. That's too bad, but there is no need to cry over spilled milk. Those of you who are really interested in this topic should go over to the Rebel Science E-Bookstore (pay only if you like what you read) and download my e-book, "How to Solve the Parallel Programming Crisis". At the very least, read the related blog posts at the links below. Something is bound to happen, sooner or later.

See Also:
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
Parallel Computing: The End of the Turing Madness

Sunday, July 20, 2008

Computer Academics Can Kiss My Ass

[I just wrote a reply to a comment made by J.L. at the end of Parallel Computing: Why the Future Is Non-Algorithmic. I thought I would reproduce it as a regular post because it spells out my attitude toward the computer science community.]

[J.L.] (does not want his name to appear on my blog)
Several people have already pointed this out on Slashdot, but you should look at the functional languages and GHC in particular. It's quite good at automatically parallelizing your code. And, yes, academics have known of this one too since the '60s (most have probably preferred functional programming as it is more concise and elegant--closer to the mathematics). But industry has not been listening to them either.

[Me]
Wow! This is a good one. You have got to be kidding me. For your information, industry does nothing but listen to academia. Guess who is in charge of the parallel programming research groups at Stanford, Berkeley, the University of Chicago and all the other labs in academia that are being sponsored by industry? You guessed it, computer academics. Guess where all the technology leaders in industry came from? You got it again, straight from academia.

Computer scientists have been pushing functional programming for years. Erlang is already accepted at Ericsson in Sweden as God's gift to humanity. The only reason that functional programming has not taken off like a rocket is because reality keeps kicking it in the ass.

When you get a chance, go take a look at the work being conducted at Stanford's Pervasive Parallelism Lab. Download their PDF doc. What do you see? You see academics talking about tens of thousands of threads, that's what. You see the cream of the crop of computer science cluelessly talking about domain-specific languages (DSLs) as the answer to the parallel programming problem.

So don't give me the shit that the industry is not listening to you. This is precisely why the industry is in a crisis. They do nothing but listen to you people.

PS. I am tired of people telling me that I am alienating the very people that I am trying to impress. The last thing I want to do is impress a bunch of ivory tower academics who are convinced that they are the smartest people in the universe and condescendingly talk down to the lay public, the same public who ultimately pays their salaries. To hell with the computer science community. They are the problem, not the solution. They can kiss my ass. How about that? Hey, I like the sound of that. I think I might turn it into a regular post, just for fun.

Thursday, July 17, 2008

AMD Must Stop Parroting Intel's Ancient x86 Technology

I was about to completely throw in the towel when I heard that Hector Ruiz is stepping aside as CEO of AMD. He will continue as Chairman while Dirk Meyer takes over as CEO. This is on the heels of AMD's announcement that it lost $1.2 billion for the quarter that ended on June 28. It is sad to read news like this. Ruiz said in a statement that "Dirk is a gifted leader who possesses the right skills and experience to continue driving AMD and the industry forward in new, compelling directions" (source: EETimes). I sure hope Ruiz is right, because replacing him with Meyer is just window dressing unless Meyer takes the opportunity to change the company's business model.

It is time for AMD to realize that, even though it has the best engineering team in the world, parroting Intel's x86 technology is a losing proposition. Nobody can beat a behemoth like Intel while playing Intel's own game in Intel's own backyard. Now that the industry is transitioning away from sequential computing toward massive parallelism, AMD has the opportunity of a lifetime to take the bull by the horns and lead the world into the next era of computing. Intel is making a grave mistake by adopting multithreading as its parallel programming model. AMD must not make the same mistake. There is an infinitely better way to design and program multicore processors that does not involve threads at all. To find out why multithreading is not part of the future of computing, read Parallel Computing: Why the Future Is Non-Algorithmic.

Tuesday, July 15, 2008

The COSA Saga: Final Chapter

I Was Wrong

I have been writing about and promoting the COSA software model on the web for close to ten years. I made all my ideas available on the internet, free for the taking. I tried my best. I had high hopes that the computer industry and the computer science community would see the error of their ways and change for the better. I was wrong. I have decided that, unless I see some sign of intelligent life in the industry, I will not renew the rebelscience.org domain. COSA is a waste of my time. I need to focus my energy on projects that are more important to me. Thanks to everyone who supported Project COSA.

Related articles:

Back on Line
The COSA Saga

Monday, July 14, 2008

Rebel Science Site Down

I just noticed that the parent site for this blog is down. It's just negligence on my part. Please be patient as I correct the problem.

Monday, July 7, 2008

Erlang Is Not the Solution

[What follows is a comment that I posted at Intel Research Blogs in response to a recent article by Anwar Ghuloum titled Unwelcome Advice. I was actually replying to another commenter, Falde, who is promoting Erlang for parallel programming. Since Erlang is back in the news lately (see Slashdot article), I felt it would be appropriate to let my readers know what I feel about the suitability of Erlang as a solution to the parallel programming problem.]

The Erlang “solution” to the parallel programming problem is not the solution; otherwise, we would not be here discussing the problem. The functional programming model has major drawbacks, not the least of which is that most programmers are not familiar with it and have a hard time wrapping their minds around it.

Another problem is that the Erlang message-passing model forces the runtime to copy entire data structures, such as arrays, onto a message channel. This is absurd because performance takes a major hit. Shared memory messaging is the way to go. Certainly, using shared memory can be very problematic in a multithreaded environment, but that is not because shared memory is wrong. It is because multithreading is wrong. Threads are inherently non-deterministic and, as a result, hard to manage. Memory (data) is the environment that parallel agents use to accomplish a task. Agents sense and react to changes in their environment: reactive parallelism is the way it should be. Memory should be shared for the same reason that we humans share our environment. Switch to a non-algorithmic, deterministic model that does not use threads and all the problems with shared memory will disappear.
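For contrast, here is a rough sketch in C of shared-memory, reactive signaling under the two-buffer cycle described in "How to Solve the Parallel Programming Crisis" above. The Agent type, the change list and the watch scheme are assumptions invented for this illustration only.

#include <stddef.h>

#define NUM_AGENTS 256

/* The shared environment: agents read and write it directly; nothing is
   ever copied onto a message channel. */
static double environment[NUM_AGENTS];

/* A hypothetical agent watches one location and reacts when it changes. */
typedef struct {
    size_t watch;                         /* index the agent is sensitive to */
    void (*react)(size_t changed_index);  /* reaction, run in the next cycle */
} Agent;

static Agent agents[NUM_AGENTS];

/* One deterministic cycle: every change recorded during the previous cycle
   is delivered to the agents that watch that location. Reads and writes are
   separated by the cycle boundary, so no locks are required. */
void deliver_changes(const size_t *changed, size_t change_count)
{
    for (size_t c = 0; c < change_count; c++)
        for (size_t a = 0; a < NUM_AGENTS; a++)
            if (agents[a].watch == changed[c] && agents[a].react)
                agents[a].react(changed[c]);
}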

Furthermore, in spite of its proponents’ insistence on the use of words like “micro-threading” and “lightweight processes”, Erlang enforces coarse-grain parallelism. In my opinion, if your OS, programming language, or multicore processor does not use fine-grain parallelism, it is crap. There are lots of situations that call for fine-grain processing. I have yet to see a fine-grained parallel quicksort implemented in Erlang.

Finally, a true parallel programming paradigm should be universal. There is no reason to have one type of processor for graphics and another for general purpose computing. If that’s what you are proposing, then it is obvious that your model is wrong and in serious need of being replaced. [Interestingly, Erlang proponents have nothing interesting to say on this issue.]

In conclusion, I will reiterate what I have said before. What is needed to solve the parallel programming crisis is a non-algorithmic, synchronous, reactive software model. By the way, I am not the only one ringing the non-algorithmic/reactive bell. Check out the work of Peter Wegner and Dina Goldin at Brown University. They’ve been ringing that bell for years but nobody is listening. You people are hard of hearing. And you are hard of hearing because you are all Turing Machine worshippers. It’s time for a Kuhnian revolution in computer science.

See also:

Parallel Computing: Why the Future Is Non-Algorithmic
How to Construct 100% Bug-Free Software
How to Solve the Parallel Programming Crisis

Sunday, July 6, 2008

There Is Purpose to this Madness

Two Kinds of People

People often write to tell me that they have a hard time taking my work seriously because, when they Google my name, they cannot reconcile the foul-mouthed Louis Savain, whom many dismiss as a crank or a troll, with the Louis Savain who writes about software reliability and parallel programming. Well, the way I see it, there are two kinds of people in the world, those who throw the baby out with the bathwater and those who don’t. I hate the former and I love the latter. Over the years, I purposely and systematically created an Internet persona that mixes both baby and bathwater in one package. I took pains to be as irreverent as possible toward some in the scientific community. Those who reject my work on the basis of something I wrote in a Usenet newsgroup or some other forum are not worth my time. And I’m sure they feel the same way about me. Those who see the merit of my work in spite of what they read on the Internet are the ones that I want to associate with. It’s that simple. You are either with me or you are against me.

The Religious Issue

Many years ago, I realized that science and technology are the religion of atheists. In fact, atheists, in their arrogance, have come to believe that science belongs to them by right and that no God believer has any business being a scientist. They established the Internet as their playground and they do their best to control the dissemination of scientific information so as to maintain their anti-religion bias. Intimidation and ridicule are their favorite weapons. They vilify anybody who is openly religious and, for whatever reason, their main target seems to be Christianity. The situation is so bad that many scientists and engineers of the Christian faith have gotten in the habit of hiding their religious inclinations so as to avoid discrimination. I am regularly attacked in certain circles (mostly computer geeks and a few rabid shit-for-brains (LOL) Darwinists like Paul Zachary Myers) because of my religious convictions. If it’s any consolation to the atheists, many Christians badmouth me as well.

Others hate me because of my writings on physics in which I criticize their favorite idols and portray them as crackpots. I admit it’s a little hobby of mine. Let me tell you though, I am not fazed in the least. This is one Christian who is going to be in your face every day telling you that you’re full of shit. And, while I am in the mood, all of you people out there who worship the so-called Universal Turing Machine are full of shit. The transformation of the UTM by the computer science community into a quasi-religious icon is one of the worst things to have happened to computing, in my opinion. I just thought I’d say this just for grins and giggles.

Don’t Read My Stuff

In conclusion, let me repeat something that I have said many times before. If you don’t like what I write or if what you read about me on the web bothers you or if you hate the fact that I am a Christian, then don’t read my stuff. Go read somebody else’s blog, goddamnit! You don’t put food on my table and I don’t put food on yours. This makes us even, right? Right.


PS. I just read a paper by Peter Wegner and Dina Goldin of Brown University titled “The Church-Turing Thesis: Breaking the Myth”. I can tell you this, it is an eye opener. I am planning to write a critique of it in the next few days. I recommend Peter and Dina’s work on non-algorithmic computing to all my readers. Thank God, there is still intelligent life in academia, after all.