Monday, March 29, 2010

Why I Hate All Computer Programming Languages (repost)

[I just can't stop bashing computer science. This is a repost of a previous article.]

That’s All I Want to Do!

I hate computer languages because they force me to learn a bunch of shit that is completely irrelevant to what I want to use them for. When I design an application, I just want to build it. I don’t want to have to use a complex language to describe my intentions to a compiler. Here is what I want to do: I want to look into my bag of components, pick out the ones that I need, snap them together, and that’s it! That’s all I want to do.

I don’t want to know how to implement loops, tree structures, search algorithms and all that other jazz. If I want my program to save an audio recording to a file, I don’t want to learn about frequency ranges, formats, fidelity, file library interfaces, audio library interfaces and so forth. This stuff really gets in the way. I just want to look into my bag of tricks, find the components I need and drag them out. Sometimes, when I meditate on modern software development tools, I get so frustrated that I feel like screaming at the top of my lungs: That is all I want to do!
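Just to make the wish concrete, here is a purely illustrative sketch (in Python) of what snapping components together could look like. The component names (AudioIn, WavEncoder, FileOut) and the ">>" chaining are invented for this example; they do not refer to any real library or to the COSA component set.

```python
# Illustrative only: components as black boxes that can be snapped together.

class Component:
    def __init__(self, name, process):
        self.name = name
        self.process = process          # what the component does to its input

    def __rshift__(self, other):        # allow chaining with ">>"
        return Pipeline([self, other])

class Pipeline:
    def __init__(self, parts):
        self.parts = parts

    def __rshift__(self, other):
        return Pipeline(self.parts + [other])

    def run(self, data):
        for part in self.parts:         # data flows through each component
            data = part.process(data)
        return data

# "Pick components out of the bag and snap them together":
AudioIn    = Component("AudioIn",    lambda _: [0.1, 0.5, -0.2])        # fake samples
WavEncoder = Component("WavEncoder", lambda samples: bytes(len(samples)))
FileOut    = Component("FileOut",    lambda blob: f"wrote {len(blob)} bytes")

recorder = AudioIn >> WavEncoder >> FileOut
print(recorder.run(None))   # wrote 3 bytes
```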

Linguistic Straightjacket

To me, one of the main reasons that the linguistic approach to programming totally sucks is that it is entirely descriptive by definition. This is a major drawback because it immediately forces you into a straightjacket. Unless you are ready to describe things in the prescribed, controlled format, you are not allowed to program a computer, sorry. The problem with this is that we humans are tinkerers by nature. We like to play with toys. We enjoy trying various combinations of things to see how they fit together. We like the element of discovery that comes from not knowing exactly how things will behave when they are joined together or taken apart. We like to say things like “oh”, “aah”, or “that’s cool” when we half-intentionally fumble our way into a surprising design that does exactly what we want it to do and more. Computer languages get in the way of this sort of pleasure because they were created by geeks for geeks. Geeks love to spoil your fun with a bunch of boring crap. For crying out loud, I don’t want to be a geek, even if I am one by necessity. I want to be happy. I want to do cool stuff. I want to build cool things. And, goddamnit, that’s all I want to do!

Conclusion

Unless your application development tool feels like a toy and makes you want to play like a child, it is crap. It is a primitive relic from a primitive age. It belongs in the Smithsonian right next to the slide rule and the buggy whip. If you, like me, just want to do fun stuff, you should check out Project COSA. COSA is about the future of programming, about making programming fast, rock solid and fun.

[This article is part of my downloadable e-book on the parallel programming crisis.]

See also:

Parallel Computing: Why the Future Is Compositional
COSA, A New Kind of Programming
Half a Century of Crappy Computing
New Interfaces for Parallel Programming
How to Solve the Parallel Programming Crisis
The COSA Control Hierarchy

Nothing Can Move In Spacetime (repost)

[This is a repost of a previous article. I am still bashing physicists and computer scientists. It has become a habit of mine.]

Dr. Mark Chu-Carroll, PhD Computer Scientist

Not too long ago, Google software engineer, self-appointed anti-bozo crusader and PhD computer scientist Mark Chu-Carroll (yet another insufferably pompous computer geek, ahahaha…) got terribly offended by my web page Nasty Little Truth About Spacetime Physics. Of course, Mark is offended not because of my arguments (which went over his head) but because I make fun of a bunch of scientists such as Stephen Hawking, Gödel, etc. Mark decided that my problem is that I don’t understand what a dimension is. So Mark goes into a tirade of outright indignation that is somewhat funny in its own right, especially since, all the while, he has his foot planted squarely in his mouth. Yep, like a bozo.

I really don’t feel like defending myself against Mark’s accusations and crackpottery here simply because I’m tired of it. The fact remains that, Mark’s protestations notwithstanding, time cannot change by definition and, as a result, nothing can move in spacetime. I’ll just repeat a quote from a textbook written by a well-known relativity expert:
There is no dynamics within space-time itself: nothing ever moves therein; nothing happens; nothing changes. [...] In particular, one does not think of particles as "moving through" space-time, or as "following along" their world-lines. Rather, particles are just "in" space-time, once and for all, and the world-line represents, all at once the complete life history of the particle.
From "Relativity from A to B" by Dr. Robert Geroch, U. of Chicago
Obviously I am agreeing with a relativist who happens to understand that nothing can move in spacetime. Yes, there are a few out there. Geroch is not alone but most relativists don’t know this and most, like good old Dr. Mark Chu-Carroll, will refuse to accept it simply because it goes against their chicken-shit Star-Trek time travel religion. Too bad.

Stephen Wells

Stephen (I have no idea who he is) posted an objection to my arguments (comment #33) on Mark’s blog that I would like to respond to, not because Stephen deserves a response, mind you, but because my original rebuttal of that particular objection was inadequate. I’ll revise it in a few days to reflect what I write below. Stephen writes:
Now we move to a 4D spacetime model, where a particle trajectory is in (3+1) dimensions. Louis tries to parametrise all four variables (x,y,z,t) in terms of t, declares that parametrising t in terms of t is wrong (which is true), and decides that all of SR and GR is a massive fraud and conspiracy (which is false).

First of all, it is not true that I decided that all of GR is a massive fraud and conspiracy. I actually believe that GR and SR are useful theories as theories go, even though they don't explain much. I just think that all the nonsense about time travel and the like that Einstein’s followers have been preaching over the last century is just that, nonsense. Relativity does not support time travel and relativity does not prove that absolute motion and position don't exist. In fact, there is every reason to suppose that they do and that it is the relative that is abstract and non-existent. Of course, there is no such thing as a particle trajectory in 4D spacetime but Stephen is not about to concede this little truth. He continues:
Meanwhile, back in the land of the sane, we parametrise a worldline (x,y,z,t) in terms of the proper time tau of the particle along that worldline. So we don't parametrise t in terms of t anyway.
So what? Both delta-tau and delta-t represent temporal intervals. If anybody thinks that a second time can be used to prove that change can occur in another time, I’ve got a bridge to sell you. If you use tau to show a change in t, you must be prepared to show how tau can change. Why? Because time is time. What’s good for the goose is good for the gander. If t can vary, so can tau. To show a change in tau, one would need a meta-tau, and a meta-meta-tau for the meta-tau, ad infinitum. And no, you cannot use t to parameterize tau because that would be circular. The expression dt/dtau does not show that t can change. It is just a ratio of two different temporal intervals measured by two different clocks. That is all. Having wrestled his pathetic little strawman to the ground, Stephen valiantly declares victory:
Louis will never grasp this- he can't allow himself to, after decades spent ranting on the subject. Sad, really.

Sad indeed. Entire generations of young minds believing in a lie. Enough to make a grown man cry. Will I ever get an apology from the likes of Stephen Wells and PhD computer scientist Mark Chu-Carroll? I'm not holding my breath.
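For reference, these are the standard textbook relations behind the tau-versus-t exchange (flat spacetime, Minkowski coordinates, v being the particle’s coordinate speed):

\[
d\tau^2 = dt^2 - \frac{dx^2 + dy^2 + dz^2}{c^2},
\qquad
\frac{dt}{d\tau} = \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
\]

In this notation, dt/dtau is precisely the quantity discussed above: how much coordinate time t elapses per unit of proper time tau recorded by the particle’s own clock.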

Saturday, March 20, 2010

Computer Scientists Created the Parallel Programming Crisis (repost)

[I seem to be in a permanent computer science bashing mode. I am reposting this article because it reflects my current mood.]

It Pays to Be Incompetent

The computer industry is spending large sums of money to come up with a solution to the parallel programming crisis but the very people who created the crisis are the direct beneficiaries. About two years ago, the major players in the computer industry (Intel, Microsoft, AMD, Nvidia, etc.) poured tens of millions of dollars into the coffers of major university research centers. Has there been any progress in finding a solution since? Answer: No. Is there any indication that progress will be made in the near future? Answer: No. Now, everyone knows that two years is a long time in the computer business. A lot can and should happen in two years. The amazing thing about parallel programming research is that computer scientists have been working on the problem for the last thirty years! They are just as clueless now as to what the solution might be as they were when they first started working on it! What is wrong with this picture?

Seismic Paradigm Shift Ahead

Computer scientists are not ashamed to admit that they have no idea what the solution to the crisis might be. Why should they be? Long term failure has never stopped the money from flowing in. In fact, research money on parallel computing has markedly increased in the last few years because the industry is beginning to panic. Nobody messes with Moore's law and walks away to brag about it. In a way, it makes sense that the industry should approach academia for help. I mean, who else but academia is qualified to find a solution to this pressing problem? But how much longer can the crisis continue? Can the industry fund unsuccessful research indefinitely? Can we continue to live forever with hideous monstrosities like heterogeneous processors and multithreading? I don't think so. Sooner or later, something will have to give. The captains of the industry will eventually realize that they are pouring money into a black hole and many will wise up. A seismic upheaval in the way computer science is conducted will ensue. Many scientists who are now placed on a pedestal will see their work discredited. The computer science community may think they are immune to hard times but the market is known to be rather cruel when profits are in the balance.

Wrong From the Start

If you asked me who is most to blame for the current crisis, I would tell you without hesitation that it is the mathematicians. All the major participants who helped to shape the history of computing, people like Charles Babbage, Lady Ada Lovelace, Alan Turing and John von Neumann, were mathematicians. Their vision of the computer is that of a machine built for the computation of mathematical functions or algorithms. Even today, after years of failure to solve the parallel programming problem, mathematicians are still clamoring for the use of functional programming as the solution. Everything looks like a nail when all you have is a hammer. The only reason that the functional programming community has not succeeded in pushing FP into the mainstream is that reality keeps kicking them in the ass. The truth is that functional programming languages are counter-intuitive, hard to learn and a pain to work with.

Behaving Machines, Communication and Timing

Mathematicians notwithstanding, a computer is a class of automaton known as a behaving machine. As such, it must not be seen as a function calculator that takes input arguments and returns an answer. It should be seen as belonging to the category of machines that includes brains and neural networks. The proper paradigm for describing the working of such machines is not mathematics but psychology. We should be using terms like stimuli, responses, sensors, effectors, signals, and environment when describing computers. This way, an operation becomes an effect performed by an effector on the environment (data) and a comparison operator becomes a sensor. Once the computer is seen in its proper light, it immediately becomes clear that, like a nervous system, a computer program is really a communication network that senses and effects changes in its environment. Nothing more and nothing less. And, as with all communication systems, the deterministic timing of sensed events (stimuli) and operations (responses) becomes critical. Most of the problems that plague the computer industry (e.g., unreliability and low productivity) stem from the inability to precisely determine the temporal order (concurrency or sequentiality) of all events in the machine. Temporal determinism is a must. This, then, is the biggest problem with the Turing Computing Model: timing is not an inherent part of the model.
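To illustrate the sensor/effector view, here is a toy sketch only, not the COSA specification; the class names, the change-detection rule and the two-phase cycle below are assumptions made for this example.

```python
# Toy sketch of a behaving machine: a network of cells exchanging signals
# in discrete, deterministic cycles. Illustrative only, not the COSA spec.

class Cell:
    def __init__(self, name):
        self.name = name
        self.targets = []              # cells that receive this cell's signal

    def connect(self, *cells):
        self.targets.extend(cells)

class Sensor(Cell):
    """Detects a change in the environment and emits a signal."""
    def __init__(self, name, predicate):
        super().__init__(name)
        self.predicate = predicate
        self.last = False

    def sense(self, env):
        value = self.predicate(env)
        fired = value and not self.last    # fire only on a change (an event)
        self.last = value
        return self.targets if fired else []

class Effector(Cell):
    """Performs an operation on the environment when it receives a signal."""
    def __init__(self, name, action):
        super().__init__(name)
        self.action = action

def run(sensors, env, cycles):
    """Two-phase loop: every cycle, all sensors sense, then all signalled
    effectors act. Cells signalled in the same cycle count as concurrent."""
    for _ in range(cycles):
        signalled = []
        for s in sensors:                  # phase 1: detect events (stimuli)
            signalled.extend(s.sense(env))
        for cell in signalled:             # phase 2: react (responses)
            if isinstance(cell, Effector):
                cell.action(env)

# Usage: a sensor that detects the temperature crossing a threshold and an
# effector that switches a fan on. All names and fields are illustrative.
env = {"temp": 30, "fan": False}
too_hot = Sensor("too_hot", lambda e: e["temp"] > 25)
fan_on = Effector("fan_on", lambda e: e.update(fan=True))
too_hot.connect(fan_on)

run([too_hot], env, cycles=1)
print(env)    # {'temp': 30, 'fan': True}
```

The point of the two-phase loop is the temporal determinism argued for above: within a cycle, every sensed event and every reaction has a known place in time, so concurrency and sequentiality are never left to chance.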

How to Solve the Crisis

We must reinvent the computer. We must turn it from a mathematician’s wet dream into a psychologist’s wet dream, from a glorified function calculator into a universal behaving machine. And we must do so at the fundamental level. It is time to stop feeding money into the academic black hole, because the academics have failed. It is time to stop encouraging mediocrity among the baby boomer generation who created this mess. It is time for the boomers to gracefully retire and allow a new generation of thinkers to have their turn at the wheel. Industry leaders should simply say to the Turing Machine worshipers, “thank you very much, ladies and gentlemen, for your great service; here’s a nice retirement check for all your troubles.” Then they should forget the whole mess and forge a bold new future for computing. And then the next computer revolution will make the first one pale in comparison.

See Also:

How to Construct 100% Bug-Free Software
How to Solve the Parallel Programming Crisis
Parallel Computing: The End of the Turing Madness
Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic
Half a Century of Crappy Computing
Why Software Is Bad and What We Can Do to Fix It

Wednesday, March 10, 2010

The Rebel Science Forum

The COSA Virtual Machine

Many of my readers have suggested that I start an open source COSA project to get my ideas off the ground. Personally, I don't have the time to manage a software or hardware project as I am rather busy with other matters. My main interest in computing is artificial intelligence research. A little over a decade ago, I began promoting the COSA software model because I came to the conclusion that the future of AI depends on massive parallelism. My detractors notwithstanding, there is no doubt in my mind that COSA is the way to solve the parallel programming crisis. I have not been able to convince the industry at large of the soundness of the model but I do have a few supporters around the world. Most are software or hardware engineers. And, to me, that's saying something.

My fear is that a COSA virtual machine (CVM) will not be fast enough to convince anybody in the industry. Making the CVM work on current multicore processors is possible, but it will be tough because those chips are designed to support the non-deterministic, multithreaded software model. It is clear to me that the captains of the computer industry want a solution that will add value to their existing multicore technology. If COSA were to be widely adopted, it would make all current multicore processors obsolete. That would be a disaster of unimaginable proportions for the big players (Intel, AMD, ARM, Nvidia, IBM, etc.) and they know it. My experience in this business tells me that COSA will meet with unwavering resistance and even open hostility. That is, until the pain becomes unbearable. That’s when the seismic paradigm shift that will usher in the next computer revolution will happen. Hopefully, that will happen very soon.

The Discussion Forum

I just created a discussion forum on the Rebel Science site. Right now, discussions are restricted to the topic of creating an open source COSA virtual machine. My previous experience with administering a discussion forum is that, as soon as it becomes a little popular, the spammers descend in droves and then it becomes a real pain. For this reason, I am allowing only registered users to post on the forum. Guests can read the discussions but cannot post. If you have an interest in COSA and you have ideas on how to proceed, you might consider taking part in the discussions.

See Also:

Project COSA
How to Solve the Parallel Programming Crisis
The COSA Saga

Friday, March 5, 2010

How to Construct 100% Bug-Free Software

Abstract

Software unreliability is a monumental problem. Toyota’s brake pedal troubles are just the tip of the iceberg. Yet the solution is so simple that I am almost tempted to conclude that computer scientists are incompetent. As I showed in my previous post, the usual ‘no silver bullet’ excuse (Brooks’s excuse) for unreliable code is bogus. Contrary to Fred Brooks’s claim in his famous No Silver Bullet paper, it is not necessary to enumerate every state of a program to determine its correctness. What matters is the set of conditions or temporal expectations that dictate the program’s behavior. Timing is fundamental to the solution. Below, I expand on my thesis by arguing that the computer can in fact automatically discover everything that may go wrong in a complex program, even if the programmer overlooks it. Please read Unreliable Software, Parts I through III before continuing.

Expectations and Abnormalities

Jeff Voas, a software reliability expert and co-founder of Cigital, once said, "it's the things that you never thought of that get you every time." Voas is not in any hurry to see a solution to the unreliability problem because he would be out of a job if that happened. Still, I agree with him that the human mind observably cannot think of everything that can go wrong with a complex software system, but (and this is my claim) the computer itself is not so limited. This is because the computer has an advantage over the human brain: it can do a complete, exhaustive search of what I call the expectation space of a computer program, that is, all the possible decision pathways that might occur within the program as a result of expected events.

A billion mathematicians jumping up and down and foaming at the mouth notwithstanding, software is really all about stimuli and responses, or actions and reactions. That function calculation stuff is just icing on the cake. Consider that every decision (reaction) made by a program in response to a sensed event (a stimulus) implicitly expects a pattern of sequential and/or simultaneous events to have preceded the decision. This expected temporal signature is there even if the programmer is not aware of it. During the testing phase, it is easy for a diagnostic subprogram to determine the patterns that drive decisions within the application under test. It suffices to exercise the application multiple times to determine its full expectation pattern. Once this is known, it is even easier for the subprogram to automatically generate abnormality sensors that activate in the event that the expectations are not met. In other words, the system can be made to think of everything even if the programmer is not thorough. Abnormality sensors can be automatically connected to an error or alarm component or to a component constructed for that purpose. The system should then be tested under simulated conditions that force the activation of every abnormality sensor in order to determine its robustness under abnormal conditions.
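Here is a minimal sketch of that diagnostic idea, assuming a toy trace format of (kind, name) pairs; the ExpectationRecorder class and everything else below are invented for this illustration and are not part of any existing tool.

```python
# Minimal sketch: learn the event patterns that precede each decision during
# test runs, then generate "abnormality sensors" that flag violations.

from collections import defaultdict

class ExpectationRecorder:
    def __init__(self):
        # decision name -> set of event sequences observed before it in testing
        self.expected = defaultdict(set)

    def record_run(self, trace):
        """trace: list of (kind, name) tuples; kind is 'event' or 'decision'."""
        preceding = []
        for kind, name in trace:
            if kind == "event":
                preceding.append(name)
            else:  # a decision: remember the pattern of events that led to it
                self.expected[name].add(tuple(preceding))
                preceding = []

    def make_abnormality_sensor(self, decision):
        """Return a checker that fires when the decision is reached without
        any of the event patterns seen for it during the testing phase."""
        patterns = self.expected[decision]
        def sensor(preceding_events):
            if tuple(preceding_events) not in patterns:
                return f"ABNORMALITY: {decision} reached via {preceding_events}"
            return None
        return sensor

# Usage with a made-up trace:
rec = ExpectationRecorder()
rec.record_run([("event", "pedal_pressed"), ("event", "speed>0"),
                ("decision", "apply_brakes")])

check = rec.make_abnormality_sensor("apply_brakes")
print(check(["pedal_pressed", "speed>0"]))   # None: matches an expectation
print(check(["speed>0"]))                    # flags the unexpected pattern
```

In a real system the recorded patterns would also capture timing, that is, which of the preceding events were concurrent and which were sequential, but the principle is the same: the machine, not the programmer, enumerates the expectations and watches for violations.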

Learn to Relax and Love the Complexity

The above will guarantee that a program is 100% reliable within its scope. The only prerequisite to having a diagnostic subprogram like the one I described is that the software model employed must be synchronous and reactive. This ensures rock-solid deterministic program behavior and timely reactions to changes, which are the main strengths of the COSA software model. The consequences of this are enormous for the safety-critical software industry. It means that software developers no longer need to worry about bugs in their programs as a result of complexity. In fact, adding new functionality to a system makes it even more robust and reliable. Why? Because new functionality cannot break the system’s existing expectations without triggering an alarm; it must conform to the functionality that is already in place. Expectations are like constraints, and the more complex a program is, the more constraints it has. We can make our programs as complex as necessary without incurring a reliability penalty. So there is no longer any reason not to have a completely automated mass transportation or air traffic control system.

Academic Responsibility

This is the part where I step on my soapbox and start yelling. This blog is read every day by academics from various institutions around the world and from research labs in the computer industry. I know; I have the stats. If you are a computer scientist and you fail to act on this information, then you are a gutless coward and an asshole, pardon my French. Society should and probably will hold you personally responsible for the over 40,000 preventable traffic fatalities on U.S. roads alone. You have no excuse, goddammit.

See Also:

Why Does Eugene Kaspersky Eat Japanese Baby Crabs and Grin?
Why the FAA's Next Generation Air Traffic Control System Will Fail
Computer Scientists Created the Parallel Programming Crisis
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
Parallel Computing: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Reactive
Parallel Computing: The End of the Turing Madness
Why Software Is Bad and What We can Do to Fix It
COSA: A New Kind of Programming