Tim Bray is writing a series of posts, taking a run at the concurrent programming problem, with a focus on languages. I think Tim is aiming in the right direction, but has his focus set at the wrong distance.
There are good reasons to take a run at the problem. Physics is changing what we can expect from future computers. Starting a few years back, and barring any unexpected shifts in technology, the rate at which a CPU can process a single instruction stream will increase only slowly. The economics of chip fabrication allow us to build a CPU with multiple cores. The physics of power consumption tell us that we can get more computing done per watt with slimmer cores at slightly lower clock rates. All of which argues for fabricating CPU chips with a slowly increasing number of cores, and slowly increasing clock rates.
All of which means that to make full use of present and future CPUs, there has to be a lot of concurrent computing.
Concurrency in programming is tricky, and often gotten wrong. This is nothing new. My first job out of college (so many years ago) was to work on an Ada compiler (a language with direct support for multi-threaded interaction) for a product (the Pascal Microengine) that had thread support built into the CPU’s microcode. There was much talk then of how to do concurrent programming.
What we learned then and in the time since is that fine-grained multi-threaded concurrent programming is tricky, and very easy to get wrong. For the bulk of programs and programmers, there is very limited need for this sort of concurrency. All things considered, this is probably a good thing.
Tim Bray - and the folk responding to his post - are mainly focusing on a programming language for concurrent programs. I suspect this is a mistake. Maybe some new (or newish) programming language will make bug-free concurrent programs easy, but I do not think this is likely.
I think we already have the bulk solution. Web-scale applications (at least those that work well) make use of large numbers of CPUs, with a huge amount of concurrent execution. Web-scale programming is mostly about swarms of single-threaded programs (not necessarily small), each well isolated from the others. Many of the attributes Tim lists are true - or mostly-true, or should-be-true - of web-scale applications.
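A minimal sketch of that style, in Python, with invented names (handle_request, Worker - not from any real framework): each worker is single-threaded and owns its state outright, so no locks are needed, and concurrency comes from running many such workers side by side.

```python
def handle_request(state, request):
    """Process one request against this worker's private state.

    A request is a (key, value) pair: value=None means read, anything
    else means write. No locks anywhere, because nothing is shared.
    """
    key, value = request
    if value is None:
        return state.get(key)      # read request
    state[key] = value             # write request
    return value

class Worker:
    """One single-threaded worker; scale comes from running many copies."""

    def __init__(self):
        self.state = {}            # private to this worker only

    def run(self, requests):
        """Handle a batch of requests strictly one at a time."""
        return [handle_request(self.state, r) for r in requests]
```

In a real deployment the interesting part is routing, not threading: hash each key to exactly one worker, and every piece of state has a single-threaded owner - coarse-grained isolation instead of fine-grained locking.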
Clearly the web-scale application approach does not work for all applications - though it may work for more than you expect. There are always going to be some applications that need fine-grained threading … but I suspect this group is very small. For the bulk of programming we want to allow for massive concurrency, but well-isolated and coarse-grained.
Why the focus on programming languages that support fine-grained concurrent programs? What problem - in the application space - does that solve? Are more than a tiny fraction of applications in that space?
My answer to Tim’s question is to point at the concerns of web-scale applications and cloud computing. For me, the problem does not drive an interest in new programming languages. The answer to large-scale concurrent execution is - for most applications - large numbers of single-threaded programs, responding to requests. Tim’s list of characteristics - in part - is useful for that sort of programming. Editing down the list:
(Have to admit, I am not sure the above grouped notions are distinct, when viewed at this level.)
For the bulk of programmers and applications, the main needed change is finding the simplest possible adaptation to the needs of web-scale programming. Once that is done, concurrency and threading are a solved problem. We do not want or need fine-grained multi-threading within a single name/address space - that way lies madness. We do need well-isolated single threads, in mass numbers, cooperating across web-scale process, machine, and network boundaries.
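That shape can be sketched with nothing beyond the Python standard library (the worker/demo names are illustrative): each worker is a separate single-threaded process with private state, and cooperation happens only through messages across the process boundary - never through a shared address space.

```python
import multiprocessing as mp

def worker(inbox, outbox):
    """Single-threaded worker loop: private state, message passing only."""
    state = {}
    for key, value in iter(inbox.get, None):   # None is the shutdown signal
        if value is not None:
            state[key] = value                 # write request
        outbox.put((key, state.get(key)))      # reply with the current value

def demo():
    """Run one worker in its own process and exchange a few messages."""
    inbox, outbox = mp.Queue(), mp.Queue()
    proc = mp.Process(target=worker, args=(inbox, outbox))
    proc.start()
    inbox.put(("x", 41))       # write x = 41
    inbox.put(("x", None))     # read x back
    replies = [outbox.get(timeout=10) for _ in range(2)]
    inbox.put(None)            # shut the worker down
    proc.join()
    return replies

if __name__ == "__main__":
    print(demo())
```

The same worker loop works unchanged whether the inbox is an in-process queue, a pipe to another machine, or a network request handler - which is exactly the point: the isolation boundary, not the language, is what makes the concurrency tractable.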