jducoeur ([personal profile] jducoeur) wrote 2008-07-04 04:25 pm (UTC)

I agree with the general points about adequate programmers not using fancy concepts when they can avoid them. That said, the interesting question here is whether they *will* be avoidable. In a dozen-core world, probably so; in a thousand-core world, not so much. At that point, I suspect that *something* is going to have to give -- most likely, the adequate programmers will wind up having to learn very different programming models so that they can be productive without needing to really grok the threading that's going on under the hood. (I do think Microsoft is on to something with their Workflow Foundation -- it's a *sucky* system for good programmers, but rather well thought out for bad ones doing pure business programming.)

Java pushed the paradigm hard at the programmer, and still you see entire classes made of static methods and parallel arrays.

*Twitch*. True -- but *twitch*.
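
(For anyone lucky enough not to have run into it, here's a made-up sketch of the static-methods-plus-parallel-arrays style in question -- the class and field names are purely illustrative:)

    // Made-up illustration only: every operation is a static method, and all
    // the state lives in arrays that have to be kept in sync by index.
    public final class EmployeeTable {
        static String[] names    = new String[100];
        static double[] salaries = new double[100];
        static int count = 0;

        static void add(String name, double salary) {
            names[count] = name;
            salaries[count] = salary;
            count++;
        }

        static double totalPayroll() {
            double total = 0;
            for (int i = 0; i < count; i++) {
                total += salaries[i];
            }
            return total;
        }
    }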

I actually looked into Erlang, mostly because it's time I learned some more declarative languages. I don't like the loose typing; I've really never liked loose typing.

I dunno. I somewhat agree -- I've developed a fondness for strongly-typed languages over the years. That said, I don't mind *good* loosely-typed languages: Ruby remains a favorite of mine, for example. I'm intrigued by the next-gen Javascript dialects like ActionScript 3, which allow both models side-by-side.

I don't love Erlang, but that's a larger issue: I just find the language rather more idiosyncratic than it needs to be. I suspect that the same ideas could be put into a more mature language that I would appreciate more.

I believe the ability to leverage multiple cores will probably rely on smarter compilers and simple libraries, rather than better-informed programmers.

Perhaps -- but again, it's going to come down to How Many Cores. With a relatively modest number of cores, or special-purpose ones, libraries will do well enough: the programmer thinks mainly in terms of linear programs, and calls out the parallel bits explicitly.
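
Roughly the sort of thing I mean, sketched in Java with java.util.concurrent -- the work being summed here is made up, but the shape is the point: a linear program with one explicit hand-off to a thread pool.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ExplicitParallel {
        // Made-up unit of work: sum the integers in [from, to).
        static long sumRange(long from, long to) {
            long sum = 0;
            for (long i = from; i < to; i++) sum += i;
            return sum;
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(8);

            // The program reads linearly; the parallel part is this one
            // explicit hand-off of chunks to the thread pool.
            List<Future<Long>> pending = new ArrayList<>();
            for (long start = 0; start < 4_000_000L; start += 1_000_000L) {
                final long from = start;
                final long to = start + 1_000_000L;
                Callable<Long> chunk = () -> sumRange(from, to);
                pending.add(pool.submit(chunk));
            }

            long total = 0;
            for (Future<Long> f : pending) {
                total += f.get();   // rejoin the linear flow once each chunk is done
            }
            pool.shutdown();
            System.out.println("total = " + total);
        }
    }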

But if they do get up to the terascale thousand-core systems, I really doubt that's going to hack it -- the linear parts of the program will turn into bottlenecks that prevent you from leveraging the system at all well, and bog things down badly. Smart compilers can only buy you so much, if they're being applied to current languages, because those languages just don't have the right *semantics* for automatic parallelization.

So at that point, I really suspect we're going to see a shift to newer languages that are more natively parallelizable -- languages that *do* allow the compiler to really make the program hum nicely on a massively parallel system. Those may not be as weird as Erlang, but I suspect they will be at least as far from the mainstream as Fortress. (Which *defaults* to parallel processing unless you explicitly prevent it.)
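
As a very rough stand-in for that default-parallel idea -- this isn't Fortress, just a sketch using Java's stream library -- the code says *what* to compute and leaves the how-many-cores question entirely to the runtime:

    import java.util.stream.LongStream;

    public class ImplicitParallel {
        public static void main(String[] args) {
            // No explicit thread management at all: the code only says *what*
            // to compute, and the runtime is free to split the work across
            // however many cores it finds. Fortress bakes that default into
            // its ordinary loops; this is just a Java-flavored stand-in.
            long total = LongStream.range(0, 4_000_000L)
                                   .parallel()
                                   .sum();
            System.out.println("total = " + total);
        }
    }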

The moral of the hardware story is that smart chips will only get you so far before you have to change paradigms. My strong suspicion is that the same will be true of software -- that smarter compilers can only get you so far before you have to change the language...
