The many-core future
If you're not already following this, I recommend today's Ars Technica post about the upcoming changes to hardware. It's not precisely new, but it does underline what I've been saying for the past couple of years: the time of Massively Multicore is upon us.
Everybody's getting used to having two or maybe even eight cores in a computer, and in most cases you can mostly ignore that -- after all, you're just writing one process among several on the machine, so if you're limited to one core it's not a big deal. You might even tweak the program to use a few threads. But Intel is now talking seriously about architectures that range from dozens to *thousands* of little cores working together. You can't ignore that if you're going to be doing any kind of serious programming.
There's a message here, and it's an important one if you're in the field: if you're not already good at multi-threaded programming, you need to *get* good at it. There are probably deep changes to programming coming soon -- they're hard to predict at this point, but the root message is that you're going to need to understand threading pretty well. Or you're going to need to learn the languages that are inherently thread-safe, like Erlang. Or both. If not, you risk confining yourself to the limited subset of the field where threading can be ignored. (E.g., the simpler forms of web-server programming, but not the really interesting bits.)
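To make that concrete, here's a minimal sketch (my own illustration, in modern Java -- the class and method names are invented) of the kind of reasoning threading demands: split the work, give each thread its own slot to write into, and join before reading the results.

```java
// Hypothetical example: summing an array across several worker threads.
public class ParallelSum {
    public static long sum(int[] data, int nThreads) throws InterruptedException {
        long[] partial = new long[nThreads];          // one slot per thread: no shared mutable state
        Thread[] workers = new Thread[nThreads];
        int chunk = (data.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                int start = id * chunk;
                int end = Math.min(start + chunk, data.length);
                long s = 0;
                for (int i = start; i < end; i++) s += data[i];
                partial[id] = s;                      // each thread writes only its own slot
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();            // join() establishes happens-before for partial[]
        long total = 0;
        for (long s : partial) total += s;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(ParallelSum.sum(data, 4)); // prints 500500
    }
}
```

The point isn't the summing -- it's that the whole design (partitioning, ownership of the slots, the join before the final read) is the programmer's responsibility, and getting any of it wrong produces bugs that only show up under load.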
It's exciting stuff -- we may be looking at the most dramatic changes to the *art* of programming (rather than just its uses) in decades. But if you are seriously in this field, you need to be paying attention to it, and making sure your skills are up-to-date, or you risk dead-ending your career...
no subject
OOP has been around for how long? And yet, you still see very procedural approaches even in Java -- even when an OO solution is obviously superior. Indeed, look at any code base in a language that supports OO ideas like classes and you'll often see, perhaps most of the time, those ideas eschewed for concepts that are presumably better understood by the programmer.
Programmers don't usually go multi-threaded unless it solves a problem, and even then not always. Unless some mechanism is presented that forces the concepts, I don't see this changing. Again, Java and objects: Java pushed the paradigm hard at the programmer, and still you see entire classes made of static methods and parallel arrays.
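[For readers who haven't run into the term: "parallel arrays" here means separate arrays kept in sync by index, by convention only. A hypothetical side-by-side, with invented names, of that style versus the class the language is begging you to write:]

```java
// Hypothetical illustration: the same data as parallel arrays vs. as objects.
public class ParallelArraysDemo {
    // Procedural style: two arrays kept in sync by convention only.
    static String[] names  = { "ada", "bob" };
    static int[]    scores = { 90, 75 };

    static int procedural(String name) {
        for (int i = 0; i < names.length; i++)
            if (names[i].equals(name)) return scores[i];
        return -1;
    }

    // OO style: the relationship between name and score is explicit in the type.
    record Player(String name, int score) {}

    static int objectStyle(Player[] players, String name) {
        for (Player p : players)
            if (p.name().equals(name)) return p.score();
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(procedural("bob"));                        // prints 75
        Player[] roster = { new Player("ada", 90), new Player("bob", 75) };
        System.out.println(objectStyle(roster, "ada"));               // prints 90
    }
}
```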
I actually looked into Erlang, mostly because it's time I learned some more declarative languages. I don't like the loose typing; I really never like loose typing. I want to like Python, but can't because of that. However, I recently read an interview with Bjarne Stroustrup (http://www.computerworld.com.au/index.php/id;408408016;fp;4194304;fpid;1;pf;1) where he talks a little about the next-gen C++ and concurrent programming. I believe the ability to leverage multiple cores will probably rely on smarter compilers and simple libraries, rather than on better-informed programmers.
no subject
Java pushed the paradigm hard at the programmer, and still you see entire classes made of static methods and parallel arrays.
*Twitch*. True -- but *twitch*.
I actually looked into Erlang, mostly because it's time I learned some more declarative languages. I don't like the loose typing, I really never like loose typing.
I dunno. I somewhat agree -- I've developed a fondness for strongly-typed languages over the years. That said, I don't mind *good* loosely-typed languages: Ruby remains a favorite of mine, for example. And I'm intrigued by the next-gen JavaScript dialects like ActionScript 3, which allow both models side-by-side.
I don't love Erlang, but that's a larger issue: I just find the language rather more idiosyncratic than it needs to be. I suspect that the same ideas could be put into a more mature language that I would appreciate more.
I believe the ability to leverage multiple cores will probably rely on smarter compilers and simple libraries, rather than on better-informed programmers.
Perhaps -- but again, it's going to come down to How Many Cores. With a relatively modest number of cores, or special-purpose ones, libraries will do well enough: the programmer thinks mainly in terms of linear programs, and calls out the parallel bits explicitly.
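A rough sketch of what that library style looks like (my own illustration, using Java's `ExecutorService`): the program reads linearly, and the one parallel step is called out explicitly.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical example of the "library" approach: a mostly linear program
// that hands one explicitly parallel step to a thread pool.
public class ExplicitParallel {
    public static List<Integer> squareAll(List<Integer> inputs, int poolSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int x : inputs)
                futures.add(pool.submit(() -> x * x));        // the parallel bit, called out explicitly
            List<Integer> out = new ArrayList<>();
            for (Future<Integer> f : futures)
                out.add(f.get());                             // back to linear code
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(squareAll(List.of(1, 2, 3, 4), 2)); // prints [1, 4, 9, 16]
    }
}
```

Notice how everything outside the `submit()` call is sequential -- which is exactly the part that turns into a bottleneck at scale.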
But if they do get up to the terascale thousand-core systems, I really doubt that's going to hack it -- the linear parts of the program will turn into bottlenecks that prevent you from leveraging the system at all well, and bog things down badly. Smart compilers can only buy you so much, if they're being applied to current languages, because those languages just don't have the right *semantics* for automatic parallelization.
So at that point, I really suspect we're going to see a shift into newer languages that are more natively parallelizable -- languages that *do* allow the compiler to really make the program hum nicely on a massively parallel system. Those may not be as weird as Erlang, but I suspect that they will be at least as novel as Fortress. (Which *defaults* to parallel processing unless you explicitly prevent it.)
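To give a flavor of "parallel unless you say otherwise" in a familiar language -- this is only a rough analogy in Java's stream library, not Fortress itself, and the example is my own -- the programmer states *what* to compute and the runtime decides how to split it across cores:

```java
import java.util.stream.IntStream;

// Rough analogy to parallel-by-default semantics: declare the computation,
// let the runtime partition it across cores.
public class ParallelByDefault {
    public static long sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n)
                        .parallel()                 // the only parallelism "annotation" needed
                        .mapToLong(i -> (long) i * i)
                        .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10));       // prints 385
    }
}
```

The semantic shift is the interesting part: because the body is a pure function over a range, the compiler and runtime are *allowed* to reorder and partition it freely -- which is exactly what current imperative loops forbid.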
The moral of the hardware story is that smart chips will only get you so far before you have to change paradigms. My strong suspicion is that the same will be true of software -- that smarter compilers can only get you so far before you have to change the language...