jducoeur: (Default)
[personal profile] jducoeur
If you're not already following it, I commend today's post in Ars Technica about the upcoming changes to hardware. It's not precisely new, but it does underline what I've been saying for the past couple of years: the time of Massively Multicore is upon us.

Everybody's getting used to having two or maybe even eight cores in a computer, and you can almost kind of ignore that in most cases -- after all, you're just writing one process among several on the machine, so if you're limited to one core it's not a big deal. You might even tweak the program to use a few threads. But Intel is now talking seriously about architectures that range from dozens to *thousands* of little cores working together. You can't ignore that if you're going to be doing any kind of serious programming.

There's a message here, and it's an important one if you're in the field: if you're not already good at multi-threaded programming, you need to *get* good at it. There are probably deep changes to programming coming soon -- they're hard to predict at this point, but the root message is that you're going to need to understand threading pretty well. Or you're going to need to learn the languages that are inherently threadsafe, like Erlang. Or both. If not, you risk confining yourself to the limited subset of the field where threading can be ignored. (E.g., the simpler forms of web-server programming, but not the really interesting bits.)
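To make that concrete: the canonical threading bug is an unguarded read-modify-write on shared state. A toy Python sketch (mine, purely illustrative -- every name here is invented):

```python
import threading

def run(increment, n_threads=4, n_iters=100_000):
    """Spawn n_threads workers running increment(state, n_iters); return the final count."""
    state = {"count": 0}
    threads = [threading.Thread(target=increment, args=(state, n_iters))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]

lock = threading.Lock()

def unsafe(state, n):
    for _ in range(n):
        state["count"] += 1  # read-modify-write is not atomic: updates can be lost

def safe(state, n):
    for _ in range(n):
        with lock:           # serialize the read-modify-write
            state["count"] += 1
```

The unsafe version may or may not lose updates on any given run -- that timing-dependence is exactly what makes these bugs so hard to find; the locked version always yields the full count.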

It's exciting stuff -- we may be looking at the most dramatic changes to the *art* of programming (rather than just its uses) in decades. But if you are seriously in this field, you need to be paying attention to it, and making sure your skills are up-to-date, or you risk dead-ending your career...

(no subject)

Date: 2008-07-02 05:27 pm (UTC)
From: [identity profile] dragonazure.livejournal.com
I've moved out of the programming field, but I am curious as to what the ramifications of this are for application-level programming and design.

Edit: in the graphics, scientific and engineering field, I get it, but what about the business applications side?
Edited Date: 2008-07-02 05:29 pm (UTC)

(no subject)

Date: 2008-07-03 12:55 pm (UTC)
From: [identity profile] dragonazure.livejournal.com
As I said, I've moved out of the s/w development area and away from the scientific computing area....8^( A long time ago, I did some work on parallel processing, but wasn't able to continue that line of research beyond graduate school--so the Ars Technica article was of interest. Today, my job entails the design and support of business systems.

The only reason office automation tools (such as word processors) run as fast as they did 20+ years ago is because of code bloat. 8^) It is like "stuff". It expands to fill all available space....

However, I was actually thinking more along the lines of decision support systems and other back office applications. I can see a lot of potential for supporting e-Commerce systems and for speeding up queries into data warehousing systems, but trying to understand how to apply this to specifying requirements for application design is escaping me at the moment. Of course, we'd have to make a good case for the business to upgrade to the new hardware in the first place. There are still plenty of linear programs running the world.

(no subject)

Date: 2008-07-02 06:39 pm (UTC)
mneme: (Default)
From: [personal profile] mneme
I think the future here is unordered programming languages, where parallelism isn't something you code in, but, as in Haskell, simply the default, which you must code around in situations where order -is- explicitly relevant (see: monads).

Except in domains that are naturally at that level, the programmer shouldn't have to be thinking of thread management and where and when to thread; instead, they should be able to lay down the logic of a program without imposing order on its operations except where necessary, and let the compiler/interpreter do its own optimizing.

Something of a hybrid here are languages with a lot of matrix calculations and lazy evaluation (as Perl 6 has, at least semantically). As long as you don't have a contract saying what order a matrix operator has to process stuff in, and can run lazy evaluations in parallel rather than in sequence, you can get a lot of threading in under the hood without having to bother the programmer about it.
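The no-ordering-contract idea can be shown in a few lines of any language with pure functions. A Python illustration (my sketch, not from the comment above): because the worker function has no side effects, sequential and pooled evaluation are interchangeable, so a runtime is free to parallelize under the hood.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure function: no side effects, so evaluation order cannot matter.
    return x * x

nums = list(range(8))

sequential = [square(x) for x in nums]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Tasks may execute in any interleaving; results still come back in input order.
    parallel = list(pool.map(square, nums))

assert sequential == parallel
```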

That was my first thought....

Date: 2008-07-02 09:50 pm (UTC)
From: [identity profile] metahacker.livejournal.com
"Time to learn Erlang?"

Or time to have compilers that do this automatically...I'm still not clear on why multithreading is left in the hands of the programmer when it's *so* easy to get wrong in horrible, unfindable ways. It's like garbage collection -- one of the places where the abstraction bleed is still inescapable, because state of the art is still crummy for handling this basic element of application infrastructure.

(True, GC is much better than it was 15-20 years ago; but it still can occupy a huge amount of processing space, in some languages (cough Java cough) seems to work at about 50% efficiency, and can routinely produce programs with memory leaks...)

Google's demonstrated that the MapReduce paradigm is a good example of a shift that leads to inherently parallelizable programs. I think it should be taught in Programming 201, just after recursion...
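For the curious, the shape of it fits in a few lines of Python (my sketch, not Google's actual API): the map phase runs independently per chunk, and the reduce phase is an associative merge, which is what makes the whole thing inherently parallelizable.

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    """Emit a partial word count for one chunk; chunks are independent,
    so this phase can run on as many cores as you have chunks."""
    return Counter(chunk.split())

def reduce_phase(a, b):
    """Merge two partial counts; associative, so the reduction can be a tree."""
    return a + b

chunks = ["to be or not to be", "to thread or not to thread"]
total = reduce(reduce_phase, map(map_phase, chunks), Counter())
```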

Re: That was my first thought....

Date: 2008-07-03 03:15 am (UTC)
From: [identity profile] metahacker.livejournal.com
Well, largely because it's difficult/impossible to do automatically in traditional languages, and people are used to traditional languages.

This is kind of why I brought up the GC example. When I was first taught programming, "automatic" garbage collection was some sort of weird voodoo that no one quite believed in, and you had to be very careful to make sure you free()d things, and such.

Wind forward some years, and Java's GC (while slow and inefficient) is essentially foolproof, barring a few memory leaks over the years (like Strings). And I'm hoping the fifteen-odd years of progress since then have improved this situation further.

Parallelically, I'm hoping that there's something we're all missing about multithreading along the same lines -- that some minor change in programming, possibly involving an extra layer of abstraction (by analogy with the functional -> OOP shift), will mean that we get multithreading for free. And no one will have to write locks or monitors or whatever, ever again, because they're just too easy to get horribly wrong.

And a pony!
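Here's one shape that "for free" might take, sketched with Python's futures (illustrative only; the names are mine): each task owns its data, nothing is shared, and the only synchronization point is collecting results -- no hand-written locks or monitors anywhere.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    # Pure worker: touches no shared state, so no lock is ever needed.
    total = 0
    for i in range(1, 1001):
        total += (seed * i) % 7
    return total

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(simulate, s) for s in range(4)]
    results = [f.result() for f in futures]  # the only synchronization point
```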

(no subject)

Date: 2008-07-04 02:35 pm (UTC)
ext_44932: (tech)
From: [identity profile] baavgai.livejournal.com
There are good programmers, competent programmers, and merely adequate programmers. The good are those who enjoy the activity and take it upon themselves to excel at it. Competent are those who know what needs to be done and can do it, but don't necessarily do any more. Adequate, the majority, are those who can solve a problem with the tools they have but will never look in the toolbox for something better and always leave the toolbox at work. These categories span all professions, of course.

OOP has been around for how long? And yet, you still see very procedural approaches even in Java. Even when an OO solution is obviously superior. Indeed, look at any code base that supports OO ideas like classes and you'll often see, perhaps most of the time, those ideas eschewed for concepts that are presumably more well understood by the programmer.

Programmers don't usually go multithreaded unless it solves a problem, and even then not always. Unless some mechanism is presented that forces the concepts, I don't see this changing. Again, Java and Objects. Java pushed the paradigm hard at the programmer, and still you see entire classes made of static methods and parallel arrays.

I actually looked into Erlang, mostly because it's time I learned some more declarative languages. I don't like the loose typing; I really never like loose typing. I want to like Python, but can't because of that. However, I recently read an interview with Bjarne Stroustrup (http://www.computerworld.com.au/index.php/id;408408016;fp;4194304;fpid;1;pf;1) where he talks a little about the next-gen C++ and concurrent programming. I believe the ability to leverage multiple cores will probably rely on smarter compilers and simple libraries, rather than on better-informed programmers.
