jducoeur: (Default)
[personal profile] jducoeur
If you're not already following it, I commend today's post in Ars Technica about the upcoming changes to hardware. It's not precisely new, but it does underline what I've been saying for the past couple of years: the time of Massively Multicore is upon us.

Everybody's getting used to having two or maybe even eight cores in a computer, and you can almost kind of ignore that in most cases -- after all, you're just writing one process among several on the machine, so if you're limited to one core it's not a big deal. You might even tweak the program to use a few threads. But Intel is now talking seriously about architectures that range from dozens to *thousands* of little cores working together. You can't ignore that if you're going to be doing any kind of serious programming.
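The "tweak the program to use a few threads" approach mentioned above might look something like this sketch, using Java's `ExecutorService` (the class names and the sum-the-halves task are just illustrative choices, not anything from the Ars piece):

```java
import java.util.concurrent.*;

public class FewThreads {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Split the work into two independent halves, one task each.
        Callable<Long> lower = () -> sumRange(1, 500_000);
        Callable<Long> upper = () -> sumRange(500_001, 1_000_000);
        Future<Long> f1 = pool.submit(lower);
        Future<Long> f2 = pool.submit(upper);
        long total = f1.get() + f2.get(); // blocks until both halves finish
        pool.shutdown();
        System.out.println(total); // 500000500000
    }

    static long sumRange(long lo, long hi) {
        long s = 0;
        for (long i = lo; i <= hi; i++) s += i;
        return s;
    }
}
```

This works nicely when the tasks are independent; the trouble starts, as the rest of the post argues, once hundreds of tasks share mutable state.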

There's a message here, and it's an important one if you're in the field: if you're not already good at multi-threaded programming, you need to *get* good at it. There are probably deep changes to programming coming soon -- they're hard to predict at this point, but the root message is that you're going to need to understand threading pretty well. Or you're going to need to learn the languages that are inherently thread-safe, like Erlang. Or both. If not, you risk confining yourself to the limited subset of the field where threading can be ignored. (E.g., the simpler forms of web-server programming, but not the really interesting bits.)
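To make the "get good at threading" point concrete, here's a minimal sketch of the classic trap (my own illustrative example, not from the post): two threads bumping a shared counter. The plain `long` loses updates because `++` is a read-modify-write that can interleave; the `AtomicLong` doesn't.

```java
import java.util.concurrent.atomic.AtomicLong;

public class RaceDemo {
    static long unsafeCounter = 0;                       // racy: lost updates likely
    static final AtomicLong safeCounter = new AtomicLong(); // atomic: always correct

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;               // not atomic: read, add, write
                safeCounter.incrementAndGet(); // single atomic operation
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("unsafe: " + unsafeCounter + " (often less than 200000)");
        System.out.println("safe:   " + safeCounter.get());
    }
}
```

The insidious part is that the unsafe version often *passes* casual testing -- which is exactly why this skill set can't be picked up casually.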

It's exciting stuff -- we may be looking at the most dramatic changes to the *art* of programming (rather than just its uses) in decades. But if you are seriously in this field, you need to be paying attention to it, and making sure your skills are up-to-date, or you risk dead-ending your career...

Re: That was my first thought....

Date: 2008-07-03 03:15 am (UTC)
From: [identity profile] metahacker.livejournal.com
Well, largely because it's difficult/impossible to do automatically in traditional languages, and people are used to traditional languages.

This is kind of why I brought up the GC example. When I was first taught programming, "automatic" garbage collection was some sort of weird voodoo that no one quite believed in, and you had to be very careful to make sure you free()d things, and such.

Wind forward some years, and Java's GC (while slow and inefficient) is essentially foolproof, barring a few memory leaks over the years (like Strings). And I'm hoping the fifteen-odd years of progress since then have improved the situation further.

Parallelically, I'm hoping that there's something we're all missing about multithreading along the same lines -- that some minor change in programming, possibly involving an extra layer of abstraction (by analogy with the functional -> OOP shift), will mean that we get multithreading for free. And no one will have to write locks or monitors or whatever, ever again, because they're just too easy to get horribly wrong.

And a pony!
