jducoeur: (Default)
[personal profile] jducoeur
Web programmers should take a look at this article in Ars Technica, which talks about a radical enhancement that will be coming to Mozilla soon.

Summary: they're about to integrate trace optimization into the Spidermonkey Javascript engine. The underlying concept of this technique (assuming that I'm understanding it correctly) is that the engine learns as the program runs: it records the paths the code actually takes in practice, and compiles those hot paths into optimized machine code. This notion isn't quite as new as they make it sound, but this would be the widest deployment of it that I've heard about so far. In principle, it could eventually allow some Javascript programs to run *faster* than compiled code, because trace optimization can accomplish some speedups that compilers can't.
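To make that concrete, here's a sketch of my own (not from the article) of the kind of type-stable hot loop this technique should handle well:

    function sumProducts(points) {
        var total = 0;
        // The engine starts out interpreting this loop. Once the loop gets
        // hot, the tracer records the operations actually executed -- and
        // sees that total, p.x, and p.y are always plain numbers here.
        for (var i = 0; i < points.length; i++) {
            var p = points[i];
            total += p.x * p.y;
        }
        // The recorded trace is then compiled to machine code that works on
        // raw doubles, skipping the per-operation type checks an ordinary
        // interpreter has to repeat on every iteration.
        return total;
    }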

From the numbers they show, it looks like the new Javascript engine runs something like twice as fast under ordinary conditions, and up to 20-40 times faster in some cases (probably ones involving key loops that can be optimized well). While that doesn't totally change the world yet, it's another key step in turning Javascript from an annoying and clunky toy into a language capable of serious work. And it means that, at least in Firefox, some complex web apps are about to get a speedup without having to do a thing. (It also means that the browser itself is likely to speed up in many places, because much of it is written in Javascript...)

(no subject)

Date: 2008-08-25 03:50 pm (UTC)
From: [identity profile] zachkessin.livejournal.com
Wow, that will be really nice.

(no subject)

Date: 2008-08-25 04:05 pm (UTC)
From: [identity profile] metahacker.livejournal.com
In principle, it could eventually allow some Javascript programs to run *faster* than compiled code, because trace optimization can accomplish some speedups that compilers can't.

So why not build trace optimization into some sort of recompiler? I'd guess that's the idea behind JIT compilers, which seem to be out of vogue currently. I'm afraid I'm out of touch with this stuff and don't really understand why the above might be infeasible.

(no subject)

Date: 2008-08-25 06:01 pm (UTC)
laurion: (Default)
From: [personal profile] laurion
Might it have to do with user interaction? If you have two equally likely loops and the user ends up exercising one, you optimize that one; optimizing for both might be mutually incompatible. Or maybe it's a JIT memory-management issue.

(no subject)

Date: 2008-08-25 08:05 pm (UTC)
From: [identity profile] metahacker.livejournal.com
That may be it -- JIT isn't the new thing, it's what everyone does, so you don't hear about it any more.

Trace optimization in compilers

Date: 2008-08-25 08:36 pm (UTC)
From: [identity profile] metageek.livejournal.com
I know I've read about recompilers that use execution profiles to advise the optimizer... yeah, here's a paper on it, apparently from 1993.

JITs don't necessarily do trace optimization; some apparently do, but a basic JIT generates machine code at runtime without any knowledge of how the code is actually used. The only real reason for compiling at runtime, in that case, is that you want to distribute a machine-independent program (e.g., Java bytecode).
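To illustrate the difference (just my own sketch, not how any particular engine actually works internally):

    // A conventional JIT compiles this function once, up front, and has to
    // handle every possible meaning of "+": number addition, string
    // concatenation, objects with valueOf, and so on.
    function add(a, b) {
        return a + b;
    }

    // A tracing optimizer instead records what actually happens at runtime.
    // If this loop is the hot path, every recorded iteration sees two
    // numbers, so the compiled trace can use a bare floating-point add,
    // plus a cheap guard that falls back to the interpreter if a
    // non-number ever shows up.
    var total = 0;
    for (var i = 0; i < 1000000; i++) {
        total = add(total, i);
    }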

(no subject)

Date: 2008-08-25 11:20 pm (UTC)
From: [identity profile] crschmidt.livejournal.com
For the record, in my experience, speedups on SunSpider (the canonical JavaScript interpreter benchmark) are almost entirely unrelated to how fast web applications actually perform.

The speed of the DOM interactions -- at least for OpenLayers -- is so much more important that it dwarfs everything else at this point. For example, FF3 runs about 10% faster than Safari 3.1 on my machine, but the OpenLayers test suite runs about 5x faster in Safari.

It's nice that they're putting all this effort into improving Javascript performance, but really, all they're doing is taking a part of web application interactions that is pretty small potatoes -- the Javascript algorithms and function calls themselves -- and making it an even smaller percentage of the total time. Going from JS calls taking 10% of the time to 2% or even 1% of the time still only gives you an 8-9% speedup overall: enough to show up in performance testing, but not really enough to register on users' radar. And I'd be surprised if 'tracemonkey' is really going to offer even that much of a speedup in application code that isn't doing a lot of looping.
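To spell out the arithmetic, this is just Amdahl's law: if JS is a fraction p of total time and gets s times faster, the overall speedup is 1 / ((1 - p) + p/s). With p = 0.10 and s = 5 that's 1 / 0.92, about 9% -- and even an infinitely fast engine tops out at 1 / 0.90, about 11%.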

(I did try to build tracemonkey to test with yesterday, but failed. I did get JavaScriptCore, and the JSC standalone compiler, working, though. And I found out that Rhino is 10x slower than year-old spidermonkey on some simple tests I did, which is pretty impressive.)
