Yes and no -- I think it's more a matter of preventing the problems via the architecture. That's very much Akka's style: so long as you play by the rules, it scales more or less automatically.
For this particular problem, there isn't a completely general solution, so the approach is instead to stick the problem right in your face, where it's hard to ignore. To that end, the new Akka systems make the point about at-least-once delivery over and over, emphasizing that you need to build it in as an assumption. (I don't know if the Akka TestKit has automatic support for hammering on it yet, but I suspect somebody'll add a standard "deliver some duplicates" test mode before long.)
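Roughly, the shape it pushes you toward looks like this -- a minimal sketch against the AtLeastOnceDelivery trait in akka-persistence (classic actors; the actor names, the String payload, and the unbounded dedup set are mine, purely for illustration):

    import akka.actor.{Actor, ActorPath}
    import akka.persistence.{AtLeastOnceDelivery, PersistentActor}

    // Wire protocol: every message carries a delivery id, so the
    // receiver can recognize redeliveries when they happen.
    case class Msg(deliveryId: Long, payload: String)
    case class Confirm(deliveryId: Long)

    // Events the sender persists to its journal.
    sealed trait Evt
    case class MsgSent(payload: String) extends Evt
    case class MsgConfirmed(deliveryId: Long) extends Evt

    class MySender(destination: ActorPath)
        extends PersistentActor with AtLeastOnceDelivery {

      override def persistenceId = "sender-1"

      override def receiveCommand: Receive = {
        case payload: String => persist(MsgSent(payload))(updateState)
        case Confirm(id)     => persist(MsgConfirmed(id))(updateState)
      }

      // After a crash, replaying the journal rebuilds the set of
      // unconfirmed deliveries -- which then get resent. That resend
      // is exactly where the duplicates come from.
      override def receiveRecover: Receive = {
        case evt: Evt => updateState(evt)
      }

      def updateState(evt: Evt): Unit = evt match {
        case MsgSent(payload) => deliver(destination)(id => Msg(id, payload))
        case MsgConfirmed(id) => confirmDelivery(id)
      }
    }

    // The receiver *assumes* duplicates: it tracks the ids it has
    // already handled, processes only new ones, and confirms everything.
    class MyReceiver extends Actor {
      var handled = Set.empty[Long]

      def receive = {
        case Msg(id, payload) =>
          if (!handled(id)) {
            handled += id
            // ... actual (idempotent) processing of payload here ...
          }
          sender() ! Confirm(id) // confirm duplicates too, or they keep coming
      }
    }

The point being: the dedup check in the receiver isn't an optimization, it's load-bearing. The framework *will* redeliver after a crash, so the hypothetical "deliver some duplicates" test mode above would just be exercising, deliberately, a path that production hits anyway.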
I confess, it's kind of neat watching this evolution. The Akka support for this model is new (released just a few weeks ago) and explicitly experimental, but it seems fairly robust, and the infrastructure is filling in fast. My guess is that, within a few years, we're going to have a nicely thorough environment for quickly building massively scalable systems on the JVM stack...