[personal profile] jducoeur

Just came across an article on Ars Technica (yes, I'm behind): The intelligent intersection could banish traffic lights forever. It's neat stuff: basically, a researcher has designed a traffic-control system for autonomous vehicles, and demonstrated that such technology could enormously reduce how often cars have to stop at intersections -- not only speeding up travel times, but also improving fuel efficiency quite a bit.

All of which is great, but my Security Architect senses are pinging here. This is postulating an external server that talks to the cars on the road and tells them what to do. That is absolutely terrifying if you understand the typical state of Internet-of-Things security.

But let's put a positive spin on this. This system is at least 1-2 decades from deployment as described (since it assumes only autonomous vehicles on the road). We might be able to head off disaster by figuring out the obvious hacks in advance, so they can be designed around.

So here's the challenge: name ways that a hacker could abuse this system, and/or ways to ameliorate those weaknesses.

I'll start with a few obvious ones:

  • Base story: I (the hacker) send out signals spoofing the controller for traffic intersection T, allowing me to cause nightmarish havoc. Possible solution: traffic controllers are listed in some well-known registry, signed with public keys, so that their signals can be authenticated to prevent spoofing.
  • Assuming the above hacking isn't prevented: I time the signals sent to the cars, telling them all to hit the intersection at the same moment. Crash! Solution: as a belt-and-suspenders thing, cars must not completely trust the signal controllers. Their autonomous checks have to override those signals, to prevent crashes.
  • Reverse of the previous: I send out signals telling all the cars, in all directions, that the intersection is currently blocked by opposing traffic. The entire city quickly devolves into gridlock. Solution: good question. I got nothing.
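
To make the first two bullets concrete, here's a minimal Python sketch of what authenticated, replay-resistant signals plus a local override might look like. This is purely illustrative: it uses an HMAC with a shared secret as a stand-in for the asymmetric signatures (e.g. Ed25519 public keys in a registry) a real deployment would need, and all names and message formats are invented.

```python
import hmac
import hashlib
import json
import time

# Hypothetical registry. In a real system this would map intersection IDs
# to *public* keys, not shared secrets; HMAC is just a simple stand-in.
REGISTRY = {"intersection-T": b"demo-secret-key"}

MAX_AGE_SECONDS = 2.0  # reject stale (possibly replayed) signals

def sign(intersection_id, payload, now):
    """Controller side: produce a signed, timestamped message."""
    key = REGISTRY[intersection_id]
    msg = json.dumps({"id": intersection_id, "payload": payload, "ts": now},
                     sort_keys=True).encode()
    return msg, hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(intersection_id, msg, tag, now):
    """Car side: authenticate the signal and check freshness."""
    key = REGISTRY.get(intersection_id)
    if key is None:
        return False  # unknown controller: default-deny
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # bad signature: spoofed or tampered
    ts = json.loads(msg)["ts"]
    return (now - ts) <= MAX_AGE_SECONDS  # freshness check against replay

def safe_to_proceed(signal_ok, command, local_sensors_clear):
    # Belt-and-suspenders: even a correctly authenticated "go" never
    # overrides the car's own perception of the intersection.
    return signal_ok and command == "go" and local_sensors_clear
```

The key property is in `safe_to_proceed`: authentication gates the signal, but the car's own sensors gate the car, so the second attack (timing authenticated "go" signals to cause a crash) fails even if the first defense is breached.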

What else? I'm sure we can come up with more nightmarish scenarios, and possible solutions.

Yes, this may seem like overkill to think about now, but history says that, if you don't design the system around abuses, you will hurt forevermore. Security isn't something you add later: it should be baked into the designs from the get-go. (Which is why it accounts for a large fraction of Querki's architecture, despite the fact that we only have a couple hundred users so far...)

(no subject)

Date: 2017-06-08 03:17 pm (UTC)
From: [personal profile] laurion
A more likely solution involves the car-to-car (C2C or V2V [vehicle]) communication protocols currently in development. If those are refined enough it should be fairly straightforward to include the proximity and location of other vehicles into the existing autonomous framework and let the cars sort it out for themselves. Then you wouldn't need intersection controllers at all, or any lights, except for some simple systems for pedestrians and whatnot to signal that they desire to cross and need a break in traffic. Those systems could also send out data on the V2V system.
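
A minimal sketch of that decentralized idea: each car broadcasts a claim on the intersection, and every car independently computes the same crossing order from the same broadcasts, so no controller is needed. Everything here (field names, the tie-breaking rule) is invented for illustration, and it deliberately ignores the hard parts: cars lying about their ETA, clock skew, and authenticating the broadcasts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    vehicle_id: str      # stable identifier (in practice, a signed token)
    arrival_time: float  # self-reported ETA at the intersection box
    approach: str        # "N", "S", "E", "W"

def crossing_order(claims):
    # Earliest arrival goes first; ties are broken by vehicle_id so that
    # every car, running the same code on the same set of broadcasts,
    # computes an identical order with no central arbiter.
    ordered = sorted(claims, key=lambda c: (c.arrival_time, c.vehicle_id))
    return [c.vehicle_id for c in ordered]
```

A pedestrian-crossing request could plausibly be just another `Claim` in this scheme, which matches the suggestion above that the pedestrian systems ride on the same V2V channel.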

But of course, the security hackles should be raised in this scenario too. ;)

Improved Throughput for whom?

Date: 2017-06-08 06:37 pm (UTC)
From: [personal profile] etherial
Otherwise, I just tune my car to be more aggressive than yours, and that's great for me until we hit a terribly literal tragedy of the commons in the middle of some intersection.

Just like with #netneutrality, the corporations will fight tooth and nail the idea that John Q. Public has as much right to cross the intersection as someone willing to pay extra for permanent right of way.
Edited Date: 2017-06-08 06:38 pm (UTC)

(no subject)

Date: 2017-06-09 06:17 pm (UTC)
From: [personal profile] mneme
Isn't this the same problem you get with human drivers, though? Some are more aggressive, some are more safety-conscious and therefore less aggressive, etc.?

(no subject)

Date: 2017-06-09 06:37 pm (UTC)
From: [personal profile] drwex
Yes, it is. We regulate that behavior through a complex network of social expectations, repeated prisoner's dilemma exchanges, legal systems with gun-carrying enforcement, ability to remove driving privileges and on and on.

All of which is to say that if the autonomous system and its user aren't subject to those same networks there's gonna be hell to pay.

(no subject)

Date: 2017-06-09 10:05 pm (UTC)
From: [personal profile] mneme

I'm not certain this is true. Automated systems are more predictable and easier to test than human drivers, even if they are negotiating from a point of distrust.

That said, you're probably right wrt traffic lights, unless some intersections are restricted to cars running a specific interaction protocol (with other intersections being more open).

(no subject)

Date: 2017-06-08 03:40 pm (UTC)
From: [personal profile] dsrtao
Every city and town does its own traffic engineering. A PKI to hierarchically delegate and validate all of those users? Bigger than anything else ever tried. Probably doomed to failure. Assuming that spoofing can be done...

Here's a fun thing not to try at home: a beacon on your own car that claims to be a portable traffic signal. "Lane closing; keep right."

"I'm a police car, let me pass."

Waze likes to route vehicles through quiet neighborhoods as shortcuts. Drop a spoofer on the corner and the road is "closed" or "5 MPH" and stops being used as a shortcut.

"Invisible construction area, wait for [nonexistent] flagman."

(no subject)

Date: 2017-06-08 07:56 pm (UTC)
From: [personal profile] dsrtao
Oh, and obvs to me but not to everyone: the first failure mode of public key infrastructure is failure to authenticate a good actor; the second failure mode is authentication of an evil actor; the third failure mode is that revocation of authentication is really really hard, and the fourth is that all secrets leak. Five: asking permission takes too long, so default-deny becomes default-allow-if-authenticated.

The end result is that not only does every traffic engineer in every small town have the keys to change all the traffic lights in town, but they also share keys with the people one town over, and they send them through email, and keys from people who have moved to other jobs are valid ten years later...
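
For contrast, here's a sketch of what disciplined key handling would have to look like: every single lookup checks expiry and revocation, and anything not positively valid fails closed. The registry format and all the names are invented for illustration; the point is only that default-deny has to be enforced at every check, or the system drifts into exactly the default-allow-if-authenticated world described above.

```python
# Hypothetical key registry; entry format invented for illustration.
REVOKED = {"key-old-engineer"}

REGISTRY = {
    # key_id: (town, expires_at_unix)
    "key-springfield-2017": ("Springfield", 1_600_000_000),
    "key-old-engineer":     ("Shelbyville", 9_999_999_999),
}

def key_is_valid(key_id, now):
    entry = REGISTRY.get(key_id)
    if entry is None:
        return False         # unknown key: default-deny
    if key_id in REVOKED:
        return False         # explicitly revoked, even if not yet expired
    _town, expires_at = entry
    return now < expires_at  # expired keys fail closed
```

Of course, the failure modes listed above are mostly about humans routing around this kind of check (shared keys, emailed keys, nobody maintaining the revocation list), which no amount of code fixes on its own.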

The only reason we don't have a traffic apocalypse now is that (a) people notice when lights are funky, and they complain, and (b) you can't do it remotely, except in a few hardwired cities, where you still can't do it arbitrarily remotely.

(no subject)

Date: 2017-06-08 03:42 pm (UTC)
From: [personal profile] metahacker
Today you can buy the device that switches lights for oncoming emergency vehicles; the downside is that it’s obvious when you use it illegally since the flashy lights go on on the traffic light stalk. Extrapolate to the future and it gets ugly...

(no subject)

Date: 2017-06-09 12:27 am (UTC)
From: [personal profile] ilaine
I'm betting nobody is taking the bicyclists into account here.

(no subject)

Date: 2017-06-09 07:02 pm (UTC)
From: [personal profile] drwex
You're missing the really interesting threats here, in part because you're not looking at buggy or intentionally misbehaving systems. (ETA: which is totally fine, and I agree there are interesting challenges there, but situations in which people are deliberately malicious scare me more.) Let me tell you about some actual things that exist today.

There is already a proof of concept of an algorithm that generates an overlay for an image which is invisible to the human eye but prevents facial recognition systems from working. Now put that technology into the hands of someone malicious. Let's say I can put an overlay on your protest sign that you can't see, but when you wave that sign the vision algorithm sees a child running into the street.

If you think that's impossible then imagine the following scenario: for every deep learning algorithm that gets rewarded for doing what you want, imagine creating an opposition algorithm that gets rewarded for generating situations or images or whatever that cause the first algorithm to mess up.
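
That "opposition algorithm" idea can be illustrated with a toy, pure-Python version of the classic fast-gradient-sign trick: for a linear classifier, nudging every input feature slightly in the direction given by the sign of its weight can flip the decision, even though each individual change stays small. All the numbers and names here are invented for illustration; real attacks target deep networks, but the principle is the same.

```python
def score(weights, x):
    # A trivially simple linear classifier: positive score = class A,
    # negative score = class B.
    return sum(w * xi for w, xi in zip(weights, x))

def perturb(weights, x, eps):
    # Adversarial nudge: move each feature by +/- eps in whichever
    # direction increases the score (the sign of its weight).
    return [xi + eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]
```

With a small `eps`, every feature moves by an amount that could be invisible in an image, yet the sum of all those nudges is enough to push the score across the decision boundary.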
Edited Date: 2017-06-09 08:13 pm (UTC)

(no subject)

Date: 2017-06-11 10:20 pm (UTC)
From: [personal profile] cellio
How can this be fully automated so long as we have pedestrians, bicyclists, kids/animals/stuff-falling-from-trucks, and sudden emergencies like tire blow-outs in play? The traffic signal would have to be as smart as one of the autonomous vehicles, which is rather more work than a mere controller.

More broadly: not every thing on the streets will be part of the IoT, but they're still actors and obstacles.
