Just came across an article on Ars Technica (yes, I'm behind): The intelligent intersection could banish traffic lights forever. It's neat stuff: basically, a researcher has designed a traffic-control system for autonomous vehicles, and demonstrated that by using such technology we could enormously reduce how often you have to stop at intersections -- not only speeding up travel times, but improving fuel efficiency quite a bit.
All of which is great, but my Security Architect senses are pinging here. This is postulating an external server that talks to the cars on the road and tells them what to do. That is absolutely terrifying if you understand the typical state of Internet-of-Things security.
But let's put a positive spin on this. This system is at least 1-2 decades from deployment as described (since it assumes only autonomous vehicles on the road). We might be able to head off disaster by figuring out the obvious hacks in advance, so they can be designed around.
So here's the challenge: name ways that a hacker could abuse this system, and/or ways to ameliorate those weaknesses.
I'll start with a few obvious ones:
- Base story: I (the hacker) send out signals spoofing the controller for traffic intersection T, allowing me to cause nightmarish havoc. Possible solution: traffic controllers are listed in some well-known registry of public keys, so that their signals can be signed and authenticated to prevent spoofing (sketched in code after this list).
- Assuming the above hack isn't prevented: I time the signals sent to the cars, telling them all to hit the intersection at the same moment. Crash! Solution: as a belt-and-suspenders measure, cars must not completely trust the signal controllers; their own autonomous collision checks have to override those signals to prevent crashes.
- Reverse of the previous: I send out signals telling all the cars, in all directions, that the intersection is currently blocked by opposing traffic. The entire city quickly devolves into gridlock. Solution: good question. I got nothing.
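To make the first two concrete, here's a rough sketch (in Python) of the registry-plus-signatures idea, combined with the belt-and-suspenders rule that the car's own sensors get the final say. The registry, message format, and field names are all made up for illustration -- this isn't any real V2I standard.

```python
# Minimal sketch of the "registry of signed controllers" idea above.
# The registry lookup, message format, and field names are illustrative
# assumptions, not part of any real vehicle-to-infrastructure standard.
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# Hypothetical well-known registry: intersection ID -> controller public key.
REGISTRY: dict[str, Ed25519PublicKey] = {}

@dataclass
class SignalMessage:
    intersection_id: str
    payload: bytes        # e.g. "slot 42, northbound, proceed", however encoded
    signature: bytes

def verify_controller_message(msg: SignalMessage) -> bool:
    """Accept a signal only if it verifies against the registered key."""
    pub = REGISTRY.get(msg.intersection_id)
    if pub is None:
        return False          # unknown controller: ignore it
    try:
        pub.verify(msg.signature, msg.payload)
        return True
    except InvalidSignature:
        return False          # spoofed or corrupted: ignore it

def safe_to_obey(msg: SignalMessage, local_sensors_say_clear: bool) -> bool:
    """Belt and suspenders: even an authentic signal never overrides the
    car's own collision checks."""
    return verify_controller_message(msg) and local_sensors_say_clear

# Toy demo: register one controller and sign a message with its private key.
if __name__ == "__main__":
    controller_key = Ed25519PrivateKey.generate()
    REGISTRY["intersection-T"] = controller_key.public_key()
    payload = b"slot=42 direction=N action=proceed"
    msg = SignalMessage("intersection-T", payload, controller_key.sign(payload))
    print(safe_to_obey(msg, local_sensors_say_clear=True))   # True
    print(safe_to_obey(msg, local_sensors_say_clear=False))  # False
```

Of course, signatures only prove *who* sent a message, not that it's telling the truth -- which is why the third scenario is the one I'm stuck on.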
What else? I'm sure we can come up with more nightmarish scenarios, and possible solutions.
Yes, this may seem like overkill to think about now, but history says that, if you don't design the system around abuses, you will hurt forevermore. Security isn't something you add later: it should be baked into the design from the get-go. (Which is why it accounts for a large fraction of Querki's architecture, despite the fact that we only have a couple hundred users so far...)
(no subject)
Date: 2017-06-08 03:17 pm (UTC)
But of course, the security hackles should be raised in this scenario too. ;)
(no subject)
Date: 2017-06-08 06:05 pm (UTC)
------
But Gregory Stuart Pettigrew hit on the real challenge. The thing is, for this to achieve the desired goal -- fewer stops and improved throughput -- you probably need to require that everyone use the same legally-mandated algorithm. Otherwise, I just tune my car to be more aggressive than yours, and that's great for me until we hit a terribly literal tragedy of the commons in the middle of some intersection.
The logistics of getting all cars on the same algorithm are nightmarish. Agreeing on a *protocol* that you must abide by isn't too terrible, but getting all the nuances of an algorithm precisely to spec can be challenging, given that everybody's on different chipsets.
And the algorithm itself is likely to be challenging, to say the least -- there are lots of obvious failure modes that result in cars going, "After you, my dear Alphonse" at each other.
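To give a concrete flavor of it: below is the sort of tie-breaking rule such a protocol would have to nail down exactly, and which every vendor would have to implement identically. The fields and the specific rule are invented for illustration.

```python
# Sketch of one tie-breaking rule a mandated intersection protocol might pin
# down, so two cars can't "After you, Alphonse" each other forever.
# The fields and the specific rule are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    vehicle_id: str        # globally unique, e.g. derived from the VIN
    arrival_time_ms: int   # claimed arrival at the stop line
    lane: int

def has_right_of_way(mine: Claim, other: Claim) -> bool:
    """Deterministic, symmetric tie-break: earlier arrival wins; exact ties
    fall back to an arbitrary-but-fixed ordering so both cars agree."""
    if mine.arrival_time_ms != other.arrival_time_ms:
        return mine.arrival_time_ms < other.arrival_time_ms
    # A total order on (lane, vehicle_id) guarantees exactly one winner.
    return (mine.lane, mine.vehicle_id) < (other.lane, other.vehicle_id)

a = Claim("VIN-AAA", 1000, lane=1)
b = Claim("VIN-BBB", 1000, lane=2)
assert has_right_of_way(a, b) != has_right_of_way(b, a)  # never both yield
```

The rule itself is trivial; the hard part is every implementation agreeing on it, down to the edge cases, on every chipset.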
Keep in mind, I wrote one of the first major peer-to-peer videogames, so this is a topic near and dear to my heart. I came away from three years of P2P projects concluding that:
a) Almost anything you can do with a central server, you can do distributed.
b) Doing it distributed and *well* is usually much harder than building a centralized solution.
Could it be done? Yes. Do I trust modern society to not fuck it up? Really not. (Whereas I think the centralized version is a *bit* more plausible, albeit with its own problems.)
------
Improved Throughput for whom?
Date: 2017-06-08 06:37 pm (UTC)
Just like with #netneutrality, the corporations will fight tooth and nail against the idea that John Q. Public has as much right to cross the intersection as someone willing to pay extra for permanent right of way.
(no subject)
Date: 2017-06-09 06:37 pm (UTC)
All of which is to say that if the autonomous system and its user aren't subject to those same networks there's gonna be hell to pay.
(no subject)
Date: 2017-06-09 10:05 pm (UTC)
I'm not certain this is true. Automated systems are more predictable and easier to test than human drivers, even if they are negotiating from a point of distrust.
That said, you're probably right wrt traffic lights unless there are lights restricted to cars running a specific interaction protocol (with other intersections being more open).
(no subject)
Date: 2017-06-08 03:40 pm (UTC)
Here's a fun thing not to try at home: a beacon on your own car that claims to be a portable traffic signal. "Lane closing; keep right."
"I'm a police car, let me pass."
Waze likes to route vehicles through quiet neighborhoods as shortcuts. Drop a spoofer on the corner and the road is "closed" or "5 MPH" and stops being used as a shortcut.
"Invisible construction area, wait for [nonexistent] flagman."
(no subject)
Date: 2017-06-08 06:09 pm (UTC)
Yeah -- I'm actually wondering if this is the cause of a street I've been noticing recently, which is along the route that Google prefers to take me on to get to
(no subject)
Date: 2017-06-08 07:56 pm (UTC)
The end result is that not only does every traffic engineer in every small town have the keys to change all the traffic lights in town, but they also share keys with the people one town over, and they send them through email, and keys from people who have moved to other jobs are valid ten years later...
The only reason we don't have a traffic apocalypse now is that (a) people notice when lights are funky, and they complain, and (b) you can't do it remotely, except in a few hardwired cities, and even there you can't do it from just anywhere.
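If we're positing that well-known registry anyway, the obvious mitigation is to make keys expire and be revocable by default, so a key that wandered off in somebody's email ten years ago simply stops working. A rough sketch, with the schema and policy invented purely for illustration:

```python
# Rough sketch of a registry entry that refuses stale or revoked controller
# keys, so credentials from people who moved on years ago stop working.
# The schema and the lifetime policy here are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class RegisteredKey:
    key_id: str
    issued_at: datetime
    expires_at: datetime
    revoked: bool = False

MAX_LIFETIME = timedelta(days=365)   # force at least annual rotation

def key_is_usable(entry: RegisteredKey, now: Optional[datetime] = None) -> bool:
    now = now or datetime.now(timezone.utc)
    if entry.revoked or now >= entry.expires_at:
        return False
    # Also reject keys that were issued with an over-long lifetime at all.
    return (entry.expires_at - entry.issued_at) <= MAX_LIFETIME

issued = datetime(2017, 6, 1, tzinfo=timezone.utc)
stale = RegisteredKey("townA-signal-3", issued, issued + timedelta(days=3650))
print(key_is_usable(stale))  # False: a ten-year key fails the lifetime check
```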
(no subject)
Date: 2017-06-09 07:02 pm (UTC)
Right now there's a proof of concept of an algorithm that generates an overlay for an image which is invisible to the human eye but prevents facial recognition systems from working. Now let's put that technology into the hands of someone malicious. Let's say I can put an overlay on your protest sign that you can't see with your human eyes, but when you wave that sign the vision algorithm sees a child running into the street.
If you think that's impossible then imagine the following scenario: for every deep learning algorithm that gets rewarded for doing what you want, imagine creating an opposition algorithm that gets rewarded for generating situations or images or whatever that cause the first algorithm to mess up.
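As a toy illustration of that "opposition" idea (the model and numbers here are made up; real attacks do this against deep networks, FGSM-style):

```python
# Toy illustration of the "opposition algorithm" idea from the comment above:
# nudge an input in exactly the direction that hurts the victim model, while
# keeping the change tiny. The victim here is a trivial linear scorer, purely
# for illustration; real attacks do the same thing with the gradients of a
# deep network.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)                  # the "victim" model: score = w . x
x = rng.normal(size=16)                  # an input it currently classifies
true_label = 1.0 if w @ x > 0 else -1.0  # call the current answer "correct"

def margin(x_in: np.ndarray) -> float:
    """Positive = model agrees with the true label; negative = fooled."""
    return float(true_label * (w @ x_in))

# For a linear model, the gradient of the score with respect to the input is
# just w, so stepping each coordinate slightly against the true label's
# direction is the worst-case perturbation for a given per-coordinate budget.
epsilon = 0.05
x_adv = x - epsilon * true_label * np.sign(w)

print("max perturbation :", float(np.max(np.abs(x_adv - x))))  # == epsilon
print("margin before    :", margin(x))      # positive
print("margin after     :", margin(x_adv))  # always smaller; with a large
                                             # enough epsilon it goes negative
```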
(no subject)
Date: 2017-06-11 10:20 pm (UTC)
More broadly: not every thing on the streets will be part of the IoT, but they're still actors and obstacles.