Just came across an article on Ars Technica (yes, I'm behind): The intelligent intersection could banish traffic lights forever. It's neat stuff: basically, a researcher has designed a traffic-control system for autonomous vehicles, and demonstrated that such technology could dramatically reduce how often cars have to stop at intersections -- not only speeding up travel times, but also improving fuel efficiency quite a bit.
All of which is great, but my Security Architect senses are pinging here. This is postulating an external server that talks to the cars on the road and tells them what to do. That is absolutely terrifying if you understand the typical state of Internet-of-Things security.
But let's put a positive spin on this. This system is at least 1-2 decades from deployment as described (since it assumes only autonomous vehicles on the road). We might be able to head off disaster by figuring out the obvious hacks in advance, so they can be designed around.
So here's the challenge: name ways that a hacker could abuse this system, and/or ways to ameliorate those weaknesses.
I'll start with a few obvious ones:
- Base story: I (the hacker) send out signals spoofing the controller for traffic intersection T, allowing me to cause nightmarish havoc. Possible solution: traffic controllers are listed in some well-known registry, signed with public keys, so that their signals can be authenticated to prevent spoofing.
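That registry idea can be sketched concretely. Below is a minimal Python illustration (all names and keys hypothetical). For brevity it uses HMAC, which runs with just the standard library; a real deployment would use asymmetric signatures (e.g. Ed25519), so that cars only need the controller's *public* key and the registry never has to distribute signing secrets.

```python
import hashlib
import hmac
import json

# Hypothetical registry mapping intersection IDs to verification keys.
# In a real system these would be public keys fetched from a signed,
# well-known registry; a shared HMAC secret stands in here so the
# sketch is runnable with only the standard library.
REGISTRY = {"intersection-T": b"controller-T-secret-key"}

def sign_signal(intersection_id: str, payload: dict) -> dict:
    """Controller side: attach an authentication tag to a signal."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(REGISTRY[intersection_id], body, hashlib.sha256).hexdigest()
    return {"id": intersection_id, "payload": payload, "tag": tag}

def verify_signal(msg: dict) -> bool:
    """Car side: reject any signal whose tag doesn't match the registry key."""
    key = REGISTRY.get(msg["id"])
    if key is None:
        return False  # unknown controller -- treat as spoofed
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

# A genuine signal verifies; a tampered one doesn't.
signal = sign_signal("intersection-T", {"slot": 42, "action": "proceed"})
assert verify_signal(signal)

forged = dict(signal, payload={"slot": 42, "action": "stop"})
assert not verify_signal(forged)
```

The point of the registry is key distribution: a car passing through a strange city needs a trustworthy way to learn which key belongs to which intersection, which is exactly the problem certificate authorities solve on the web.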
- Assuming the above hack isn't prevented: I time the signals sent to the cars, telling them all to hit the intersection at the same moment. Crash! Solution: as a belt-and-suspenders thing, cars must not completely trust the signal controllers. The car's own sensors and collision-avoidance logic must be able to override those signals, to prevent crashes.
- Reverse of the previous: I send out signals telling all the cars, in all directions, that the intersection is currently blocked by opposing traffic. The entire city quickly devolves into gridlock. Solution: good question. I got nothing.
What else? I'm sure we can come up with more nightmarish scenarios, and possible solutions.
Yes, this may seem like overkill to think about now, but history says that, if you don't design the system around abuses, you will hurt forevermore. Security isn't something you add later: it should be baked into the design from the get-go. (Which is why it accounts for a large fraction of Querki's architecture, even though we have only a couple hundred users so far...)