The approaching New Age of Phishing
Dec. 4th, 2022 03:14 pm
This just came up in conversation on Mastodon, and is worth mentioning here.
Everyone is posting examples of "conversations" with ChatGPT at the moment. The results aren't always right, but it's surprising how often they're close enough. And when they are wrong, I've noticed that they are often wrong in the same sorts of ways that a slightly over-confident human might be wrong -- not obviously "ah, you are a computer", more "no, you're confusing two similar problems". It's approaching Imitation Game quality faster than I had expected, with far less of the glitchiness that the image-generation bots are still prone to.
But a fair number of people are still going, "Enh, so it's a slightly better chatbot. Doesn't really matter that much. Who cares?"
I strongly suspect that there are two audiences who will care: criminals and state actors.
Consider:
- Most people live much of their lives publicly online, and have lots of miscellaneous more-or-less public information.
- Moreover, their social networks are basically trivial to derive from public information. (See the sketch after this list.)
- We now have AIs that are extremely good at sucking in and collating massive amounts of information.
- We now have AIs that are rapidly passing the uncanny valley in their text conversations, and becoming pretty convincingly human in their responses.
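How trivial is "trivial"? Here's a minimal sketch -- the interaction data is invented, and it assumes the networkx package, but the edges could come from any platform that exposes follower lists or public replies:

```python
# A minimal sketch of how easily a social graph falls out of public data.
# The interaction pairs below are invented; real ones would come from any
# platform exposing follower/following lists or public replies.
import networkx as nx

# (account, account-they-publicly-interact-with) pairs
public_interactions = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "alice"),
    ("carol", "alice"), ("bob", "dave"), ("dave", "bob"),
]

g = nx.DiGraph()
g.add_edges_from(public_interactions)

# Mutual interaction is a decent proxy for an actual relationship.
target = "alice"
mutuals = [n for n in g.successors(target) if g.has_edge(n, target)]
print(f"{target}'s likely close contacts: {mutuals}")
```

Nothing here is secret, and none of the inputs are sensitive on their own -- that's rather the point.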
There are two extremely obvious use cases for this set of bullet points: spear-phishing and psyops manipulation.
The slam-dunk one is spear-phishing. Take an AI like ChatGPT and train it on individuals: what we know about them, and how they relate to each other. Then tell it to generate emails from Person A to Person B, using public information about Person A to trick Person B.
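To make the shape of that concrete, here is a hypothetical sketch, framed as the white-hat mirror of the same attack: the sort of authorized phishing-awareness test that security teams already run against their own staff. It assumes the current openai Python client; the model name, the profile fields, and the simulated_phish helper are all placeholders I made up, not anything that exists.

```python
# Hypothetical sketch: assembling an impersonation prompt for an
# *authorized* phishing-awareness exercise. Everything here is a
# placeholder -- the point is how little machinery the pipeline needs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulated_phish(sender_facts: str, target_facts: str, writing_samples: str) -> str:
    """Draft a test email imitating a (consenting) colleague's style."""
    prompt = (
        "You are helping run an authorized phishing-awareness test.\n"
        f"Imitate the writing style of these samples:\n{writing_samples}\n\n"
        f"Public facts about the sender being imitated: {sender_facts}\n"
        f"Public facts about the recipient: {target_facts}\n"
        "Write a short, casual email asking the recipient to open the "
        "training link [TRAINING-LINK]."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The unsettling part is that the criminal version differs from this only in who is running it and where the link points.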
Some helpful context: when most people think about "hackers", they envision people breaking through firewalls and manipulating the computers themselves. There's some of that -- but a very large fraction of hacking is nothing like that. Instead, it's all about "social engineering": figuring out how to get a person to do what you want. The all-about-the-computers hacking often happens after a social-engineering attack opens the front door in the first place. Anything that makes social-engineering attacks more successful is seriously dangerous.
When I pointed this out, a friend on Mastodon experimentally tried kicking the tires, and even some trivial quick attempts with hand-seeded information produced at least vaguely plausible emails: not great, but not awful. Now imagine an AI that is trained specifically for this task: not just on Person A's public facts, but on samples of their writing. (Many people have established that ChatGPT is at least decent at imitating writing styles.) Imagine receiving emails that aren't just the usual boilerplate, but sound like the person they are pretending to be from. Are you confident you would never fall for that?
No, this wouldn't be trivial, but the opportunity is huge, and a forward-thinking criminal boss can surely see that it would be worth investing a few million dollars in building such a thing. (Heck, you could probably cloak it in respectability with some well-chosen use cases.) Done right, it could probably recoup the development cost before anyone even realized what was happening.
The political manipulation case is less obvious, but arguably even scarier. We've already seen what Russian hackers can do with the information available on Facebook and some made-up insane bullshit. Now imagine an AI trained on established psyops techniques and demagogic rhetoric. Have it create accounts on Facebook, friend people, and spread panic and hate in real time, targeting that panic precisely at what its "friends" have shown they care about.
That would be a heavier lift to get right, but with the resources of a state actor working on it I don't see any reason why it wouldn't be doable.
And the thing is, if I can see these opportunities, I'd be astonished if there aren't already people working on them.
So button up -- I suspect the next few years are going to be very interesting. I'm not especially worried about Skynet taking over the world (yet), but I think there is reason to be concerned about industrial-scale manipulation of individuals, for theft, ransomware attacks, political manipulation, and so on. I have no idea how we might counter that, but it's time to start seriously thinking about it.
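To start that thinking: one partial technical layer already exists, namely sender authentication. An AI can fake a person's style, but it can't forge a DKIM signature for their domain. Here's a deliberately crude sketch, assuming the receiving mail server stamps a standard Authentication-Results header (RFC 8601):

```python
# A crude sketch of one partial countermeasure: flag mail whose claimed
# From: domain isn't backed by a DKIM pass. Assumes the receiving server
# adds a standard Authentication-Results header (RFC 8601).
import email
import email.utils

def looks_unauthenticated(raw_message: bytes) -> bool:
    msg = email.message_from_bytes(raw_message)
    _, from_addr = email.utils.parseaddr(msg.get("From", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    auth_results = msg.get("Authentication-Results", "").lower()
    # Very rough: require a DKIM pass and the claimed domain to appear
    # in the authentication results. Real filters parse this properly.
    return "dkim=pass" not in auth_results or from_domain not in auth_results
```

Of course, this does nothing against a compromised real account or a plausible lookalike domain -- which is exactly why I doubt the technical layer alone will save us.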
Thoughts? Am I off-base here? I can't see any reason offhand why both of the above aren't probably going to be feasible, but I've only been pondering this for an hour. Ideas about how we might fight back?