
Date: 2017-06-09 07:02 pm (UTC)
From: [personal profile] drwex
You're missing the really interesting threats here, in part because you're not looking at buggy or intentionally misbehaving systems. So let me tell you about some things that actually exist today. (ETA: Focusing on non-malicious systems is totally fine, and I agree there are interesting challenges there, but situations in which people are deliberately malicious scare me more.)

There is already a proof-of-concept algorithm that generates an overlay for an image that is invisible to the human eye but prevents facial recognition systems from working. Now put that technology into the hands of someone malicious. Say I can put an overlay on your protest sign that your human eyes can't see, but when you wave that sign, the vision algorithm sees a child running into the street.
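
For the curious, the core trick is simpler than it sounds: you use the model's own gradients to decide how to nudge each pixel toward the output the attacker wants. Here's a minimal sketch of the basic idea (a targeted Fast Gradient Sign Method, in the style of Goodfellow et al.), assuming a PyTorch image classifier; the model, the target_label tensor, and the epsilon budget are illustrative stand-ins, not taken from any particular demo:

    import torch
    import torch.nn.functional as F

    def targeted_fgsm(model, image, target_label, epsilon=0.03):
        # Targeted FGSM: nudge every pixel by at most epsilon toward
        # whatever class the attacker wants the model to report
        # (e.g. "child in the street" instead of "protest sign").
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), target_label)
        loss.backward()
        # Step AGAINST the gradient so the loss on the attacker's
        # target class goes down; at small epsilon the change is
        # imperceptible to a human looking at the same image.
        perturbed = image - epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()

The published demos use fancier optimization to survive printing and camera angles, but this is the skeleton they share.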

If you think that's impossible, then imagine the following scenario: for every deep-learning algorithm that gets rewarded for doing what you want, imagine creating an opposition algorithm that gets rewarded for generating situations, or images, or whatever, that cause the first algorithm to mess up.
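
That setup isn't hypothetical either: it's essentially how generative adversarial networks work, and "adversarial training" runs exactly this game on purpose to harden a model. A rough sketch of one round of the game, again assuming PyTorch; the classifier and adversary modules, the optimizers, and the budget bound are all hypothetical stand-ins:

    import torch
    import torch.nn.functional as F

    def opposition_round(classifier, adversary, images, labels,
                         opt_cls, opt_adv, budget=0.03):
        # Adversary's turn: its loss goes down exactly when the
        # classifier's loss on the perturbed images goes UP.
        # (Assumes 'adversary' outputs a tensor shaped like 'images'.)
        perturbed = (images + budget * torch.tanh(adversary(images))).clamp(0, 1)
        adv_loss = -F.cross_entropy(classifier(perturbed), labels)
        opt_adv.zero_grad()
        adv_loss.backward()
        opt_adv.step()

        # Classifier's turn: it is rewarded for answering correctly
        # anyway, on the adversary's freshly updated perturbations.
        with torch.no_grad():
            perturbed = (images + budget * torch.tanh(adversary(images))).clamp(0, 1)
        cls_loss = F.cross_entropy(classifier(perturbed), labels)
        opt_cls.zero_grad()
        cls_loss.backward()
        opt_cls.step()

The two sides take alternating turns, so whichever one stops improving loses ground; the unsettling part is that the loop works just as well when the adversary is a person who wants your system to fail.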