You're missing the really interesting threats here, in part because you're not looking at buggy or intentionally misbehaving systems. Let me tell you about some actual things that exist today. (ETA: what you do cover is totally fine, and I agree there are interesting challenges there, but situations in which people are deliberately malicious scare me more.)
There is already a proof of concept of an algorithm that generates an overlay for an image, invisible to the human eye, that prevents facial recognition systems from working. Now put that technology into the hands of someone malicious. Say I can put an overlay on your protest sign that you can't see with your own eyes, but when you wave that sign, the vision algorithm sees a child running into the street.
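To make this concrete, here's a minimal sketch of that kind of invisible overlay using the fast gradient sign method, one published way of doing it. This assumes PyTorch and a generic differentiable classifier; the model, images, and labels are hypothetical stand-ins, not any specific deployed system.

```python
# Sketch of an imperceptible adversarial overlay via the fast gradient
# sign method (FGSM). `model`, `image`, and `true_label` are hypothetical
# placeholders; any differentiable image classifier would do.
import torch
import torch.nn as nn

def fgsm_overlay(model, image, true_label, epsilon=0.01):
    """Return a small per-pixel perturbation that pushes `model` away
    from `true_label`. `epsilon` bounds how much each pixel changes,
    which is why the overlay can be invisible to a human while still
    flipping the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    return epsilon * image.grad.sign()

# Hypothetical usage, assuming `model`, `img`, and `label` already exist:
# adversarial_img = (img + fgsm_overlay(model, img, label)).clamp(0, 1)
```

The point is how little machinery this takes: one gradient computation against the victim model, and the resulting overlay is bounded to a change too small for humans to notice.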
If you think that's impossible, then imagine the following scenario: for every deep learning algorithm that gets rewarded for doing what you want, create an opposition algorithm that gets rewarded for generating situations, images, or whatever else causes the first algorithm to mess up.
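That opposition setup is easy to sketch as well. Below is a toy adversarial training loop, assuming PyTorch: an attacker network is rewarded exactly when the classifier it opposes is wrong. The tiny architectures and the random stand-in data are my own placeholders, not anyone's real system.

```python
# Toy "opposition algorithm": the attacker's reward is the classifier's
# loss, so it learns to generate inputs that make the classifier fail.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
attacker = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 28 * 28), nn.Tanh())

clf_opt = torch.optim.Adam(classifier.parameters())
atk_opt = torch.optim.Adam(attacker.parameters())
epsilon = 0.05  # keeps the attacker's overlay small and hard to see

def attacked(x):
    # Add the attacker's bounded overlay to the original image.
    return (x + epsilon * attacker(x).view_as(x)).clamp(0, 1)

for _ in range(100):
    x = torch.rand(32, 1, 28, 28)    # stand-in images
    y = torch.randint(0, 10, (32,))  # stand-in labels

    # Attacker step: maximize the classifier's loss on perturbed images.
    atk_loss = -nn.functional.cross_entropy(classifier(attacked(x)), y)
    atk_opt.zero_grad()
    atk_loss.backward()
    atk_opt.step()

    # Classifier step: minimize loss on those same adversarial images.
    clf_loss = nn.functional.cross_entropy(classifier(attacked(x).detach()), y)
    clf_opt.zero_grad()
    clf_loss.backward()
    clf_opt.step()
```

Notice the two reward signals are exact opposites: one network's objective is the negation of the other's. That min-max structure is the whole trick, and it's the same idea behind generative adversarial networks.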