Adversarial designs, as this kind of anti-AI tech is known, are meant to "trick" object detection algorithms into seeing something different from what's there, or not seeing anything at all. In some cases, these designs are made by tweaking parts of an image just enough that the AI can no longer read it correctly. The change might be imperceptible to a human, but to a machine vision algorithm it can be very effective: in 2017, researchers fooled computers into thinking a turtle was a rifle.
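To give a rough sense of how such small tweaks are computed, here is a minimal sketch of one well-known approach, the fast gradient sign method. It is not the specific technique behind the designs or the turtle study described above; the model, inputs, and `epsilon` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the
    classifier's loss, so the change is nearly invisible to a human
    but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the gradient,
    # then clamp back to the valid pixel range [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small `epsilon`, the perturbed image looks identical to the original to a person, yet the classifier's output can change completely, which is the same basic effect the physical adversarial designs aim for.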