Artificial Intelligence

MIT researchers fool Google’s image-recognition software

Security systems that rely on AI-based image recognition can be manipulated, and scientists from the renowned Massachusetts Institute of Technology (MIT) have now proven it.

15 Nov. 2017
Source: labsix

Current image-recognition systems based on neural networks are susceptible to targeted manipulation. Researchers from MIT's labsix team demonstrated this using Google's image-recognition system Inception V3. One of the objects they generated in the course of their research was a 3D model of a turtle that humans unmistakably recognize as a turtle, but which the artificial intelligence (AI) behind Inception V3 classified as a rifle from almost every viewing angle.
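
The labsix researchers achieved this robustness with an approach they call expectation over transformation: rather than optimizing a perturbation for a single view of an object, they optimize it over a whole distribution of rotations and other transformations. The snippet below is only a rough sketch of that idea for a 2D image, using a pretrained Inception v3 from the torchvision library; the target class, rotation range, step size and iteration count are illustrative assumptions, not the parameters labsix actually used.

# Rough illustrative sketch of the "expectation over transformation" idea:
# optimize a small perturbation so the wrong classification survives even
# when the image is randomly rotated. Target class, rotation range, step
# size and iteration count are assumptions for illustration only.
import torch
import torchvision.models as models
import torchvision.transforms as T
import torchvision.transforms.functional as TF

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def eot_attack(image, target_class, steps=200, eps=0.05, lr=0.01):
    # image: tensor of shape (1, 3, 299, 299) with values in [0, 1]
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        # Sample a random viewpoint change (here simply a rotation) so the
        # perturbation is pushed to work across many viewing angles.
        angle = float(torch.empty(1).uniform_(-30.0, 30.0))
        adv = torch.clamp(image + delta, 0.0, 1.0)
        logits = model(normalize(TF.rotate(adv, angle)))
        loss = torch.nn.functional.cross_entropy(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation small so a human still sees the original image.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return torch.clamp(image + delta, 0.0, 1.0)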

In a further test, the AI classified a subtly manipulated 2D image of a tabby cat as a bowl of guacamole; only once the image had been rotated slightly did it correctly classify it as a cat again.
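
Such fragility can be probed directly: feed the same manipulated picture to the classifier at a few rotation angles and check whether the predicted class flips back. The sketch below does exactly that with a pretrained Inception v3 from torchvision; the file name and the angles are purely hypothetical placeholders.

# Hypothetical check: does a (pre-made) manipulated image keep fooling the
# classifier once it is rotated slightly? File name and angles are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from PIL import Image

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(342), T.CenterCrop(299), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("adversarial_tabby_cat.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    for angle in (0.0, 5.0, 10.0):
        pred = model(TF.rotate(img, angle)).argmax(dim=1).item()
        print(f"rotation {angle:4.1f} deg -> predicted ImageNet class index {pred}")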

In standard applications such as the automated archiving and cataloging of photos, the fact that AI-based image recognition sometimes cannot cope with images from the real world merely leads to problems that a human has to correct afterwards. In security-related areas, however, criminals could exploit these weaknesses to cause considerable damage.
