Thales' AI specialist explains how an AI can be deceived on a war ground

A video titled “New Robots Makes Soldiers Obsolete (Bosstown Dynamics)”, which went viral in October 2019, showed a robot soldier shooting at targets in the middle of the desert while humans hit it. Some Internet users believed it was a real robot soldier designed by the US Army. The video was in fact a montage made by the production company Corridor, parodying the YouTube channel of Boston Dynamics, which regularly publishes videos of its robots' feats.

Robot soldiers of this kind do not exist in the field. But defense is a sector that has readily integrated the latest technological advances, such as artificial intelligence (AI). Armies around the world are developing AI systems to assist them in their missions, to the point that there are now tools specifically designed to fool AI.

Christophe Meyer, director of research in AI and data at Thales, explained to Business Insider France how it is now possible to deceive an image-recognition AI, for example:

“Concretely, in the military field, I can take an image of a military tank and pass it off as an ambulance with this software, which is nevertheless extremely capable compared to you and me. How does it work? We modify the inputs very slightly, so that a human does not even see the difference, but it has a significant impact on the neural network and therefore on the answer the AI gives,” he explained on the sidelines of a conference entitled “Intelligence in Law”, organized in Paris on November 15, 2019 by the law firm Shearman & Sterling together with the United Nations Commission on International Trade Law (UNCITRAL) and the Boston Consulting Group.
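The effect Meyer describes can be sketched on a toy model. The snippet below uses a hypothetical linear "classifier" (a stand-in for a real neural network; the labels, image size, and numbers are all invented for illustration): a per-pixel change of 1%, invisible to a human, flips the score because the tiny changes all push in the same direction and accumulate across thousands of pixels.

```python
import numpy as np

# Toy illustration (not Thales' system): a linear "classifier" scores an
# image; score > 0 means "tank", score < 0 means "ambulance".
rng = np.random.default_rng(0)
n = 100 * 100                       # a 100x100 grayscale image
w = rng.normal(size=n)              # the model's weights

x = np.full(n, 0.5)                 # a plain gray image...
x += (10.0 - w @ x) / (w @ w) * w   # ...adjusted so the clean score is exactly +10

eps = 0.01                          # per-pixel change, invisible to a human
x_adv = x - eps * np.sign(w)        # nudge every pixel against the weights

print(w @ x)                        # +10.0 -> "tank"
print(w @ x_adv)                    # large negative -> "ambulance"
print(np.max(np.abs(x_adv - x)))    # 0.01: no pixel moved by more than 1%
```

Real attacks on deep networks work on the same principle, using the network's gradients instead of fixed weights to choose the direction of each tiny nudge.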


AI can make mistakes that no human would

The head of AI research at Thales clarified that it is possible “to deceive image recognition systems by modifying pixels located on the periphery of the image”. He then gave a simple example: “you have an image with a cat right in the middle, and the software tells you it is 100% a cat. You start editing pixels at the edges of the image, not even on the cat itself. And the system tells you it's 90% a cat, then 80%... and finally, that it's a frog. This is where it gets maddening, because I haven't even modified the pixels that make up the cat.”
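This edge-pixel variant can also be sketched on a toy linear model (again hypothetical, with invented sizes and thresholds): the perturbation is restricted to a 5-pixel frame around the image, the center pixels are never touched, and the score still flips.

```python
import numpy as np

# Toy sketch of the edge-pixel effect (hypothetical model, not Thales').
# A linear "cat score" over a 100x100 image; only border pixels are touched.
rng = np.random.default_rng(1)
h, wd = 100, 100
weights = rng.normal(size=(h, wd))

img = np.full((h, wd), 0.5)
img += (10.0 - np.sum(weights * img)) / np.sum(weights**2) * weights  # clean score: +10

border = np.ones((h, wd), dtype=bool)
border[5:-5, 5:-5] = False                  # True only on a 5-pixel frame

eps = 0.02
img_adv = img - eps * np.sign(weights) * border  # perturb the frame only

print(np.sum(weights * img))      # +10 -> confidently "cat"
print(np.sum(weights * img_adv))  # negative, yet the center never changed
```

The ~1,900 border pixels are enough: each contributes a tiny push in the same direction, and the sum overwhelms the clean score.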

In fact, the image recognition system takes into account the context, the environment in which the cat is located. “If I process an image where we see a cow and put it in the sea, the system will say it is a seal, because of the context, whereas any human would tell you it is a cow in the sea,” said Christophe Meyer. So AIs can make “dumb” mistakes that no human would. This leads those who develop AI to take into account that “these systems, however capable, can also be fooled,” he added.

To guard against this, it is necessary to run several image recognition systems at the same time and have them vote on whether the image shows a military tank or not, a soldier or not, for example.
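A minimal sketch of that voting defense, with hypothetical recognizer outputs:

```python
from collections import Counter

def majority_vote(labels):
    """Return the label most recognizers agree on (simple majority vote)."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

# Three independent recognizers inspect the same image; even if an
# adversarial input fools one of them, the ensemble answer holds.
votes = ["military tank", "military tank", "ambulance"]
print(majority_vote(votes))  # "military tank"
```

The idea is that an adversarial perturbation crafted against one model rarely transfers perfectly to all the others, so the majority answer is harder to subvert.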


The director of research in AI and data at Thales also considered that another complex issue raised by the use of AI deserves the attention and reflection of the actors involved (the end user, i.e. the army; the prescriber, i.e. the Directorate General of Armaments; and the manufacturer who designs the AI): who takes responsibility for the final decision, the AI or the human (and which human?), knowing that some situations do not leave enough time for a human to weigh the pros and cons.

“As for the decision to launch a nuclear ballistic strike on an enemy nation, one can reasonably assume that it will not be made by an AI. And that is a deliberation where I have time to weigh the pros and cons before pressing the button. But in other cases I won't necessarily have time. Typically, when an autonomous car needs to brake urgently: I will not have time to ask a human driver whether they agree to brake, otherwise there will be an accident,” detailed Christophe Meyer. Not to mention this question: “At what point do I delegate a decision to the AI? Normally, I do not delegate, except in exceptional circumstances. But in that case, what are those circumstances?”
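The trade-off Meyer describes can be sketched as a simple delegation rule. This is an entirely hypothetical illustration, not an actual doctrine or Thales design: the human decides whenever the situation leaves time to deliberate, and irreversible decisions are never delegated at all.

```python
HUMAN_REACTION_S = 1.5  # hypothetical: seconds a human needs to assess and act

def decision_authority(time_available_s, irreversible):
    """Toy rule for who decides: the human whenever time permits, and
    always the human when the decision is irreversible (e.g. firing)."""
    if irreversible:
        return "human"
    if time_available_s < HUMAN_REACTION_S:
        return "ai"  # e.g. emergency braking in an autonomous car
    return "human"

print(decision_authority(0.4, irreversible=False))   # "ai": brake now
print(decision_authority(600.0, irreversible=True))  # "human": never delegated
```

The hard open question from the interview is precisely the first branch's boundary: which circumstances count as exceptional enough to let the machine act alone.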

In any case, France assures that it will never leave the decision to shoot – that someone can kill another person – to an AI, no matter the circumstances.
