AI’s New Frontier – “Countering Enemy Countermeasures”
By Kris Osborn, Warrior Maven (Security Television Network)
September 18, 2021 (Security Television Network) — Scientists are teaching AI to recognize disguised weapons on the battlefield
(Washington, D.C.) Here’s What You Need to Know: An AI-capable system is only as effective as the database behind it.
An enemy tank crew hiding from overhead surveillance drones might deliberately park in heavily wooded terrain, where trees obscure any clear view of the ground. Better yet, the crew would shut down the engine to avoid emitting a heat signature detectable by infrared sensors.
Potential adversaries increasingly understand, however, that advanced artificial intelligence (AI)-enabled computer algorithms can account for all of these variables, compare them against one another and determine what the sensors are likely detecting, drawing on a previously compiled database of information.
But what if an automated or AI-capable sensor system encountered something that was not part of its compiled database, however limitless and vast that database might seem? Certainly, AI programs can draw on enormous databases, comparing new input against millions of variables and previously compiled information. Perhaps the sensor has discerned something with a similar set of variables in the past and can therefore identify the specifics of the tank through comparative analysis.
This kind of complexity is exactly what potential adversaries seek to exploit: they are devising methods of spoofing advanced algorithms, essentially confusing them so the AI cannot make determinations. Computer scientists with Booz Allen Hamilton explain that enemies are developing countermeasures specifically intended to confuse or “throw off” the algorithms engineered to detect them.

One Booz Allen Hamilton scientist offered a simple example: an adversary might put a poster or a large piece of cardboard on top of a tank to change how it looks. An AI-capable sensor trained on data identifying the shapes, structures and signatures of a tank might be at a loss to make an accurate determination, simply because it has encountered something it has never seen before. That is precisely the intent: to present the sensor with a signal or rendering that prevents accurate surveillance. Because an AI-capable system is only as effective as its database, algorithms can be challenged to accurately process variables, images, objects or signals the system has never seen before.
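In the research literature, the cardboard-on-the-tank trick is a physical analogue of what are called adversarial or evasion attacks. As a rough illustration, and not a description of any Booz Allen system, the sketch below shows the classic fast gradient sign method (FGSM) in PyTorch: a small, deliberately crafted perturbation pushes a classifier toward a wrong answer even though the image looks unchanged to a human. Here `model`, `image` and `label` are stand-ins for a trained classifier and its data.

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that pushes the model's loss uphill (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon,
    # then clamp back to the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```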
Given these complexities, Booz Allen computer scientists are seeking to develop a new generation of holistic, AI-enabled sensing that can simultaneously account for a wide range of variables, searching for patterns, indications or similarities related to a larger overall picture. If a tank is hidden beneath cardboard, such a system could pick up otherwise disconnected cues, clues or indications, weigh them in relation to one another, and still produce an accurate identification. In effect, it represents an attempt by computer scientists in the field of AI to develop countermeasures to the enemy’s countermeasures and stay ahead of the competitive curve.
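One simple way to picture this kind of holistic fusion, offered purely as a toy model rather than Booz Allen’s actual design, is to combine several weak, roughly independent cues into a single score. The cue names, probabilities and log-odds fusion rule below are all illustrative assumptions.

```python
# Toy "holistic" late fusion: combine weak, disconnected detection cues
# into one score via weighted log-odds. All cue values are made up.
import numpy as np

def fuse_cues(cue_probs, weights=None):
    """Fuse per-cue detection probabilities, assuming roughly independent cues."""
    p = np.clip(np.asarray(cue_probs, dtype=float), 1e-6, 1 - 1e-6)
    w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
    log_odds = np.sum(w * np.log(p / (1 - p)))
    return 1.0 / (1.0 + np.exp(-log_odds))

# The shape cue is fooled by the cardboard (0.2), but track marks (0.8),
# shadow geometry (0.7) and disturbed foliage (0.75) still flag the tank.
print(fuse_cues([0.2, 0.8, 0.7, 0.75]))  # ~0.875 despite the fooled cue
```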
Ed Raff, a senior computer scientist at Booz Allen Hamilton, told Warrior that while AI cannot fully do this quite yet, “it’s something research is working on.”
“There are multiple possible future horizons. Our most effective approaches to date involve attacking our models ourselves and adding those attacks to the training data,” Raff said.
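What Raff describes matches the technique the research field calls adversarial training: attack your own model, then train on the resulting examples. A minimal sketch follows, continuing the hypothetical FGSM function above and again assuming stand-in `model`, `optimizer` and data; it is not Booz Allen’s implementation.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # 1) Attack the current model to generate hard, spoof-like examples.
    adv_images = fgsm_attack(model, images, labels, epsilon)
    # 2) Train on those adversarial examples so the model learns to resist them.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intuition is straightforward: by generating spoofing attempts in-house and folding them into the training data, the model has in effect already “seen” a version of the enemy’s countermeasure before encountering it in the field.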
Dr. James Hall, drhall@security20.com, (202) 607-2421