- RisingAttack quietly modifies key features in an image, deceiving AI without changing how the image looks
- Vision systems in autonomous cars could be blinded by almost invisible image changes
- The attack fools top AI models used in cars, cameras and health diagnostics
Artificial intelligence is increasingly integrated into technologies that rely on visual recognition, from autonomous vehicles to medical imaging, but this growing utility also brings potential security risks, experts warn.
A new method called RisingAttack could threaten the reliability of these systems by silently manipulating what the AI sees.
In theory, this could cause it to miss or misidentify objects, even when the images appear unchanged to human observers.
Targeted deception through minimal image changes
Developed by researchers at North Carolina State University, RisingAttack is a form of adversarial attack that subtly modifies visual input to deceive AI models.
The technique does not require significant or obvious changes to the image; instead, it targets the specific features within an image that are essential to recognition.
“This requires a certain amount of computing power, but it allows us to make very small, targeted changes to the key features that make the attack succeed,” said Tianfu Wu, an associate professor of electrical and computer engineering and an author of the study.
These carefully designed changes are completely undetectable to human observers; the manipulated images look entirely normal to the naked eye.
“The end result is that two images may look identical to human eyes, and we could clearly see a car in both images,” said Wu.
“But because of RisingAttack, the AI would see a car in the first image but would not see a car in the second image.”
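For readers unfamiliar with adversarial examples, the sketch below illustrates the general principle using a standard technique, projected gradient descent on a pretrained torchvision ResNet-50. It is not RisingAttack itself, whose algorithm the article does not describe: a loss gradient guides tiny pixel changes, kept within a small budget so the image looks unchanged while the classifier's prediction degrades.

```python
# Minimal sketch of a generic adversarial perturbation (PGD). This is NOT the
# RisingAttack algorithm, which the article does not detail; it only shows the
# broader idea of nudging an image within an imperceptibly small budget until
# a pretrained classifier no longer recognizes the original class.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Standard ImageNet normalization, applied inside the loop so the perturbation
# itself stays in ordinary [0, 1] pixel space.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def perturb(image, true_class, eps=2 / 255, alpha=0.5 / 255, steps=10):
    """image: (1, 3, 224, 224) tensor in [0, 1]; true_class: original label index."""
    target = torch.tensor([true_class])
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(normalize(adv)), target)
        grad, = torch.autograd.grad(loss, adv)
        # Take a small step that increases the loss for the true class,
        # then project back into the eps-ball around the original image.
        adv = adv.detach() + alpha * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv  # visually indistinguishable from `image`, yet may be misclassified
```

The budget `eps` of 2/255 per pixel is far below what a person can notice on screen, which is why the clean and perturbed images look identical even when the model's output flips.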
This could compromise safety-critical systems such as those in autonomous cars, which rely on vision models to detect traffic signs, pedestrians and other vehicles.
If the AI is manipulated into not seeing a stop sign or another car, the consequences could be serious.
The team tested the method against four widely used vision architectures: ResNet-50, DenseNet-121, ViT-B and DEiT-B. All four were successfully manipulated.
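As an illustration only, the snippet below loads publicly available pretrained versions of those four architecture families through the timm library and compares their predictions on a clean image and its perturbed counterpart; the checkpoints and evaluation pipeline are assumptions for this sketch, not details from the study.

```python
# Hedged sketch: checking whether a perturbation flips the prediction of the
# four architecture families named in the study, using timm checkpoints
# (assumptions here, not the researchers' exact models or setup).
import timm
import torch

ARCHITECTURES = ["resnet50", "densenet121",
                 "vit_base_patch16_224", "deit_base_patch16_224"]

@torch.no_grad()
def compare_predictions(clean, perturbed):
    """clean, perturbed: (1, 3, 224, 224) tensors, already preprocessed with
    whatever normalization the chosen checkpoints expect."""
    for name in ARCHITECTURES:
        model = timm.create_model(name, pretrained=True).eval()
        clean_cls = model(clean).argmax(dim=1).item()
        adv_cls = model(perturbed).argmax(dim=1).item()
        print(f"{name}: clean={clean_cls} perturbed={adv_cls} "
              f"fooled={clean_cls != adv_cls}")
```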
“We can influence the AI’s ability to see any of the top 20 or 30 targets it was trained to identify,” said Wu, citing common examples such as cars, bicycles, pedestrians and stop signs.
Although the current focus is on computer vision, the researchers are already considering wider implications.
“We are now determining how effective the technique is at attacking other AI systems, such as large language models,” Wu noted.
The long-term objective, he added, is not simply to expose vulnerabilities but to guide the development of more secure systems.
“Moving forward, the goal is to develop techniques that can successfully defend against such attacks.”
As attackers continue to discover new ways to interfere with AI behavior, the need for stronger digital safeguards becomes more urgent.
Via TechXplore