
New AI Attack Technique, RisingAttacK, Can Manipulate What Computer Vision Systems See


Researchers from North Carolina State University have developed a new method for attacking artificial intelligence (AI) computer vision systems, allowing adversaries to control what the AI "sees" in an image. The technique, named RisingAttacK, has been shown to manipulate some of the most widely used AI vision models, posing significant risks to applications ranging from autonomous vehicles to medical diagnostics.

Adversarial attacks alter input data in ways that are imperceptible to humans but can significantly change an AI system's decisions. A hacker could, for example, manipulate an AI system so it fails to identify traffic signals, pedestrians, or other vehicles, with dangerous consequences for self-driving cars. Similarly, malicious code installed in an X-ray machine could cause an AI system to produce incorrect medical diagnoses.

Tianfu Wu, an associate professor of electrical and computer engineering and co-corresponding author of the research paper, explained the motivation behind RisingAttacK. "We wanted to find an effective way of hacking AI vision systems to highlight their vulnerabilities, particularly in contexts that affect human health and safety," he said. "Identifying these weaknesses is crucial for developing robust defenses."

RisingAttacK operates through a series of steps designed to make minimal but targeted alterations to an image. It first identifies all the visual features in the image and determines which ones are most critical to the attack's goal; if the objective is to prevent the AI from recognizing a car, for instance, the technique focuses on the features the AI needs in order to identify a car. It then calculates the AI's sensitivity to changes in these key features, which allows very subtle modifications that cause significant disruption (an illustrative sketch of this kind of sensitivity-guided attack appears after the paper details below).

The researchers tested RisingAttacK against four popular vision models: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. In every case, the technique successfully manipulated the AI's perception: the altered images appeared identical to human observers, yet the AI saw different objects, or none at all. An AI trained to recognize cars, pedestrians, bicycles, and stop signs, for instance, could be fooled into missing any of these objects through small, targeted changes.

Wu emphasized the versatility of RisingAttacK, noting that it can influence the AI's ability to detect any of the top 20 or 30 targets it was trained on. "The technique is powerful enough to manipulate the AI's perception across a broad range of categories," he said. "This includes not only objects in images but potentially other types of AI models, such as large language models."

The implications are far-reaching and underscore the need for robust security measures in AI vision systems. As AI becomes increasingly integrated into safety-critical applications, identifying and mitigating such vulnerabilities is paramount. The research team is now working on defense mechanisms against RisingAttacK and similar adversarial techniques. "Our next steps involve determining the effectiveness of RisingAttacK against other AI systems and devising methods to counteract its impact," Wu stated.

The paper, titled "Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian," will be presented at the International Conference on Machine Learning (ICML 2025) in Vancouver, Canada, on July 15.
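The paper's title points to the core mechanism: perturbations are built iteratively from linear combinations of the right singular vectors of a Jacobian relating image pixels to the model's outputs. The Python sketch below illustrates that general idea with PyTorch and a pretrained ResNet-50. It is a minimal, hypothetical illustration under stated assumptions, not the authors' RisingAttacK implementation; the helper names and parameters (`jacobian_of_logits`, `svd_direction_attack`, `num_singular_dirs`, `step_size`, `num_iters`) and the input file `car.jpg` are invented for demonstration, and ImageNet normalization is omitted to keep the sketch short.

```python
# Hedged sketch: a generic, sensitivity-guided adversarial perturbation built
# from right singular vectors of a model Jacobian. This is an illustration of
# the general idea only, NOT the published RisingAttacK algorithm.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def jacobian_of_logits(model, x, class_ids):
    """Jacobian of selected output logits with respect to the input pixels."""
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0))[0, class_ids]            # shape: (k,)
    rows = []
    for i in range(logits.numel()):
        grad = torch.autograd.grad(logits[i], x, retain_graph=True)[0]
        rows.append(grad.flatten())
    return torch.stack(rows)                                 # shape: (k, n_pixels)

def svd_direction_attack(model, x, target_classes,
                         num_singular_dirs=5, step_size=1e-3, num_iters=10):
    """Iteratively nudge the image along the Jacobian's top right singular vectors."""
    x_adv = x.clone()
    for _ in range(num_iters):
        J = jacobian_of_logits(model, x_adv, target_classes)
        # Right singular vectors span the pixel-space directions the selected
        # outputs are most sensitive to.
        _, _, Vh = torch.linalg.svd(J, full_matrices=False)
        direction = Vh[:num_singular_dirs].sum(dim=0).reshape(x.shape)
        # Step against the sensitive directions to suppress the targeted classes.
        x_adv = (x_adv - step_size * direction).clamp(0.0, 1.0).detach()
    return x_adv

if __name__ == "__main__":
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    image = preprocess(Image.open("car.jpg").convert("RGB"))  # hypothetical input file
    # Target the model's current top predictions (the "key features" in the article).
    with torch.no_grad():
        top = model(image.unsqueeze(0)).topk(5).indices[0]
    adv = svd_direction_attack(model, image, top)
    print("max pixel change:", (adv - image).abs().max().item())
```

The right singular vectors matter because they identify the directions in pixel space to which the chosen outputs respond most strongly, so tiny steps along them can shift the prediction while leaving the image visually unchanged to a human observer.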
Industry experts stress the importance of this research, as it underscores the ongoing challenges in maintaining the integrity and reliability of AI systems. They view RisingAttacK as a wake-up call for developers and policymakers, highlighting the need for continuous research and vigilance in the field of AI security. North Carolina State University, known for its contributions to advanced technology and cybersecurity, remains committed to advancing the frontiers of AI research and improving its safety.
