CS Researchers Highlight Vulnerabilities in AI-powered Wireless Networks

8/24/2023 Bruce Adams

A groundbreaking study led by Professors Deepak Vasisht and Gagandeep Singh from the University of Illinois Urbana-Champaign Department of Computer Science (CS) revealed significant vulnerabilities in next-generation wireless systems that rely on artificial intelligence (AI). Their research shows that small amounts of noise transmitted by a malicious user can disrupt services provided by AI-powered wireless networks.

Gagandeep Singh (left) and Deepak Vasisht

In an age where AI is gaining prominence across economic sectors, there's a growing need to understand its limitations. Vasisht said, “There's a lot of hype around AI in all fields. But there's still a lack of understanding of when these models work and when they don't. And when they don't work, how bad can it get?”

These questions are increasingly important as industry and academia rush to apply modern machine learning tools to wireless networks. Wireless networks have become critical societal infrastructure, driving interpersonal interactions, powering smart cities, revolutionizing digital agriculture, reshaping education, and advancing healthcare. Beyond communication, their capabilities extend to vital functions like sensing, a cornerstone for innovations such as autonomous vehicles. Any disruption in these networks doesn't just mean inconvenience; it translates into financial setbacks and personal losses.

Singh emphasized the research's urgency: “As AI revolutionizes wireless networks, we must prioritize real-world robustness and reliability over mere test accuracy. Our work is the first to shed light on the tangible threats of adversarial attacks on ML-based wireless infrastructures. We aspire to drive the community towards creating more resilient models fit for real-world deployment.”

Their pioneering paper, "Exploring Practical Vulnerabilities of Machine Learning-based Wireless Systems," unveiled at the USENIX NSDI 2023 conference in April, delves into how slight interference or noise can compromise AI-integrated wireless systems like 4G, 5G, and Wi-Fi. The paper was led by Illinois CS graduate students Zikun Liu, Changming Xu, and Emerson Sie, who Vasisht noted were “the leaders for this research.”

Industry and academia increasingly see AI as a tool to empower the next generation of wireless networks. AI brings new capabilities to these networks: it increases data rates, simplifies network management, and reduces delays. There is also growing interest in using AI to improve the sensing abilities of wireless networks. For instance, imagine if your cellular base station could pinpoint your exact location. Such precision would be invaluable for many applications, especially autonomous vehicles trying to determine their position on the road.

There is a push to move wireless networks to the cloud, with heavy investments from the two most prominent cloud service providers, Microsoft and Amazon. This shift makes it easier to deploy machine learning-driven wireless networks. These developments raise a critical question: how reliable are these AI-driven networks compared to traditional networks? And when they go wrong, how wrong can they go?

To find out, the team built RAFA (RAdio Frequency Attack), the first hardware-implemented adversarial attack platform against an ML-based wireless system. The device operates using a single antenna software-defined radio and does not need real-time access to the client or the base station in the attack setup. RAFA senses an ongoing communication on the wireless medium and introduces small amounts of noise, or perturbation, to the medium that disrupts state-of-the-art ML models. RAFA can reduce communication efficiency and increase location errors by several meters in tested machine learning models.
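To make the idea concrete, here is a minimal, illustrative sketch of how a gradient-based adversarial perturbation can be computed against an ML model. This is not the RAFA implementation from the paper; the model, tensors, and power budget below are assumptions for illustration only, using a hypothetical network that maps a received wireless signal to a location estimate.

import torch

# Illustrative sketch only; not the authors' RAFA implementation.
# "model" is assumed to be a differentiable network that maps a received
# signal tensor to a predicted location.
def adversarial_noise(model, received_signal, true_location, power_budget=0.01):
    signal = received_signal.clone().detach().requires_grad_(True)
    predicted_location = model(signal)
    # Loss measures how far the model's estimate is from the true location.
    loss = torch.nn.functional.mse_loss(predicted_location, true_location)
    loss.backward()
    # Step in the direction that increases localization error, scaled down
    # so the injected noise stays within a small power budget.
    return power_budget * signal.grad.sign()

# Hypothetical usage: the attacker transmits this low-power noise over the air.
# attacked_signal = received_signal + adversarial_noise(model, received_signal, true_location)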

RAFA (RAdio Frequency Attack) device

Vasisht offered an analogy, “Imagine a slight buzzing noise that tricks your smart home device into perceiving a command. This is akin to low-power noise, often overlooked by networks. Yet, this innocuous transmission can lead to erratic network behavior.”

Increasingly, it doesn’t suffice for networks to be intelligent; they need to be robust and resilient to errors. Imagine an autonomous vehicle misinterpreting its lane on the road due to signal discrepancies from base stations. Such errors underline that while AI-driven wireless systems have advanced capabilities, their resilience is still a concern. Vasisht emphasized, "Our goal is not to show that we should not use AI. Our goal is to demonstrate that there is more work to be done before we put these things out there.”

You can watch a video of the paper presentation at USENIX NSDI 2023 here.



This story was published August 24, 2023.