New DoD Grant to Help Combat Online Misinformation Attacks

5/25/2021 Laura Schmitt, Illinois CS

Tarek Abdelzaher is the principal investigator of a new DoD Basic Research grant that aims to develop a mathematical theory to understand the transmission pathways of misinformation and ways to counteract it with accurate information.


Illinois CS professor Tarek Abdelzaher and a multidisciplinary team of academic researchers are combining the power of computer science, cognitive modeling, autonomous systems, and adaptive control to help our country combat the weaponization of online misinformation.

Tarek Abdelzaher

The team recently received a $1.5 million, three-year grant from the Department of Defense Basic Research Office to develop a mathematical theory to understand the transmission pathways of misinformation and ways to counteract it with accurate information.

“This project is very important because the Internet, Web, and social media essentially democratized information broadcast,” said Abdelzaher, the Sohaib and Sara Abbasi Professor of Computer Science. “There’s currently no unified mathematical theory to understand how information spread and human beliefs interact in a world where anyone can spread bad information [on a large scale].”

Abdelzaher and his team will address two possible ways to defend against bad information while protecting individuals' right to free speech. One is a consumer-side filter, analogous to a home water-purification device: people can opt in to software that helps expose misinformation.

The second is analogous to chlorination at a water treatment plant: accurate information would be injected into the information stream to counteract malicious content.

“[Our] mathematical theory will enable designing these protection tasks more effectively, especially in the face of a determined adversary,” he said.

According to Abdelzaher, the team’s mathematical theory can also help guide the right approach to dealing with misinformation when it’s detected. For example, it can provide insights so military cyber warriors will know whether to expose the bad information, argue against it, or ignore it entirely so it will die on its own.

“There are a lot of considerations here and the theory can help inform what is the best course of action to minimize the damage of bad information,” he said.

An embedded systems expert, Abdelzaher will apply physical systems models to social networks to understand how information flows.
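The article does not describe the team's actual models, but the general idea of treating information flow like a physical spreading process can be illustrated with a toy epidemic-style simulation. The sketch below is purely illustrative: the graph, parameters, and "unaware / sharing / recovered" states are assumptions for demonstration, not the project's methodology.

```python
import random

def simulate(n=200, avg_degree=6, p_spread=0.3, p_recover=0.2,
             steps=50, seed=1):
    """Toy SIR-style misinformation diffusion on a random graph.

    Nodes are 'U' (unaware), 'S' (sharing misinformation), or
    'R' (recovered, no longer sharing). Illustrative only.
    """
    rng = random.Random(seed)
    # Build a random undirected graph (Erdos-Renyi style).
    p_edge = avg_degree / (n - 1)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                neighbors[i].append(j)
                neighbors[j].append(i)

    state = ["U"] * n
    state[0] = "S"  # seed one misinformation sharer
    history = []
    for _ in range(steps):
        nxt = state[:]
        for i in range(n):
            if state[i] == "S":
                # Each unaware neighbor may start sharing.
                for j in neighbors[i]:
                    if state[j] == "U" and rng.random() < p_spread:
                        nxt[j] = "S"
                # Sharers eventually stop on their own.
                if rng.random() < p_recover:
                    nxt[i] = "R"
        state = nxt
        history.append(state.count("S"))
    return history

curve = simulate()
print("peak number of sharers:", max(curve))
```

In models like this, interventions such as filtering (lowering the spread probability) or injecting corrective information (raising the recovery rate) change the shape of the curve, which hints at why a predictive mathematical theory would be useful for choosing between defenses.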

Carnegie Mellon University psychology research professor Christian Lebiere (co-PI) will model the impact of information operations on how people think and make decisions. In previous work, he has used cognitive modeling to reproduce bias, imperfect memory, and irrationality—attributes that are critical to understanding belief dynamics.

Georgia Tech aerospace engineering professor Evangelos Theodorou will apply his stochastic optimal control algorithms, which have advanced autonomous vehicle technology, to predict and defend against adversarial manipulation of beliefs on social networks. His insights will help the team develop ethical tools for protecting against misinformation attacks.

And Illinois mechanical engineering professor Naira Hovakimyan (co-PI) will apply her foundational L1 adaptive control framework as a means to counterattack, circumvent, or sidestep an enemy's misinformation attack.

