Preventing the Spread of Misinformation Online

11/26/2018 Allie Arp, Coordinated Science Lab



Social media bot accounts have been a hot topic around the country since Russian-controlled accounts were suspected to have influenced the 2016 election. These accounts, regardless of origin, share inaccurate and often politically divisive information.

Illinois Computer Science Professor and Willett Faculty Scholar Tarek Abdelzaher is working with his students on a unique way to identify these bots among millions of existing Twitter accounts. Rather than focus on individual account actions, Abdelzaher looks at the larger picture of misinformation campaigns.

“Sometimes if you look at one account by itself it looks normal, but if you look at it compared to others, you realize it’s a bot,” said Abdelzaher, who is also a member of the Coordinated Science Laboratory. “You realize, geez, for some reason these groups of accounts tweet exactly the same things at exactly the same time. The odds of humans being so in sync that they tweet the exact same thing at the exact same time are practically zero. It’s a race against escalating attacker sophistication, where more complex statistical analysis is used to expose more complex bot group behaviors.”

The intentions of the bot controllers vary, but the accounts are frequently used to intimidate, motivate, or recruit people, or to change people’s minds through mass messaging. That pattern is what prompted Abdelzaher to look at groupings of identical tweets, rather than single accounts, to identify bots. The strategy led Abdelzaher’s team to help uncover an ISIS information campaign last year, showing that the technology has significant military applications as well as social ones.
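To make the group-level idea concrete, here is a minimal sketch of how one might flag accounts that post identical text in near-perfect synchrony. This is not Abdelzaher’s actual system; the thresholds, data layout, and function names below are assumptions for illustration only.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed thresholds for illustration:
SYNC_WINDOW_SECONDS = 10   # how close in time counts as "the exact same time"
MIN_GROUP_SIZE = 5         # how many accounts posting in sync looks suspicious

def find_suspected_bot_groups(tweets):
    """tweets: iterable of (account_id, text, timestamp) tuples, where
    timestamp is a datetime. Returns (text, accounts) pairs for messages
    that many distinct accounts posted almost simultaneously."""
    clusters = defaultdict(list)
    for account, text, ts in tweets:
        # Normalize whitespace and case so identical copies cluster together.
        clusters[" ".join(text.lower().split())].append((account, ts))

    suspected = []
    for text, posts in clusters.items():
        accounts = {account for account, _ in posts}
        if len(accounts) < MIN_GROUP_SIZE:
            continue
        # Check whether all copies are packed into a tight time window.
        posts.sort(key=lambda post: post[1])
        spread = (posts[-1][1] - posts[0][1]).total_seconds()
        if spread <= SYNC_WINDOW_SECONDS:
            suspected.append((text, sorted(accounts)))
    return suspected

# Usage: five accounts tweeting the same text within two seconds get flagged.
t0 = datetime(2018, 11, 26, 12, 0, 0)
feed = [(f"acct_{i}", "Vote now! #example", t0 + timedelta(seconds=i % 3))
        for i in range(5)]
print(find_suspected_bot_groups(feed))
```

Any single account in this example looks unremarkable on its own; only the comparison across accounts, the same message within seconds, reveals the coordination.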

“The goal of the research was to identify these types of propaganda sources and isolate them,” Abdelzaher said. “We wanted to expose them for what they are. Or at least show that the information they are sharing isn’t genuine; it’s coming from a set of bots in such-and-such foreign country.”

Originally, the technology behind the bot detection was used to report on social events. The tool, Apollo, also created by Abdelzaher, would gather information about an event on social media and provide a condensed summary of what was happening. Of course, not everything on the internet is true, so additional research was necessary to determine the credibility of the information.

The technology – which has so far been used only on Twitter – relies solely on public information. Anything the tool collects is already searchable by any Twitter user and is gathered through Twitter’s public interface, in compliance with the company’s user agreements. But the hope is that it can eventually be used across multiple social media platforms.

“The main value-added function is that it cleans up the information a bit, sort of like a spam filter for your email or a Brita for your water,” Abdelzaher said. “That's where all the research is. So, for example, if a disaster occurs and people publicly ask for help by posting on Twitter, Apollo can report that, but it'll be smart enough to detect if something does not seem genuine, in which case it may discard it as misinformation/spam.”
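The article does not describe how Apollo decides what to discard. As a rough sketch of the spam-filter analogy, and reusing the hypothetical find_suspected_bot_groups() from above, a downstream cleanup step might simply drop posts from flagged accounts before summarizing an event:

```python
def filter_feed(tweets, suspected_bot_accounts):
    """Hypothetical Apollo-style cleanup step: drop posts attributed to
    accounts that the group analysis flagged, before building a summary.

    tweets: iterable of (account_id, text, timestamp) tuples.
    suspected_bot_accounts: iterable of flagged account ids.
    """
    flagged = set(suspected_bot_accounts)
    return [(account, text, ts) for account, text, ts in tweets
            if account not in flagged]
```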

Social media platforms, and the people who use them for malicious purposes, aren’t going away anytime soon. But with better detection research and technology, distinguishing real users from bots will make it more difficult for nonhuman accounts to spread their message.

This project is funded by the Army Research Lab’s Internet of Battlefield Things project.


See the original CSL story.


