A team of professors from the University of Texas at Arlington is working to create a program that will help stop the spread of fake news.
“What we’re talking about – the President uses that term to refer to media in general – but what we’re talking about are stories that are entirely fictitious or largely fictitious,” said Mark Tremayne, Ph.D., an assistant professor of communications at UT Arlington.
The project to root out fake news on social media, titled "Bot vs. Bot: Automated Detection of Fake News Bots," will eventually produce a computer program designed to alert people when the posts they are seeing, or even the comments on those posts, were likely generated by automated social media accounts.
The researchers have made assurances that their motivations are not political.
“This is not targeted for or against any one party or any one candidate,” said Christoph Csallner, Ph.D., an associate professor in the Computer Science and Engineering Department. “This project is really about national security. You could imagine some real threats [being spread through ‘fake news’ posts] like another country trying to start confusion among residents, or the military.”
“At some point this could be considered a danger to democracy or a danger to national security if these platforms, Facebook and the other social networks, are being used as propaganda tools,” Dr. Csallner added.
Part of the difficulty in combating automated efforts to spread ‘fake news’ is that bots can push a deliberately false post across multiple platforms in a single instant, where it can be seen, and potentially shared, by millions of real people.
“The stuff can be generated automatically by a program,” Dr. Tremayne said. “So you don’t know as you are scrolling through, especially with the comments, you don’t know which one is an actual person [who] sat there and typed them out and which ones were just spit out by some algorithm. And wouldn’t you like to know?”
The challenge, Dr. Csallner said, will be sorting through the massive amount of content that is published on social media and keying in on indicators that increase the likelihood that any particular post was made by a bot.
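To make the idea of bot-likelihood indicators concrete, here is a minimal, purely illustrative sketch in Python. The features (account age, posting rate, duplicated text, regularity of posting intervals) and the weights are assumptions chosen for this example; they are not the UT Arlington team's actual method or model.

```python
from dataclasses import dataclass


@dataclass
class Post:
    account_age_days: int      # how old the posting account is
    posts_per_day: float       # average posting rate for the account
    duplicate_text: bool       # identical text seen on other accounts
    interval_variance: float   # variance in seconds between posts
                               # (bots tend to post on a regular rhythm)


def bot_likelihood(p: Post) -> float:
    """Combine simple heuristic indicators into a score in [0, 1].

    Higher scores mean the post looks more bot-like. The thresholds
    and weights below are illustrative assumptions.
    """
    score = 0.0
    if p.account_age_days < 30:
        score += 0.25   # very new accounts are more suspicious
    if p.posts_per_day > 50:
        score += 0.30   # humans rarely sustain this posting rate
    if p.duplicate_text:
        score += 0.30   # copy-pasted content across accounts
    if p.interval_variance < 5.0:
        score += 0.15   # machine-regular posting rhythm
    return min(score, 1.0)


likely_bot = Post(account_age_days=3, posts_per_day=200,
                  duplicate_text=True, interval_variance=1.2)
likely_human = Post(account_age_days=900, posts_per_day=4,
                    duplicate_text=False, interval_variance=4000.0)

print(bot_likelihood(likely_bot))
print(bot_likelihood(likely_human))
```

A production system would learn such weights from labeled data rather than hard-coding them, and would sift far larger volumes of content, but the basic shape (many weak indicators combined into one likelihood) matches the approach the researchers describe.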
The team is in the early stages of developing the program and expects to have a working version within a year.