$7.5 million grant to guard against AI-driven misinformation

September 09, 2024

Indiana University researchers will lead a multi-institutional team of experts in areas such as informatics, psychology, communications and folklore to assess the role that artificial intelligence may play in strengthening the influence of online communications — including misinformation and radicalizing messages — under a $7.5 million grant from the U.S. Department of Defense.

The project is one of 30 recently funded by the department’s Multidisciplinary University Research Initiative, which supports basic defense-related research projects.

“The deluge of misinformation and radicalizing messages poses significant societal threat,” said lead investigator Yong-Yeol Ahn, a professor in the IU Luddy School of Informatics, Computing and Engineering in Bloomington. “Now, with AI, you’re introducing the potential ability to mine data about individual people and quickly generate targeted messages that appeal to them — applying big data to individuals — which could cause even greater disruptions than we’ve already experienced.”

The insights from the research — on the interplay between AI, social media and online misinformation — could potentially equip the government to counter foreign influence campaigns and radicalization, he said.

The five-year effort will unite experts across a wide range of disciplines, including psychology and cognitive science; communications; folklore and storytelling; artificial intelligence and natural language processing; complex systems and network science; and neurophysiology. The six IU researchers on the project, all from the Luddy School, are also affiliated with IU’s Observatory on Social Media. Other collaborators include a media expert at Boston University, a psychologist at Stanford University and a computational folklorist at the University of California at Berkeley.

Specifically, Ahn said, the project will investigate the role of a sociological concept called “resonance” on people’s receptiveness to certain messages. This refers to the idea that people’s opinions are influenced more strongly by material that resonates with them through emotional content or narrative framing that appeals to existing beliefs or cognitive biases, such as political ideology, religious convictions or cultural norms.

Resonance can be used to create messages that bridge gaps between groups, as well as fuel greater polarization, Ahn added. However, AI’s ability to rapidly generate text-, image- or video-based content has the potential to escalate the power of these messages — for good or ill — by tailoring content to people on the individual level.

“This is a basic science project; everything we’re doing is completely open to the public,” Ahn said. “But it’s also got a lot of potential applications, not only to understanding the role of AI on misinformation and disinformation campaigns, such as foreign influence on elections, but also topics such as how can you foster trust in AI, similar to a pilot’s faith in the reliability of AI navigation systems. There are a lot of important questions about AI that hinge on our understanding of its intersection with fundamental psychological theories.”

The team will also be applying AI technology to support its research, he added. The use of AI to create “model agents” — or virtual people who share information and react to messages inside a simulation — will help the researchers more accurately model the way information flows between groups, as well as the effect that information has on the “people” inside the model, he said.

The team also plans to study real-life humans’ physiological responses to online information, both AI-generated and not, with tools such as heart rate monitors to better understand the influence of “resonance,” he said.

“There have been a lot of major developments in the area of model agents in the past few years,” Ahn said.

Other researchers have been able to create model agents that “debate” each other in a virtual space, then measure the effect of the debate on the agents’ simulated opinions, for example.
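The flavor of such simulations can be illustrated with a classic, much simpler precursor to LLM-based agents: a bounded-confidence opinion-dynamics model, in which paired agents “debate” and move closer in opinion only if they already agree enough. This is a toy sketch for illustration, not the research team’s actual method; the parameter names and values are arbitrary assumptions.

```python
import random

def debate_step(opinions, mu=0.3, epsilon=0.4):
    """One pairwise 'debate': two random agents pull their opinions
    toward each other, but only if their disagreement is already
    below the confidence bound epsilon (illustrative parameters)."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(42)
opinions = [random.random() for _ in range(100)]  # opinions on a 0-1 scale
initial_var = variance(opinions)
for _ in range(10_000):
    debate_step(opinions)
# Each debate contracts a pair toward its mean, so overall variance
# shrinks and opinion clusters form.
final_var = variance(opinions)
```

Running the loop shows opinions collapsing into clusters, a simple analog of the polarization and consensus effects such simulations are used to study.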

The IU-led team’s work will represent a “major departure” from other attempts to model belief systems through the simulation of people’s opinions, Ahn added. The project will apply “a complex network of interacting beliefs and concepts, integrated with social contagion theory” to produce “a holistic, dynamic model of multi-level belief resonance.” This approach has been outlined in a paper published in the journal Science Advances.

The result would be a system that more closely resembles real-life complexity, where people’s opinions aren’t based simply on political party but on a complex intersection of belief systems and social dynamics. For instance, Ahn said, an individual’s social group or attitudes toward the medical industry may predict their opinions about vaccine safety more accurately than political ideology.
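One way to picture a “network of interacting beliefs” is as an energy model: beliefs are stances linked by weighted edges, and a new message resonates when accepting it lowers the network’s overall dissonance. The sketch below is a toy illustration of that idea only; the belief names and edge weights are invented for the example and are not from the project.

```python
def dissonance(stances, edges):
    """Energy of a belief configuration: each edge contributes
    -weight * stance_a * stance_b, so aligned, positively linked
    beliefs lower the total dissonance."""
    return -sum(w * stances[a] * stances[b] for (a, b), w in edges.items())

# Hypothetical beliefs (stances are +1/-1) and edge weights.
edges = {
    ("trusts_doctors", "vaccines_safe"): 1.0,
    ("distrusts_pharma", "vaccines_safe"): -0.8,
    ("party_right", "vaccines_safe"): -0.1,  # weak direct link to party
}
person = {"trusts_doctors": 1, "distrusts_pharma": 1, "party_right": 1}

# Compare dissonance if the person accepts vs. rejects the message.
accept = dissonance({**person, "vaccines_safe": 1}, edges)
reject = dissonance({**person, "vaccines_safe": -1}, edges)
resonates = accept < reject
```

In this toy configuration, trust in doctors outweighs the weak party-affiliation link, so the “vaccines are safe” message resonates despite the person’s political alignment — mirroring the article’s point that medical attitudes can predict vaccine opinions better than ideology.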

IU co-principal investigators on the project are assistant professor Jisun An, professor Alessandro Flammini, assistant professor Gregory Lewis and Luddy Distinguished Professor Filippo Menczer, all of the Luddy School in Bloomington. Lewis is also an assistant research scientist at the Kinsey Institute. Haewoon Kwak, associate professor at the Luddy School, will serve as senior personnel.

Other co-principal investigators on the grant are Betsi Grabe of Boston University, Madalina Vlasceanu of Stanford and Timothy Tangherlini of UC Berkeley. The research project will also involve several Ph.D. and undergraduate students at IU.

Originally posted by News at IU, written by Kevin Fryling