IU's Observatory on Social Media defends citizens from online manipulation – that's not censorship

March 03, 2025

When thousands of fake accounts controlled by an unknown actor flood social media with a story, and platform algorithms amplify those messages, real people may be fooled into believing they are seeing the opinions of fellow citizens. Meanwhile, the posts shared by real people are buried.

At Indiana University’s Observatory on Social Media, we believe that the public has the right to understand how information spreads on social media, and when inauthentic or deceptive actors manipulate that information.

Research groups like ours, which work to understand and address these problems, have come under attack in a broad effort to silence them by falsely equating their work with censorship. We unequivocally reject these accusations. Our work improves transparency and public accountability and provides users with tools they may choose to use – the exact opposite of censorship.

Ironically, those accusing us of censorship are actively trying to stop our work – in other words, to censor us. But our commitment to public interest research remains steadfast. We have created this fact sheet to highlight some of our research and help dispel the false claims beginning to circulate about our work.

The “censorship industrial complex” narrative about our center is based on three false claims: first, that content moderation is the same as censorship; second, that researchers engage in social media moderation; and third, that our center is part of the government. Let’s debunk each claim.

First, it is crucial to differentiate between moderation policies and censorship. Adding a fact-checking label to a false claim is not censorship. Suspending an account that spreads malware is not censorship. Taking down deceptive foreign influence campaigns that impersonate citizens is not censorship.

Second, research about moderation is not the same as moderation. We do not collaborate with government entities or social media platforms on moderation policies or decisions. While we study the vulnerabilities of different groups to online manipulation, our research does not target any group (political or otherwise) and does not censor, suppress, or limit speech (political or otherwise) in any way.

Finally, our center is not part of the government. While our academic research is supported in part by grants from federal funding agencies like the Department of Defense and the National Science Foundation, it remains independent and non-political.

Our work is open and accessible to anyone. It can support the public, platforms, and policymakers in making informed decisions about social media. Policymakers from across the political spectrum occasionally ask us to provide research insights on pressing digital issues. For example, we worked with then-Senator Mike Braun (R-IN) and his staff to discuss the vulnerability of senior citizens to online fraud, and with Senator Todd Young (R-IN) to address concerns about AI abuse. With researchers from other universities, we filed an amicus brief urging the Supreme Court to ensure its rulings on social media regulations allow for meaningful transparency to support independent research. These engagements reflect our commitment to informing policymakers with evidence-based research, regardless of political affiliation.

Here are some of the research projects led by our Observatory on Social Media (OSoMe) in recent years:

  • We develop agent-based models to simulate how information spreads on social media. These models help us understand how malicious actors use fake accounts to spread disinformation. We also use them to study the intended and unintended effects of different moderation policies. Our research helps policymakers balance the harms of delayed moderation of illegal content against the risks of inaccurate labeling, which could lead to censorship. (A simplified sketch of this kind of simulation appears after this list.)
  • We study biases in social media algorithms and their impact on the spread of harmful content. We found that engagement bias in these algorithms often reduces content quality. In an experiment, we showed that simply seeing social engagement signals makes users more vulnerable to misinformation. We also demonstrated that newsfeed algorithms can be adjusted to reduce exposure to harmful and polarizing content. For instance, a ranking algorithm that highlights content from sources with bipartisan audiences can improve the quality of information that social media users see. (The second sketch after this list illustrates this kind of reranking.)
  • We research coordinated online behaviors like influence campaigns by foreign state actors. We analyze the many tactics employed in information operations, like inauthentic amplification by retweets and synchronized posting for cryptocurrency pump & dump schemes. Our center develops advanced machine learning methods that can be used by security operators to detect these types of abuse.
  • We study how automated fake social media accounts, known as social bots, manipulate users. These bots are often used to spread malware and support fraud. We have developed advanced algorithms to detect social bots and assess the credibility of suspicious accounts. Social media platforms can use these methods to maintain the integrity of their information environment. (The third sketch after this list shows the kind of account-level signals such detectors rely on.)
  • We investigate how people consume news and the impact of partisan, unreliable, and influential sources on public opinion. We found that partisan users on both sides of the US political spectrum are more likely to spread disinformation. We explored how social media mechanisms lead to political echo chambers, which increase vulnerability to manipulation by selectively exposing or suppressing information. We also examined the asymmetry in these patterns, how bad actors exploit them, and how some users are responsible for spreading a disproportionate amount of low-credibility content. This research can help platforms improve their services for users.
  • We examine how online misinformation about vaccines affects vaccine hesitancy. Our research revealed that areas with more misinformation have lower vaccination rates, even after controlling for political, demographic, and socioeconomic factors. We also develop models to estimate the number of infections and deaths that can be attributed to exposure to antivaccine content on social media. Finally, we use large-scale simulations to give policymakers best- and worst-case scenarios for how misinformation can impact disease spread across the population.
  • We explore the risks of AI, whether it is used with good or bad intentions. Our research showed that using models like ChatGPT for fact-checking can increase belief in false claims. We also found that AI models have a liberal bias in credibility ratings and can be manipulated to provide politically biased ratings. This highlights the unintended consequences of AI applications. Even more concerning, we discovered large networks of fake social media accounts using AI to create profiles and spread scams, spam, and other harmful content. We are working on developing methods to detect these abuses.
  • Some of our current research focuses on how people change their beliefs within social groups. We study how rumors and trends spread through communities and why some ideas stick while others do not. People do not adopt new beliefs immediately – they think about them, compare them to what they already know, and decide if they fit. We use this research to understand how AI can influence thoughts and feelings through personalized content, highlighting its potential to either polarize or unify audiences depending on the social context.
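
To make the agent-based modeling item above more concrete, here is a minimal sketch of a toy simulation in which a handful of inauthentic accounts push a low-quality story through a random follower network. All names and parameters (N_USERS, P_BOT, the reshare probability, and so on) are illustrative assumptions, not the actual models used in our published work.

```python
# A minimal, hypothetical agent-based sketch of a story spreading on a
# follower network. Parameters and rules are illustrative assumptions only.
import random

N_USERS = 200        # total accounts in the toy network
P_BOT = 0.05         # fraction of accounts that are inauthentic amplifiers
N_FOLLOWED = 10      # each account follows this many others
STEPS = 50           # number of simulation steps

random.seed(42)

users = list(range(N_USERS))
bots = set(random.sample(users, int(P_BOT * N_USERS)))
# Follower network: user -> set of accounts they follow.
following = {u: set(random.sample([v for v in users if v != u], N_FOLLOWED))
             for u in users}

feeds = {u: [] for u in users}   # messages currently visible to each user
believers = set()                # authentic users who reshared the story

def step():
    for u in users:
        if u in bots:
            # Bots post the low-quality story to everyone who follows them.
            for v in users:
                if u in following[v]:
                    feeds[v].append("low_quality_story")
        else:
            # Authentic users reshare with probability proportional to how
            # often the story appears in their feed (a crude engagement cue).
            exposure = feeds[u].count("low_quality_story")
            if exposure and random.random() < min(1.0, 0.05 * exposure):
                believers.add(u)
                for v in users:
                    if u in following[v]:
                        feeds[v].append("low_quality_story")
        feeds[u] = feeds[u][-20:]   # finite attention: keep only a short feed

for _ in range(STEPS):
    step()

print(f"{len(believers)} of {N_USERS - len(bots)} authentic users reshared the story")
```

Even a toy model like this lets one vary the fraction of bots, the attention span, or a moderation rule and observe how the reach of the story changes.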
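
The second sketch illustrates the reranking idea from the algorithmic-bias item: blending raw engagement with an audience-diversity score so that sources with a more bipartisan audience are promoted. The scoring formula and the toy data are assumptions for illustration, not any platform's actual ranking algorithm.

```python
# A minimal, hypothetical sketch of reranking a feed by audience diversity.
from dataclasses import dataclass

@dataclass
class Post:
    source: str
    engagement: int  # likes + reshares, the signal most feeds optimize for

# Fraction of each source's audience that leans left vs. right (toy numbers).
audience = {
    "wire_service": {"left": 0.48, "right": 0.52},
    "hyperpartisan_site": {"left": 0.03, "right": 0.97},
    "local_paper": {"left": 0.55, "right": 0.45},
}

def audience_diversity(source: str) -> float:
    """Score in [0, 1]: 1.0 when the audience is evenly split, 0.0 when one-sided."""
    shares = audience[source]
    return 1.0 - abs(shares["left"] - shares["right"])

def rerank(posts: list[Post], weight: float = 0.5) -> list[Post]:
    """Blend normalized engagement with audience diversity instead of engagement alone."""
    max_eng = max(p.engagement for p in posts) or 1
    def score(p: Post) -> float:
        return (1 - weight) * (p.engagement / max_eng) + weight * audience_diversity(p.source)
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("hyperpartisan_site", engagement=900),
    Post("wire_service", engagement=400),
    Post("local_paper", engagement=300),
]
for p in rerank(feed):
    print(p.source, round(audience_diversity(p.source), 2))
```

In this toy example the hyperpartisan source, despite having the most engagement, drops to the bottom of the feed once audience diversity is factored in.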
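
The third sketch is a rough illustration of the intuition behind feature-based bot detection: combining simple account-level signals into a score. The features and thresholds below are hypothetical; real detectors such as Botometer rely on many more signals and supervised machine learning.

```python
# A minimal, hypothetical sketch of feature-based bot scoring.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    account_age_days: int
    default_profile_image: bool

def bot_score(a: Account) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    signals = [
        a.posts_per_day > 100,                   # inhuman posting volume
        a.following > 10 * max(a.followers, 1),  # follows far more than followed
        a.account_age_days < 30,                 # very new account
        a.default_profile_image,                 # no profile customization
    ]
    return sum(signals) / len(signals)

suspicious = Account(posts_per_day=250, followers=12, following=4000,
                     account_age_days=9, default_profile_image=True)
print(f"bot score: {bot_score(suspicious):.2f}")  # -> 1.00 for this toy account
```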

Our Observatory has developed several open-source, public tools:

  • Botometer, the most popular, helped Twitter users recognize social bots and was used by Elon Musk.
  • Fakey is a mobile game to teach media literacy.
  • Hoaxy visualizes information diffusion networks to help study the roles of communities, bots, and influentials in the spread of viral content.
  • Our dashboards track the spread of content from low-credibility sources in the context of specific events, like the COVID-19 pandemic and the 2022 midterm election.
  • We provide tools for researchers to access public data from decentralized platforms like Bluesky and Mastodon.
  • Coordiscope visualizes networks of accounts engaged in inauthentic coordinated behavior, like foreign influence campaigns and information operations.
  • News Bridge is a browser extension that utilizes an AI model to provide context for news articles posted on Facebook and generate thoughtful responses aimed at bridging political divides.

All of these tools are designed to help citizens and scientists use and better understand our information environment; none can be used to censor content.

Since we are not fact-checkers, some of our studies rely on source credibility ratings from non-partisan fact-checking organizations. These ratings align with those from crowdsourcing systems, such as X's Community Notes. Our research results remain consistent even when content is labeled by conservative reviewers.

We regret that some political actors misrepresent our work. This is not the first time: in 2014, similar false claims led to a congressional investigation that found no evidence of political bias in our federally supported research. More recently, similar attacks targeting fellow researchers at other universities have likewise led to no findings of wrongdoing. Despite these misrepresentations, OSoMe continues to advance understanding of social media's influence on society and to promote a healthier information ecosystem.