A MUST for all Musks: How science can guide effective social media regulation

December 19, 2022

In October 2022, Elon Musk purchased Twitter, declaring that "the bird is freed" and that freedom of expression would be a priority on his platform. Although little is known about Musk's understanding of free speech, many feared that his policies would lead to lax moderation of harmful content, ranging from disinformation to hate speech. Fueling these fears, Musk reinstated users who had been banned for violating the platform’s rules on election misinformation and incitement to violence; retweeted fake news about the attack on House Speaker Pelosi’s husband; gutted teams responsible for trust and safety issues; attacked Anthony Fauci; spread disinformation about Twitter’s Trust and Safety Council and the former head of trust and safety; promoted the QAnon conspiracy theory; suspended the accounts of several journalists; and opened the blue check mark, previously reserved for verified accounts, to paid subscribers, leading to a proliferation of fake accounts. In response to public outcry, Musk also deleted some of his tweets and temporarily put the blue check subscription service on hold.

Science could help policymakers understand which regulations work and what unintended consequences they may have, whether those regulations are internal platform policies or rules imposed by legislation. A rigorous scientific approach could prevent much of the chaos we are currently witnessing as Musk tries out new approaches that scare away advertisers and users.

Researchers have already shown that, by eliminating barriers to information sharing and algorithmically amplifying engagement, social media have facilitated the viral, global spread of harmful content such as hate speech and disinformation. Social media have been effectively weaponized in modern social, political, civil, and conventional wars. They have been exploited to spread lethal disinformation, such as hate content fomenting ethnic violence and genocide, Russian information operations influencing Brexit and the 2016 U.S. election, ongoing Russian propaganda about the war in Ukraine, and false claims about COVID health policies.

How are social media companies handling such dangerous manipulation? The climate of uncertainty in the Twittersphere reverberates across other platforms, whose moderation policies during the recent U.S. midterms were neither clearly communicated nor consistently enforced. The reluctance of social media companies to handle harmful content effectively, together with the lack of clarity and transparency in their moderation policies, has led to renewed discussion of the need to regulate social media platforms.

Platforms such as Facebook, Twitter, and YouTube enjoy a liability privilege: they do not have to take action against illegal content as long as they are not aware of it. Nevertheless, the platforms establish community standards and complex content governance systems to identify, filter, delete, block, down-curate, or flag problematic content. Twitter and Facebook, for example, have developed moderation policies aimed at reducing harm, even though political, economic, and normative factors stand in the way of their consistent enforcement. Yet even these existing policies could be quickly erased.

Musk says that the moderation of hate speech and disinformation hinders free speech, but without such moderation we would revert to the situation of a few years ago, when the information ecosystem was flooded even more with speech that polluted and poisoned public discourse. In fact, our research shows that weaker moderation ironically hurts free speech: the voices of real people are drowned out by malicious users who manipulate platforms through inauthentic accounts, bots, and echo chambers.

Musk is not alone. Several Republican-led states have already tested the waters with bills that would prohibit the banning of users and other forms of moderation. So far, these have been blocked in the courts. However, Republicans see moderation as a First Amendment issue and will continue to push against it. Continued political gridlock seems likely in the U.S., and if the E.U. takes the lead, as it has with the Digital Services Act, that may even provoke a backlash. We need workable, evidence-based policy that limits the harm from online hate speech and disinformation before they irreparably damage our democratic institutions.

How can regulators help improve the information ecosystem? The legal and technological transformations needed to effectively mitigate harmful social media abuse present formidable challenges. Policymakers have limited access to the social media data, statistics, metrics, and algorithms needed to understand and counter online manipulation. They are unable to predict the effectiveness and impact of specific regulations, including their unintended consequences. For example, when is it more effective to add friction to information sharing, versus decrease the visibility of suspicious content, label debunked claims, or suspend bad actors? When are such steps ineffective or, worse, counterproductive? We lack tools to answer these kinds of questions. Furthermore, it is difficult to adapt methodologies to the distinct contexts of different countries: government regulation can have very different goals and consequences in a democratic versus a repressive regime.

To address these challenges, we need a clear, traceable, and replicable methodology to craft and evaluate policy recommendations for preventing and curbing abuse. This requires a transdisciplinary effort. Inputs from media policy and governance research should be used to formulate a set of policy alternatives expected to effectively mitigate online harm. Computational social science methodologies, in turn, should be leveraged to model the effects of moderation policies and quantify their impact.
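To illustrate the kind of quantitative tool this methodology calls for, the toy simulation below (a minimal sketch, not our project's actual models) spreads a false claim over a randomly generated follower network and compares its reach under four stylized interventions: adding friction, reducing visibility, labeling, and suspending bad actors. The network model, the baseline share probability, and the effect size assigned to each intervention are invented purely for illustration.

```python
# Illustrative only: a false claim spreads over a random follower network, and
# we compare how four stylized interventions change its final reach. All
# parameters and effect sizes below are hypothetical, not empirical estimates.
import random

def simulate(n_users=10_000, avg_followers=20, seed_spreaders=50,
             share_prob=0.08, intervention=None, rng_seed=0):
    """Return the fraction of users reached by the claim."""
    rng = random.Random(rng_seed)
    # Stylized effects of each intervention (made-up multipliers).
    p = share_prob
    if intervention == "friction":      # e.g., a "read before you share" prompt
        p *= 0.7
    elif intervention == "visibility":  # down-rank the claim in feeds
        p *= 0.5
    elif intervention == "label":       # warning label reduces resharing
        p *= 0.8

    exposed = set(rng.sample(range(n_users), seed_spreaders))
    frontier = list(exposed)
    suspended = set()
    if intervention == "suspend":       # remove a fraction of the seed accounts
        suspended = set(rng.sample(sorted(exposed), int(0.6 * seed_spreaders)))

    while frontier:
        user = frontier.pop()
        if user in suspended:
            continue
        # Each exposed, non-suspended user reshares with probability p,
        # exposing a random set of followers.
        if rng.random() < p:
            for follower in rng.sample(range(n_users), avg_followers):
                if follower not in exposed:
                    exposed.add(follower)
                    frontier.append(follower)
    return len(exposed) / n_users

if __name__ == "__main__":
    for policy in (None, "friction", "visibility", "label", "suspend"):
        reach = sum(simulate(intervention=policy, rng_seed=s) for s in range(10)) / 10
        print(f"{policy or 'no intervention':>15}: {reach:.1%} of users reached")
```

Even such a crude model makes the trade-offs explicit: the same intervention can be decisive or negligible depending on how contagious a claim is and where it enters the network, which is precisely why these models must be calibrated with real platform data before they can inform policy.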

We are involved in an international research project that aims to generate recommendations and quantitative evidence to classify regulatory policies and assess their expected impact on the information ecosystem. Such an effort could form the basis on which platforms and regulators in any country design timely, transparent, and effective policy interventions to mitigate social media abuse. But there is much work to do. Platforms that will be affected by existing and proposed regulatory legislation should support researchers and policymakers in their efforts to quickly understand these phenomena and reduce harm. Studies of clear and effective regulation, aligned with law, are a must for the current and future Musks of our society.

By Silvia Giordano, Professor, Department of Innovative Technologies, University of Applied Sciences and Arts, Lugano, Switzerland; Filippo Menczer, Distinguished Luddy Professor and Director, Observatory on Social Media, Indiana University, USA; Natascha Just, Professor and Chair, Media & Internet Governance Division, University of Zurich, Switzerland; Florian Saurwein, Media & Internet Governance Division, University of Zurich, Switzerland; John Bryden, Observatory on Social Media, Indiana University, USA; and Luca Luceri, Research Scientist, Information Sciences Institute, University of Southern California, USA.

Image by Koshiro - stock.adobe.com
