OSoMe periodically conducts survey work to examine audience and user trends related to social media.
New BotSlayer tool to expose disinformation networks
First announced in September 2019, BotSlayer is new software to expose disinformation networks, designed and developed by OSoMe faculty and students in collaboration with IUNI staff. BotSlayer is an application that helps track and detect potential manipulation of information spreading on Twitter. It can be used by journalists, researchers, civil society organizations, corporations, and political candidates to discover new coordinated disinformation campaigns in real time. The system is easily installed and configured in the cloud to monitor bot activity around a standing user-defined query. It has already been used to spot a Russian bot network, ISIS propaganda, and high-volume hyper-partisan accounts. Read about how you can join the effort to spot the manipulation of social media.
Twitter bots spread misinformation
Our analysis of information shared on Twitter during the 2016 U.S. presidential election has found that social bots played a disproportionate role in spreading misinformation online. The study, published in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 -- a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017. Among the findings: a mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network. These accounts were also responsible for 34 percent of all articles shared from low-credibility sources. The study also found that bots played a major role in promoting low-credibility content in the first few moments before a story goes viral. We also identified other tactics for spreading misinformation with Twitter bots. These included amplifying a single tweet -- potentially controlled by a human operator -- across hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts. To explore election messages currently shared on Twitter, we also recently launched a tool to measure Bot Electioneering Volume. Created by OSoMe Ph.D. students, the program displays the level of bot activity around specific election-related conversations, as well as the topics, usernames and hashtags they are currently pushing. Update: This paper is ranked #3 most read among all articles published by Nature Communications in 2018.
Three new tools to study and counter online disinformation
Researchers at CNetS, IUNI, and the Indiana University Observatory on Social Media have launched upgrades to two tools playing a major role in countering the spread of misinformation online: Hoaxy and Botometer. A third tool, Fakey — an educational game designed to make people smarter news consumers — also launches with the upgrades. Hoaxy is a search engine that shows users how stories from low-credibility sources spread on Twitter. Botometer is an app that assigns a score to Twitter users based on the likelihood that the account is automated. The two tools are now integrated, so one can easily detect when information is spreading virally and who is responsible for its spread. Hoaxy and Botometer currently process hundreds of thousands of online queries per day. The technology has enabled researchers, including a team at IU, to study how information flows online in the presence of bots. Examples include a study on the cover of the March issue of Science that analyzed the spread of false news on Twitter, and an analysis from the Pew Research Center in April that found that nearly two-thirds of the links to popular websites on Twitter are shared by automated accounts. Fakey is a web and mobile news literacy game that mixes news stories with false reports, clickbait headlines, conspiracy theories and “junk science.” Players earn points by “fact-checking” false information and liking or sharing accurate stories. The project, led by IU graduate student Mihai Avram, was created to help people develop responsible social media consumption habits. Apps are available for the Android and iOS platforms.
The science of fake news
The indictment of 13 Russians in the operation of a "troll farm" that spread false information related to the 2016 U.S. presidential election has renewed the spotlight on the power of "fake news" to influence public opinion. Filippo Menczer joined prominent legal scholars, social scientists and researchers in a global "call to action" in the fight against it. He is a co-author of a paper featured on the March 8, 2018 cover of the journal Science, calling for a coordinated investigation into the social, psychological and technological forces behind fake news. This is necessary to counteract the negative impact of fake news on society, the authors said. The work was quoted in US News & World Report, PBS NewsHour, Mother Jones, Science News, Futurity, Euronews, Indianapolis Star, El Mundo, The Hindu and many other media.
Hoaxy: A Platform for Tracking Online Misinformation
While social media have brought about a more egalitarian model of information access, the lack of oversight from expert journalists makes the users of these platforms vulnerable to the intentional or unintentional spread of misinformation. We observe hoaxes, rumors, fake reports, and conspiracy theories going as viral as legitimate news online. Media organizations are devoting increasing resources to the production of fact-checking information, which is consumed and broadcast by social media users like any other type of news content, leading to a complex interplay between news memes that vie for the attention of users. To date, there has been no systematic way to study the competition dynamics between online misinformation and its debunking. To address some of these challenges, we launched an open platform for the automatic tracking of both online fake news and fact-checking on social media. The goal of the tool, named Hoaxy, is to reconstruct the diffusion networks induced by hoaxes and their corrections as they are shared online and spread from person to person. Hoaxy will allow researchers, journalists, and the general public to study the factors that affect the success and mitigation of massive digital misinformation. Our early analysis, presented at the WWW 2016 Workshop on Social News On the Web, suggests that the sharing of fake news is dominated by very active users, while fact-checking is a more grass-roots activity. Hoaxy has received wide coverage in the US and international press, including Reuters, CNET, CNN, The Chronicle of Higher Education, The Christian Science Monitor, Quartz, Engadget, Vice, Fortune, Yahoo, Futurity, Daily Mail, El Pais, La Stampa, etc.
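To illustrate what a Hoaxy-style diffusion network captures, here is a minimal sketch in Python with networkx that reconstructs a who-spread-to-whom network from retweet records. The records, field layout, and usernames are hypothetical, not Hoaxy's actual data schema:

```python
import networkx as nx

# Illustrative retweet records: (retweeter, original_poster, article_url).
retweets = [
    ("alice", "bob", "http://example.com/claim"),
    ("carol", "bob", "http://example.com/claim"),
    ("dave", "alice", "http://example.com/claim"),
    ("erin", "bob", "http://example.com/fact-check"),
]

def diffusion_network(records, url):
    """Directed graph of who spread an article to whom."""
    g = nx.DiGraph()
    for retweeter, poster, article in records:
        if article == url:
            # Edges point in the direction of information flow.
            g.add_edge(poster, retweeter)
    return g

g = diffusion_network(retweets, "http://example.com/claim")
# The node with the highest out-degree is the most influential spreader.
top = max(g.nodes, key=g.out_degree)
print(top, g.number_of_edges())
```

Once the network is built, standard graph metrics (out-degree, connected components, k-core) identify the dominant spreaders of a given story.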
Why study fake news and digital misinformation
After the 2016 US elections, fake news and its spread on social media became a hotly debated issue. Our group has been studying this phenomenon since 2010, and our work has been covered and quoted in media stories analyzing the influence of social bots, the appearance of fake news in Facebook trends, vote-suppression attempts, the magnitude of the problem, the potential of fake news on social media to sway elections, online advertising as an incentive for fake news, the effectiveness of advertising bans, the steps taken by Facebook, the future of fake news, and the real consequences of conspiracy theories. Our editorial article in The Conversation has been republished widely, including by Time, Scientific American, and PBS. It is good that the problem of digital misinformation is getting the attention it deserves. Research investments are needed toward a deeper understanding of the phenomenon, as well as toward socio-technical countermeasures to help mitigate the deceptive manipulation of opinions without infringing on the free flow of information.
Social bot research featured on CACM, IEEE Computer covers
Research on detection of social bots by CNetS faculty members Alessandro Flammini and Filippo Menczer, former IUNI research scientist Emilio Ferrara, and graduate students Clayton A Davis, Onur Varol, and Prashant Shiralkar was featured on the covers of two top computing publications: the June issue of Computer (the flagship magazine of the IEEE Computer Society) and the July issue of Communications of the ACM (the flagship publication of the ACM).
Social bots are often benign, but some are created to do harm by tampering with, manipulating, and deceiving social media users. They have been used to infiltrate political discourse, manipulate the stock market, steal personal information, and spread misinformation. The detection of social bots is therefore an important research endeavor. The IEEE Computer paper titled The DARPA Twitter Bot Challenge (preprint) presents lessons learned from the social bot detection challenge organized by DARPA, in which our team placed third among many large academic and research teams. The CACM article titled The Rise of Social Bots (pdf) reviews the potential threats posed by social bots and presents a taxonomy of the different detection systems proposed in the literature, including our own Botometer tool.
Our paper Online Human-Bot Interactions: Detection, Estimation, and Characterization, presented at ICWSM 2017, includes the technical details of our algorithm and an analysis of bot behavior, as well as an estimate of what portion of social media accounts may be bots. These findings were covered by CNBC, Wall Street Journal, New York Times, Forbes, PC Magazine, Sky, NBC News, CBS News, ABC News, The Times, Bloomberg, Slate, Vice, Mother Jones, Yahoo Finance, Sacramento Bee, SFGate, San Francisco Examiner, etc.
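At its core, this style of bot detection is supervised learning: accounts are represented as feature vectors and a classifier outputs a bot score. The following toy sketch shows the general idea using scikit-learn; the three features and all numbers are illustrative inventions, not Botometer's actual feature set, which comprises more than a thousand features:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors per account: [tweets per day, follower/friend ratio,
# fraction of posts that are retweets]. Values and labels are made up.
X_train = [
    [2.0, 1.5, 0.30],      # human-like: low volume, organic following
    [1.0, 2.0, 0.20],      # human-like
    [400.0, 0.01, 0.99],   # bot-like: high volume, few followers, all retweets
    [350.0, 0.02, 0.95],   # bot-like
]
y_train = [0, 0, 1, 1]     # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new account: probability of the "bot" class plays the role
# of a bot score between 0 and 1.
score = clf.predict_proba([[300.0, 0.05, 0.90]])[0][1]
print(round(score, 2))
```

In practice the hard part is not the classifier but the labeled training data and the feature engineering across content, network, temporal, and sentiment signals.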
Observatory on Social Media launched
The power to explore online social media movements — from the pop cultural to the political — with the same algorithmic sophistication as top experts in the field is now available to journalists, researchers and members of the public from a free, user-friendly online software suite released today. The Web-based tools, called the Observatory on Social Media, or “OSoMe” (pronounced “awesome”), provide anyone with an Internet connection the power to analyze online trends, memes and other bursts of viral activity. An academic preprint paper on the tools is available from the open-access journal PeerJ. The OSoMe project also provides an API to help other researchers expand upon the tools, or create "mash-ups" that combine its powers with other data sources. For example, a mash-up of the OSoMe and BotOrNot APIs makes it possible to study how social bots manipulate online discourse on a given topic. (In the retweet network shown here, large red nodes represent influential bots that affected conversations about #brexit.)
“This software and data mark a major milestone of our research project on Internet memes and trends over the past six years,” said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing. “We are beginning to learn how information spreads in social networks, what causes a meme to go viral and what factors affect the long-term survival of misinformation online. The observatory provides an easy way to access these insights from a large, multi-year dataset.”
Best poster and best presenter prizes
Congratulations to Clayton A Davis, who won the best presenter prize at the 25th International World Wide Web Conference's Developers Day Workshop! Clayton presented BotOrNot: A system to evaluate social bots, a paper coauthored with Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, describing our latest API developments with the BotOrNot system. Previously our poster on BotOrNot won the Best Poster Award at the 2015 Conference on Complex Systems.
BotOrNot passes a million hits within a week of launch
Social bots have been circulating on social media platforms for a few years, and if you frequent online social media, you've probably come across them whether you know it or not! To learn more about social bots, we built BotOrNot, a tool to analyze a Twitter user's behavior and compare it to the behavior of known bots. BotOrNot is publicly available both as a web service and through an open API that was used over a million times within a week of its launch. Work on BotOrNot has been covered in The Wall Street Journal, MIT Technology Review, Frankfurter Allgemeine Zeitung, BBC, ABC News, Washington Post, Politico, New Scientist, Wired, etc.
Instagram to predict fashion model success
Predicting popularity and success in cultural markets is hard due to strong inequalities and inherent unpredictability. A good example comes from the world of fashion, where industry professionals face the difficult challenge every season of guessing who the next season's top models will be. A recent study (DOI: 10.1145/2818048.2820065) by graduate student Jaehyuk Park, research scientist Giovanni Luca Ciampaglia (also at the IU Network Science Institute), and research scientist Emilio Ferrara (now at the University of Southern California) shows that early success in modeling can be predicted from the digital traces left by the buzz on social media such as Instagram. The study has been accepted for presentation at the 19th ACM conference on Computer-Supported Cooperative Work and Social Computing (CSCW’16). The work has been covered in the media by the MIT Technology Review, Die Welt, Fusion, Vogue UK, Harper's Bazaar, and CBS News.
Towards computational fact checking
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. In our paper, Computational Fact Checking from Knowledge Networks, we showed that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. We evaluated this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently received higher support via our method than false ones did. This work received coverage in Nature, The Wall Street Journal, Wired, Motherboard, Pacific Standard, Fusion, Gizmodo, Spiegel, Deutschlandfunk, Il Sole 24 Ore, Publico, etc.
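The intuition can be illustrated on a toy knowledge graph: claims connected by short paths through specific, low-degree entities receive more support than claims that must route through generic hub nodes. This sketch uses networkx with a simplified degree-penalizing proximity, not the paper's exact metric; the tiny graph is invented for illustration:

```python
import math
import networkx as nx

# Toy undirected knowledge graph: entities as nodes, facts as edges.
g = nx.Graph()
g.add_edges_from([
    ("Barack Obama", "Honolulu"),       # born in
    ("Honolulu", "Hawaii"),             # located in
    ("Barack Obama", "United States"),  # president of
    ("Hawaii", "United States"),
    ("Canada", "North America"),
    ("United States", "North America"),
])

def support(graph, subject, obj):
    """Higher when subject and object are linked via specific entities."""
    def cost(u, v, d):
        # Penalize hops through generic, high-degree nodes.
        return math.log(1 + graph.degree(v))
    try:
        length = nx.shortest_path_length(graph, subject, obj, weight=cost)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (1.0 + length)

true_claim = support(g, "Barack Obama", "Hawaii")   # short, specific path
false_claim = support(g, "Barack Obama", "Canada")  # longer path via hubs
print(true_claim > false_claim)
```

The key design choice is the path cost: summing the log-degree of traversed nodes makes paths through generic concepts (here, "North America") expensive, approximating the semantic-proximity idea from the paper.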
On the cover of Neuron
Work by Olaf Sporns, YY Ahn, Alessandro Flammini, and colleagues was featured on the cover of Neuron. In the paper Cooperative and Competitive Spreading Dynamics on the Human Connectome, the authors present a simulation model of spreading dynamics, previously applied in studies of social networks, that offers a new perspective on how the connectivity of the human brain constrains neural communication processes. Local perturbations in a social network can trigger global cascades (orange and turquoise epicenters in background image). In the case of the brain, the spreading of such cascades follows organized patterns that are shaped by anatomical connections, revealing how interactions among functional brain networks may give rise to the integration of information.
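The kind of cascade dynamics described above can be sketched with a generic independent-cascade simulation on a small network. This is a simplified illustration of how a local perturbation can spread globally, not the specific model used in the Neuron paper; the graph and parameters are arbitrary:

```python
import random
import networkx as nx

random.seed(1)
# Small-world network as a stand-in for a connectome-like topology.
g = nx.watts_strogatz_graph(50, 4, 0.1, seed=1)

def cascade(graph, seed_node, p=0.5):
    """Independent-cascade spread: each newly activated node gets one
    chance to activate each neighbor with probability p."""
    active = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.neighbors(node):
                if nb not in active and random.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

# A single local perturbation at node 0 can reach much of the network.
reached = cascade(g, seed_node=0)
print(len(reached))
```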
ACM Web Science 2014 Best Paper Award
Congratulations to Onur Varol, Emilio Ferrara, Chris Ogan, Fil Menczer, and Sandro Flammini for winning the ACM Web Science 2014 Best Paper Award with their paper Evolution of online user behavior during a social upheaval (preprint). In the paper, the authors study the pivotal role played by Twitter during the political mobilization of the Gezi Park movement in Turkey. By analyzing over 2.3 million tweets produced during 25 days of protest in 2013, the authors show that similarity in trends of discussion mirrors geographic cues. The analysis also reveals that the conversation becomes more democratic as events unfold, with a redistribution of influence over time in the user population. Finally, the study highlights how real-world events, such as political speeches and police actions, affect social media conversations and trigger changes in individual behavior.
Social bots and The Good Wife
A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior. On August 11, 2013, the New York Times published an article by Ian Urbina with the headline: I Flirt and Tweet. Follow Me at #Socialbot. The article reports on how socialbots are being designed to sway elections, to influence the stock market, even to flirt with people and one another. Fil Menczer is quoted: “Bots are getting smarter and easier to create, and people are more susceptible to being fooled by them because we’re more inundated with information.” The article also mentions the Truthy project and some of our 2010 findings on political astroturf.
Inspired by this, the writers of The Good Wife consulted with us on an episode in which the main character finds that a social news site is using a socialbot to drive traffic to the site, defaming her client. The episode aired on November 24, 2013, on CBS (Season 5 Episode 9, “Whack-a-Mole”). Good show!
More tweets, more votes
Truthy team members Karissa McKelvey and Johan Bollen collaborated with IU Department of Sociology members Joseph DiGrazia and Fabio Rojas on the paper More Tweets, More Votes: Social Media as a Quantitative Indicator of Political Behavior published in PLoS ONE. The paper aimed to see if the share of social media attention garnered by political candidates is significantly and reliably correlated with electoral performance. Their research suggests that indeed, holding other factors constant, candidates who received a larger share of attention on Twitter were more likely to win than their opponents.
Popular news outlets including Wall Street Journal, NPR and Washington Post picked up the story, many focusing in particular on the potential ramifications the research may have for current methods of political polling. More press links for this paper can be found at Karissa's website.
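The paper's core measurement, correlating a candidate's share of Twitter attention with electoral performance, can be sketched as follows. The numbers below are synthetic, purely to show the computation:

```python
import numpy as np

# Hypothetical two-candidate races: each row is (candidate A's share of
# tweet mentions, candidate A's share of the two-party vote).
data = np.array([
    [0.62, 0.55],
    [0.45, 0.48],
    [0.70, 0.58],
    [0.38, 0.44],
    [0.55, 0.52],
])
tweet_share, vote_share = data[:, 0], data[:, 1]

# Pearson correlation between attention share and electoral performance.
r = np.corrcoef(tweet_share, vote_share)[0, 1]

# Simple check of the heuristic: did the candidate with the larger
# tweet share also win the race?
wins = np.mean((tweet_share > 0.5) == (vote_share > 0.5))
print(round(r, 2), wins)
```

The actual study controls for incumbency, district partisanship, and other covariates in a regression; the raw correlation above is only the starting point of that analysis.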
Geography of Twitter trends
One might think that online social media, operating on a global scale via the Internet, wouldn't be affected much by geography. In fact, authors Emilio Ferrara, Onur Varol, Fil Menczer, and Sandro Flammini show in their paper Traveling trends: social butterflies or frequent fliers? that online social media trends spread like epidemics, exploiting the same pathways as human travelers to diffuse across the country.
The research identified three distinct geographical clusters in the US information flow (east coast, midwest, and southwest) as well as global patterns in the flow corresponding to main air traffic hubs. They conclude that travel hubs act as trendsetters, generating topics that eventually trend at the country level, then driving the conversation across the country. This work has received press attention from sources including Washington Post and Seattle Times.
Winner of WICI Data Challenge
Congratulations to Przemyslaw Grabowicz, Luca Aiello, and Fil Menczer for winning the WICI Data Challenge. A prize of $10,000 CAD accompanied this award from the Waterloo Institute for Complexity and Innovation at the University of Waterloo. The Challenge called for tools and methods that improve the exploration, analysis, and visualization of complex-systems data.
The winning entry, titled Fast visualization of relevant portions of large dynamic networks, is an algorithm that selects the subsets of nodes and edges that best represent an evolving graph and visualizes them either by creating a movie or by streaming them to an interactive network visualization tool. The algorithm will be available as an interactive demo on this website, and will allow users to create, in near-real time, YouTube videos that illustrate the spread and co-occurrence of memes on Twitter. Przemek and Luca worked on this project while visiting CNetS in 2011 and collaborating with the Truthy team. Bravo!
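The general problem of selecting a representative subgraph from a stream of interactions can be sketched with a simple count-based filter: within a sliding time window, keep only the k strongest edges. This is only an illustration of the filtering idea, not the prize-winning algorithm itself; the event data are invented:

```python
from collections import Counter

def top_edges(events, window_start, window_end, k=2):
    """events: list of (timestamp, source, target) interaction records.
    Returns the k most frequent edges within the time window."""
    weights = Counter()
    for t, src, dst in events:
        if window_start <= t < window_end:
            weights[(src, dst)] += 1
    return [edge for edge, _ in weights.most_common(k)]

events = [
    (1, "a", "b"), (2, "a", "b"), (3, "b", "c"),
    (4, "a", "b"), (5, "c", "d"), (6, "b", "c"),
]
# Keep only the two most active edges in the window [0, 7).
print(top_edges(events, 0, 7, k=2))
```

Sliding the window forward and re-filtering at each step yields the sequence of frames that a movie or streaming visualization would render.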
Meme competition & virality
In our paper on Competition among memes in a world with limited attention in Nature Scientific Reports, Lilian Weng and coauthors Sandro Flammini, Alex Vespignani, and Fil Menczer report that the massive heterogeneity in the popularity and persistence of memes can be explained by the combination of competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas. The findings have been mentioned in the popular press, including Information Week, The Atlantic, and the Dutch daily NRC.
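A heavily simplified version of the paper's agent-based model can be sketched as follows: agents with bounded memory either post a brand-new meme or repost one from memory, and popularity becomes highly skewed even though every meme has the same intrinsic value. All parameters and the random-follower broadcast rule are illustrative simplifications:

```python
import random
from collections import Counter, deque

random.seed(42)

N_AGENTS, MEMORY, STEPS, P_NEW = 50, 5, 5000, 0.1
# Each agent's limited attention: a short memory of recently seen memes.
memories = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]
popularity = Counter()
next_meme = 0

for _ in range(STEPS):
    agent = random.randrange(N_AGENTS)
    if random.random() < P_NEW or not memories[agent]:
        meme = next_meme          # invent a new meme
        next_meme += 1
    else:
        meme = random.choice(memories[agent])  # repost from memory
    popularity[meme] += 1
    # Broadcast to a few followers, displacing their oldest memories.
    for follower in random.sample(range(N_AGENTS), 3):
        memories[follower].append(meme)

counts = sorted(popularity.values(), reverse=True)
# Heavy-tailed outcome: a few memes dominate while most barely spread.
print(counts[0], counts[-1])
```

The skew emerges purely from the interplay of limited memory and reposting; no meme is given any built-in advantage, which is the paper's central point.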
A follow-up effort by Lilian Weng, Fil Menczer, and YY Ahn, also published in Scientific Reports, explores how the virality of a meme can be predicted by analyzing the structural diversity of its early retweet network. This work was reported on by Scientific American, among others. More information about this work can be found at Lilian's website.
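The structural-diversity signal can be illustrated with a toy example: given the social network and a meme's first few adopters, count how many disconnected groups those adopters form. This component count is a simplified proxy for the paper's measure; the tiny graph and adopter sets are invented:

```python
import networkx as nx

def early_diversity(social_graph, early_adopters):
    """Number of disconnected groups formed by the early adopters
    within the social network."""
    sub = social_graph.subgraph(early_adopters)
    return nx.number_connected_components(sub)

# Toy social network with three separate friend groups.
g = nx.Graph()
g.add_edges_from([("a", "b"), ("b", "c"), ("d", "e"), ("f", "g")])

# Meme 1's early adopters cluster in one group; meme 2's are spread out.
print(early_diversity(g, ["a", "b", "c"]))   # 1 component
print(early_diversity(g, ["a", "d", "f"]))   # 3 components
```

Under the paper's finding, the second meme, whose early adopters span several separate communities, would be the better bet to go viral.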