The Covid-19 pandemic and protests against police violence have created endless opportunities for the spread of misinformation. Stuck in their homes, many people spend their days glued to social media, trying to stay informed about all the things happening in our world. Those conditions make us especially vulnerable to what researchers have called an “infodemic,” where we have difficulty sifting through the flood of information to understand what is real or fake, trustworthy or unreliable, evolving fact or settled truth, information or disinformation.
Researchers like Jennifer Schradie, PhD, who is a sociologist and assistant professor at the Observatoire Sociologique du Changement at SciencesPo in Paris, can help us make sense of this challenge. The author of the book, The Revolution That Wasn’t: How Digital Activism Favors Conservatives, Schradie looks at how inequalities, ideology, and institutions shape participation on social media and in our new information society.
Schradie spoke with GEN about the connection between information and trust, and what the news media can do to fight back against misinformation.
GEN: Early on in the Covid-19 pandemic, experts often gave conflicting guidance on everything from social distancing to wearing a face mask based on what information was available at the time. For everyday news-consumers, how can we sift through all the different — at times conflicting — information we receive over time?
Jennifer Schradie: Everyone comes to news and information with a different context. The focus tends to be on ideological difference (which obviously is there), but there are also things like different levels of education or internet access. We come to information in community — whether it is on public platforms or in smaller networks. Some people are aware of ways to verify whether a story is true, but I think it comes down to "what mediators do I trust?" Do I trust the New York Times or Fox News? The idea that people would have the time and resources to discern complex academic research and delve into those details is incredibly onerous and challenging. We all rely on mediation — whether it is colleagues, friends, trusted community leaders, or news sites.
So essentially, we need trusted mediators. Then it would follow that mediators need to build trust?
Exactly. That’s where things become difficult. On one hand, the internet has allowed for so much information. On the other hand, it’s contributing to us no longer having that universal outlet that the majority of us are looking to. That’s not to say the three major news networks I grew up with were perfect. They were extremely problematic in many ways. But I think trustworthy news is more important now than ever.
Many of us get our news from social media. Social media makes accessing news quick and easy, but the downside is navigating through misinformation. What are some of the ways social media incentivizes the spread of misinformation?
The short answer is it’s faster. Institutions have less of a chance to say “wait a minute…” Social media allows for a much more efficient way to spread misinformation than before. The internet is about distribution, and distribution used to be costly. Now it’s very cheap to disseminate on social media, though not everyone necessarily has an audience like those who have the most resources.
Social media is also designed for outrageous information. General things are not as sensational as the sexy, quick, dramatic information that the news media tends to want to report. That’s even more the case on social media — the more outrageous info gets passed on. It’s a toxic combination of efficiency and virality that makes for a recipe for the spread of disinformation.
You wrote an op-ed about President Trump’s response to being fact-checked on Twitter about his voter-fraud claims. There are arguments on both sides about social media fact-checking content, but what responsibility do tech companies have when it comes to misinformation or disinformation?
Platforms have a huge responsibility. I don’t think it’s the only solution and I don’t think it’s only a technological solution. It comes through legislation, which requires people to organize to make it happen. There are hundreds of researchers who are studying misinformation. It would not be that difficult for Twitter or Facebook to come up with better plans. And on top of that, they already have incredibly sophisticated algorithms that could help tackle this, but they are choosing not to because viral information is more profitable for them.
Journalists have to verify and present facts, but there is also this need for balance. The problem is these two pulls can create false equivalency. How can the media overcome the challenge of reporting truth and facts, but knowing that what we put out will be filtered and reframed on social media?
We all know about the outrage industry and what goes viral on social media. What I found in my research was that conservatives, both media outlets and grassroots activists, were much more effective in engaging online because of focused messaging. Whether or not they are accurate, simple and focused messages are more shareable. And of course, the more outrageous a message is, the more it is shared.
But the responsibility of journalists also presents the problem of false equivalency. One example is the Tom Cotton op-ed. But also, how do you report on a president who promotes a treatment or prevention for Covid-19 that is extremely dangerous? I heard it reported on NPR something like, “Trump said this. Some doctors say that is not necessarily the safest way to approach this.” To me it was the ultimate example of false equivalency. The solution is to get rid of the first statement and contextualize that doctors agree this is super dangerous. Contextualizing from the very beginning is the role of journalists, and there’s no way around that.
Whether it is about Covid-19 or reports of what is happening during the protests, are there ways in which journalists can combat “fake news,” misinformation and disinformation?
I don’t want to minimize fake news at all. It’s real and can have dangerous implications, but the idea that we are injected with information and believe it like robots no matter how ridiculous it is goes against 50 years of communication research. Using vaccinations as an example — to be clear, I support vaccines 100% — one of the things I don’t like about the framing of how to promote vaccination is that it pits science against “these crazy people.”
With the internet and social media, sometimes we are so focused on the individual, but we are all a part of communities — a church, a community group, a university, a workplace, etc.
These institutions are really key in stopping misinformation. Societal context matters.
Across the country and in different parts of the world, protests erupted over the killing of George Floyd. One troubling thing we have seen is the targeting of the free press. How has this changed the conversation about and coverage of the protests?
With the targeting of journalists, they’ve, for the first time, experienced what activists and the Black community have been saying for years about police violence. A number of journalists have reported being specifically targeted, not just caught in the crossfire. There’s a different conversation now that reflects a major shift since journalists have been targeted. The fact that so many New York Times journalists spoke out en masse against the Cotton op-ed is unheard of. It will be interesting to see how media institutions wrestle with this when they have people on the ground facing police brutality.
This interview has been edited and condensed for clarity.