Social Media Broke America. Here’s How to Fix It.

This is the draft of the op-ed I first sent to the Boston Sunday Globe for publication in its Ideas section. It was never published in this form.

Senator Daniel Patrick Moynihan once said, “Everyone is entitled to his own opinion, but not his own facts.” But that was before social media. Now, thanks to social media, each of us can have our own set of facts. The completely different sets of facts – and falsehoods – that social network sites show to conservatives and liberals have deepened and hardened the partisan divide in America. Social media is a toxic force attacking our shared reality and making the nation virtually impossible to govern. Unless we channel and regulate its power, our common idea of America is doomed to shatter.

Three companies control much of the news America consumes. They are not media companies like Fox or Disney or CNN. They are social networks: Facebook, with its main site and its subsidiaries Instagram and WhatsApp; Google, which owns YouTube; and Twitter. According to the Pew Research Center, 72% of Americans now use social media and 59% say they get their news there. But the news they view is skewed. Conservatives see content that other conservatives liked; liberals see liberal content. This bias creates “filter bubbles” – separate realities composed of completely different sets of facts. False information from sites without journalistic standards also spreads quickly within those bubbles. That false and polarized information amplifies the divisions in the electorate.

Eli Pariser, who eloquently described this problem nine years ago in his book The Filter Bubble, agrees that the problem is worsening. “When you have this much disagreement about who even won the election,” Pariser says, “that’s a pretty good sign that we are polarized.”

Current efforts to rein in these companies’ power focus on breaking them up. Last week the US Federal Trade Commission and attorneys general in 46 states (including Massachusetts) initiated antitrust suits against Facebook, accusing it of anticompetitive behavior. The FTC took similar action against Google earlier this year. Massachusetts Senator Elizabeth Warren and David Cicilline, U.S. Representative from Rhode Island, have called for undoing Facebook and Google’s acquisitions. But while these actions may punish the companies, they won’t change the algorithms. We’ll just have more, slightly smaller social networks with their own sets of divisive filter bubbles spreading false and extreme content.

How big is the problem?

To start, recognize that social network sites are literally addictive. When you see a notification, a “like,” a favorite, or a share, your brain generates a small hit of dopamine – the same neurotransmitter that mediates chemical addictions. The designs of social networks amplify anything that boosts interaction, such as features that raise the visibility of popular posts, suggest “what to watch next,” and offer an endless scroll of stimulating, infuriating, and ultimately addictive content — as anyone who’s ever picked up their phone to pass a little time and found half an hour inexplicably gone can testify. This is deliberate. Speaking to an audience of Stanford students, the former vice president of user growth at Facebook, Chamath Palihapitiya, explained, “I feel tremendous guilt . . . The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

Filter bubbles exist because these companies have determined that we’re more likely to stay engaged if we view content that taps into our biases – and the more outrageous, the better. Aggregate thousands of unvetted content sources and millions of users and you’ve created a powerful machine that amplifies our differences and minimizes our commonalities.

The pernicious results are many.

There’s the viral spread of outlandish conspiracy theories like QAnon, which claims that anti-Trump Democrats are part of a pedophile ring, or the video “Plandemic,” which claimed that COVID-19 was engineered in a lab to generate profits.

There’s the emergence of groups that incite violence against protestors. Kyle Rittenhouse, the shooter who killed two people during protests in Kenosha, Wisconsin, was allegedly a member of one such group.

There’s the impact on elections. In the ten days before the 2016 US election, according to Oxford University researchers, voters in eleven swing states received more “fake, junk, and hyper-partisan information” on social media than professionally produced news.

There’s the divorcing of “facts” from sources, which obscures the questionable nature of much of the content. According to Pew, people who consume news via social media can identify the originating source only 56% of the time. Researchers at Columbia University showed that 59% of links shared on Twitter weren’t even clicked – they were shared simply on the basis of their outrageous headlines. Northwestern University researcher Jacob L. Nelson found that 27% of traffic to fake news sites came from Facebook, compared to only 8% for legitimate news sources.

George Colony, CEO of Cambridge-based Forrester Research, has closely observed the behavior of tech companies for nearly 40 years. He characterizes the result of social media as, not artificial intelligence, but “artificial stupidity.” “Facebook and Twitter have been studying addiction for a decade and a half. The primary result of driving this addictive behavior online is that it amplifies your biases, amplifies your viewpoints, and doubles down on your beliefs,” he says. “It is a narrowing of the American mind.”

Philip M. Napoli, a professor of public policy at Duke University’s Sanford School and author of Social Media and the Public Interest, calls the current scenario “the spiral of partisanship.” And he points out that social media doesn’t just amplify the most partisan parts of the likes of Fox News and MSNBC, it contributes to them. Fringe content spreading on social networks may originate on sites like The Daily Wire or Occupy Democrats, alongside posts from prominent “thought leaders” or washed-up actors, and then percolate up into opinion segments on partisan news networks. Then clips from those networks are once again recycled through the self-reinforcing funnel of social media.

Those who’ve watched social media since its inception are horrified. Charlene Li, founder of Altimeter Group and my coauthor on the 2008 social media bestseller Groundswell, recalls that in Facebook CEO Mark Zuckerberg’s first briefing, he kept repeating, “Facebook is a utility. It doesn’t know good from bad,” and then he explained that how people use it was not Facebook’s problem. According to Jonathan Taplin, director emeritus of USC’s Annenberg Innovation Lab and the author of the tech-skeptic book Move Fast and Break Things, “YouTube is the worst in terms of policing its material; it’s a cesspool and they couldn’t give a f— what’s going on.” Just before the 2020 election, Kate Starbird, a University of Washington professor of human-computer interaction who tracks social media disinformation, reported that she had watched false content shared tens of thousands of times on social networks before the companies made any visible effort to address it. As she tweeted, “By the time we [can refute disinformation] — even in cases where platforms end up taking action — the false info/narrative has already done its damage.”

Spokespeople for Google and Twitter pointed to efforts they’d made to reduce partisan extremism and the spread of fake news. Google’s spokesperson explained that YouTube’s recommendation system now tends to go from extreme content to more mainstream content, not the reverse, and that content that comes close to violating guidelines is only a fraction of 1% of what’s watched or recommended. Twitter’s spokesperson pointed out that the company adds warnings on misleading information from political figures like the president, and that tweets with such labels are “de-amplified” in the company’s recommendation systems. (There’s some evidence that users are more, not less, likely to spread content that’s labeled questionable.)

But despite years of promises to improve things, social media sites are both addictive and toxic to the healthy, multi-perspective discourse that is necessary for America to continue as a viable, unified nation. It’s time for regulators to take action.

Fortunately, there is a simple solution that won’t require years-long antitrust actions, abrogating the First Amendment, killing the sites altogether, or even meddling in their algorithms. And there’s a precedent.

In 1949, the US Federal Communications Commission created the Fairness Doctrine, requiring radio (and eventually television) broadcasters who expressed a point of view to allow “qualified representatives of opposing viewpoints” airtime as well. If you watched local TV in the 60s, 70s, and 80s, you saw such segments on the local news. The rationale was that with only a few channels to choose from, the public deserved more diversity of perspective. If a broadcaster failed to live up to the doctrine, it was at risk of losing its government broadcast license, ending its ability to operate.

The Fairness Doctrine was suspended in 1987; with hundreds of channels and thousands of Web sites now slinging news, TV no longer needs it. But there is only one Facebook, only one YouTube, and only one Twitter, and their algorithms together ensure that each individual sees mostly news and opinion consonant with their own biases – the same lack of diversity that the Fairness Doctrine was intended to resolve.

For social networks there is no broadcast license to suspend. But there is another source of leverage and rationale for regulation. “Section 230” is the provision of the Communications Decency Act that enables social networks to operate at all. It creates a liability shield so that sites like Facebook that post user-generated content can’t be held responsible for what appears there. Without it, Facebook, Twitter, Instagram, YouTube, and similar sites would get sued every time some random Karen posted that Mike Pence was gay or their neighbor’s Christmas lights were part of a satanic ritual.

Both President Trump and President-Elect Biden have called for revoking Section 230, but that would likely shut down not only social network sites but any place people can post content on the Internet, like community message boards and knitting groups on Pinterest (as well as the online comment section of this newspaper). The solution is not to cancel Section 230, but to make social networks pay for it – by increasing diversity of content.

Historically, Supreme Court opinions regarding First Amendment protections for problematic speech have recommended, not shutting it down, but stimulating “counterspeech.” Justice Oliver Wendell Holmes wrote, “The ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” And Justice Louis Brandeis suggested, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”

It is within our power to do this. And people want it: according to Pew, 62% of people say social media companies have too much control over the news people see, and 55% say the result is a worse mix of news.

Last year, Facebook generated $70 billion in advertising; YouTube, around $15 billion; and Twitter, $3.5 billion. We already know that these social networks have calculated which users are conservative or liberal – that’s how they create the filter bubbles. So let’s require them to set aside 10% of their total ad space to promote diverse sources of content. What would happen if the FCC required Facebook’s targeting algorithm to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals? The result would be a sort-of tax, paid in advertising, to compensate for the billions these companies make under the government’s generous Section 230 liability shield, and intended to counteract the toxicity of their algorithms.

Ads work, which is why media companies already pay millions to advertise on social networks. But these ads would be different. MSNBC, offered free ad units for conservatives, would not show the same old liberal content – because no conservative would ever click on that. Instead, it would have to figure out how to create and advertise content attractive and interesting to those outside its natural audience. Fox News would grapple with the same conundrum from the opposite side of the political spectrum. The net result would be outreach on the values and content that people share, rather than the values and content that divide them.

If you wonder if such content could make a difference, consider the experience of Diane Hessan, who has been tracking and discussing politics with a diverse collection of 500 voters and writing about it in the Globe’s opinion pages for the last four years. Hessan told me that her voters have sent her thousands of pieces of “news” from sites with obvious biases like the Gateway Pundit or the California Globe. But when she raises appropriate questions about the credentials and veracity of content on those sites and responds with links revealing facts that contradict them, her voters are often shocked enough to consider a different perspective. “Holy cow, I had no idea,” they say.

Because a counterspeech advertising program leverages – rather than upends – these sites’ algorithms, it’s likely to be far more palatable to tech giants than more drastic alternatives, like repealing Section 230. Because it fits neatly into the existing site experience, it won’t disrupt users. And because it connects Americans to content that challenges the one-sided narrative, it has the potential to take us back to a shared reality, instead of each of us having our own filtered set of facts.

This is a path back to sanity from the toxic shattering of reality that Facebook, Twitter, and Google have promulgated. We can get to work on it as soon as Joe Biden’s FCC commissioners are confirmed. And we can put the power of social networks to work pulling the nation together, instead of tearing it apart.

Josh Bernoff is the author or coauthor of six books on media and marketing. He blogs daily at bernoff.com.


9 Comments

  1. As someone who occasionally raged at Globe editors during my 22 years there for slashing twl after twl of my copy, I think I get the rationale for how this evolved to the terrific essay they wound up publishing Sunday: Assume Ideas readers know about filter bubbles, addictive, polarizing social media crap like QAnon, and the amoral greed of Facebook et al. and get right into Josh’s Solution for Social Media.
    If it wouldn’t jeopardize your ongoing relationship with the Globe, it would be interesting to hear more about your take on their take on your draft as submitted. Clearly a lot of great reporting and smart POV from thought leaders got left on the floor, but the net result was a focused essay that’s earned huge engagement with readers. Two and three-quarters cheers for Globe Editors on this one.

    1. I’ll get into the editing process in tomorrow’s post. I believe this is an excellent essay, and I also believe the essay that they published is excellent. Which is better? That’s up to the reader to decide, and now that both have been published, they can decide for themselves.

  2. Your cure is novel and has the substantial advantage of not being worse than the disease.

    However, to quote the old cartoon Pogo, “We have met the enemy and he is us.”

    Social media is simply a mirror that reflects humanity. The image is ugly, ignorant, hateful and profane. The lesser angels of human nature come crawling out on social media like cockroaches in the night.

    There are matters of fact and matters of opinion. Infectious diseases are matters of fact and a perfect example of a topic where I favor aggressive censorship. However the anti-vax movement is powerful and has the Constitution on its side.

    Political and economic matters and even climate science are more debatable. Beliefs in these areas are more emotional, analogous to religion or sports fandom, and less susceptible to rational arguments based on empirical evidence.

    US politics have devolved to a winner-take-all tenor. I often suggest a thought experiment. Take a statement, like “The stimulus payments should be increased to $2,000 from $600” and ask people if they agree or disagree. Then attribute it to Trump and see how many people change their view.

    There is much ad hominem aspect to public opinion that is immune to facts. So it has been since the Stone Age and so it shall remain in the Silicon Age. Changing the human race must be done from within. In less scientific times, this was the role of religion, now largely banned from the public square, although I was somewhat startled to hear a religious reference from the President-Elect yesterday. No doubt, the secularists are setting social media ablaze as they did when Trump held up a Bible. Or are they?

  3. “And Justice Louis Brandeis suggested, ‘If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.'”
    – – –
    I would say that a Fairness Doctrine approach could be very helpful, but that giving falsehoods a broader platform could easily cancel any positive effects. Fact-checking must be performed, and outright lies need to be blocked. Simply labeling them only encourages the gullible to spread them, according to studies. Many readers would be immune to any sort of discussion, since for them, it’s all about the confirmation bias. I’m a staunch advocate for free speech, but too many have been falsely screaming “fire!” in our crowded theater for too long. Social media has been extremely weak in terms of countering lies – both grandiose and insidious.

    And why ads? Why not articles? If the platforms are forced to display out-of-bubble ads, might they not actually profit from this “tax”? But I suppose the compliance algo could determine that through percentages.

    Overall, great article.

    1. I thank you for a thoughtful comment.

      You suggest “Fact-checking must be performed, and outright lies need to be blocked.” But this passive voice comment reveals the challenge with such a program: who is to determine what constitutes a lie? Lies are not as easy to identify and flag as you might think. The current warning labels only flag a small proportion of lies — those associated with the election. A program of blocking lies is very complex and difficult to implement, and the devil is in the details.

      You say “Why ads? Why not articles?” But no one proposes posting full articles on sites like Twitter and Facebook. Instead, we post links to articles. A link to an article is functionally the same as an ad. So I don’t think we’re talking about different things.

      If platforms display such ads — and if the law requires them to do so, without the advertiser having to pay — then I don’t see how they’re going to profit. There’s no profit in free ads.

  4. At first glance, applying this reformulation of the fairness doctrine to social media sounds like a solution. However, the details get ugly in the context of factions that confuse opinion with fact. If my chosen bubble is authors that cite primary sources, does the fairness doctrine dictate that Facebook supply me with 10% unsupported bullshit to balance out my predilection for facts? If I “hide” any ads from the Epoch Times, does that mean they have to be presented to another person until a sucker is found?

    On the positive side, as an advertiser, I would not be opposed to having some known portion of my ads presented to people outside of my target audience for a lower rate. Assuming the medium provides me with the same ability to track interest, there is value in the opportunity to identify new niches.