FiveThirtyEight made suggestions on how to fix Facebook. Would they work?

On fivethirtyeight.com, Kayleigh Rogers wrote a piece called “Facebook’s Algorithm Is Broken. We Collected Some Suggestions On How To Fix It.” Let’s try to figure out whether these suggestions would work.

I’ll analyze each suggestion in the article from four perspectives:

  1. Would it solve Facebook’s major problems right now, specifically the viral spread of misinformation and the proliferation of hate and divisiveness?
  2. Would Facebook management consider it?
  3. How would Facebook’s algorithm react? (As I’ve written before, the algorithm is really running the show at Meta — the management is focused on giving the algorithm what it wants.) In this analysis, “what the algorithm wants” is my way of describing changes that would retain the simple, unfettered social spread that currently characterizes Facebook.
  4. Could the government include these suggestions as part of the way that it regulates social media?

Limiting the number of reshares

Noah Giansiracusa at Bentley University suggested limiting the number of times any article can be reshared, as a way of limiting disinformation. Karen Kornbluh of the Digital Innovation and Democracy Initiative at the German Marshall Fund suggested limiting shares for content that spreads quickly, since such rapid spread is common for misinformation. And Laura Edelson, a Ph.D. student at NYU, suggested slowing the spread of content that scores higher on downstream interactions (that is, long chains of shares).
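
To make the mechanism concrete, here's a rough sketch of what a reshare-depth cap and a velocity brake might look like. The names and thresholds are invented for illustration; Facebook's actual ranking code isn't public.

```python
# Illustrative sketch only: field names and thresholds are invented,
# since Facebook's actual ranking code isn't public.

MAX_RESHARE_DEPTH = 2      # Giansiracusa: cap the length of any reshare chain
VELOCITY_LIMIT = 1000      # Kornbluh: brake content that spreads unusually fast

def can_reshare(post):
    """Disable the reshare button once a chain reaches the cap."""
    return post["reshare_depth"] < MAX_RESHARE_DEPTH

def spread_multiplier(post):
    """Throttle distribution of content that is reshared unusually quickly."""
    if post["reshares_last_hour"] > VELOCITY_LIMIT:
        return 0.5     # show it to half as many feeds as the ranker otherwise would
    return 1.0

viral_post = {"reshare_depth": 2, "reshares_last_hour": 4200}
print(can_reshare(viral_post))        # False: the chain is already at the cap
print(spread_multiplier(viral_post))  # 0.5: fast-spreading content gets slowed down
```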

Would it solve problems? Facebook’s internal research found that limiting deep reshares could reduce misinformation by 25% or more. So, this could actually help.

Would Facebook management consider it? Sure. The existence of the internal research shows that the company considered it.

Would the algorithm favor it? The algorithm doesn’t want to get in the way of people’s desire to share popular content. So the algorithm would resist such a change, meaning that Meta would also resist it. (The last thing they want is to throw wrenches into simple activities like sharing.)

Would the government regulate this? Doubtful. It would be very hard to legislate or regulate the number of times something is shared.

Sanction bad actors who share questionable content

Nathaniel Persily of the Stanford Cyber Policy Center suggested identifying those who repeatedly share misleading content and slowing the spread of their future posts.
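
Mechanically, this could be as simple as the sketch below, which demotes new posts from accounts whose recent shares were repeatedly flagged by fact-checkers. The field names and thresholds are invented for illustration.

```python
# Illustrative sketch only: the strike counts and penalties are invented.

def reach_penalty(user):
    """Multiplier applied to the ranking score of a user's new posts,
    based on how often their recent shares were flagged by fact-checkers."""
    strikes = user["fact_check_strikes_90d"]
    if strikes >= 5:
        return 0.2    # heavy demotion for repeat offenders
    if strikes >= 2:
        return 0.6    # mild demotion
    return 1.0        # no penalty for everyone else

repeat_sharer = {"fact_check_strikes_90d": 7}
first_timer = {"fact_check_strikes_90d": 1}
print(reach_penalty(repeat_sharer))  # 0.2
print(reach_penalty(first_timer))    # 1.0
```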

Would it solve problems? Not very well. Such users would simply re-register under new names.

Would Facebook management consider it? Yes, they likely would.

Would the algorithm favor it? Bad actors are bad for the social network as a whole. So I think the algorithm would have no problem with the idea of removing or demoting posts by proven bad actors.

Would the government regulate this? That would be tricky. I shudder to think of how the government could enforce such an idea.

Allow users more control over what content they see

Provide more granular user controls.

Would it solve problems? No. Users don’t want to spend time deciphering settings.

Would Facebook management consider it? Yes.

Would the algorithm favor it? So few people would actually manipulate such controls that the algorithm wouldn’t care.

Would the government regulate this? They could require such controls. This would feel like a good idea, but would ultimately be ineffective.

Make the algorithm prioritize positive content

Former Facebook data scientist Roddy Lindsay would like the algorithm to prioritize content that people think is “good for the world.” Harvard researcher Jinyan Zang suggested focusing on qualitative metrics (like positivity or content that people say they like) over quantitative metrics (like shares and clicks).
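
In ranking terms, this amounts to blending a survey-based quality signal into the score that decides what shows up in your feed. Here's an illustrative sketch; the weights and field names are made up.

```python
# Illustrative sketch only: the weights and field names are invented.

ENGAGEMENT_WEIGHT = 0.4
QUALITY_WEIGHT = 0.6   # survey-derived "good for the world" signal

def ranking_score(post):
    """Blend the usual engagement prediction with a qualitative survey score."""
    return (ENGAGEMENT_WEIGHT * post["predicted_engagement"]
            + QUALITY_WEIGHT * post["survey_quality_score"])

rage_bait  = {"predicted_engagement": 0.9, "survey_quality_score": 0.1}
local_news = {"predicted_engagement": 0.3, "survey_quality_score": 0.8}
print(round(ranking_score(rage_bait), 2))   # 0.42
print(round(ranking_score(local_news), 2))  # 0.6 -- the "good for the world" post wins
```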

Would it solve problems? Potentially. It would cause more positive content to spread.

Would Facebook management consider it? I think this would be a relatively popular fix with Facebook management. The challenge is how to define such content algorithmically.

Would the algorithm favor it? In theory, this is just another knob to twist on the algorithm. But good content spreads slower than divisive content, and the algorithm prefers things that spread faster. So in the end, the algorithm would find such interference to be an obstacle to the simplicity of its function.

Would the government regulate this? God help us if the government is deciding which content is positive.

Get rid of the algorithmic newsfeed

Whistleblower Frances Haugen suggested going back to a reverse-chronological newsfeed, with the newest content first.
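
The contrast between the two orderings is easy to sketch; the field names here are invented for illustration.

```python
# Illustrative sketch only: field names are invented.

def engagement_feed(posts):
    """Roughly what Facebook does today: rank by predicted engagement."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def chronological_feed(posts):
    """Haugen's suggestion: newest first, with no ranking model at all."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": "outrage_post",    "timestamp": 100, "predicted_engagement": 0.95},
    {"id": "vacation_photos", "timestamp": 200, "predicted_engagement": 0.30},
]
print([p["id"] for p in engagement_feed(posts)])     # ['outrage_post', 'vacation_photos']
print([p["id"] for p in chronological_feed(posts)])  # ['vacation_photos', 'outrage_post']
```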

Would it solve problems? Yes, because pernicious viral content would have no advantage over other content.

Would Facebook management consider it? A Facebook newsfeed defined this way would get fewer interactions and be less addictive — and that means people spending less time on Facebook. So they’d be against it.

Would the algorithm favor it? Never. The algorithm will not step aside in favor of a simple chronological newsfeed.

Would the government regulate this? It could. But defining which sorts of algorithmic rankings are permissible and which are not is probably too technical a question for government regulators to rule on.

Change Facebook metrics

Former Facebook integrity staffers Jeff Allen and Sahar Massachi suggest changing Facebook’s metrics to focus on something other than user engagement.

Would it solve problems? This is like the mother of all algorithm tweaks — it encompasses all the others. While the devil is in the details, it’s certainly possible to imagine a set of metrics that would reduce viral spread of misinformation, hatred, and anger.

Would Facebook management consider it? Not in a million years. Less engagement means less time spent on Facebook, which means less profit.

Would the algorithm favor it? A new algorithm would replace the old one. That’s an awfully big change. The current algorithm wouldn’t favor this.

Would the government regulate this? It’s hard to see how. Again, this level of detail is beyond the blunt tools that governments typically use.

In summary, few of these changes would be likely to work in practice

I could see limiting the length of resharing chains, which would help a little, and punishing people who regularly share misinformation by limiting the reach of their posts.

I still believe other changes would work better: opening up Facebook’s algorithm to academic research so researchers can better understand the sources of its toxic effects; implementing a fairness doctrine to show people political content that they disagree with; prosecuting Facebook for withholding material financial information; and greatly increasing Facebook’s spending on content moderation.

FiveThirtyEight’s suggestions are too wimpy. It’s time to get serious here, and give the government the tools to stop the worst of what Facebook is helping to spread.

