Facebook is secretly rating your trustworthiness. Excellent!

Photo: Terry Johnston via Wikimedia Commons

According to the Washington Post, Facebook is rating your trustworthiness on a zero-to-one scale. Not only that, they won’t tell you how they rate people. Here’s why that’s a necessary first step in squashing the spread of fake news.

First, let’s get this out of the way. Facebook is biased. All social media and search algorithms are biased. They make choices, and those choices reflect the choices of the people who designed them, consciously or unconsciously.

The Facebook algorithm’s primary bias is towards sharing. In other words, Facebook replicates each user’s own bias. If you love dogs, your feed is full of dogs. If you love Trump, your feed is full of pro-Trump messages. That maximizes viewing and addiction, and therefore ad value. It ignores truth.

In my opinion, Facebook should have a bias toward truth. It should show more of what’s true, and eliminate what’s false.

But in this world of “alternative facts” and “truth is not truth,” does the word “truth” have a meaning any more?

Already, Facebook’s algorithm demotes articles that fact-checkers have debunked. But there are many more articles than fact-checkers, so that’s not enough.

So why not have people identify what is true? Well, it turns out, like our President, people on Facebook like to call things “fake news” if they disagree with them. For example, Trump’s former attorney Michael Cohen said on Tuesday that Trump approved payments to keep women from talking about affairs with him. Cohen’s statement happened — it is a fact. But if you are pro-Trump, you might report an article about Cohen’s statement as fake, even though it happened. Apparently, according to the article in the Post, this is exactly what is happening. Here’s what Facebook product manager Tessa Lyons told the Post:

“I like to make the joke that, if people only reported things that were false, this job would be so easy!” Lyons said in the interview. “People often report things that they just disagree with.”

So some people report news as false because it contains facts that are wrong, and others report it as false simply because they disagree with it. The former are helping with fact-checking and identifying truth; the latter are just reinforcing Facebook’s tendency to show you what you believe, rather than what is true.

So, how to tell the difference?

Rate people on the trustworthiness of their reporting: are the items they flag as “false news” all just articles they disagree with, or do they flag only articles that contain actual falsehoods?

The fact that Facebook is doing this is not only unsurprising, it’s inevitable. If you’re a conspiracy theorist or a partisan reporting everything you disagree with, your reports aren’t useful. If you carefully check what you report, your reports are useful. For Facebook to have a bias toward truth, it must have a bias toward trusting people who have shown they can be trusted.
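
To make that concrete, here’s a rough sketch of how report-weighting could work. This is my own illustration, not Facebook’s actual code; the function, scores, and threshold are all made up.

```python
# Hypothetical sketch: weight "false news" reports by each reporter's trust
# score before deciding whether an article deserves a fact checker's time.
# All names, scores, and thresholds are illustrative, not Facebook's.

def should_fact_check(
    reporter_ids: list[str],
    trust_scores: dict[str, float],  # per-user score on a zero-to-one scale
    threshold: float = 3.0,          # arbitrary weighted-report threshold
) -> bool:
    """Send an article to review only if trusted users are flagging it."""
    # A report from a 0.9-trust user counts almost as a full report;
    # an unknown or 0.02-trust user barely moves the needle.
    weighted_reports = sum(trust_scores.get(uid, 0.1) for uid in reporter_ids)
    return weighted_reports >= threshold

# Four careful flaggers outweigh a hundred partisans piling on.
careful = {"alice": 0.95, "bob": 0.9, "carol": 0.85, "dave": 0.9}
partisans = {f"troll{i}": 0.02 for i in range(100)}
print(should_fact_check(list(careful), careful))      # True  (weight 3.6)
print(should_fact_check(list(partisans), partisans))  # False (weight 2.0)
```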

The final and most controversial element of this scheme is that the trust algorithm is secret. People get upset about this, but it’s also inevitable. Facebook can’t reveal the elements of its trustworthiness score, or you’ll find a way to game it, especially if you’re working for a foreign government hoping to influence people in US elections.

But while the trustworthiness score is secret, it’s pretty obvious how it will be calculated.

If you have never flagged an article as false, or you’ve only just started, your trustworthiness score is low: Facebook has no evidence yet about whether to trust you.

If the articles you’ve flagged have mostly turned out to be false, as confirmed by other users, independent fact-checkers, and Facebook’s own fact-checking staff, then your trustworthiness score will be high.

If the articles you’ve flagged have mostly turned out to be true according to the fact-checkers, your flagging is a misleading signal. Your trustworthiness score will be near zero.
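
Those three cases boil down to a simple calculation: with no history you start low, your score climbs as fact-checkers confirm your flags, and it collapses as your flags keep turning out to be true. Here’s a sketch of that logic; the shape of the formula and every constant in it are my assumptions, not anything Facebook has published.

```python
# Hypothetical sketch of a zero-to-one reporter trust score.
# confirmed = flags that fact-checkers agreed were false news;
# rejected  = flags on articles that turned out to be true.
# The prior and its weight are illustrative assumptions, not real parameters.

def trust_score(confirmed: int, rejected: int, prior_weight: int = 5) -> float:
    """Fraction of a user's flags that held up, smoothed toward a low prior."""
    prior = 0.2  # what a brand-new flagger starts at: no evidence, little trust
    total = confirmed + rejected
    # Smoothing means a couple of lucky flags can't buy a high score.
    return (confirmed + prior_weight * prior) / (total + prior_weight)

print(trust_score(0, 0))    # 0.20 -> never flagged: Facebook can't tell yet
print(trust_score(40, 2))   # 0.87 -> flags mostly confirmed false: high trust
print(trust_score(1, 30))   # 0.06 -> flags keep turning out true: near zero
```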

It is possible to game this system: you just have to behave like a reasonable and careful flagger for a while, and then suddenly go rogue. But building that reputation takes time, and once you go rogue, your score will drop quickly. So gaming the system will be slow and mostly ineffective. That’s the way Facebook wants it.
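
One way to make a score drop quickly after a rogue turn, and this is purely my speculation about the design, is to weight recent flags more heavily than old ones, so years of careful flagging can’t buy cover for a sudden burst of bad reports. A sketch:

```python
# Hypothetical sketch: recency-weighted trust, so a sudden run of bad flags
# drags the score down fast despite a long good history. The decay rate and
# the weighting scheme are my assumptions, not a disclosed Facebook design.

def recency_weighted_trust(flag_outcomes: list[bool], decay: float = 0.9) -> float:
    """flag_outcomes is oldest-to-newest; True means fact-checkers confirmed the flag."""
    weight, good, total = 1.0, 0.0, 0.0
    for confirmed in reversed(flag_outcomes):  # newest flags get the largest weight
        good += weight * confirmed
        total += weight
        weight *= decay
    return good / total if total else 0.2  # no history: same low default as above

history = [True] * 50    # a long record of accurate flagging...
print(recency_weighted_trust(history))   # 1.00
history += [False] * 10  # ...then the account goes rogue for ten flags
print(recency_weighted_trust(history))   # ~0.35: the score collapses quickly
```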

I predicted this exact scenario

In December of 2015, almost three years ago, I suggested how Facebook could implement a truth icon. My scheme included not only fake-news reporting, but also weighting each flag according to the trustworthiness of the person who made it, exactly as Facebook is now doing.

Two years later, when Facebook’s Tessa Lyons proposed a new system for flagging fake news, I suggested improving the solution this way: “Weight ‘Fake’ reactions higher if the person who posts them not only documents them this way, but has a history of identifying fake news from all political viewpoints, not just one side.”

And a month ago, when Mark Zuckerberg had dug himself into a hole with his comments about Holocaust deniers, I again suggested identifying the most trustworthy checkers and elevating their opinions in the algorithm.

By waiting three years before doing the right thing (and weathering multiple data scandals in the meantime), Facebook has squandered its own trustworthiness. A secret trustworthiness score for users is actually the best solution for flagging and getting rid of fake news in a world that’s drowning in it. It’s still the right solution. It’s just a shame that Facebook will have to go forward with it in an environment where its own carelessness with users’ data makes it harder for many of us to see its merit.
