Facebook, Fake News and Verification

Facebook has a fake news problem. In the aftermath of Donald Trump’s election as the 45th President of the United States, the social network has come under sustained criticism for failing to prevent the spread of misinformation and disinformation. Numerous articles have been written dissecting the phenomenon, the possible impact it had on the US election and what Facebook should be doing about it.

But what has been fundamentally missing from the discussion is the responsibility of users to verify the content they consume online, particularly on social media, where content is shared by “trusted” friends and family. At our open source intelligence (OSINT) centre, the day-to-day work involves critically evaluating publicly available information to verify the accuracy, veracity and reliability of sources and content.

Facebook has defended its platform using three important, if questionable, arguments:

  1. Facebook is a technology (not media) company. Facebook argues that its function is to connect users and that it has no responsibility to editorialise or to be the “arbiters of truth”, as Mark Zuckerberg put it. Yet Facebook has done exactly that, as numerous conservative Americans and former Facebook employees have claimed.
  2. Fake news is a tiny fraction of content posted to Facebook. In a general sense this is true: fake news is dwarfed by the other content posted by Facebook’s 1.8 billion users. However, it is clear that fake news, at times, outperformed factual content on the site.
  3. Fake news did not swing the election to Trump. Did fake news on Facebook impact the election? Almost certainly yes. Did it change the outcome? That is a more difficult question and we need more data and analysis to answer it. However, political parties, companies and other organisations that spend billions every year on Facebook advertising will be interested to hear that Zuckerberg doesn’t think Facebook content necessarily influences people.

Mark Zuckerberg posted a status addressing the problem and outlined a series of steps Facebook would take to tackle fake news, misinformation and disinformation on the site. These steps include: stronger detection, easy reporting, third-party verification, flagging fake news, stronger ad policies to undermine the economics of fake news, and more.

However, none of these proposed steps directly address the critical issue of how to encourage users to think more critically about the information they see and share online. So what can users do to verify information?

Verify the Source

Fake news on Facebook is a business, and it makes its money by attracting the greatest number of shares, likes and comments. The headline, the content, the tone and every other feature is designed to entice the user to share it. Much has been made of how Macedonia became a hub for fake pro-Trump/anti-Clinton news on Facebook in the run-up to the election.

The critical user should question everything about a piece of content, starting with the source. A fundamental part of critical analysis of sources involves asking questions like: Who posted the content? When was it posted? Does the source have a particular bias? Does it link to factual content? Does it have a reputation for fake news? This helps you understand who created the content, when they created it and why they did so.
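Some of these questions can even be checked programmatically. As a minimal illustration, and an assumption on my part rather than anything the article prescribes, the sketch below uses the third-party python-whois package to look up when a publishing domain was registered: a site created only days before it starts “breaking” news is a red flag.

```python
# Minimal sketch: how old is the domain behind a story?
# Assumes the third-party python-whois package (pip install python-whois);
# any WHOIS client would do.
from datetime import datetime

import whois


def domain_age_days(domain: str) -> int:
    """Approximate age of a domain in days, via a WHOIS lookup."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(created)
    return (datetime.now() - created).days


if __name__ == "__main__":
    age = domain_age_days("example.com")
    print(f"Domain registered {age} days ago")
    if age < 90:
        print("Caution: very new domain claiming to be a news source")
```

A young domain proves nothing on its own, but it is a strong prompt to dig further into who runs the site and why.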

Checking the source is even more important as fake news is now being produced on a mass scale through Amazon’s Mechanical Turk, a crowdsourced marketplace for work. As the screenshot below shows, users are encouraged to write in the style of Rush Limbaugh, Alex Jones and Milo Yiannopoulos and to “Have fun with this HIT. Get snarky, get mad, voice your opinion! You are a reporter now!” [my emphasis].

Amazon Mechanical Turk HIT for a new conservative news website.

Verify the Provenance

Fake news spreads because it is specifically designed to be shared, tapping into our fears, hopes, prejudices and preconceptions. Often these fake stories distort an existing factual news story; others are made up entirely. The question to ask of every story is “how do they know that?” To help you answer it there are basic things you can do, including: check the URL, check the author, search key phrases from the article to see if they have been taken from elsewhere, and reverse search images.

You can use these and other techniques to answer a fundamental question: is this an original piece of content or has it been taken from elsewhere?
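Two of those checks are easy to script. The sketch below, using only Python’s standard library, builds an exact-phrase web search for a suspicious sentence and a reverse image search for a photo; the Google and TinEye URL formats are assumptions based on their public search pages, not anything the article specifies.

```python
# Minimal sketch of two provenance checks, standard library only.
# The Google and TinEye URL formats below are assumptions based on
# their public search pages.
from urllib.parse import urlencode


def phrase_search_url(phrase: str) -> str:
    """Quoted-phrase search: has this exact sentence appeared elsewhere?"""
    return "https://www.google.com/search?" + urlencode({"q": f'"{phrase}"'})


def reverse_image_search_url(image_url: str) -> str:
    """Reverse image search on TinEye for an image hosted at image_url."""
    return "https://tineye.com/search?" + urlencode({"url": image_url})


if __name__ == "__main__":
    print(phrase_search_url("a suspiciously quotable sentence from the story"))
    print(reverse_image_search_url("https://example.com/photo.jpg"))
```

If the same paragraph turns up word-for-word on older sites, or the photo predates the event it supposedly shows, the story is not original content.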

Use Trusted Third Parties

Part of Facebook’s answer to the fake news problem is to place more emphasis on trusted third parties who verify content. Fact-checking websites such as Snopes, Full Fact and FactCheck critically evaluate news, speeches, statements and other claims to assess their veracity and accuracy. However, these sites receive hundreds of fact-checking requests every day and have only limited resources. This means users have to fact check too, using the techniques described here and in useful resources like the Verification Handbook.
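Published fact checks can also be searched programmatically. As an illustration, and again an assumption on my part rather than something the article mentions, the sketch below queries Google’s Fact Check Tools claim-search API, which aggregates reviews from fact-checking publishers and requires a free API key.

```python
# Minimal sketch: look up published fact checks for a claim before
# sharing it. Assumes Google's Fact Check Tools claim-search API
# (factchecktools.googleapis.com) and a valid API key.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def search_fact_checks(claim: str, api_key: str) -> list:
    """Return published fact checks whose text matches the claim."""
    url = API + "?" + urlencode({"query": claim, "key": api_key})
    with urlopen(url) as resp:
        return json.load(resp).get("claims", [])


if __name__ == "__main__":
    for claim in search_fact_checks("pope endorses presidential candidate",
                                    "YOUR_API_KEY"):
        review = claim.get("claimReview", [{}])[0]
        print(claim.get("text"), "->", review.get("textualRating"))
```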

FactCheck’s short video on how to spot bogus claims summarises the key points of verifying content.

With Oxford Dictionaries announcing “post-truth” as its word of the year, and with misinformation and disinformation a daily fact of life, users face the challenge of separating fact from fiction, often with little or no training in OSINT. It is tempting to simply rely on news organisations, social networks and other people to do the hard work of verification for us. But as the Facebook case shows, we cannot blindly trust these platforms to verify every piece of content.

The task for all of us is to think more critically about the information we consume. Instead of “you are a reporter now” we should be thinking along the lines of “you are a fact checker now.”
