One of the more disturbing aspects of Google and Facebook’s propagation of false information when the Las Vegas shooting news first broke was Google’s explanation of how a false story naming the wrong person as the shooter rose to the top of its “In the news” section.
The false story originated on 4chan and got picked up by Gateway Pundit, a conservative website that has published false information in the past. The 4chan forum posts and the Gateway Pundit story began to show up in Google’s news area due to the way Google’s algorithm is programmed to handle news.
Google sent out a statement apologizing for the error, and gave us this explanation:
“We use a number of signals to determine the ranking of results—this includes both the authoritativeness of a site as well as how fresh it is.”
One would think that a forum site like 4chan would never pass the authoritativeness test, but that is where one would be wrong.
When Google talks about authoritativeness, it doesn’t use the standards we normally apply when judging the authoritativeness of a source. When you’re determining a source’s authority, you might look at how long it has been in business, its track record, or its expertise in its field.
But Google isn’t talking about authoritative sources. Google is talking about link authority, which is determined by the number of incoming links to a website. So, in Google’s economy, the more outlandish the claim or headline, the more authoritative a website becomes, because those are the stories people link to and share.
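To make the distinction concrete, here is a minimal sketch of the simplified notion of link authority described above: a score based purely on how many sites link in. This is a toy illustration, not Google’s actual algorithm, and the site names are hypothetical.

```python
# Toy illustration (NOT Google's actual ranking): score each site
# purely by its count of distinct inbound links -- the simplified
# "link authority" the article describes.

def link_authority(backlinks: dict[str, list[str]]) -> dict[str, int]:
    """Score each site by the number of distinct sites linking to it."""
    return {site: len(set(sources)) for site, sources in backlinks.items()}

# Hypothetical data: a rumor site shared widely in the first hour
# versus an established outlet that picks up links more slowly.
backlinks = {
    "rumor-site.example": ["a.example", "b.example", "c.example", "d.example"],
    "newspaper.example": ["e.example", "f.example"],
}

scores = link_authority(backlinks)
ranked = sorted(scores, key=scores.get, reverse=True)
# The rumor site ranks first; nothing in the score reflects accuracy
# or editorial track record.
```

Note that the score never looks at the content of a story, only at who links to it, which is exactly the gap the article is pointing at.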
A far-right or far-left website filled with conspiracy theories and misinformation can generate backlinks at a much faster pace than an actual news or information source that does research and has been in the business of journalism for several decades, because gossip moves faster than news.
Facebook’s algorithm is plagued with the same problem, only instead of backlinks, Facebook’s algorithm calls something trending based on how many people comment or share it. Again, this prioritizes outlandish gossip and clickbait over substantive and informative work, because people don’t talk about substance nearly as much as they rant about things that outrage them.
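The engagement-driven dynamic can be sketched the same way. This is a hypothetical formula, not Facebook’s actual one; the engagement weights and the decay constant are assumptions chosen only to show that such a score rewards volume and recency, never substance.

```python
import math

# Toy illustration (NOT Facebook's actual formula): a "trending"
# score built only from engagement counts and recency, as the
# article describes. Accuracy plays no role in the score.

def trending_score(comments: int, shares: int, age_hours: float) -> float:
    """Engagement decayed by age: newer, noisier posts win."""
    engagement = comments + 2 * shares            # share weight: assumption
    return engagement * math.exp(-age_hours / 6)  # decay rate: assumption

outrage_post = trending_score(comments=900, shares=400, age_hours=1)
sober_report = trending_score(comments=40, shares=25, age_hours=1)
# The outrage post scores far higher, though nothing about either
# story's truthfulness was ever checked.
```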
Google and Facebook are tech companies that, despite doing everything in their power over the last decade to limit the influence and reach of publishers, have never actually considered themselves publishers. Because of this, their approach to news comes from a tech perspective: rather than employ human gatekeepers to manage the flow of news and curate stories the way an editor would, they rely on automation.
While automation has its place — even within the world of journalism — an algorithm lacks the news judgment and discernment of a trained editor. Machines are not yet capable of the kind of critical thinking required to determine the accuracy of information, and they lack the ability to question a source. Instead, the algorithms deployed by Google and Facebook prioritize the popular, which lets unverified hearsay masquerade as news.
The solution to this problem is to employ human editors and curators to check the algorithm’s work and to add a human touch to the curation process. They don’t have to completely move away from the algorithm, but the heavy lifting needs to be done by humans. Someone has to be able to look at a story and say, “This is popular and trending, but is it true and accurate?” It takes a person to verify information, especially in a breaking news scenario.
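The division of labor the article argues for can be sketched as a simple pipeline: the algorithm only nominates candidates, and a human editor must verify a story before it is surfaced. This is a hypothetical design, not any company’s actual system; the class and function names are illustrative.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop flow (not any company's actual
# pipeline): the algorithm nominates trending candidates; only
# stories a human editor has verified go live.

@dataclass
class Story:
    headline: str
    score: float        # algorithmic popularity score
    verified: bool = False  # set only by a human editor

def nominate(stories: list[Story], top_n: int = 3) -> list[Story]:
    """Algorithmic step: rank by popularity and pick candidates."""
    return sorted(stories, key=lambda s: s.score, reverse=True)[:top_n]

def publish(candidates: list[Story]) -> list[Story]:
    """Gatekeeping step: only human-verified stories are surfaced."""
    return [s for s in candidates if s.verified]

stories = [
    Story("Unverified forum rumor", score=98.0),
    Story("Confirmed wire report", score=40.0, verified=True),
]
live = publish(nominate(stories))
# Only the verified story is published, however popular the rumor is.
```

The point of the design is that popularity decides what gets *looked at*, while a person decides what gets *published*.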
Traditional news publishers need to pay attention to this problem as well. As newsroom budgets tighten and advertising continues to decline, many publishers are looking toward automation to save the day. Watch what is happening to Google and Facebook and don’t make the same mistake. Automation has its place, but too much of it will destroy your credibility as a news organization. Google and Facebook are big enough and wealthy enough to survive that; most publishers are not.
The world still needs gatekeepers. No matter how much those outside journalism claim we’ve moved past them, the algorithmic handling of breaking news proves time and again that gatekeepers are necessary to check the rampant spread of fake news and false narratives.