News used to be reliable.
While "fake news" has always existed, the industrial democracies enjoyed a halcyon period in the second half of the 20th century when the news media produced a reliable, shared reality. People disagreed, but their opinions were mostly grounded in credible facts produced by solid journalistic standards, fact-checking and editorial integrity.
Then the web arrived, followed by the social web. Now, instead of three reputable news sources, you hear facts and ideas from thousands of sources of varying reliability. These appear before your eyeballs by invisible means -- by the compatibility of the content with the secret algorithms that determine what spreads widely and what doesn't.
A broad range of people with political, commercial or anti-social interests have been refining their techniques for gaming the social algorithms, ever accelerating the spread of fake news. The researcher danah boyd calls it "hacking the attention economy."
Why semi-fake news is the worst kind
Here's the problem: Patently false news is ridiculous and, as such, harmless. The worst kind of fake news is a Russian export called "disinformatzya."
Guardian journalist and Russia specialist Luke Harding says the Kremlin's "disinformatzya" tactics were honed by the KGB during the Cold War. They've been around for decades, but only in Russia.
What's new is that in recent years Russia has been applying "disinformatzya" to English-language audiences. We're not used to "disinformatzya," and so it's extra effective.
The goal of "disinformatzya" isn't to get people to believe lies, but to "confuse and bamboozle everybody else by floating conspiracy theories, so there are 10 different explanations for an event," Harding said in an interview with NPR's Terry Gross. The Russian government does this through the publication of semi-fake news and also "by hiring armies of trolls," which spread misinformation and disrupt online conversations with abusive, hateful posts, preventing rational conversation.
Inside Russia, "disinformatzya" seeks to erode trust in all institutions so that a "strong man" like Putin can present himself as the only solution.
Outside Russia, "disinformatzya" helps Russia destabilize and weaken rivals, while simultaneously enabling the Russian leader to convince the Russian public that the West is just as corrupt and dysfunctional as Russia is.
The worst part is that this manipulative tactic of using "disinformatzya" to deliberately create confusion and havoc works. And because of that, it's being instinctively picked up by American and European organizations, trolls and politicians.
That's what's new. That's why "fake news" is such a big problem now.
Muddying the waters further, the "fake news" label is being thrown around irresponsibly.
The National Catholic Register slammed an article in The Atlantic about the political effect of ultrasound technology as "fake news."
President Trump started calling CNN "fake news" after that network reported information the president disagreed with.
A post this week in Canada's Financial Post went full circle on "fake news." The author of the piece slams the Canadian government for an intended crackdown on "fake news," saying that the government itself spreads "fake news." The article is accompanied by a warning that the piece may itself be "fake news."
There's no question that "fake news" is a "thing" now. Australia's Macquarie Dictionary named "fake news" its Word of the Year.
CNN created a "fake news" beat position -- not to create fake news, but to research, report and expose it.
Governments in Germany, Canada, the Czech Republic and elsewhere are setting up commissions, review boards and committees to tackle the fake news problem. These efforts are well intentioned, but won't have the desired effect, because they don't affect the spread of fake news on social sites.
Fake news is technology's fault. Can technology fix it?
What Silicon Valley can do
The fake news problem has gotten so bad lately that Silicon Valley companies and others are scrambling for technology solutions to a technology-created problem. Google this week announced that it had banned some 200 publishers from its AdSense network. The bans resulted from a change in the company's policy on misleading content, which now includes a category for sites that impersonate news sites; Google didn't publish a list of the banned sites.
While the word "ban" sounds strong, the fact is that Google is merely withholding its advertising services from these sites, which are free to pursue other advertising opportunities.
Facebook reportedly updated its Trending service to reduce the viral sharing of fake news. It took an interesting approach, too: Facebook's algorithms will now exclude trending news that comes from a single source. Unless a report gets pickups (original stories based on the reporting of another publication) in other publications, it won't be allowed in the Trending section.
Unfortunately, this appears to be easy to game. Fake-news operators need only publish multiple sites and do pickups on their own content -- something many fake-news sites already do. Facebook also promised to de-personalize Trending topics so that the same list is shown to all users.
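Facebook hasn't published the details of its Trending filter, but the rule as described can be sketched in a few lines. The sketch below is hypothetical (the function name, data shape and threshold are assumptions), and it also illustrates why the rule is gameable: two sibling domains run by the same operator look like independent sources.

```python
def allow_in_trending(pickups, min_sources=2):
    """Toy single-source filter, assuming Facebook's described rule:
    a story can trend only if its pickups span at least `min_sources`
    distinct domains. `pickups` is a list of (story_id, domain) pairs."""
    domains = {domain for _, domain in pickups}
    return len(domains) >= min_sources

# A story picked up by several outlets passes the filter...
print(allow_in_trending([("story-a", "nytimes.com"), ("story-a", "wsj.com")]))  # True
# ...a single-source story does not...
print(allow_in_trending([("story-b", "lonelyblog.example")]))  # False
# ...but sibling sites owned by one operator also pass, which is the loophole.
print(allow_in_trending([("story-c", "fake1.example"), ("story-c", "fake2.example")]))  # True
```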
The French newspaper Le Monde built a database called "Decodex" containing 600 fake-news websites; it goes online in February. The database was compiled by the publication's fact checkers. Most of the listed sites are French, but foreign sites like America's Breitbart and the Daily News Bin are also included. Users can paste in URLs to see if an online news piece comes from a flagged site or, better yet, use a Chrome or Firefox extension that alerts readers with a color-coded system. Le Monde also plans to launch a Facebook Messenger bot for fact-checking.
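Le Monde hasn't published Decodex's internals, but the paste-a-URL check amounts to a domain lookup against a curated table. Here's a minimal sketch; the domain names and color categories are invented for illustration, not taken from the real database.

```python
from urllib.parse import urlparse

# Hypothetical entries; the real Decodex list is maintained by Le Monde's fact checkers.
DECODEX_LIKE_DB = {
    "reliable-paper.example": "green",   # generally reliable
    "mixed-outlet.example": "yellow",    # frequently imprecise
    "hoax-site.example": "red",          # spreads false information
}

def check_url(url):
    """Return a color code for the URL's domain, or 'gray' if the domain is unlisted."""
    domain = urlparse(url).netloc.lower()
    return DECODEX_LIKE_DB.get(domain, "gray")

print(check_url("https://hoax-site.example/breaking-story"))  # red
print(check_url("https://somewhere-new.example/article"))     # gray
```

A browser extension would run the same lookup on the current tab's address and render the color as a badge, which is presumably how the Chrome and Firefox extensions alert readers.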
This approach might be effective. Researchers at the University of Cambridge found that "inoculation" is the best way to prevent people from believing fake news. (Studies show that even after fake news is corrected, people continue to remember and believe it.) But when study participants were given true news, then warned that specific groups were circulating fake news, and only then exposed to the fake news, they continued to believe the true news. In other words, identifying fake news works only if it's identified before exposure.
A 19-year-old Stanford University student, Karan Singhal, created a "fake news detector" using artificial intelligence. It analyzes 55 different metrics, including writing style, layout and domain name. You can try it for free.
Climate scientists and technologists are also collaborating to fight fake news with a new site called climatefeedback.org. The site has climate scientists review stories about the climate. The scientist-reviewers annotate the articles with notes and links, and assign a credibility score.
The beauty of this approach is that each article is judged independently (instead of branding an entire publication as "bad"). For example, the site slams the Guardian for one article but praises it for another. Better still, the site essentially teaches media criticism and skepticism from a scientific point of view.
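The article-level score described above reduces to aggregating individual reviewer ratings. This sketch assumes a symmetric rating scale from -2 (very low credibility) to +2 (very high) and a simple average; the scale and aggregation method are assumptions, not climatefeedback.org's published methodology.

```python
def credibility_score(reviewer_ratings):
    """Average per-reviewer ratings (assumed -2..+2 scale) into one
    article-level credibility score. Raises on an empty review set."""
    if not reviewer_ratings:
        raise ValueError("need at least one reviewer rating")
    for r in reviewer_ratings:
        if not -2 <= r <= 2:
            raise ValueError(f"rating {r} outside the assumed -2..+2 scale")
    return sum(reviewer_ratings) / len(reviewer_ratings)

# Three reviewers rate an article +2, +1 and +1: a solidly credible piece.
print(credibility_score([2, 1, 1]))  # 1.3333333333333333
```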
One of the most aggressive actions comes from Snap. The company updated its content guidelines for Snapchat publishers to battle fake news, according to a report in The New York Times this week.
In advance of an expected IPO, Snap wants to cut down on clickbait. Snapchat's Discover section now bans profanity, sexual or violent content, and misleading or fraudulent headlines.
But Snap also targets fake news. The company says all content must be fact-checked and accurate and that publishers can't impersonate entities or people. It's not clear how Snap will police the guidelines. But to the extent that they are policed, they're among the most stringent of all the social sites.
Snap is doing the right thing by saying: We're a publisher. We're responsible for content. And that's the only responsible policy.
What not to do
The attitudes of Twitter and Facebook, meanwhile, are irrational and toxic. They reserve the right to ban, censor or delete any content they want -- Facebook based on its "Community Standards" and Twitter based on "The Twitter Rules." Both Twitter and Facebook tend to take action only when shamed in the court of public opinion. But when it comes to fake news, they throw their hands up and say they're not publishers and that they support free speech -- or they make minor tweaks to filtering to placate critics.
Recently, a series of analyses argued that, because fake news didn't determine the outcome of the recent election, we shouldn't worry about it. But who wants to live in a backwards world where people believe journalism is false while propaganda, conspiracy theories and fake news are truth; where scientific research is considered political, but claims by politicians about science are not?
It's time for Google, Facebook and Twitter to get serious. Fake news can and must be identified, banned, deprioritized, labeled or otherwise disempowered.
The only thing that can save us from the scourge of fake news and "disinformatzya" is Silicon Valley.