Social Networks

Can technology help stop fake news spreading?

Blamed for everything from Brexit to the 2016 US presidential election result, fake news is increasingly seen as a genuine danger to democracy and social stability. Technology absolutely contributes to the problem -- but on the flip side, it can help us combat fake news and misinformation, according to graph database expert Emil Eifrem, co-founder and CEO of Neo4j.


Fake news, it seems, is everywhere – a worrying trend that has crept up on us and now threatens everything from democracy to world peace.

Fake news is entirely false information, photos or videos created on purpose to confuse and misinform the public. And it isn’t just consumers that fake news has fooled. The White House Director of Social Media shared a video purporting to show Miami International Airport severely flooded after being hit by Hurricane Irma in 2017. It was actually footage of Mexico City Airport after Tropical Storm Linda, which devastated the Mexican capital.

Technology is undoubtedly a major contributing factor in the fake news phenomenon. Technology and social media have made it easy, convenient and cheap to create and disseminate information; suddenly we all have the power to report and publish. Technology companies, however, are battling to fight the fake news onslaught. Facebook, with its 2.2 billion users, has hired more human moderators and is looking to provide more information on news sources in its ongoing fight against fake news, for example. At the same time, YouTube is looking to better moderate video content using enhanced algorithms. But the fake news still keeps coming.

The big problem with fake news is that it comes in many guises – from different governments, countries and individuals with varying agendas. The sheer volume of fake news and fake news sites has created a skeptical public with less trust than ever in the mainstream media. Technology may have helped spawn the fake news epidemic, but ironically it may also be the answer to checking it.


Measures to counter fake news

Spotting fake news isn’t as easy as it sounds, but graph databases may be the answer. With a unique ability to navigate and map relationships in data, graph databases have shown their capabilities in several high-profile investigative journalism projects, including the Panama and Paradise Papers[1].
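To make the idea concrete, here is a minimal sketch of the kind of relationship-traversal question a graph database answers natively: is there any chain of relationships connecting two entities? The officers, companies and addresses below are invented for illustration, loosely in the style of the offshore-leaks data model, and plain Python stands in for an actual graph database:

```python
from collections import deque

# Invented mini-graph: nodes are officers, companies and addresses;
# directed edges are relationships between them.
edges = {
    "Officer:J. Doe": ["Company:Shell Ltd"],
    "Officer:A. Smith": ["Company:Shell Ltd"],
    "Company:Shell Ltd": ["Address:PO Box 1, Panama"],
    "Address:PO Box 1, Panama": [],
}

def connected(start, goal):
    """Breadth-first search: is there a chain of relationships
    linking the two entities?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(connected("Officer:J. Doe", "Address:PO Box 1, Panama"))  # True
```

A graph database such as Neo4j expresses the same question declaratively as a path-matching query and scales it to millions of nodes; the point is that the data model itself is the network of relationships.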

More recently, graph analysis was applied to tweets from Russian Twitter trolls to uncover how the accounts were rapidly spreading fake news. During the 2016 US presidential election, Twitter was used to propagate fake news, presumably to influence the final outcome. The House Intelligence Committee released a list of 2,752 false Twitter accounts it believed were operated by a known Russian troll factory, dubbed the Internet Research Agency.

Despite these accounts and tweets having been removed from Twitter, journalists at NBC News managed to reconstruct a subset of the tweets, using a graph database to link up the connections and see exactly what had been happening, where and how.

A couple of days later, Special Counsel Robert Mueller indicted 13 Russians and three Russian entities, including the Internet Research Agency, in an alleged conspiracy to defraud the United States, including meddling in the 2016 presidential election.

The NBC reporters open-sourced the data to enable others to learn from the dataset, and to encourage those who have been caching tweets to contribute to a combined database.

The big question is what was behind this elaborate fake news campaign. Using a graph database, the NBC reporters were able to seek out hidden connections between Twitter accounts, posts, hashtags and websites. The analysis worked in a number of ways. False Twitter accounts, with fake profiles posing as regular US citizens, used common hashtags. Reply tweets were posted to popular accounts to gain visibility and quickly pick up followers.
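One of those hidden connections – accounts linked only through the hashtags they share – can be sketched in a few lines. The account names and hashtags below are invented; in the real investigation this linking was done in a graph database over the full tweet archive:

```python
from collections import defaultdict
from itertools import combinations

# Invented toy records standing in for the reconstructed tweet archive.
tweets = [
    {"account": "patriot_votes", "hashtags": ["MAGA", "Election2016"]},
    {"account": "tn_daily_news", "hashtags": ["Election2016", "BreakingNews"]},
    {"account": "real_us_voter", "hashtags": ["MAGA"]},
    {"account": "ordinary_user", "hashtags": ["Cats"]},
]

def shared_hashtag_pairs(tweets):
    """Return account pairs linked by at least one common hashtag –
    the kind of indirect connection a graph query surfaces directly."""
    by_tag = defaultdict(set)
    for t in tweets:
        for tag in t["hashtags"]:
            by_tag[tag].add(t["account"])
    pairs = set()
    for accounts in by_tag.values():
        for a, b in combinations(sorted(accounts), 2):
            pairs.add((a, b))
    return pairs

print(shared_hashtag_pairs(tweets))
```

On this toy data, `patriot_votes` and `real_us_voter` come out linked through the shared hashtag, while `ordinary_user` is connected to nobody – which is exactly the separation an analyst is looking for.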

Another breed of Russian troll accounts professed to be local news outlets; these accounts magnified reports of violence. Finally, other Russian trolls set themselves up as seemingly genuine local political parties.

By analyzing the data, the NBC reporters were able to see that most of the original tweets in the Russian troll network were written by a very small number of people. Each cluster concentrated on a particular political group – those with left or right leanings, for example. Only around 25% of the tweets were original; the rest were retweets designed to amplify the messages across the Twitter network. Some accounts averaged seven tweets a day; others trebled that figure.
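Aggregate figures like these – the share of original tweets versus retweets, and how few authors wrote the originals – are straightforward to compute once the tweets are collected. The records below are invented toy data, not the NBC dataset:

```python
from collections import Counter

# Invented toy records; the real reconstructed archive was far larger.
tweets = [
    {"author": "troll_a", "is_retweet": False},
    {"author": "troll_a", "is_retweet": False},
    {"author": "troll_b", "is_retweet": True},
    {"author": "troll_c", "is_retweet": True},
    {"author": "troll_d", "is_retweet": True},
    {"author": "troll_e", "is_retweet": True},
    {"author": "troll_b", "is_retweet": True},
    {"author": "troll_c", "is_retweet": True},
]

def original_share(tweets):
    """Fraction of tweets that are original rather than retweets."""
    return sum(not t["is_retweet"] for t in tweets) / len(tweets)

def top_original_authors(tweets):
    """Count originals per author – revealing how few people
    actually wrote the content everyone else amplified."""
    return Counter(t["author"] for t in tweets if not t["is_retweet"])

print(original_share(tweets))        # 0.25 for this toy set
print(top_original_authors(tweets))  # all originals from one author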


Lessons learned

The Russian troll factory shows what a small group of people can achieve globally using the power of social media. But what can governments and social media platforms learn from this to stop it happening again? Firstly, it is a matter of connections: in our hyper-connected digital world it is almost impossible to spot relationships in datasets without a technology specifically designed to surface them. Secondly, it is imperative that once these connections have been identified, the patterns of behavior behind them can be understood.

By adopting a so-called ‘connections-first’ approach to analyzing these highly complex datasets, governments and technology companies can proactively work to intercept and prevent this dangerously intrusive phenomenon before it has a chance to feed prejudices, topple governments and derail our societies.


