Why the fake news problem needs an AI and a blockchain solution

AI and blockchain have huge potential in the fight against misinformation and fake news.


The Russian invasion of Ukraine has given new impetus to online fake news, to the extent that anti-misinformation firm NewsGuard is currently tracking 201 sites that it claims are spreading myths about the war. It’s not a new problem. War, of course, only accelerates it, and the tactic is as old as the hills: the British famously set up a transmitter called Aspidistra in 1942 to broadcast programmes intended to convince the German people that the war was going badly for their country.

The problem is that misinformation appears to be growing, and it’s not all about Russia and Ukraine. There was a spike during the height of the Covid-19 pandemic, as well as during recent elections and referendums in the US and Europe. Wherever there’s an opportunity to polarise opinion or to steal data, it seems, fake news will increasingly emerge.

The worrying development, though, is that AI is now being used to disseminate fake news. Julia Ebner, senior research fellow at the Institute for Strategic Dialogue, recently detailed how Russian AI was used to identify, amplify and exploit grievances online in order to undermine peace in Western nations. Speaking at a recent Cityforum webinar, she said that, in effect, Russia was using AI to weaponise extremist ideologies and conspiracies.

AI, though, has huge potential in fighting it too. We need it because we have to remember that we are all prone to being duped. As the World Economic Forum warned in its recent report 'The Ability to Distil the Truth', it’s not just those “with poor science knowledge, low cognitive abilities and a tendency to be accepting of weak claims” who believe false stories. No one is immune, which is why a technology solution is the best way to go.

“AI can improve the coverage and accuracy of finding fake news, especially when we focus on textual data,” says Mohammad Mahdavi, professor for data science at GISMA Business School in Germany. “If we narrow down the news domain enough, AI can be trained to automatically identify specific kinds of disinformation, such as claims that the earth is flat. For wider news domains, like all fake news or propaganda around the Ukraine war, AI can facilitate the fact-checking process for humans.”
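As a toy illustration of Mahdavi’s narrow-domain point, a simple classifier trained on a tiny hand-labelled corpus can learn to separate flat-earth claims from factual statements. The training data, labels and Naive Bayes approach below are all illustrative assumptions, not anything described in the article:

```python
from collections import Counter
import math

# Toy hand-labelled corpus (illustrative only, not a real dataset)
TRAIN = [
    ("the earth is flat and nasa hides the truth", "fake"),
    ("flat earth proof the horizon never curves", "fake"),
    ("satellite images show the curvature of the earth", "real"),
    ("scientists measure earth's curvature from orbit", "real"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher (smoothed) log-likelihood."""
    vocab = set(counts["fake"]) | set(counts["real"])
    scores = {}
    for label in counts:
        score = 0.0
        for tok in tokenize(text):
            # Laplace smoothing so unseen words don't zero out a label
            p = (counts[label][tok] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)
```

With four training sentences this is obviously a caricature, but it captures why narrowing the domain matters: the narrower the topic, the less labelled data is needed before the word statistics become discriminative.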

Dr. Adnan Masood, chief AI architect at digital transformation firm UST, agrees. He says that while fake news, misinformation and disinformation pose a serious problem, fortunately AI has a big role to play.

“AI can harness encoder/decoder models to track how text is generated and determine what has been written by humans and which social media conversations are purely artificial,” he says. “Tell-tale signs of bot-driven publications can be detected by looking at the signatures of NLG algorithms and the occurrence of certain keywords (“Donbas”, “denazification” and “bioweapons” are favourites of Russian propagandists), while simultaneously employing tools for natural language identification.”
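The keyword-occurrence signal Masood mentions could be sketched as a simple set intersection. The function name, scoring scheme and example keywords-as-a-set design below are assumptions for illustration; a real pipeline would weight this alongside NLG-signature detection and account-level behaviour analysis, as he describes:

```python
import re

# Keywords the article cites as recurring in Russian propaganda
PROPAGANDA_KEYWORDS = {"donbas", "denazification", "bioweapons"}

def keyword_signal(text, keywords=PROPAGANDA_KEYWORDS):
    """Return (fraction of tracked keywords present, sorted list of hits).

    This is only one weak signal among many; on its own it would flag
    legitimate reporting about the same topics just as readily.
    """
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    hits = keywords & tokens
    return len(hits) / len(keywords), sorted(hits)
```

The caveat in the docstring is the important part: keyword matching is cheap but topic-sensitive, which is why Masood pairs it with stylistic signatures of machine-generated text rather than relying on it alone.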

For example, it’s estimated that 45% of Russian internet traffic is directed by bots, adds Masood, so targeting bot traffic is clearly an understandable line of attack. But it’s not the only one. As Mahdavi suggests, the technology should help humans root out potentially fake claims online, using AI to gather evidence and put the pieces together to identify fake news and misinformation. Unfortunately, AI is not there yet.

Mahdavi says that this is not yet a task that machines can do in a fully automated way, without any human intervention, especially when the news domain is wide enough to cover any disinformation from an event such as the Ukraine war.

“The main problem here is that the machine needs training data to learn what fake and real news look like,” says Mahdavi. “When the news domain is wide, collecting enough training data to teach the machine all kinds of disinformation in that domain is a challenging task. Nevertheless, the machine can still work as a decision support system along with human beings to speed up the fact-checking process. Machines can still collect and provide relevant evidence and leave the final validation task to humans.”
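The decision-support loop Mahdavi describes — the machine gathers candidate evidence, the human fact-checker makes the final call — might look like this in its simplest form. Word-overlap retrieval here is a deliberately crude stand-in assumption for real semantic search:

```python
def rank_evidence(claim, documents, top_k=3):
    """Rank candidate evidence snippets by word overlap with the claim.

    A stand-in for production retrieval (e.g. dense embeddings over a
    news archive); the output is a shortlist for a human fact-checker,
    not a verdict.
    """
    claim_words = set(claim.lower().split())
    scored = []
    for doc in documents:
        overlap = len(claim_words & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Drop documents with no overlap at all; return the best matches
    return [doc for score, doc in scored[:top_k] if score > 0]
```

The design choice matches Mahdavi’s division of labour: the system narrows thousands of documents to a handful worth reading, and validation stays with the human.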

Perhaps we also need to look at blockchain, get to the source of the issue and enable stories and information to be ‘validated’ from reputable sources. Of course, that in itself is a divisive topic that spans political and social theory. The point is, blockchain could at least ensure that a story pretending to be from a reputable source would be spotted immediately, wouldn’t it?

Masood says that distributed ledger and blockchain technology could have a significant role here, providing an extra layer of authentication “that validates stories across multiple modalities by tracing them back to a reputable source.”

But how would that work?

“It has enormous potential to provide validation for stories by organising content onto immutable digital ledgers that assign non-fungible and transparent tokens to journalists,” he adds. “Distributed ledgers provide the ownership traceability necessary to understand where content is actually coming from. By creating credibility scores for media outlets and even individual journalists, blockchain technology has the power to restore trust and confidence in the media. Because blockchain systems rely on a decentralised ledger to record information in a way that’s constantly being verified and re-verified by each user, they are nearly impossible to alter or falsify.”
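A minimal sketch of the immutable-ledger idea Masood describes: publication records are hash-chained, so altering any earlier entry invalidates everything after it. The class, record fields and author identifiers below are illustrative assumptions, not any real blockchain platform (and a real system would be decentralised across many verifying nodes, not a single in-memory list):

```python
import hashlib
import json

def _record_hash(fields):
    """Deterministic SHA-256 hash of a record's fields."""
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

class ContentLedger:
    """Append-only chain of content records; each block commits to the previous one."""

    def __init__(self):
        self.chain = []

    def publish(self, author, content):
        """Append a record tying an author to a content hash."""
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {
            "author": author,
            "content_hash": hashlib.sha256(content.encode()).hexdigest(),
            "prev_hash": prev_hash,
        }
        record["hash"] = _record_hash(
            {k: record[k] for k in ("author", "content_hash", "prev_hash")}
        )
        self.chain.append(record)
        return record["hash"]

    def verify(self):
        """Recompute every hash; any edit to an earlier record breaks the chain."""
        prev = "0" * 64
        for rec in self.chain:
            expected = _record_hash(
                {"author": rec["author"],
                 "content_hash": rec["content_hash"],
                 "prev_hash": rec["prev_hash"]}
            )
            if rec["prev_hash"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```

This captures the property Masood is pointing at: a story claiming to come from a given outlet either hashes back to a record that outlet actually published, or it doesn’t, and silent retroactive edits are detectable.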

It’s an interesting thought, putting media on a path that could almost reward honesty. Masood floats the idea of a “reputation system” which would track publications through lineage and provenance. Original content owners and creators would be verified and traceable via a distributed ledger.

It’s doable but culturally challenging. With AI, there’s also the issue of data bias. What are the parameters for identifying fake news? Who decides? Governments? Technology leaders? Interestingly, Full Fact, another anti-misinformation site, recently released a 10-point plan for regulators to consider. It focuses more on the mindset and less on the technology, but for this to work it will need a bit of both, and therein lies an opportunity for innovation and change.