Misinformation and fake news are nothing new, yet in recent years the terms have become truly mainstream and taken on a life of their own. Invoked daily, fake news is seemingly everywhere, particularly on social media sites, where it spreads rapidly thanks to easy sharing and sub-standard moderation. Today, even the claim that a piece of content is misinformation could itself turn out to be fake news.
Despite increased awareness of the issue, many are still asking what more can be done to curb the problem. One fiercely debated approach is the application of emerging technology. As advances are made in artificial intelligence, machine learning, and cybersecurity, could we see a future where fake news is dealt a decisive blow? Or will these technologies be used to propagate more false narratives and skew the truth even further?
Faking facts
As we continue into the digital era, the opportunities for fake news to present itself increase. Pascal Geenens, Director of Threat Intelligence at Radware, attributes this rise to the fact that “before social platforms such as Facebook, Twitter, Reddit etc. news was created and delivered by radio, TV, newspapers and the recipient of the news only had very limited ways of responding or interacting with it”. In the past, the limited availability of news and information generally had a positive effect on the quality of information presented to the public. As more ways of interacting with and consuming information emerge, more opportunities are created to present it in a different context and to advance altogether new viewpoints. In isolation, these new voices and inaccurate viewpoints might not reach many people; however, in a digital environment where content can be shared quickly and easily, misleading articles can rapidly find their way onto the screens of many users.
Websites like Facebook, Twitter, and LinkedIn are places where people connect with one another. For many users, the connections on their social media profiles correspond to people they know in the real world. Personal connections like these are often highly trusted, sometimes to a user’s detriment. If an individual sees one of their close, trusted connections sharing a piece of information, they are more likely to believe it is accurate, and so more likely to share it with their own followers and connections. This can escalate very quickly and lead to fake news stories flooding social media sites.
This problem is often exacerbated by the algorithms that social media companies use to keep users scrolling on their sites. These algorithms identify the type of content a user usually engages with and ensure that, moving forward, they see more of the same. A flaw of this approach is that the algorithms often cannot differentiate between accurate information and something false. So, when a piece of misleading content proves popular, social media algorithms do not hesitate to promote it, particularly if the ‘story’ lines up with the type of content a user normally engages with. These algorithms can also create online echo chambers, where users only see and engage with content that agrees with their worldview. In such environments, misinformation and fake news can be accepted as the truth very quickly and, left unchecked, can even lead to more extreme viewpoints and actions.
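To illustrate why engagement-driven ranking is blind to accuracy, the sketch below ranks posts purely on topical match and past interactions. The scoring weights and field names are invented for illustration; real recommendation systems are far more complex, but the underlying flaw is the same: accuracy is simply never part of the score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topic: str
    likes: int
    shares: int
    is_accurate: bool  # never consulted by the ranker; shown only to make the point

def engagement_score(post: Post, user_interests: set) -> float:
    """Score purely on predicted engagement: topical match plus past interactions."""
    topical_boost = 2.0 if post.topic in user_interests else 1.0
    return topical_boost * (post.likes + 3 * post.shares)

def build_feed(posts: list, user_interests: set) -> list:
    """Order the feed by engagement score, highest first."""
    return sorted(posts, key=lambda p: engagement_score(p, user_interests), reverse=True)

posts = [
    Post("Fact-checked local report", "politics", likes=120, shares=10, is_accurate=True),
    Post("Sensational but false claim", "politics", likes=900, shares=300, is_accurate=False),
]
for p in build_feed(posts, {"politics"}):
    print(p.text)  # the false but popular post ranks first
```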
Arguably the most high-profile example of this scenario came in 2018, when WhatsApp was pressured into limiting the number of chats a message could be forwarded to. This followed a series of incidents in India where mobs lynched members of the public after messages circulating on the platform falsely identified them as criminals. In a parallel to the echo chambers created on social media sites, these messages were often forwarded on to friends and family members, who trusted them because they came from a ‘reliable’ source.
AI bots are also subject to misuse online and can be used to spread misinformation. Geenens argues that “computer or human bots can distribute messages with similar news stories but from multiple accounts, in different languages, and originating from multiple geographies”. Because many bots are online around the clock, every day of the year, they can share information at any time, greatly increasing the chance of misleading content going viral. Coupled with their ability to coordinate across large geographical distances, bots can facilitate truly global misinformation campaigns. Many bots are set up to respond to specific keywords or phrases; individuals who wish to spread fake news can simply mention these words in a post, and the bot will pick it up and reshare it. With effective programming, even a small bot network can reach a significant number of users in virtually no time at all.
Further exacerbating the situation, Chad Anderson, senior security researcher at DomainTools, explains that only a small amount of funding is needed to set up an effective misinformation campaign: “anyone can launch grassroots campaigns with a number of tools for a small fee, organisations have AI that write fake local news articles spreading misinformation”. Low barriers to entry have always been a temptation for cybercriminals, and with some AI now capable of generating misleading articles, it has never been easier for individuals to exploit them. To make matters worse, Anderson argues that even small-time campaigns with minimal funding pale in comparison to “well-funded state and corporation sponsored misinformation campaigns [designed] to sway voters and consumers”.
Talking truth
Despite the significant role some technology plays in the proliferation of fake news and misinformation, plenty of companies and individuals are earnestly trying to make technology the solution. Dr Ian Brown, Head of Data Science at SAS UK&I, points to advances in data analytics: “conducting continuous analysis is key if social media platforms are to react quickly and responsibly to fake news”. In recent years, analytics has become more integral to business operations for its ability to rapidly draw insights from large data sets. As analytics becomes more advanced and capable of real-time monitoring, it could be used to scan content and provide insight as soon as it goes live in a digital space. A robust analytics framework could identify common phrases and keywords associated with misleading articles, greatly increasing the speed at which moderators can evaluate content. Even without human moderation, analytics could be used to flag content as ‘potentially misleading’.
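A minimal sketch of this kind of phrase-based triage might look like the following. The watch-list and threshold are invented for illustration; a production system would use trained models over far richer signals, but the goal is the same: surface likely candidates to moderators quickly rather than decide truth outright.

```python
import re

# Hypothetical watch-list of phrases commonly seen in misleading articles.
SUSPECT_PHRASES = [
    "doctors don't want you to know",
    "miracle cure",
    "the media won't report",
    "100% proven",
]

def flag_if_suspect(text: str, threshold: int = 2) -> bool:
    """Return True if the text should be labelled 'potentially misleading'
    and routed to a moderator, based on how many watch-list phrases it hits."""
    hits = sum(
        1 for phrase in SUSPECT_PHRASES
        if re.search(re.escape(phrase), text, re.IGNORECASE)
    )
    return hits >= threshold

article = "A miracle cure the media won't report: 100% proven!"
print(flag_if_suspect(article))  # True: flag for review or add a warning label
```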
This type of scenario recently came to the fore on Twitter. The platform flagged one of President Trump’s tweets as potentially misleading after he posted a video advocating the use of hydroxychloroquine to treat Covid-19, despite scientists widely agreeing that it has no such benefit. Anderson sees this as a big step in the right direction, as flagging can prompt “people to automatically question the information put in front of them”. If users are made aware that the content they are seeing could be misleading, there is a better chance that they will not share it themselves, effectively limiting its spread. Taking things a step further, Anderson believes that web browsers should put “a banner at the top of sites that have recently stood up or are known to spread disinformation”. These banners could then be taken down if and when a site can prove that the information it hosts is truthful.
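As a rough sketch of the browser-banner idea Anderson describes, the snippet below checks a page’s domain against reputation data before deciding whether to show a warning. The domain names, the “first seen” dates, and the 90-day cut-off are all hypothetical, standing in for the reputation feeds a real browser or extension would consume.

```python
from datetime import date
from typing import Optional
from urllib.parse import urlparse

# Hypothetical reputation data: domains known to spread disinformation,
# and first-seen dates for newly stood-up sites.
KNOWN_DISINFO_DOMAINS = {"totally-real-news.example"}
FIRST_SEEN = {"fresh-breaking-site.example": date(2020, 7, 1)}

def banner_for(url: str, today: date = date(2020, 8, 1)) -> Optional[str]:
    """Return a warning banner for sites that are known bad or very new, else None."""
    domain = urlparse(url).netloc
    if domain in KNOWN_DISINFO_DOMAINS:
        return "Warning: this site has repeatedly shared disinformation."
    first_seen = FIRST_SEEN.get(domain)
    if first_seen and (today - first_seen).days < 90:
        return "Caution: this site was set up very recently."
    return None

print(banner_for("https://totally-real-news.example/story"))
```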
Brown also believes that AI-based decisioning systems could be used to ensure news and social media sites behave responsibly. He claims that AI is already advanced enough to “trace the source of problematic material” and should be used to alert companies. Where misinformation originated on an organisation’s own site, it can be reviewed and removed as necessary; where it originated elsewhere, the company can flag it as such and even remove references to it if need be. Brown sees this approach working best on the larger information platforms, which stand to benefit most from extra moderation given the sheer volume of content they host and promote.
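Tracing where a flagged item entered a platform can be pictured as a walk back through a share graph. The provenance records below are invented for illustration, standing in for the data a real platform would hold about how each post was shared.

```python
# Hypothetical provenance records: each post id maps to the post it was shared from.
shared_from = {
    "post_D": "post_C",
    "post_C": "post_B",
    "post_B": "post_A",
    "post_A": None,  # original upload: the item to review, or flag if it came from elsewhere
}

def trace_source(post_id: str) -> str:
    """Follow share links back until the original post is reached."""
    seen = set()
    while shared_from.get(post_id) is not None:
        if post_id in seen:  # guard against cyclic or corrupted records
            break
        seen.add(post_id)
        post_id = shared_from[post_id]
    return post_id

print(trace_source("post_D"))  # post_A: review on-site, or flag it if it originated externally
```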
For many years, CAPTCHA programs have caused frustration online as they tediously check whether a would-be user is human. Today, however, some businesses are repurposing their CAPTCHAs with the specific aim of combating misinformation. Applied correctly, CAPTCHA programs could help social media companies spot the tell-tale signs of fake news almost instantly. And it is not just social media sites that could benefit from this approach: Metro Brazil repurposed its CAPTCHAs to help educate readers about fake news – instead of identifying whether there were cars in a particular square, readers were asked to highlight a piece of fake news.
Elsewhere, the UN is working with companies to try to stop the spread of misinformation surrounding Covid-19. The SAP Innovation office in Asia developed a chatbot-based application with the goal of providing users with real-time, accurate information on Covid-19, alongside personalised guidance on how to stop the spread of the virus. In the future, this technology could be adapted to address other popular subjects of misinformation as and when they appear. And, as chatbots become more sophisticated, they could offer highly specific insight into the credibility of the content users are consuming.
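The SAP application itself is proprietary, but the core pattern (matching a user’s question against a curated set of verified answers) can be sketched very simply. The question-and-answer pairs and the word-overlap matching below are illustrative assumptions, not the chatbot’s actual logic.

```python
import re

# Curated question/answer pairs drawn from verified guidance (illustrative content).
VERIFIED_FAQ = {
    "how does covid-19 spread": "Mainly via respiratory droplets from close contact with an infected person.",
    "does hydroxychloroquine cure covid-19": "No. Trials have not shown it to be an effective treatment.",
    "how can i protect myself": "Wash hands regularly, keep your distance from others, and wear a mask where advised.",
}

def answer(question: str) -> str:
    """Return the verified answer whose question shares the most words with the query."""
    words = set(re.findall(r"[\w-]+", question.lower()))
    best, best_overlap = None, 0
    for known_q, known_a in VERIFIED_FAQ.items():
        overlap = len(words & set(known_q.split()))
        if overlap > best_overlap:
            best, best_overlap = known_a, overlap
    return best or "I don't have verified information on that yet."

print(answer("Is hydroxychloroquine a cure for Covid-19?"))
```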
Overall, misinformation and fake news face an interesting future. Without significant changes to the way many digital outlets operate, misinformation will probably always have a breeding ground, and there will undoubtedly be instances where misleading content slips through the cracks. However, these instances are likely to become less frequent as technology emerges that can identify the hallmarks of fake news with ever-increasing accuracy. And this technology will be reinforced by digital consumers who are educated and aware that the content they consume may not be 100% reliable.