
Can AI Bring the Facts Back?

Paul Brown
Published: 06.08.2019

Misinformation, alternative facts and fake news. These buzzwords dominate our social media feeds, as well as public debate. Whilst politics has always involved mistruths and spin, the ease, speed and scope with which misinformation can now spread is a recent development. It’s made even easier through the use of bots: fake accounts programmed to share content that’s not only highly divisive but often misleading or false. In turn, this can destabilise politics and lead to increasingly polarised societies.

Social media can facilitate this. According to a recent survey, 68% of young people periodically get their news from social media, where anyone can post or share anything and call it news. How many of us would take the time to double-check a headline we see on our Facebook feed, especially if it’s shared by friends we know and trust? It’s easy to see how vulnerable we are to fake news.

Yet there is a belief that Artificial Intelligence (AI) can solve this mess of misinformation. That optimism is somewhat misplaced.

AI cannot overcome psychology

Simply put, we believe what we want to believe. Once we decide we like a certain politician, we tend to ignore content that challenges our opinion: we dismiss it as slander from the “other side” and unfollow the critics. The driver here is cognitive dissonance, the discomfort we feel when presented with information that contradicts our beliefs.

Likewise, when we see something that confirms our existing opinions, we accept it without thinking too hard about whether it’s true, nor do we go out of our way to verify the facts and figures we’re presented with. AI can’t overcome this, especially when these digital habits are picked up by algorithms that then show us even more similar content, regardless of its accuracy. The very architecture of social media and targeted advertising means our views are constantly being reinforced rather than challenged, as the sketch below illustrates.
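To make that concrete, here is a minimal sketch of an engagement-driven ranker. The scoring rule and the example posts are invented for illustration, not taken from any real platform, but they capture the pattern in question: items are ranked by how closely they match what a user has already engaged with, and nothing in the objective rewards accuracy.

```python
from collections import Counter

def rank_feed(candidate_posts, liked_posts):
    """Rank posts by word overlap with posts the user previously liked."""
    liked_words = Counter(w for post in liked_posts for w in post.lower().split())

    def score(post):
        # Reward re-use of words the user already engaged with; truth plays no part.
        return sum(liked_words[w] for w in set(post.lower().split()))

    return sorted(candidate_posts, key=score, reverse=True)

liked = ["candidate X is a disaster", "candidate X caught lying again"]
feed = [
    "independent audit clears candidate X",
    "SHOCKING: candidate X is lying about everything",
]

# The unverified outrage piece ranks first because it echoes the user's past likes.
print(rank_feed(feed, liked))
```

Run it and the sensational, unsourced headline beats the sober correction, purely because it re-uses the vocabulary the user has already rewarded with engagement.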

Who programs the AI?

At the end of the day, algorithms are still programmed by people. An AI is only as unbiased and as rational as its developers and the data it’s fed. Human biases have a way of sneaking into our algorithms: training an AI on historical data allows those biases to come through, replicating exactly the sort of prejudices AI is proclaimed to tackle.

Training an AI to spot fake news is a more delicate task than it appears. For one, it would have to be trained on fake content from a very broad range of sources and topics; if the training data comes exclusively from stories about a single party, the model will learn that stories about that party are probably fake. Similarly, the criteria for when a story crosses the threshold into “fake” news are debatable: how many false statistics does it take? How does an AI judge the credibility of a journalist, or of a website? An AI is not a database of objective knowledge and truth, so it’s also worth asking what it would be checking its facts against. Regardless, all of these questions demand vast swathes of data to be analysed, which, although necessary and possible, is hardly glamorous. The single-party problem in particular is easy to demonstrate, as the sketch below shows.
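Here is a minimal sketch of that failure mode, using a toy text classifier. The headlines and labels are invented, and scikit-learn’s TfidfVectorizer and LogisticRegression stand in for whatever a production system would actually use; the point is only that when every “fake” example mentions the same party, the party’s name itself becomes the signal.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: every fake headline happens to mention Party A.
headlines = [
    "Party A caught faking unemployment figures",       # fake
    "Party A leader hides secret offshore fortune",     # fake
    "Party A rigged the regional ballot, sources say",  # fake
    "Central bank holds interest rates steady",         # real
    "New rail link opens ahead of schedule",            # real
    "Annual budget passes after lengthy debate",        # real
]
labels = ["fake", "fake", "fake", "real", "real", "real"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# A mundane story is flagged, most likely as "fake", purely because it
# names the party the model has learned to associate with fakery.
print(model.predict(["Party A proposes new school funding plan"]))
```

Nothing about the test headline is suspicious; the skew in the training data does all the damage, which is why breadth of sources and topics matters so much.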

Finally, using AI to combat fake news is reactive rather than preventative: content can spread far and wide before it’s debunked, by which point the damage is done. Nor will AI stop future fake content from emerging; that only happens if we humans engage more critically with all the content we consume, not just the parts we disagree with, and so undermine the potency of fake news as a political tool.

So, to finally answer the question posed above: AI can’t bring the facts back alone. It must be underpinned by our willingness to hear opposing views and to check the facts ourselves. AI can give us a helpful nudge, but it can’t do our thinking for us.

Keen to delve into the world of technology? Kick-start your career in AI today with FDM Group.

If you enjoyed this post, check out more of our This Week in Tech News articles.
