Can AI Bring the Facts Back?

Preeta Ghoshal
Published: 06.08.2019

Misinformation, alternative facts and fake news. These buzzwords dominate our social media feeds, as well as various debates in the public eye. Whilst politics has always involved mistruths and spin, the ease, speed and scope with which misinformation can now spread is a recent development. It’s made even easier through the use of bots – fake accounts programmed to share content that’s not only highly divisive but also often misleading or false. In turn, this can destabilise politics and lead to increasingly polarised societies.

Social media can facilitate this. According to a recent survey, 68% of young people periodically get their news from social media, where anyone can post or share anything and call it news. How many of us would take the time to double-check a headline we see on our Facebook feed – especially if it’s shared by friends we know and trust? It’s easy to see how we may be vulnerable to fake news.

Yet there is a belief that Artificial Intelligence (AI) can solve this mess of misinformation – but that optimism is somewhat misplaced.

AI cannot overcome psychology

Simply put, we believe what we want to believe. Once we decide we like a certain politician, we tend to ignore content that challenges our opinions: we dismiss it as slander from the “other side” and unfollow the critics. This behaviour is driven by cognitive dissonance – the discomfort we feel when presented with information that contradicts our beliefs.

Likewise, when we see something that confirms our existing opinions, we accept it without thinking too hard about whether it is true, and we rarely go out of our way to verify the facts and figures we’re presented with. AI can’t overcome this – especially when these little digital habits are picked up by algorithms that then show us even more of the same content, regardless of its accuracy. The very architecture of social media and targeted advertising means our views are constantly being reinforced rather than challenged.

Who programs the AI?

At the end of the day, algorithms are still programmed by people. An AI is only as unbiased and as rational as its developers and the data it’s fed. Human biases have a way of sneaking into our algorithms: training an AI on historical data lets those biases come through, replicating exactly the sort of prejudice AI is proclaimed to tackle.

Training an AI to spot fake news is a more delicate task than it appears. For one, it would have to be trained on fake content from a very broad range of sources and topics – if its training data comes exclusively from fabricated stories about a single party, the model will learn that any story about that party is probably fake. Similarly, the criteria for when a story crosses the threshold into “fake” news are debatable – how many false statistics does it take? How does an AI judge the credibility of a journalist, or a website? An AI is not a database of objective knowledge and truth, so it’s worth asking what it would be checking its facts against, too. Regardless, all of these questions demand that a vast swathe of data be analysed – necessary and possible, but hardly glamorous.
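The training-data pitfall above can be sketched in a few lines of Python. The toy word-counting “classifier” below (all party names and headlines are hypothetical, and real systems are far more sophisticated) is trained on a skewed set in which every fake example mentions the same party – so it learns the party name itself as a fake-news signal and flags a perfectly ordinary story:

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: every "fake" example
# mentions Party A, while the "real" examples never do.
training = [
    ("party a minister caught in scandal", "fake"),
    ("party a leader hides secret funds", "fake"),
    ("city council approves new park", "real"),
    ("local school wins science award", "real"),
]

def train(data):
    """Count how often each word appears under each label."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Assign the label whose training words overlap the text most."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(training)

# An innocuous story is flagged "fake" purely for naming Party A,
# because the party name dominated the fake-labelled training data.
print(classify(model, "party a opens new hospital"))  # prints "fake"
```

Nothing in this sketch inspects truthfulness at all – it only mirrors whatever correlations the training data happens to contain, which is precisely how a biased dataset becomes a biased model.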

Finally, the use of AI in combating fake news is reactive rather than preventative: content can spread far and wide before it gets debunked, by which point the damage is done. Nor will it stop future fake content from emerging – not unless we humans engage more critically with all the content we consume, rather than just the parts we disagree with, and so undermine the potency of fake news as a political tool.

So, to finally answer the question posed above: AI can’t bring the facts back alone. It must be underpinned by humans staying open to opposing views and checking the facts for ourselves. AI can give us a helpful nudge, but it can’t do our thinking for us.

Keen to delve into the world of technology? Kick-start your career in AI today with FDM Group
