“I think the doctors have got it wrong on smoking.”
This isn’t a comment from the 1950s. It was said in 2016 by the pro-Brexit British politician Nigel Farage, who later doubled down with a tweet that said:
“The World Health Organisation is just another club of ‘clever people’ who want to bully us and tell us what to do. Ignore.”
The initiative began in the very early days of the post-truth crisis. Today it is rethinking what is happening, aiming to give concerned people the words and tools they need to protect both truth and rational debate.
“We have some of the best minds from everywhere; law, politics, philosophy, media, science, medicine – all wrestling with how we can protect the integrity of the information that all those disciplines are built on,” Professor Enfield says. “For instance, imagine a world where you can’t trust peer reviewed science. That’s a crisis that’s already unfolding.”
The initiative develops this new way of thinking through group discussions on campus, public forums around the country and by contributing to media outlets to point out emerging problems. A term that is becoming part of more conversations is ‘cognitive literacy’.
“Human thought processes have some glitches that can be exploited. Cognitive literacy is about understanding how your thinking works so you can be aware of manipulation.”
Promoting false facts and information has become a ready tool of partisan media, vested interests, and politicians around the world. Powered by the internet, it is a force that is undermining the nature of truth itself.
Professor Enfield sees this situation as highly dangerous as it affects the fundamentals of decision-making in areas like taking action on climate change.
As for Farage, Professor Enfield thinks he is ‘signalling’, a word that has taken on another layer of meaning in the post-truth age.
“Farage doesn’t care if his statement is true or not,” says Professor Enfield. “He is signalling what his broader, anti-establishment views are. What he says isn’t supposed to be informative or even plausible. It’s more a rallying point for people who might be attracted to his world view.”
Professor Enfield investigated the possibility of using artificial intelligence to scan statements made in parliament and in the media, to identify inconsistencies, contradictions and misleading ideas.
But how do you teach a machine to understand the many nuances of the word ‘truth’? And how can a machine identify the language of non-truth, especially when the speaker genuinely believes their own lie?
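To see why this is so hard, consider a deliberately naive sketch of the kind of consistency scan described above. This is purely illustrative and not drawn from Professor Enfield’s work: it flags pairs of statements where one is a literal negation of the other, so ‘smoking is not harmful’ contradicts ‘smoking is harmful’, but the equally contradictory ‘smoking is harmless’ sails straight past it.

```python
# A toy, hypothetical inconsistency scanner. It only catches claims that
# differ by the single word "not"; real language (paraphrase, irony,
# hedging, sincere false belief) defeats it immediately.

def normalise(statement: str) -> tuple[str, bool]:
    """Reduce a statement to a crude (claim, polarity) pair.

    Polarity is False when the statement contains a bare "not".
    This shortcut is exactly what breaks on nuanced language.
    """
    words = statement.lower().rstrip(".").split()
    negated = "not" in words
    claim = " ".join(w for w in words if w != "not")
    return claim, not negated

def find_contradictions(statements: list[str]) -> list[tuple[str, str]]:
    """Return pairs of statements making the same claim with opposite polarity."""
    seen: dict[str, tuple[str, bool]] = {}
    contradictions = []
    for s in statements:
        claim, polarity = normalise(s)
        if claim in seen and seen[claim][1] != polarity:
            contradictions.append((seen[claim][0], s))
        seen[claim] = (s, polarity)
    return contradictions

statements = [
    "smoking is harmful.",
    "smoking is not harmful.",
    "smoking is harmless.",   # contradicts the first, but goes undetected
]
print(find_contradictions(statements))
```

The scanner catches the first pair but misses the third statement entirely, which is the point: identifying non-truth requires understanding meaning, not just matching words.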
“Unfortunately, there’s no technical solution,” says Professor Enfield, plainly. “We have to be able to do this ourselves. It has to be people who care about truth.”