Fake news is an insidious and global epidemic. Intentionally posted false content, known as disinformation, is now widespread across digital and traditional media and woven into the fabric of social media platforms.
The explosion has been fuelled by political and business agendas, social sharing, clickbait and deepfakes, and its impact is significant: it subverts democracy and public discourse, skews markets and erodes confidence in news itself. Trust in U.S. mass media, for example, is at its lowest point in more than five decades, according to a Gallup poll.1
Truth and accountability in news are now in question, especially since social media platforms including Facebook and Instagram have stopped using third-party fact-checkers. Increasingly sophisticated generative artificial intelligence (AI) tools are also supercharging fake content.
The psychology of fake news
“Fake news poses a real threat to reputation, which is extremely important to businesses and governments. Going forwards we will see more disinformation shared on social media, and some of it will target corporations. In a few years’ time it will be almost impossible to detect whether content is authentic or not,” explains Patrick Haack, Professor of Strategy and Responsible Management at HEC Lausanne.
“It makes crisis communication more challenging because you need to monitor social media constantly, where a story can turn into a firestorm within minutes. Traditional response strategies do not work either.”
This is because many people have a fundamental confirmation bias: they tend to search for, interpret and favour information that supports their existing beliefs, even when those beliefs are not grounded in fact. This is particularly the case on social media, where people see only the news that confirms their biases, which are then amplified in echo chambers.
“We rarely see online content that challenges our beliefs. It means that we see a growing polarisation within the population, with groups embedded in different realities. These polarised groups also think they are less impacted by fake news than those on the other side of the argument,” says Haack.
Research by Professor Haack and others examined how fake news affects our first-order judgements, meaning what we personally think of a person or company, and our second-order judgements, which are based on what we think others think.2
They found that the negative effect of fake news is larger for second-order judgements. This means that even if we realise a piece of news is fake, we may still be influenced by the reactions of others who believe the information is true. This has huge implications for understanding and responding to people’s behaviour.
“Fighting fake news with accurate facts is not enough. Even though an individual may not believe a news story, if that person believes that other people think it is true, they may change their behaviour; second-order judgements impact first-order judgements,” explains Haack.
“This explains how bank runs occur. I may believe my bank is solvent, but if I hear a rumour that others are worried about the bank’s future and are getting their money out, this influences my behaviour and I also remove cash from my account, even though I know the rumour of the bank’s demise is untrue.”
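The mechanism can be made concrete with a toy threshold model, a purely illustrative sketch rather than anything drawn from the cited study: every depositor privately believes the bank is solvent, yet each withdraws once they think enough others are withdrawing, so a small rumour can cascade into a full run.

```python
# Illustrative simulation (not from the cited study): depositors who all
# privately believe the bank is solvent, but withdraw once they think
# enough *others* are withdrawing (a second-order judgement).
import random

random.seed(42)

N = 1000
# Each depositor withdraws if the observed withdrawal rate exceeds their
# personal panic threshold; thresholds are drawn uniformly from [0.0, 0.3].
thresholds = [random.uniform(0.0, 0.3) for _ in range(N)]

withdrawn = [False] * N
# A rumour convinces 2% of depositors that others are pulling their money out.
for i in range(int(0.02 * N)):
    withdrawn[i] = True

# Iterate until no one else changes their mind.
changed = True
while changed:
    changed = False
    rate = sum(withdrawn) / N
    for i in range(N):
        if not withdrawn[i] and rate > thresholds[i]:
            withdrawn[i] = True
            changed = True

print(f"Final withdrawal rate: {sum(withdrawn) / N:.0%}")  # cascades far past 2%
```

With thresholds this low, the 2% rumour seed tips essentially every depositor into withdrawing, even though no one’s first-order belief about the bank’s solvency ever changed.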
One key recommendation from Haack’s research is to devise response strategies that target not only first-order judgements but people’s second-order judgements as well. Here, testimony from others, particularly peers and industry experts, discrediting the fake news could be a vital form of social proof.
AI tool against disinformation
Challenging misleading information is also the research focus of Liudmila Zavolokina, Assistant Professor in Information Systems and Digital Innovation at HEC Lausanne, who, together with her team, has created a generative AI tool that analyses digital news online and highlights propaganda and disinformation in real time.3 The aim of the tool is to encourage news readers to engage in critical thinking.
“People need to be more active in how they digest their news. Such an AI tool can counteract disinformation, since it helps readers question what they’re reading. One thing we’re experimenting with is how readers could also see the other side of any argument using an AI tool, which would also be helpful in combating confirmation bias,” she states.
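The article does not describe the tool’s internals, but a propaganda-highlighting step built on a large language model might look something like the following sketch. The model name, prompt wording and technique labels are all illustrative assumptions, not the team’s actual design:

```python
# Hypothetical sketch of an LLM-based propaganda flagger; model, prompt and
# technique labels are illustrative assumptions, not the published tool.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# An illustrative subset of well-known propaganda techniques to flag.
TECHNIQUES = ["loaded language", "appeal to fear", "name calling", "whataboutism"]

PROMPT_PREFIX = (
    "You are a media-literacy assistant. For the article below, quote each "
    "passage that uses one of these techniques: " + ", ".join(TECHNIQUES) + ". "
    "Name the technique and explain in one sentence why the passage qualifies, "
    "so the reader can judge for themselves.\n\nArticle:\n"
)

def flag_propaganda(article_text: str) -> str:
    """Return an annotated list of potentially manipulative passages."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT_PREFIX + article_text}],
        temperature=0,  # deterministic output suits an analysis task
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_propaganda("Only a fool would still trust the so-called experts."))
```

Asking for a quoted passage plus a one-sentence justification, rather than a bare verdict, reflects the tool’s stated aim: prompting readers to question what they read instead of outsourcing the judgement.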
New research by Zavolokina and others shows that while the AI tool boosts critical thinking when it is used, the gain disappears once a person no longer has access to the tool.4 This is because people use it as a “crutch” to detect propaganda. What they may need instead are new skills and active learning to better detect disinformation.
“We also want to look at whether we can productise this tool, creating support for media organisations and governments so they can accurately detect propaganda campaigns in the future,” says Professor Zavolokina.
References:
1. Five Key Insights Into Americans’ Views of the News Media, Gallup, February 27, 2025
2. Fooling Them, Not Me? How Fake News Affects Evaluators’ Reputation Judgments and Behavioural Intentions, Simone Mariconda, Marta Pizzetti, Michael Etter and Patrick Haack, Business & Society, August 16, 2024
3. Think Fast, Think Slow, Think Critical: Designing an Automated Propaganda Detection Tool, Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones and Gerhard Schwabe, Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, February 29, 2024
4. Effective Yet Ephemeral Propaganda Defence: There Needs to Be More than One-Shot Inoculation to Enhance Critical Thinking, Nicolas Hoferer, Kilian Sprenkamp, Dorian Christoph Quelle, Daniel Gordon Jones, Zoya Katashinskaya, Alexandre Bovet and Liudmila Zavolokina, Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, March 11, 2025