How to tackle the comms risk of AI super-powered mis- and disinformation
While AI is a powerful catalyst for workflow efficiency and creativity, it is also an accelerant for the spread of false or misleading content. Comms practitioners will have to deal with both sides of artificial intelligence’s rapid integration into the systems of communication that surround us. The ability to embed AI as an extra asset in the PR toolbox also brings dangers posed by malicious actors intent on deception.
For the Vuelio webinar ‘AI, Disinformation and the Risks They Pose for Communicators Today’, Thomas Barton, Executive Director of the Council for Countering Online Disinformation (CCOD), joined us to explore how AI is fundamentally changing the scale and speed of disinformation.
With a background spanning geopolitical intelligence at Polis Analysis and years in the political sphere, Thomas explained how PRs can protect organisational reputation and maintain integrity when the very tools we rely on are used to distort the truth.
Defining the threat: Misinformation vs. Disinformation
Before tackling the technological risks, Thomas stressed the importance of understanding the specifics:
‘To boil it down to simple terms: disinformation is the spreading of false information with the deliberate intent to deceive. There is a malicious motive behind the circulation of that content. With misinformation, the key difference is that there is no intent; someone may spread misleading content without the deliberate intention of doing so. It is essentially a question of intent.’
How AI can be used to scale up deception
While disinformation is as old as human history (Thomas cited examples ranging from Octavian’s smear campaigns against Mark Antony to Cold War subversion), generative AI has increased scalability and persuasiveness.
‘Traditionally, running a disinformation campaign required considerable staff and budget for manual processes,’ said Thomas. ‘Now, a sole malicious actor with the power of an AI platform can manufacture and spread false content at scale in almost real-time. This allows for a technique called “flooding the zone” – overwhelming a user’s feed with so much low-quality disinformation that they become disorientated and eventually withdraw from the information environment entirely.’
Beyond volume, Thomas warned that AI-generated content is becoming increasingly difficult to distinguish from reality.
‘Synthetic-authored tweets are often found to be several percentage points more believable than those authored by humans. AI is actually better at deceiving people than we are. Furthermore, we can now use AI to scrape publicly available data to build personas of specific individuals, tailoring false narratives to their biases and interests to provoke a stronger emotive response.’
Real-world consequences: From stock markets to deepfakes
While mis- and disinformation is still most often discussed through a political lens, its impact on corporations is already significant – and growing.
Thomas highlighted several instances where false information led to tangible financial hits, including a fake tweet that significantly impacted Eli Lilly’s share price and a similar hit to Pepsi. AI has only increased the sophistication of false information:
‘A really important case study is what happened at Arup. An employee was brought into a video call where their colleagues were impersonated in a deepfake. This is not a theoretical threat; it is a real-world form of fraud that organisations must wake up to.’
Practical steps for comms teams
How can teams protect themselves? Thomas argued that the next cybersecurity-style evolution for businesses must be information integrity:
‘Proactivity is critical. It starts at the top; senior executives and boards must understand how disinformation manifests as a corporate risk, whether through stock manipulation or brand trashing. Once leadership is across it, the whole workforce needs protection. Media literacy training should be mandatory.’
Another way to fight the use of AI for malicious intent? AI for good.
‘AI can be a nightmare for spreading falsehoods, but it is also a tool for good. Serious organisations in 2026 should be using LLM-based detection tools to monitor potential campaigns in real-time. These tools allow you to identify, defend against, and ultimately disrupt false narratives before they take root.’
Stopping the spread once it starts
When a false narrative does begin to spread, the instinct to react immediately can be counterproductive. Thomas advised a measured assessment of the threat level before engaging.
‘The challenge is to avoid giving oxygen to false narratives. If you respond to every low-scale bot, you may inadvertently spread the content further. However, if a campaign is orchestrated and poses a reputational crisis, you must fight fiction with facts. The best approach is to ensure your response is data-driven and delivered through trusted third parties who can legitimise your correction of the record.’
Positive counter-forces
Despite what can seem like a scary landscape, Thomas was optimistic about the growing ecosystem of counter-forces, including startups across the world developing deepfake detection and watermarking technologies. However, he believes that technical solutions must be paired with legislative change:
‘Longer-term, we need regulatory policy that addresses the root cause: the platforms. We should have more control as users – toggles that allow us to filter for content regulated by the broadcasters’ code or international journalistic standards. I don’t subscribe to the narrative that the web is “ruined”. We just need the right guardrails.’
Thomas’s final message to the PR and comms industry was a call for corporate social responsibility:
‘Business has to step up. If you are a socially responsible business in the age of AI, you must show you are willing to fight back against these risks. It is in your commercial interest to ensure we all benefit from a clean information environment.’
Find out more about Thomas Barton and his work with the Council for Countering Online Disinformation (CCOD) and Polis Analysis by getting in touch via LinkedIn.