AI is the new PR & comms stakeholder
Artificial Intelligence impacts PR & comms in two distinct ways. It transforms how practitioners work, letting them handle massive volumes of media with a greater degree of sophistication and personalisation. At the same time, it’s a powerful force shaping how information is surfaced, interpreted, and acted upon across the entire communications industry. What’s changed isn’t just ‘how’ you do the work, but your reason for doing it in the first place.
Not sure if you agree? The current landscape makes clear just how important factoring in AI’s influence is. According to Gartner’s latest predictions for 2026, the mass adoption of public Large Language Models (LLMs) as a replacement for traditional search is expected to drive a significant increase in PR and earned media budgets by 2027. To add to this, BCG’s AI Radar global survey found that corporate investments in AI have doubled since last year.
From the PRCA’s recent green paper on responsible AI, to the CIPR’s focus as part of its survey for 2026, the sector is rapidly investing in this future. And with the EU AI Act deadline on the horizon, the urgency for robust governance and planning has never been higher. As Rupert Younger, Director of the Oxford University Centre for Corporate Reputation, put it as long ago as 2024: ‘AI is not just a technology, it has become a stakeholder’.
To navigate this new reality, we were joined by Dr Anne Gregory, Professor Emeritus at the University of Huddersfield, and Stuart Bruce, PR Futurist and Co-founder of Purposeful Relations, for our latest webinar, ‘AI as the new PR & comms stakeholder’.
In the session, we explored how this new stakeholder is redefining reputation, influence, and strategy.
What kind of stakeholder is AI?
While many across the comms industry still view AI as a digital assistant for finding efficiencies and speeding up elements of our daily responsibilities, Anne argues it has moved into a more active role:
‘In one sense, AI is a compliant assistant, helping us along the campaign creation trail from research to identifying and prioritising stakeholders, tracking sentiment. But even here, it’s doing that in your name and in your organisation’s name. You have to have a stake in its work, because it certainly has a stake in yours.’
‘We’re becoming increasingly dependent on these tools, and they’re shaping our practice and behaviour, but AI is much more than just an assistant. It’s a powerful actor in the information ecosystem.’
While AI lacks anything approaching human intentionality (for now…), its algorithmic processes produce significant real-world consequences. It shapes organisational perceptions and mediates engagement with individuals, often presenting summaries that are believed more readily than traditional sources.
‘AI is becoming a very strong stakeholder,’ said Anne. ‘I like Dr Nici Sweaney’s definition of these agents and AI: it’s an accidental stakeholder.’
Stuart Bruce added that there is still significant confusion regarding what AI actually means for practitioners. Purposeful Relations’ research with 72Point revealed that 44% of UK consumers trust AI answers – a figure nearly double the 24% who trust social media influencers:
‘If you look at where budgets and effort goes at the moment, it’s going to social media influencers; it’s not going into what’s happening with AI answers, which are becoming a lot more influential and persuasive.
‘Anne talked about accidental stakeholders – you’ve actually also got the accidental AI users, because even those people that aren’t using AI, they’re still going to be seeing those AI overviews in search. This is where we talk about ‘zero click’, because people are often seeing those answers and going no further.
‘It’s not just about visibility, it’s actually also about accuracy – how your organisation is being portrayed, your leadership, and your people. You’re going to want your particular perspective to be coming out in AI answers.’
The dangers of underestimating AI’s role as a stakeholder
If AI is treated only as a tool or assistant, organisations face substantial reputational risks. Anne warned about the danger of underestimating AI’s power to curate and shape truth:
‘For a lot of people, it has become a source of truth. Maybe PR people are more sceptical of AI than others… but the world isn’t peopled by AI experts or public relations experts. Even though we know these summaries are often incomplete and biased, we tend to believe them. If we don’t regard AI as an influential stakeholder, we could be putting ourselves in jeopardy.’
Anne pointed out the difference between this stakeholder and stakeholders as they’re currently understood, particularly the media.
‘There’s an interesting difference here. If you’ve got a beef with a journalist and you think they’ve not represented you fairly, you can go and have a conversation with that journalist, and you can present them with a case. You can even go to the editor and get some sort of redress. You can’t do that with AI, not in the same way at all.
‘AI is a very powerful and influential stakeholder, but not one that you can necessarily influence back directly. Once a narrative is set, it becomes really, really difficult to counter it. Which is, of course, where PR comes in.’
Trouble can also come when comms practitioners fail to make full use of a traditional tool in the PR kit that long predates AI: getting a story out to as many influential sources as possible.
For a practical example, Stuart shared the story of a university industrial dispute. The AI’s narrative was dominated by the trade union’s perspective, because the union had provided multiple touchpoints – website statements, social media, and media quotes. The university, by contrast, viewed the situation as negative and responded only to journalists directly:
‘The trade union gave them half a dozen quotes – the university gave them one. It just wasn’t credible. This is what AI as a stakeholder actually means. The more touchpoints that AI can find to verify that a piece of information is a fact, the more likely it is to be included in that AI answer.’
Shifting narratives and the speed of change
Anne reflected on the speed of adoption, noting that CIPR’s AIinPR 2018 literature review could not have predicted the current reach of generative models. She admitted that while the PR industry was initially slow to adopt and adapt, it has quickly developed an ‘obsession with tools’ rather than considering the broader implications:
‘We didn’t realise that AI is a stakeholder for the whole organisation. We are only now waking up to the fact that we have an enormous role in the governance of these systems. At the end of the day, we’re talking about the legitimacy of whole organisations.’
Stuart emphasised the need for PR and comms teams to factor AI’s influence into strategies now, particularly to curtail false narratives, misinformation, and disinformation:
‘If organisations aren’t doing something now, it’s too late.
‘NATO published a paper on misinformation and disinformation and one of the concepts that NATO talks about a lot is “pre-bunking” and “inoculation” – making sure that your information is out there. And that’s what you need to do with AI – it’s too late to wait, and watch, and see. You actually need to be making sure that it understands your perspective now – it’s not just as simple as dealing with a truculent journalist or an activist group. AI is influenced by a multitude of sources.’
Navigating governance and internal responsibility
With no single source of truth on the ethical use of AI, Stuart highlighted the importance of ‘living and breathing’ internal governance and responsibility, involving continuous training and feedback loops:
‘Too often what people try to do is create an AI policy, and on its own, that’s fairly meaningless. Governance is something entirely different. The policy only means something if you’ve done some training to go with it.’
Stuart introduced the concept of a ‘social license’ for AI — gaining trust from other stakeholders, internal and external, for how an organisation embraces the technology.
‘It’s making sure that it’s not just about how you as PR people or comms people are using AI, but how the organisation is embracing it. How on earth do we get trust from all of our other stakeholders for the things that we might want to do with AI? How do we bring our employees with us? How do we make sure that we’re using it in the most sustainable way possible?
‘What are we doing to address issues around bias and inclusivity, fairness and access? The answer is going to be different for each organisation.’
What can the industry do now to work with this new stakeholder?
Both speakers offered critical advice for practitioners to follow now. Anne urged the comms industry to continue to question what AI offers and evolve their approach as the technology changes:
‘Constantly ask yourself three questions: Why are we using AI? How is it built? And who is it going to be affecting?
‘Remember, it’s not just an agent at your service; it’s an equaliser of power that takes a stake in you and your organisation as much as you use it to influence others.’
Stuart expressed concern that hype, and confusion, around terms like Generative Engine Optimisation (GEO) is putting some comms people off thinking about the ramifications of AI as a stakeholder:
‘There is a lot of hype, but the fundamental point remains: you must renew your communication strategy. If comms people aren’t thinking about this this year, they’re going to be in real trouble.’
Simple tips for AI-friendly outreach
When asked for tips on making media outreach more AI-friendly, Stuart was adamant: do not write for machines.
‘We should still 100% be writing for humans. However, it’s possible to write for humans in an AI-friendly way so that AI can understand and read it as well.’
He identified three factors AI prioritises:
Recency: AI likes fresh content to supplement its training data. If you have a research report, keep refreshing it with new aspects.
Relevance: AI recognises specialist niche titles and trade media. Some syndicate titles that practitioners sometimes sneer at are actually vital because AI uses them to fill data gaps.
Reputation: AI looks for ‘proof points,’ like whether a spokesperson has a matching biography on the website or a consistent LinkedIn profile.
Stuart suggested that practitioners must broaden their scope of stakeholders. While first-tier earned media remains important, much of it is hidden behind paywalls. AI will look elsewhere for information:
‘When a comms team is doing outreach, if the CEO has got a limited time to do interviews, it changes the priority of the ones we’re going to accept. When we talk about owned media – not just necessarily talking about your own owned media – often we’re talking about partners, suppliers, or customers, and what they’re publishing and sharing.
‘We are in public relations – the key word is “relations”. Sometimes we focus on too small a segment of stakeholders. We might look at the media, we might look at politicians, but it needs to be a lot broader than that.
‘We really need to understand all of the relationships that an organisation has and think about whether we can manage those relationships in a better way, but also what impact that’s going to have on AI answers, because it is going to have an impact on both.
‘Fundamentally, organisations need relationships to exist. You can’t exist in a vacuum, so it’s important that we get this right.’
And to finish on a positive note: Anne saw great opportunity in AI as public relations’ new stakeholder – one that brings new ways to connect and relate:
‘That’s one positive thing that AI can help us with. Look at the spread of relationships that are going to help us get traction with a whole range of other organisations, and influential people.’
For more on how AI is speeding up the spread of information – and challenges to the comms industry – check out our previous webinar ‘AI, Disinformation and the Risks They Pose for Communicators Today’ with Thomas Barton, Executive Director of the Council for Countering Online Disinformation and CEO of Polis Analysis.