As trust, ever fragile, continues to shape organisational performance and effectiveness, the work of internal communication has never been more important.
This quarter’s curated roundup of AI news offers plenty of food for thought as we continue to explore the ethical considerations of AI adoption.
Several cautionary tales from the past few months serve as potent reminders of the importance of exercising diligence when using GenAI to create content.
Various global brands have recently come under fire for using GenAI in their marketing endeavours.
Skechers, for instance, faced a consumer backlash after publishing a full-page advertisement in U.S. Vogue's December issue that clearly used AI-generated imagery. The ad, featuring two Fifties-style women shopping, drew criticism for its incongruous execution: the women were wearing high heels (rather than Skechers), had ‘melting’ faces and missing clothing elements, and the background scenes featured nonsensical text. The Skechers product itself looked shoehorned into the rest of the content.
Coca-Cola received similar criticism for its AI-generated Christmas 2024 ad, intended to reimagine its beloved 1995 ‘Holidays Are Coming’ campaign. Despite featuring traditional festive imagery of snowy streets and smiling people, the advert was labelled "soulless" and "devoid of creativity" by critics.
While other companies such as Toys R Us claim their use of GenAI in advertising has been cost-effective and "successful", it’s becoming clear that brands risk alienating consumers when they prioritise ‘efficiency’, and therefore profit, above emotional authenticity.
Such missteps highlight a growing risk for businesses as consumers become increasingly adept at detecting AI-generated content. When consumers feel deceived, brand trust is put at risk.
Marketing experts suggest these negative responses stem from AI's perceived incompatibility with authenticity and community-focused values. They warn that low-quality use of AI signals corner-cutting to customers, thus damaging brand perception.
This is particularly relevant given that 59% of Gen Z already harbour concerns about AI's societal impact.
The message is clear. Companies run the risk of alienating consumers when they fail to exercise sound judgement over the use of GenAI in their commercial endeavours. And those that prioritise revenue over authentic, human-created content may well end up paying a high commercial price in the long run.
Are tech companies now encroaching on traditional governmental roles – and in so doing, posing a threat to the democratic rule of law?
That’s the argument posited in The Tech Coup: How to Save Democracy from Silicon Valley, a new book by Marietje Schaake, a Stanford HAI Policy Fellow.
While this might seem like a ‘macro’ issue, we believe it has real-world consequences for internal communication professionals trying to achieve cut-through in an era of misinformation, disinformation, attention deficit, rising cynicism, apathy and disengagement.
The Dutch former MEP warns that tech companies are increasingly assuming governmental power and influence, particularly in the digital realm. Big tech can now control sensitive intelligence capabilities and make unilateral decisions affecting global communications, as seen when Elon Musk controlled Ukraine's Starlink access.
The Tech Coup highlights how much of the messaging presented about technology is controlled by the Silicon Valley behemoths themselves. It’s becoming clear they’ve evolved into powerful entities that increasingly reach far beyond traditional corporate boundaries, controlling multiple aspects of our lives, from communication to infrastructure.
Schaake warns their influence now rivals that of nation states, with companies like Google and Facebook cleverly positioning themselves as underdogs whilst amassing unprecedented power. This is all the more concerning given that politicians often appear intimidated by technology, which can easily lead to impotent oversight – particularly in the burgeoning AI sector, where the technology is being deployed faster than we can collectively learn how best to use it.
What’s clear is that these shifts can bring significant risks to responsible, authentic communication and public trust. The situation may be set to worsen if Trump’s second presidency strengthens tech interests through deregulation.
Rather than showing "humility" towards tech giants, Schaake argues that it’s time for governments to reassert democratic control to protect public interests from the blatant over-reach that primarily serves the corporations pushing it.
Experts predict that by 2035, AI systems may develop ‘consciousness’.
With this prospect no longer confined to the realms of sci-fi, an eminent philosopher believes it has the potential to create pronounced social division between those who believe in AI’s ‘sentience’ and those who don’t.
Just as we’ve witnessed populist politicians position certain societal developments as culture war issues, Professor Jonathan Birch of the London School of Economics warns these attitudinal differences could create "social ruptures". He predicts different societal groups will fundamentally disagree on whether AI systems can experience feelings.
A growing school of thought argues that as AI systems become potentially conscious and capable of independent agency, we will need to seriously consider their welfare rights – much as we do for humans and animals. And while it’s not yet a ‘done deal’ that AI systems will become conscious, the significant possibility does demand careful consideration.
Either way, it highlights a unique challenge for internal communicators who will need to be open to the fact that internal cohorts may well harbour differing opinions on this key development.
Some may even develop emotional connections with AI systems, while others continue to view them as purely functional tools.
While it may seem some way off, acknowledging this possibility is increasingly germane to our efforts to support our tech colleagues with the ongoing responsible, ethical implementation of GenAI at work.
Clearly, an increasingly delicate balance will need to be struck between promoting the use of AI tools in our working lives and accommodating colleagues' differing beliefs about AI's potential.
In the recent UnitedHealthcare lawsuit, relatives of two of its (now-deceased) clients alleged that the US insurance firm knowingly used a faulty AI algorithm to deny elderly patients coverage for necessary extended care.
This brings into sharp focus the importance of organisations fostering the right culture, skillsets and oversight around AI adoption, rather than simply implementing the technology and assuming it will ‘know’ the right course of action.
The lawsuit, and the controversy surrounding the firm’s alleged awareness that the algorithm it uses has a staggering 90% error rate, were precursors to the fatal shooting of its CEO in New York in early December 2024.
As AI use goes mainstream – whether overtly or covertly – internal communicators clearly have a vital role to play in supporting its safe and responsible use.
There are several areas where internal communication can make a difference to how AI is adopted and leveraged.
The key take-away here? Successfully harnessing AI isn't just about the technology. It's equally about the human systems within which it operates. Internal communicators are superbly positioned to help organisations move beyond treating AI governance as merely a compliance exercise and build a culture of innovation.
At the start of January, OpenAI CEO Sam Altman forecast that 'virtual employees' could join workforces this year, with these ‘AI agents’ capable of performing autonomous tasks, such as scheduling meetings and booking travel.
Major players like Microsoft and McKinsey are already adopting this technology and reporting significant productivity gains. Predictions suggest up to 30% of current work hours could be automated by 2030. Microsoft's HR Virtual Agent, for instance, has already saved 160,000 advisor hours by handling routine queries.
As touched on above, some colleagues will be more disposed than others to believe in AI sentience. The emerging possibility of AI consciousness may raise important considerations about its welfare and moral status. If some colleagues acquire virtual workplace colleagues in the not-too-distant future, what are the ethical and practical implications for internal communicators?
Clearly, this signals a critical transition period. Transparent communication about the role and limitations of AI agents will be essential for successful integration.
It also presents some crucial challenges. What is the best way to communicate these changes sensitively, manage colleague concerns around automation and job security, and help organisations develop policies for human-AI collaboration?
As AI agents become co-workers, maintaining healthy workplace cultures via clear messaging about their implementation and accompanying safety measures will be mission critical.
Given the pace at which GenAI is evolving, it’s unsurprising that colleagues feel increasingly overwhelmed. Talk of virtual colleagues and AI agents, the potential development of digital consciousness, and the at-times unchecked, egregious behaviour of big tech might easily make us feel powerless. It can even feel like we’ve already entered a dystopian, machine-mediated future.
But if anything, these developments highlight the need to have the tools at our disposal to navigate this emerging new landscape responsibly and ethically.
That’s why the IoIC is preparing resources to help our profession address the ethical issues surrounding AI adoption – because those within our sector are uniquely placed to interpret and communicate the impact of these developments within our organisations.