AI ethics in the news: Summer 2024 roundup

Explore our latest insights on AI ethics, from trust and environmental impact to data integrity and regulation. Learn how internal communicators can navigate this evolving landscape and foster responsible AI use.

09 Jul 2024
by Cathryn Barnard

As we approach the summer holidays, it feels like the right time to provide our internal communication community with another roundup of what’s going on in the world of AI and ethics.

As we all know, AI is a red-hot topic and there’s no shortage of ethical considerations to explore.

Each week seems to offer up a new perspective to examine and debate. Indeed, the IoIC’s own IC Index research, conducted in partnership with Ipsos Karian and Box, shines a spotlight on employee sentiment towards AI adoption at work.

Let’s kick-start our summer AI roundup with some essential findings for internal communicators.

Trust & AI

The 2024 IC Index surfaced a vital finding – namely, that trust is on a knife-edge.

Trust has always been pivotal to a healthy workplace, but in an era characterised by so much change and uncertainty, it has never been more important.

Add in the relatively sudden arrival of a disruptor like generative AI (GenAI) and its potential to up-end the way we approach work, and the case for applying human-centred, ethical thinking when introducing it into our workplaces becomes clear.

New technologies like GenAI may well promise more efficient working lives, but against the backdrop of rising demand for greater authenticity and empathy, leveraging it effectively can make for a challenging balancing act.

In addition, since tech behemoths like Microsoft, Google and OpenAI brought AI image-generation tools into the mainstream, there’s been a proliferation of AI-generated misinformation, which has become nearly as prominent as more traditional forms of manipulation.

It’s no wonder, then, that public opinion on the uptake of AI is pretty torn. In a recent survey by Public First conducted in the US and UK, 39% of respondents said they were curious about AI, while almost the same number (37%) fed back that AI worries them.

We know good communication is the basis for trust in any relationship, but what exactly does that mean for our work as internal communicators? In this mercurial context, trust is more important than ever – and we need to nurture it in our workplaces.

As the 2024 IC Index shows, while 52% of respondents fall into the Total Trusters or Proof Seekers categories, the other 48% are Senior Sceptics or All-round Cynics. Do we really want poorly thought-through AI adoption to tip the balance?

Environmental impact of AI

It might not be front of mind when we leverage GenAI, but the computational power required to sustain its rise is doubling roughly every 100 days. Large language models, such as those behind ChatGPT, are among the most energy-intensive technologies of all.

Yet the resulting steep cost to the environment tends to be overlooked.

But the amount of energy GenAI systems require in their huge datacentres, and the amount of carbon they emit, have significant ecological impacts. According to one recent study, for instance, a GenAI system might use around 33 times more energy than a machine running task-specific software.

The world’s datacentres demand ever more electricity. In 2022, they consumed 460 terawatt hours of electricity, and the International Energy Agency (IEA) estimates they could use a total of 1,000 terawatt hours annually by 2026 – roughly the annual electricity consumption of Japan, a country of 125 million people.
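For readers who like to see the numbers worked through, here is a minimal back-of-the-envelope sketch in Python. It simply restates the figures quoted above – the doubling period and the IEA estimates – as assumptions, so treat it as illustrative rather than an official projection.

```python
# Illustrative back-of-the-envelope arithmetic using the figures quoted above.
# These numbers are assumptions taken from the article, not official calculations.

doubling_period_days = 100
growth_per_year = 2 ** (365 / doubling_period_days)  # ~12.6x more compute each year

demand_2022_twh = 460    # datacentre electricity use in 2022 (IEA)
demand_2026_twh = 1_000  # estimated datacentre electricity use by 2026 (IEA)
increase = demand_2026_twh / demand_2022_twh  # ~2.2x growth in four years

japan_annual_twh = 1_000  # roughly Japan's annual electricity consumption

print(f"Doubling every {doubling_period_days} days is roughly {growth_per_year:.1f}x per year")
print(f"Datacentre demand could grow about {increase:.1f}x between 2022 and 2026,")
print(f"reaching around {demand_2026_twh} TWh - comparable to Japan's {japan_annual_twh} TWh a year")
```

On those assumptions, the computing power involved grows more than twelvefold in a year, and datacentre electricity demand more than doubles in four.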

Microsoft is currently exceeding its overall emissions target by roughly 30%, thanks to energy-hungry new datacentres like the one just built in west London. Meanwhile, Google has announced that its greenhouse gas emissions have climbed 48% over the past five years, driven by AI’s energy demands.

When it comes to the effects this is having on everyday lives, recent data from the UK shows the country’s outdated electricity network is holding back much-needed affordable housing projects. Utility firms in the US are also beginning to creak under the pressure.

This demands attention and action. For AI to fulfil its transformative potential, it will need to grow sustainably.

In an era where we increasingly expect organisations to do more than just make profits for shareholders, they will need to evaluate the businesses they fund and partner with, based on whether their actions will result in positive outcomes for people and planet alike.

While it might seem there’s little we can do at an individual level, as IC professionals we can raise awareness of this issue as part of our wider sustainability endeavours. We should encourage colleagues to at least consider the wider consequences when they next fire up that GenAI app.

AI and data integrity

Gartner predicts a whopping 95% of us will leverage GenAI to complete routine tasks by 2026. With greater penetration forecast, closer attention will need to be paid to the quality of the outputs these tools generate.

Given the ways GenAI tools trawl the internet, they are prone to producing ‘hallucinations’ – outputs that sound plausible but are biased or entirely untrue. This has already led to reputational damage, and in some cases businesses have had to compensate customers.

Google says these errors are caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.

This has significant implications for confidence in the reliability of what is being generated, of course.

For organisations seeking to build and train their own AI algorithms, the optimal way to protect themselves from risk is to clean up their data. This involves responsible data collection, efficient use of internal data, the ethical acquisition of data and a commitment to ongoing data hygiene.
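For teams whose data work does extend into code, a minimal sketch of what routine hygiene might look like is below. It is purely illustrative and makes assumptions of its own: the column names (text, source, consent_recorded) and the file name are hypothetical, and a real pipeline would also need PII review, provenance logging and human oversight.

```python
import pandas as pd

# Purely illustrative sketch of routine data hygiene before training or
# fine-tuning on internal data. Column names are hypothetical examples.
def basic_hygiene(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                   # remove exact duplicate records
    df = df.dropna(subset=["text", "source"])   # drop rows missing key fields
    df = df[df["consent_recorded"]].copy()      # keep only data collected with consent
    df["text"] = df["text"].str.strip()         # normalise stray whitespace
    return df

# Hypothetical usage: clean an internal archive before any model training.
# clean = basic_hygiene(pd.read_csv("internal_comms_archive.csv"))
```

The point is less the specific steps than the habit: hygiene is something you run every time new data arrives, not a one-off exercise.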

GenAI isn’t going anywhere, so using it ethically and responsibly will only grow in importance. It’s incumbent on us as communicators to help build internal confidence in using GenAI, so that clients and external stakeholders can trust the outputs it produces.

AI safety regulation

In May, a summit co-hosted by the UK and South Korea in Seoul focused on AI safety regulation. The second of its kind, it resulted in six key pledges.

One of these involved ten countries and the EU agreeing to create an international network for advancing AI safety science. In addition, major tech players, including some from China, signed voluntary codes committing to publish safety frameworks for their AI models. These frameworks will outline risk measurement, potential misuse by bad actors, and thresholds for intolerable risks.

Yet while efforts to synchronise AI safety regulation are underway, there’s still a gap between discussing regulation and implementing it. Critics note that current agreements lack enforcement mechanisms and sufficient detail.

The effectiveness of this type of summit may remain limited, but it does at least serve as a starting point. Questions persist as to who bears ultimate responsibility for safe AI deployment – whether that’s governments, big tech or us as individuals.

However, these initial steps allow stakeholders to align on a shared reality, paving the way for more substantive technical discussions to address GenAI’s safety challenges.

In the meantime, as communicators, it behoves us to be attuned to these wider developments, so we can help our employers to play their part when it comes to sharing responsibility. We must implement our own internal standards in what at times can seem like a ‘wild west’ environment.

Employment risk & AI skills

Are jobseekers’ GenAI skills potentially becoming more important than their job experience?

Recent research from Microsoft, LinkedIn and PwC highlights the rapid impact of AI on job skills and career development. PwC found that skillsets for what it terms 'AI-exposed occupations' are evolving 25% faster than in roles less affected by AI.

This shift emphasises the need for jobseekers to adapt quickly to remain competitive in the job market. Yet currently, employees often rely on self-directed interactions with generative AI to acquire the upskilling and reskilling they need – especially if they are to become the ‘super-users’ that will be increasingly in demand.

And while many workers fear the widespread adoption of AI might ultimately make them ‘surplus to requirements’, evidence suggests the opposite. PwC’s 2024 AI Jobs Barometer reports that posts requiring specialist AI skills are in fact growing at three and a half times the rate of the overall job market.

Leaders can support their colleagues’ transition in this evolving landscape by cultivating a culture of curiosity, encouraging AI exploration, providing learning time and redesigning workflow processes to leverage AI capabilities effectively.

And we can do our bit as IC professionals to help our organisations prepare the current and future workforce for an AI-driven work environment. By helping our colleagues to demonstrate and communicate AI’s value, our organisations can increase adoption rates and improve overall productivity.

 


We’re living in an era of unprecedented change. This is a time for innovative thinking and collaboration to build a better future, both inside and outside our organisations.

The AI revolution exemplifies this opportunity. But it’s going to require a united and responsible approach to ensure its benefits are shared widely – and ethically.

So, returning to that all-important issue of trust, maintaining human-centricity at work is always going to provide a healthy, grounded and much-needed counterpoint to wherever an AI-mediated future might take us.

 

Further recommended reading


IC Index 2024: The Trust Issue – actionable insights to help you improve internal communication and build trust.

 
