AI ethics in the news: Autumn 2024 roundup

How can internal communicators ensure AI-generated content fosters trust and engagement? In this Autumn AI roundup, we explore the importance of AI ethics, transparency and critical thinking in the workplace. Discover how to tackle bias, improve data quality and build a communication culture that embraces the benefits of AI while safeguarding human values.

04 Oct 2024
by Cathryn Barnard
Defining AI ethics for the future

As internal communicators, we champion quality and standards across all organisational communication and knowledge sharing. That broader focus is needed more than ever in today's AI-driven landscape, where information flows rapidly and from an ever-wider range of sources.

We can probably all recognise the opportunity GenAI brings us – most of us will have examples of using it effectively to uncover helpful insights and streamline communication processes. However, employee distrust in AI-generated content highlights the need for more robust quality assurance measures.

Yet, as our IC Index 2024 surfaced, there are mixed feelings at play. Many people don't know how they feel about GenAI, and one-third of employees distrust internal messaging written by AI. This, of course, further complicates its adoption.

By encouraging our organisations to prioritise standards, we as internal communicators can foster a culture of trust and reliability. This approach helps mitigate the risks associated with misinformation and ensures that all internal content, regardless of its origin, meets high-quality benchmarks.

To underpin this, embracing comprehensive AI ethics principles – incorporating transparency, accountability, fairness, privacy, safety, human-centred values and inclusivity – is set to be increasingly vital. And these principles should guide not only AI-generated content but all internal communication output.

By taking a proactive stance on quality and standards, IC professionals can lead the way in creating a more trustworthy, efficient and ethical communication ecosystem. This holistic approach will ultimately enhance employee trust, improve information flow – and potentially contribute to a more informed and engaged workforce.

Tuning into bias

As AI technology rapidly advances and becomes ever more integrated into our daily work lives, it becomes increasingly incumbent on us to scrutinise artificially generated IC output carefully for potential bias, ensuring fairness and accuracy in workplace communications.

AI systems can inadvertently perpetuate existing biases present in their training data. Failing to put proper checks in place – ideally carried out by people in the groups most subject to bias – could lead to unfair treatment or misrepresentation of certain cohorts within an organisation.

Additionally, as highlighted by growing concerns about 'digital colonialism', AI development is currently dominated by major powers, primarily in the Global North. This concentration of influence – resulting from US and Chinese hegemony in the AI arena in particular – can lead to systems that don't always adequately represent diverse perspectives or address the needs of all employees.

Technofeudalism is the title of Yanis Varoufakis’ latest book, in which the former Greek finance minister argues that we’re in an era where tech giants have overthrown capitalism at the expense of everyday people and workers. This concept suggests that they wield disproportionate power, potentially influencing our lives in ways that may not align with our organisations’ – or our own – values or goals.

OpenAI, for instance, is currently seeking investment to fund future development that would see the company valued at $150 billion. Given that the company behind ChatGPT was only founded in 2015, that would put it on a par with established global giants like Cisco, Shell and McDonald's.

Whatever the myriad reasons behind the skewed perspectives AI can generate, it's going to be crucial to implement diverse and inclusive AI practices to mitigate the risks emanating from them. Outside of our organisations, there are growing calls for an internationally developed human rights framework. Bringing together the best thinking from the Global South and Europe could create a safer, more sustainable and more equitable vision for the future of AI.

Until then, we will need to ensure AI-generated content is thoroughly examined to address potential biases. Leveraging AI effectively can lead to improved communication output of course. But in doing so, we also need to keep a weather eye on what’s being generated to avoid marginalising the diverse cohorts that enhance our organisations.

Why high-quality data should be powering our AI

Good AI comes from good data, and ensuring data quality is an organisation-wide responsibility. Yet many companies have neglected it, leading to challenges with information and data integrity – and even the veracity of AI outputs.

This poses significant risk when implementing AI systems. To mitigate it, organisations must prioritise data quality, ensuring that the information fed into AI models is current, relevant and error-free.
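To make that tangible, here's a minimal sketch of what an automated quality gate might look like before source material ever reaches an AI model. Everything in it – the field names, the one-year freshness threshold, the check_record helper – is a hypothetical illustration, not a reference to any particular tool:

```python
from datetime import datetime, timedelta

# Hypothetical quality gate: field names and thresholds are illustrative only.
MAX_AGE = timedelta(days=365)  # treat records older than a year as stale
REQUIRED_FIELDS = ("title", "body", "last_updated")

def check_record(record: dict) -> list[str]:
    """Return a list of quality problems found in one source record."""
    problems = []
    for field_name in REQUIRED_FIELDS:
        if not record.get(field_name):
            problems.append(f"missing or empty field: {field_name}")
    updated = record.get("last_updated")
    if updated and datetime.now() - updated > MAX_AGE:
        problems.append("stale: not updated in over a year")
    return problems

knowledge_base = [
    {"title": "Travel policy", "body": "Old guidance...", "last_updated": datetime(2021, 3, 1)},
    {"title": "Onboarding guide", "body": "Current steps...", "last_updated": datetime(2024, 9, 2)},
]

# Only records that pass every check would be passed on to the AI pipeline.
clean = [r for r in knowledge_base if not check_record(r)]
print(f"{len(clean)} of {len(knowledge_base)} records are fit to use")
```

A real pipeline would add many more checks, but the principle stands: stale or incomplete source material is caught before it can distort what the AI produces.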

AI models, despite their growing sophistication, are prone to hallucinations, errors and data drift. That means that fact-checking and verifying AI-generated internal communication content is crucial for maintaining organisational integrity and trust. Overlooking this can lead to the spread of misinformation within a company.

As we noted above in relation to bias, the quality of AI output is directly tied to the quality of its training data. But by implementing rigorous verification processes, companies can harness the benefits of AI-driven communication while safeguarding against potential pitfalls. This approach not only protects the organisation, but also strengthens a culture of accuracy and reliability in all internal communications.

GenAI isn’t going anywhere, so using it responsibly and effectively will only grow in importance. It will be increasingly incumbent on us as communicators to help our colleagues build their confidence in leveraging GenAI, so that internal and external stakeholders alike can trust its outputs.

Greater transparency, greater trust

As companies increasingly integrate AI into their operations, concerned shareholders are pressing them to become more open about how they’re using the technology and the safeguards they’re putting in place. But what about approaches to transparency within our own organisations?

AI systems bring both opportunities and risks that need careful management, and it behoves us as internal communicators to push for greater transparency in the GenAI-produced content circulated within our organisations.

Greater transparency leads to greater trust. A sizeable tranche of our colleagues already distrusts internal messaging written by AI; by openly labelling AI-generated content as such, our organisations can address that scepticism head-on and nurture a culture of openness.

Labelling also aids risk mitigation when it comes to incorrect, biased or discriminatory outputs: tagging synthetic content allows for easier monitoring, quicker error detection and more effective human oversight.
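As a sketch of what such tagging could look like in practice, content might carry explicit provenance metadata from drafting through to publication. The class, field names and label wording below are all hypothetical illustrations, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical provenance tag for internal content; all names are illustrative.
@dataclass
class ContentItem:
    body: str
    ai_generated: bool = False
    reviewed_by: str | None = None
    created: datetime = field(default_factory=datetime.now)

    def label(self) -> str:
        """The label shown to readers, so AI involvement is never hidden."""
        if not self.ai_generated:
            return "Written by the IC team"
        if self.reviewed_by:
            return f"AI-assisted, reviewed by {self.reviewed_by}"
        return "AI-generated: awaiting human review"

draft = ContentItem(body="Q3 results summary...", ai_generated=True)
print(draft.label())        # AI-generated: awaiting human review
draft.reviewed_by = "J. Smith"
print(draft.label())        # AI-assisted, reviewed by J. Smith
```

Carrying the tag through the workflow like this means dashboards, audits and reviewers all see the same provenance information, rather than relying on individual authors to remember to disclose it.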

Clearly labelling AI-generated content will also help our organisations stay ahead of new requirements born of the evolving regulatory landscape. The EU's AI Act, for instance, will require detailed documentation of AI systems. A proactive approach now could help reduce future compliance burdens.

By pushing for transparency and the proper labelling of AI-generated content, we can help build a more trustworthy, ethical, and resilient AI ecosystem within our organisations.

The art of critical thinking in the age of AI

OpenAI has just announced the release of its ‘o1’ model, a new large language model trained with reinforcement learning to perform complex reasoning. o1 ‘thinks’ before it answers, producing a long internal chain of thought before responding to the user.

While that all sounds like it could lead to game-changing artificial intelligence, no clever prompt we type into an AI tool will ever replace the importance of human critical thinking.

In the age of AI, upholding the art of critical thinking within organisations is paramount, particularly in our own internal communication endeavours. While AI tools offer impressive capabilities, they lack the nuanced understanding and flexibility of human thought – vital skills that we bring as IC professionals.

As we’ve touched on before, AI platforms, despite their advancements, continue to display significant shortcomings in areas such as accuracy and bias. GenAI also lacks contextual understanding, ethical reasoning and emotional intelligence – all vital components of effective internal communication. Human intervention therefore remains essential to identify and rectify these issues, ensuring the quality and reliability of AI-generated content.

By maintaining strong critical thinking skills, communicators can ensure messages are not only accurate but also appropriate and empathetic, and aligned with organisational values.

In the rapidly changing future-of-work landscape, innovation and adaptability are invaluable skills for problem-solving and strategic decision-making. Critical thinking fosters creative, flexible approaches that a machine would struggle to replicate consistently.

So, while we can celebrate GenAI as a powerful tool – a helpful addition to our armoury, if you like – the application of critical thinking will be pivotal in driving effective internal communication for the foreseeable future.

+ + + + +

We touched on the importance of trust in our last AI update, as it was a standout issue that surfaced in this year’s IC Index.

We believe trust should be a watchword in all our internal communication efforts, as it’s key to maintaining our human-centricity at work.

Staying across the themes we’ve highlighted in this update helps underpin and reinforce that. It means being alive to bias, ensuring the quality of the data we’re putting in and getting out, being guided by ethics, leaning into greater transparency, and ensuring critical thinking always takes pride of place, no matter how mind-blowing the technology becomes.

By emphasising these key elements in how we leverage GenAI as internal communicators, we can help our organisations embed a productive working culture where we:

  • Encourage unbiased, diverse perspectives and avoid groupthink
  • Improve the quality of AI-generated content through expert human oversight
  • Navigate complex ethical considerations more effectively
  • Engender greater trust in GenAI outputs by being transparent about their origin
  • Leverage critical thinking to evaluate and flex our communication strategies

These are just five of the ‘human’ aspects of leveraging GenAI more effectively and responsibly. They’re very live areas of focus for us at the IoIC right now, so do stay tuned for more news on them.

 
