The beta launch of ChatGPT in November caused many internal communication professionals to sit up and take notice, me included. The chatbot is the latest piece of AI (artificial intelligence) that’s promising to transform the way we work, and more specifically the way we, or should I say computers, write.
This new technology from OpenAI, the brilliant minds behind DALL-E, not only answers questions like a typical chatbot but can also take a brief and instantly create an article comparable in quality to one written by a human.
This is exciting and terrifying in equal measure. Whereas tools such as Grammarly help to bolster our writing confidence, ChatGPT could arguably eliminate the need for us to write altogether. So, what does that mean for internal communication?
The gift of time
Earlier this month, Jonas Bladt Hansen wrote an excellent, balanced article on Medium where he gave ChatGPT a series of prompts and briefs including asking it to create a channel overview and a social media policy – the results were impressive.
Even more impressive was the way that it remembered information he gave it in a previous brief and used that context to inform a future one. Smart indeed.
There is a clear advantage here regarding time – when your to-do list is never-ending, having technology that can take some of that burden and produce high-quality content is a godsend. You could finally have the headspace to do the more strategic, high-impact activities that you’ve pushed to the back burner time and time again to manage the day-to-day.
But what if you enjoy writing articles, constructing channel matrices, or creating comms plans? After all, for many of us, it was a love of writing that brought us into this profession in the first place. The obvious answer is that you don’t have to outsource everything to AI. There will undoubtedly be content that needs a human (for now at least) to add nuance or to simply build trust with employees as the message needs to come from a person not a machine.
And trust is another interesting angle to consider. As internal communicators, we have an ethical responsibility to ensure the content we share is accurate, which in turn supports a culture of trust. Yet, how reliable is AI at the moment?
A question of trust
In January 2023, Beth Collier, Communication, Creativity and Leadership Consultant, shared an experience on LinkedIn where she asked Google: “Which tennis player has spent the longest time ranked at number 1?”.
The answer Google provided was Roger Federer with 237 consecutive weeks. Yet when she changed the question to ask about female tennis players, it turned out that Steffi Graf held the top ranking for 377 weeks in total. Hmmm.
I decided to test this on ChatGPT. The answer I got to the first question was Novak Djokovic, and when I added the word ‘female’ I got Serena Williams. When I asked specifically about Steffi Graf, it conceded that she was indeed way ahead in the overall rankings. I then asked ChatGPT why it gave me the wrong answer and it sweetly apologised, which was nice.
While Djokovic will undoubtedly pass Graf in the very near future (at time of writing, he is at 376 weeks), this does highlight that AI tools rely on the information on the internet being accurate and unbiased. Unfortunately, it isn’t always. It’s also important to note that ChatGPT’s knowledge only extends to 2021. On the bright side, these tools are constantly learning, so they will get better. The more we use them, the more useful they will become, as they learn from the information we input – as long as we remember that the data we share there might not remain ours for long.
Whose content is it anyway?
An important aspect of AI is security around confidential information. Do employees understand that confidential company information shared on ChatGPT is currently unlikely to remain confidential? And what the consequences of that are? Microsoft and Amazon have already provided guidance to employees on this very topic.
I’ve also seen a lot of chat about how to know if students are cheating on essays. But what about inadvertent plagiarism? If you’ve briefed ChatGPT, you might not know where it has sourced its information from, or how much of it is verbatim. This blog by Erie Astin on Medium is quite illuminating – she shows not only that the content generated by ChatGPT might not be original, but also that it might not be distinguishing between out-of-date and current sources of information.
This presents a real ethical issue for internal communicators. No doubt, this issue will be addressed but for now, relying solely on AI for your content is likely to end badly.
So where does this leave us?
In conclusion, AI isn’t going to take our jobs just yet. It will bring huge opportunities, and we need to be curious and open-minded about how it will fit into, and evolve, our existing approaches to internal communication. We also need to question and be conscious of ethical considerations as these tools become more entrenched in our day-to-day lives.
If you want to find out more about AI in internal communication, I recommend joining the IoIC webinar with Dan Sodergren on 27 February – having seen Dan speak at IoIC Live 2022, I am confident it will be thought-provoking as well as insightful. I hope to see you there.