Billy Partridge: ChatGPT brings risks along with any rewards

ChatGPT didn’t so much ‘arrive’ on the scene as ‘explode’ into it.

Its arrival served as a catalyst for a much broader conversation about generative artificial intelligence (AI), large language models and the pace of change in general.

Since ChatGPT entered the mainstream, AI has preoccupied businesses across Scotland, and it is preoccupying consultancies like ours. We have been watching its development for a long time, while considering its uses and its risks.


Risk and reputation are close bedfellows, and AI is certainly testing that relationship. The use of AI in communications raises questions about client confidentiality, GDPR and data privacy obligations, ethics and accuracy, and even security – it has been reported that ChatGPT has already suffered its first security breach, for example.


How organisations use AI in future will demonstrate their risk appetite. Will they use AI-generated content unchecked? Could it be used to manipulate public opinion, or to create deepfakes that could cause harm?

Part of the risk of AI lies in how easy it makes changing things that were previously unchangeable. I recently read about Google’s latest AI-powered translation software, which translates video content into another language and synchronises the speaker’s lip movements to match the lilt and tone of the new audio. Given that automated translations are themselves fraught with inaccuracies and rarely leave headroom for nuance, colloquialism or sarcasm, there is a real possibility of causing offence (or worse), with a ‘translated’ video in effect amounting to a deepfake whose intent differs from the original.

And that’s just an unintended consequence. That’s not a bad actor with malice at heart. Or an activist looking to disrupt for a cause. Or a shareholder or employee with a point to prove.

Scottish businesses must now get to grips with both how they use these tools and how they defend against them. Now that AI is being commercialised and brought to market globally, the forces of capitalism will stretch its credibility and lawful use to the limit. Until it is regulated, companies must watch and wait, experiment and build, while never losing sight of threats that have always existed but which could now look very different from before.

How organisations use AI may shape others’ perceptions of them. What you say about it matters, too. It is a sensitive subject because it could affect entire professions and roles. I feel strongly that, in times of technological change, the brands that succeed in forging close, emotive connections with their audiences are the ones likely to perform best. After all, character and connection are not easily replicated by robotic processes. The whole point of differentiation is the avoidance of similarity, repetitiveness and stating the obvious.

What to do next? Many businesses are watching and waiting, and indeed they are right to do so: there are more developments coming from Microsoft and others that will bring AI right into our day-to-day working lives.

Yet to gain true competitive advantage, it is important to go beyond testing and learning, prodding and probing. To genuinely progress reputationally in an AI-enabled world, businesses must form clear opinions, outline tangible steps to take, and, where possible, build products that make the most of this incredibly rich resource.

Billy Partridge is a UK Board Director and Head of Scotland at Grayling
