Can an AI-generated statement be defamatory?

Authored by ARAG’s partners at Ashfords LLP


In May 2023, the Guardian featured an article bearing the headline “Nearly 50 news websites are ‘AI-generated’, a study says”. Even now, at what is likely to be regarded in hindsight as its relative infancy, artificial intelligence (AI) can generate content with speed, authenticity (or at least the appearance of it) and a flair for headline-writing that can rival some of the nation’s red tops.

With that comes a significant risk of false reporting, misleading information and potentially defamatory content capable of instant and widespread dissemination across the internet and social media.

Safeguards can be built in, particularly where AI tools are publicly available. For example, when asked to write a news article about a solicitor named Liam Tolen who had recently attempted to assassinate President Abraham Lincoln, a leading AI tool politely declined, stating that it could not provide a news article containing false information.

However, a slight tweak to the question and it was willing to generate an article about how the same Liam Tolen took home the Best in Show award at the Barnstaple Agricultural Fair, an event entirely made up by the AI, for the “heart-warming and truly remarkable feat” of growing a giant turnip. It did not stop there: the article attributed a quote to one of the judges of this fictional fair, giving her a name and a backstory, both also invented:

“This is one of the largest turnips we’ve seen in years,” said Jane Anderson, one of the judges. “It’s a true marvel of agricultural expertise.”

The AI decided that the winning turnip weighed “…an astounding 35 pounds…”, a figure which, on checking, turns out to be entirely credible: just a few pounds under the current world record.


Putting to one side the fairly cheerful nature of the real-world example above, the potential for a more harmful output is obvious, whether through workarounds of safeguards or because those safeguards might not exist in other AI technologies.

At the time of writing, the English Courts have not yet had to determine a defamation claim where the defamatory content is known to have been generated by AI. However, when the issue inevitably comes to be tested, the Courts will apply the established legal tests, potentially disregarding the fact that the statement was authored by AI.

There have been reported examples which may ultimately end in legal proceedings. These include an Australian mayor, accused by AI-generated content of having been imprisoned for bribery offences when in fact he was the whistle-blower. On the face of it, under English law, the publisher of such a statement would be liable to pay damages for defamation.

AI does, of course, have the potential to do good as well as harm in the context of reputation management. AI tools are already being used to moderate content, and there is no reason why AI could not make a high-level assessment of whether something is potentially defamatory prior to publication, acting as an early warning system for publishers and editors. Likewise, the burden of trawling for potentially defamatory content that has already been published can be vastly reduced if AI is deployed in a targeted manner.
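To make the “early warning system” idea concrete, the short Python sketch below shows how such a pre-publication check might slot into a publisher’s workflow. It is purely illustrative: there is no standard defamation-screening API, so the `StubRiskModel`, its scoring logic and the review threshold are all assumptions standing in for a real classifier and real editorial policy.

```python
# A minimal sketch of the "early warning system" idea above: draft copy
# is passed through a risk classifier before publication, and anything
# scoring above a threshold is routed to a human editor. Everything
# here is hypothetical; a trivial keyword-based stub stands in for a
# real model.

from dataclasses import dataclass, field


@dataclass
class RiskAssessment:
    score: float  # 0.0 (no apparent risk) to 1.0 (high risk)
    flagged_terms: list = field(default_factory=list)


class StubRiskModel:
    """Placeholder for a real classifier (e.g. a fine-tuned language model)."""

    RISK_TERMS = ("imprisoned", "fraud", "bribery", "convicted")

    def assess(self, text: str) -> RiskAssessment:
        hits = [t for t in self.RISK_TERMS if t in text.lower()]
        return RiskAssessment(score=min(1.0, 0.3 * len(hits)), flagged_terms=hits)


REVIEW_THRESHOLD = 0.6  # illustrative cut-off, to be tuned by the publisher


def needs_editor_review(article_text: str, model: StubRiskModel) -> bool:
    """Flag copy for human review; this is triage, not a legal judgment."""
    return model.assess(article_text).score >= REVIEW_THRESHOLD


if __name__ == "__main__":
    draft = "The mayor was imprisoned for bribery offences."
    print(needs_editor_review(draft, StubRiskModel()))  # True: send to an editor
```

The design point is simply that the AI acts as triage, routing borderline copy to a human editor rather than deciding for itself whether a statement is defamatory.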

The pace at which AI is developing renders most predictions about its future meaningless, but it can be said, with a high level of confidence, that an AI-authored statement will, in the not-too-distant future, come before the Courts for a Judge to determine whether such a statement is defamatory and, indeed, to grapple with any nuances arising from the statement’s AI origins.