
Libelled by the bot: reputation, defamation and AI

By Binoy Kampmark - posted Wednesday, 26 April 2023


Cometh the new platform, cometh new actions in law, the fragile litigant ever ready to dash off a writ to those with (preferably) deep pockets. And so, it transpires that artificial intelligence (AI) platforms, for all the genius behind their creation, are up for legal scrutiny and judicial redress. Certainly, some private citizens are getting rather ticked off about what such bots as ChatGPT are generating about them.

Some of this is indulgent, narcissistic craving – you deserve what you get if you plug your name into an AI generator hoping for sweet things to be said about you. Matters become even more comical when the platform is itself riddled with inaccuracies.

One recent example stirring interest in the Digital Kingdom is a threatened legal suit against OpenAI, maker of the ChatGPT bot. Brian Hood, Mayor of Hepburn Shire Council in the Australian state of Victoria, was alerted to inaccurate claims implicating him in a bribery case that took place between 1999 and 2004. It involved Note Printing Australia, an entity of the Reserve Bank of Australia. Hood had worked at Note Printing Australia and blew the whistle on bribes being paid to foreign officials. He was never charged with any crime. Answers generated by ChatGPT, however, suggested otherwise, including the claim that Hood had been found guilty of the very bribery he exposed.


In a statement provided to Ars Technica by Gordon Legal, the firm representing Hood, more details are given. Among "several false statements" returned by the AI bot are claims that Hood "was accused of bribing officials in Malaysia, Indonesia, and Vietnam between 1999 and 2005, that he was sentenced to 30 months in prison after pleading guilty to two counts of false accounting under the Corporations Act in 2012, and that he authorised payments to a Malaysian arms dealer acting as a middleman to secure a contract with the Malaysian Government."

James Naughton, a partner at Gordon Legal, is representing Hood. "He's an elected official, his reputation is central to his role," stated the lawyer. "It would potentially be a landmark moment in the sense that it's applying this defamation law to a new area of artificial intelligence and publication in the IT space."

In March, Hood's legal representatives wrote a letter of concern to OpenAI, demanding that it correct the outlined errors within 28 days and threatening a defamation action should the company refuse.

The question here is whether ChatGPT's supposedly defamatory imputations might fall within the realm of liability. The bot's reliability in generating facts is currently sketchy, something any user should be aware of. That said, opinions on the subject of reputational liability remain mixed.

Laurence Tribe of Harvard Law School does not regard the notion as outlandish. "It matters not, for purposes of legal liability, whether the alleged lies about you or someone else were generated by a human being or by a chatbot, by a genuine intelligence or by a machine algorithm."

Robert Post of Yale Law School looks at the matter from the perspective of the communication itself. Defamation would not take place at the point the information is generated by the bot; it would only happen if that (mis)information were communicated or disseminated by the user. "A 'publication' happens only when a defendant communicates the defamatory statement to a third party."


Not so, claims RonNell Andersen Jones of the University of Utah. "If defamatory falsehood is generated by an AI chatbot itself, it is harder to conceptualise this within our defamation law framework, which presupposes an entity with a state of mind on the other end of the communication."

In terms of defaming a public figure, "actual malice" would have to be shown – something difficult to establish in the ChatGPT context. Jones points us in a possibly different direction: the functioning, or malfunctioning, of such a system could instead be viewed through the prism of product liability.

Those based in the US might resort to Section 230 of the Communications Decency Act, that most remarkable of provisions, which grants internet service providers immunity from suits over content published by third parties on their platforms. The appeal of the section is evident from how many attacks have been made against it, be it from campaigning liberal celebrities with bruised reputations or from Donald Trump himself.

But the original drafters of the law, Oregon Democratic Senator Ron Wyden and former Rep. Chris Cox, a California Republican, are of the view that chatbot creators would not be able to avail themselves of the protection. "To be entitled to immunity," Cox suggested to The Washington Post, "a provider of an interactive computer service must not have contributed to the creation or development of the content at issue."

When Ars Technica attempted to replicate the various mistakes supposedly generated by ChatGPT, they came up short. Ditto the BBC. This might suggest that the generated errors have been corrected. But over the next few weeks, if not months, expect a number of thick, all-covering disclaimers to ensure that AI bots such as ChatGPT are not subject to liability.

As a matter of fact, ChatGPT already has one: "Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output." Whether this satisfies technologically illiterate courts remains to be seen.

 




About the Author

Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He currently lectures at RMIT University, Melbourne and blogs at Oz Moses.


This work is licensed under a Creative Commons License.
