The Jerusalem Post: Business and Innovation

Can ChatGPT and other AI be sued for libel? - interview

 
A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken Feb. 8, 2023. (photo credit: REUTERS/FLORENCE LO/ILLUSTRATION/FILE PHOTO)

Radio host Mark Walters was accused of defrauding and embezzling funds, but the information, which came from OpenAI's ChatGPT, was false.

An American radio host filed a libel lawsuit against ChatGPT developer OpenAI on June 6 over false information the platform generated about him, a case that, according to attorney Roy Keidar, opens up new legal challenges for artificial intelligence technologies.

A journalist researching the lawsuit Second Amendment Foundation v. Robert Ferguson in May was told repeatedly by ChatGPT that Georgia radio host Mark Walters had defrauded and embezzled funds from the foundation.

The problem is that this was completely fabricated by ChatGPT, according to Walters's lawsuit. Neither Walters nor any financial wrongdoing is mentioned anywhere in the case documents.

Walters argued that ChatGPT injured his reputation by giving this false information to the journalist.


The AI platform can sometimes make up facts, said the filing, a phenomenon OpenAI was aware of and called a "hallucination."

Roy Keidar. (credit: NICKY WESTPHAL)

"This probably won't be the only one, but this is the first lawsuit that we know of in this subject in the United States," said Keidar, partner, emerging technologies at the Arnon, Tadmor-Levy law firm.

Keidar said he didn't think the suit had a high chance of succeeding, since it was difficult to demonstrate how widespread the false information was, and since OpenAI had issued warnings and disclaimers about the platform.

"While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice," a ChatGPT login alert warns. On the homepage, there are two other warnings, one further clarifying that incorrect answers may be produced about "people, places, or facts."



Keidar said the case is different from those that came before it because the technology is so revolutionary: information produced by generative AI is fundamentally different from search engine results.

Search engines and social media platforms currently have protection for the results they curate under Section 230, a US statute that has been interpreted to mean that such websites aren't liable for third-party content produced and shared by users. Keidar said it isn't clear yet whether this protection extends to machine learning and AI, which learns from third-party content but doesn't return the original material; instead, it offers its own analysis, interpretations and conclusions.


AI brings new legal challenges

Keidar noted another recent incident in which ChatGPT gave a professional false information: a New York lawyer used the chatbot during research and presented a judge with fabricated legal precedents. The attorney said he hadn't intended to deceive the court; he didn't know the platform could be wrong. In early June, a Texas district judge banned AI-generated filings.

"It becomes a big issue now that more people are using and relying on these kinds of tools in everyday life," said Keidar. 

Keidar said it was conceivable that journalists, as in the Walters incident, could draft news stories using AI but fail to fact-check and proofread, leading to the proliferation of fake news and libelous content. These journalists could face serious lawsuits themselves.

"The question is who is responsible," said Keidar. There are disclaimers on the platforms, and "the responsibility is passed onto the user, 'you're in charge of the input,  and then it's your responsibility if you rely on the output."

On the other hand, the platform presents low-confidence information as if it were fact, and it is unknown how much incorrect material is hidden beneath the surface.

Keidar said platform policies need to strike a balance between protecting the companies while the technology develops and protecting the user. He recommended that US regulators begin considering such policies.

As more people use ChatGPT and other AI platforms, brand-new legal challenges to existing laws are going to arise. Jurists and regulators should prepare, but perhaps they shouldn't ask AI how to do so.
