Misinfo expert accused of using AI in court testimony
Occurred: November 2024
A Stanford professor specialising in misinformation has been accused of submitting fabricated citations in expert testimony in a court case concerning Minnesota's ban on political deepfakes, raising concerns about the reliability of AI-generated content in legal contexts.
Jeff Hancock, a professor at Stanford University and an expert on AI and misinformation, provided an expert declaration in a Minnesota court case challenging the constitutionality of a new law banning political deepfakes.
His declaration, which was intended to support the law, came under scrutiny after it was revealed that two of the academic citations it included do not exist.
The plaintiffs in the case argue that these citations were likely generated by an AI model, such as ChatGPT, and refer to fictitious studies titled "Deepfakes and the Illusion of Authenticity" and "The Influence of Deepfake Videos on Political Attitudes and Behavior".
The controversy arose when attorneys for the plaintiffs, who were contesting the deepfake ban, discovered that the cited studies could not be found in any academic databases or journals.
They argued that Hancock's references exhibited characteristics typical of AI "hallucinations", in which an AI generates plausible-sounding but false information.
The implications of the situation extend beyond Hancock's individual case, raising important questions about the integrity of expert testimony in legal proceedings.
Critics argue that if expert declarations can be fabricated using AI, trust in judicial processes is undermined, and they have called for stricter standards governing the use of AI-generated content in legal contexts.
The fallout from the incident may influence future legislation surrounding AI and misinformation, as well as the ethical standards expected from experts providing testimony.
Deepfake
Deepfakes (a portmanteau of 'deep learning' and 'fake') are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Source: Wikipedia
Operator: Jeff Hancock
Developer: OpenAI
Country: USA
Sector: Business/professional services
Purpose: Generate expert testimony
Technology: Chatbot; Generative AI; Machine learning
Issue: Accuracy/reliability; Ethics/values; Mis/disinformation
Page info
Type: Issue
Published: November 2024