Turnitin AI writing detection tool found to be mostly inaccurate
Occurred: April 2023
A much-hyped AI writing detection tool was found to be inaccurate over half the time, raising doubts about its effectiveness and the integrity of its developer.
Turnitin's Feedback Studio AI writing detection tool, which is used by many educational institutions, claims to be able to detect writing produced by AI models such as language generators. The company had claimed Feedback Studio was 98 percent confident in its assessments.
However, tests conducted by the Washington Post using writing samples from five high school students found that the tool failed to detect AI-generated text about 50 percent of the time. The Post also found that Turnitin's accuracy varied widely depending on the type of text and the specific AI model used to generate it.
For instance, the tool wrongly assessed most of a range of human-written, AI-generated, and mixed-source essays, even though it was good at identifying AI-generated text when the whole input had been created by ChatGPT. The Post also found the tool was biased towards flagging text written by non-native English speakers.
The investigation raised concerns about the reliability and effectiveness of Turnitin and other AI-powered writing detection tools, and pointed to their potential to unfairly penalise students and writers.
The Post's findings suggest that educators and institutions should approach these tools with caution and consider multiple factors when evaluating the authenticity of written work.
Operator: Turnitin
Developer: Turnitin
Country: USA
Sector: Education
Purpose: Detect AI writing
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Bias/discrimination
Page info
Type: Issue
Published: August 2024