Turnitin AI writing detection

Occurred: April 2023


A Washington Post investigation found that Turnitin's AI writing detection tool is inaccurate more than half the time, despite the company's claim that its system makes its assessments with 98 percent confidence.

The Post tested the system on essays from five high school students and found that it wrongly assessed most of a range of human-written, AI-generated, and mixed-source submissions, although it reliably identified text generated entirely by ChatGPT.

It is unclear how Turnitin verified the accuracy of its software, or how it detects text generated by different language models. Educators are unable to opt out of using the software.

The company cautions that the results of its tool should not be used to accuse students of cheating.

Operator: Turnitin
Developer: Turnitin
Country: USA
Sector: Professional/business services
Purpose: Detect AI writing
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability
Transparency: Governance; Black box; Marketing

Page info
Type: Incident
Published: April 2023