Turnitin AI writing detection
Occurred: April 2023
A Washington Post investigation found that Turnitin's AI writing detection tool is inaccurate over half the time, despite the company's claim that its system makes its assessments with 98 percent confidence.
The Post tested the system on essays from five high school students and found that it wrongly assessed most of a mix of human-written, AI-generated, and mixed-source essays, though it performed well at identifying text generated entirely by ChatGPT.
It is unclear how Turnitin verified the accuracy of its software or how it detects text generated by different language models. Educators are unable to opt out of using the software.
The company says the results of its tool should not be used to accuse students of cheating.
Operator: Turnitin
Developer: Turnitin
Country: USA
Sector: Professional/business services
Purpose: Detect AI writing
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability
Transparency: Governance; Black box; Marketing
System
News, commentary, analysis
https://futurism.com/software-schools-detect-cheating-flagging-real-essays-ai-generated
https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
https://www.ft.com/content/d872d65d-dfd0-40b3-8db9-a17fea20c60c
https://www.theregister.com/2023/04/05/turntin_plagiarism_ai/
Page info
Type: Incident
Published: April 2023