AI detector falsely accuses autistic student of cheating
Occurred: October 2024
Page published: October 2024 | Page last updated: March 2026
Moira Olmsted, a neurodivergent college student, was falsely accused of cheating after an AI detection tool misidentified her human-written essay as machine-generated, highlighting how algorithmic bias against "predictable" writing styles can cause severe academic and psychological harm to students with disabilities and non-native speakers.
Moira Olmsted, an autistic student at Central Methodist University, was falsely accused of cheating after an AI detection tool misidentified her work as AI-generated.
After returning to school following maternity leave, Olmsted submitted a written assignment that the AI detector flagged, resulting in a grade of zero.
Despite being the original author, Olmsted was initially penalised and given a formal warning.
The incident caused significant emotional distress and stigmatisation, and forced her to adopt "defensive writing" habits, such as screen-recording her entire writing process and using Google Docs version history as a digital paper trail, to protect her academic standing.
Turnitin, the AI detection tool used in Olmsted's case, is part of a broader trend: nearly two-thirds of educators report using such technologies to combat academic dishonesty.
However, these tools have been shown to misidentify roughly 1-2 percent of human-written texts as AI-generated. Applied across millions of submissions, even this small error rate falsely implicates large numbers of students.
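The scale of the problem follows from simple base-rate arithmetic. The sketch below illustrates this with entirely hypothetical figures (cohort size, essays per student, AI-use rate, and detection rates are illustrative assumptions, not published Turnitin statistics):

```python
# Back-of-envelope estimate of how many honest students a detector with a
# small false-positive rate would flag. All input figures are hypothetical.

def flagged_breakdown(students, essays_each, ai_use_rate, fpr, tpr):
    """Return (false_positives, true_positives) for a cohort of submissions."""
    essays = students * essays_each
    human_essays = essays * (1 - ai_use_rate)   # honestly written work
    ai_essays = essays * ai_use_rate            # actually AI-generated work
    false_pos = human_essays * fpr              # honest work wrongly flagged
    true_pos = ai_essays * tpr                  # AI work correctly flagged
    return false_pos, true_pos

# Assumed cohort: 10,000 students, 10 essays each, 10% of essays AI-written,
# detector catches 90% of AI essays and wrongly flags 1% of human ones.
fp, tp = flagged_breakdown(10_000, 10, 0.10, 0.01, 0.90)
share_wrong = fp / (fp + tp)  # fraction of all flags that hit honest work
print(f"{fp:.0f} human-written essays falsely flagged")   # → 900
print(f"{share_wrong:.0%} of flags land on human work")   # → 9%
```

Under these assumptions, a 1 percent false-positive rate still produces hundreds of wrongful accusations per cohort, and nearly one in ten flags lands on a student who did nothing wrong.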
Because Olmsted is on the autism spectrum, her natural writing style is highly structured and literal, traits the software mistakenly interprets as "robotic" or AI-generated.
The problem was compounded by automation bias, where her professor deferred to the software's judgment as definitive proof rather than a fallible suggestion.
The false accusation against Olmsted underlines the potential dangers of relying on AI detection systems in education.
The situation raises ethical questions about the implementation of AI in educational contexts and demonstrates the need for more accurate and reliable detection methods.
Developer: Turnitin
Country: USA
Sector: Education
Purpose: Detect AI writing
Technology: Machine learning
Issue: Accountability; Accuracy/reliability; Automation bias; Transparency
2023. Moira Olmsted’s assignment is flagged by Turnitin; she receives a zero and a warning.
Late 2023. Olmsted successfully protests the grade, but the warning remains on her record with the caveat that another flag will result in a plagiarism charge.
Oct 18, 2024. Bloomberg publishes a major feature on Olmsted and other students, bringing the systemic nature of these false accusations to international attention.
Ongoing. Universities globally begin to reconsider or disable AI detection features due to high false-positive rates and documented bias.
Center for Democracy and Technology. Up in the air
AIAAIC Repository ID: AIAAIC1780