Texas exam scoring engine criticised for dumbing down students
Occurred: March 2024-
The introduction of an AI-powered system to grade Texas' STAAR exams drew criticism and concern from educators over its opacity, potential for inaccuracy and bias, possible erosion of student creativity, and impact on employment.
Having redesigned its State of Texas Assessments of Academic Readiness (STAAR) exams in 2023, the Texas Education Agency (TEA) decided to introduce an 'automated scoring engine' to improve scoring efficiency.
The new system uses an AI model similar to OpenAI's GPT-4 large language model to grade written answers, particularly open-ended questions, and is expected to save the state USD 15-20 million per year that would otherwise be spent hiring human scorers.
However, the new system sparked criticism and confusion among some educators and education leaders, who argue that the system lacks transparency and may contain potential biases.
Its impact on the jobs of exam assessors was also highlighted.
TEA officials emphasised that the automated scoring engine is overseen by humans and that a quarter of the responses would be rescored by humans.
Automated essay scoring
Automated essay scoring (AES) is the use of specialized computer programs to assign grades to essays written in an educational setting. It is a form of educational assessment and an application of natural language processing.
Source: Wikipedia
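To make the idea of automated essay scoring concrete, here is a deliberately minimal, hypothetical sketch: it scores a response by extracting a few surface features and assigning the human score of the most similar reference essay. All names, features, and data below are invented for illustration; real AES systems, including the STAAR engine, use far richer NLP models and human-scored training sets.

```python
# Toy illustration of automated essay scoring (AES), not any real system.
# A response is reduced to simple surface features, then given the human
# score of the nearest human-scored reference essay (1-nearest-neighbour).

def features(text: str) -> list[float]:
    """Extract crude surface features: length, vocabulary richness, word length."""
    words = text.lower().split()
    n = len(words)
    if n == 0:
        return [0.0, 0.0, 0.0]
    return [
        float(n),                           # essay length in words
        len(set(words)) / n,                # vocabulary richness
        sum(len(w) for w in words) / n,     # average word length
    ]

def score(text: str, references: list[tuple[str, int]]) -> int:
    """Assign the human score of the nearest reference essay."""
    f = features(text)

    def dist(ref: tuple[str, int]) -> float:
        g = features(ref[0])
        return sum((a - b) ** 2 for a, b in zip(f, g))

    return min(references, key=dist)[1]

# Hypothetical human-scored reference essays (score scale 1-3).
refs = [
    ("short answer", 1),
    ("a longer answer with varied vocabulary and more developed ideas overall", 3),
]

print(score("another brief reply", refs))  # nearest reference is the short one
```

The sketch also hints at why critics worry about such systems: a model trained on surface patterns can reward formulaic writing and penalise unconventional but valid responses, which is one reason TEA pairs the engine with human rescoring.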
Automated scoring engine
Texas Education Agency (2024). Hybrid Scoring Key Questions (pdf)
Texas Education Agency (2024). Scoring Process for STAAR Constructed Responses (pdf)
Operator: Texas Education Agency
Developer:
Country: USA
Sector: Education
Purpose: Grade exam responses
Technology: Machine learning
Issue: Accuracy/reliability; Bias/discrimination; Employment; Transparency
Page info
Type: Issue
Published: April 2024