Court overturns Sheri G. Lederman teacher effectiveness rating
Occurred: September 2014
An algorithmic system used by New York State to evaluate teachers unfairly downgraded an experienced and highly regarded primary school teacher, resulting in public controversy and a lawsuit.
Sheri G. Lederman, an experienced and highly regarded primary school teacher in New York, received a shockingly low rating in her teacher evaluation. The state's "Value Added Modeling" (VAM) system awarded her only one point out of twenty for her students' progress on state tests, deeming her ineffective.
Lederman's students had performed marginally lower on their English exam in the 2013-2014 school year than in the previous year - a slight decrease that nonetheless caused her test-based effectiveness rating to plummet from 14 out of 20 points to just 1. The drastic drop occurred despite her otherwise positive evaluation and her reputation as a skilled educator.
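The state did not publish the inner workings of its growth model, and its opacity was central to Lederman's complaint. Purely for illustration, the sketch below shows how a generic value-added calculation of this kind can behave: students' actual scores are compared with scores predicted from their prior results, the average residual is treated as the teacher's "effect", and that effect is mapped onto a 1-20 band. Every number in it (the predicted scores, the assumed statewide spread, the percentile-to-band mapping) is a hypothetical assumption, not taken from the New York model, but it shows how a small class-average dip can drop a rating from the top of the scale to the bottom.

```python
# A minimal, hypothetical sketch of how a value-added "growth score" can be
# computed and converted to a 1-20 rating. It is NOT the New York State
# model, whose methodology the court found had not been rationally explained;
# the predicted scores, the 0.5 standard deviation, and the 1-20 mapping
# below are all invented for illustration.
from math import erf, sqrt
from statistics import mean


def value_added(actual_scores, predicted_scores):
    """Average residual: how far students landed above or below prediction."""
    return mean(a - p for a, p in zip(actual_scores, predicted_scores))


def rating_1_to_20(va, va_mean=0.0, va_sd=0.5):
    """Map a value-added estimate to a 1-20 band via a normal percentile.

    va_mean and va_sd stand in for the statewide distribution of teacher
    value-added; the narrower the assumed spread, the more a small dip in
    class-average scores moves the rating.
    """
    z = (va - va_mean) / va_sd
    percentile = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    return max(1, min(20, 1 + int(percentile * 20)))


# Hypothetical class of 25 students; "predicted" stands for the scores the
# model expects given each student's prior-year results.
predicted = [300 + i for i in range(25)]

# Year 1: students average ~3 points above prediction -> top rating band.
year1 = [p + 3 for p in predicted]
# Year 2: students average ~1 point below prediction -> bottom rating band.
year2 = [p - 1 for p in predicted]

print(rating_1_to_20(value_added(year1, predicted)))  # 20
print(rating_1_to_20(value_added(year2, predicted)))  # 1
```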
In response, Lederman filed a lawsuit against the New York State Education Department, arguing that the methodology used to assign the growth rating was opaque, irrational, and failed to account for student demographics and ability levels.
In May 2016, a New York state court sided with Lederman, ruling that her evaluation had been "arbitrary and capricious". The judge determined that the state failed to rationally explain how Lederman's score could fluctuate so dramatically in a single year.
The decision was significant as it marked the first time a judge had set aside an individual teacher's VAM rating based on such arguments.
The Lederman case exposed fundamental flaws in the VAM system and had far-reaching implications:
1. It challenged the reliability and fairness of using standardized test scores to evaluate teacher performance.
2. The ruling provided a legal precedent for other teachers to contest similar evaluations.
3. It sparked a broader debate about the effectiveness of VAM in education policy.
The incident underscored the need for more transparent, consistent, and fair methods of teacher evaluation that consider multiple factors beyond standardized test scores.
System
Operator: New York State Education Department
Developer: Mathematica Policy Research
Country: USA
Sector: Education
Purpose: Evaluate teacher performance
Technology: Value-added model
Issue: Accountability; Accuracy/reliability; Effectiveness/value; Transparency
Legal, regulatory
Research, advocacy
United Federation of Teachers. This is no way to rate a teacher (pdf)
Amrein-Beardsley A., Close K. (2019). Teacher-Level Value-Added Models on Trial: Empirical and Pragmatic Issues of Concern Across Five Court Cases
Katz D.S. (2016). Growth Models and Teacher Evaluation: What Teachers Need to Know and Do
Koedel C., Mihaly K., Rockoff J.E. (2015). Value-Added Modeling: A Review (pdf)
Blanchard M.R. et al (2010). Is inquiry possible in light of accountability?: A quantitative comparison of the relative effectiveness of guided inquiry and verification laboratory instruction
Schochet P.Z., Chiang H.S. (2010). Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains (pdf)
News, commentary, analysis
https://www.edweek.org/policy-politics/n-y-teacher-challenges-state-evaluation-system/2014/11
https://danielskatz.net/2016/05/11/new-york-evaluations-lose-in-court/
https://slate.com/human-interest/2015/08/vam-lawsuit-in-new-york-state-here-s-why-the-entire-education-reform-movement-is-watching-the-case.html
https://ny.chalkbeat.org/2012/2/23/21110168/why-we-won-t-publish-individual-teachers-value-added-scores
Page info
Type: Incident
Published: August 2023