AI-powered U.S. private school generates faulty lessons
Occurred: 2024-
Page published: February 2026
An AI system used by a private school generated inaccurate and misleading lesson content, causing confusion and learning setbacks for students and raising broader concerns about the reliability of AI in educational settings.
Reports from former employees and internal documentation revealed that AI-generated lessons at the costly and much-hyped Alpha School in Austin, Texas, were frequently faulty, containing "hallucinated" facts, ambiguous wording, illogical multiple-choice answers, and errors that failed to meet standardised test expectations.
These failures effectively treated students as "guinea pigs" in an unregulated educational experiment, undermining their learning and confidence. Former employees described students feeling anxious and betrayed when assessments did not logically match the questions, leaving them unable to trust whether their exams were valid measures of their knowledge.
Students were also forced to hit rigid, AI-set goals without sufficient human support, leading to overwork and burnout, and their performance was tracked using surveillance techniques, creating a high-pressure environment.
Allegations also surfaced that the school's AI was trained by scraping data from other online courses without permission, undermining intellectual property standards.
The core problem is the decision to build a school model that replaces much of the human teaching and quality control with generative AI, while treating pupils and their data as training material to refine the system.
Leaders promoted the idea that "all educational content is obsolete" and that AI could fully personalise learning, creating pressure to automate lesson generation and review rather than invest in robust human oversight.
Internal notes reportedly show that staff knew the AI had a significant hallucination/error rate yet still used AI to check AI-generated lessons, meaning flawed outputs were left to correct themselves instead of being systematically reviewed by qualified educators.
Transparency to parents and students appears limited: they were sold high test scores and "2-hour school days", while internal accounts suggest many students needed more time and human tutoring to compensate for AI shortcomings, raising questions about misleading marketing.
The alleged scraping and reuse of other platforms' educational content without permission, and at least one external platform cutting ties, indicate weak governance around data sourcing, intellectual property, and compliance.
For directly affected students: The incident risks academic gaps, reduced trust in assessments, heightened anxiety from constant digital monitoring, and long-term scepticism about educational institutions that overpromise on AI.
For educators: It underlines the importance of properly reviewing and supervising AI-generated materials.
For society and policymakers: It illustrates the need for clear standards, accountability frameworks, and evaluation requirements for the use of AI in education to protect learners, and to ensure that automated tools meet rigorous quality and safety expectations.
System: Incept
Developer: Trilogy Software
Country: USA
Sector: Education
Purpose: Automate education
Technology: Generative AI
Issue: Accountability; Accuracy/reliability; Appropriation; Mis/disinformation; Privacy/surveillance; Transparency
Early 2024. Alpha develops and pilots its AI-first, "2-hour learning" model using adaptive learning apps and its own generative AI tutor.
Mid 2025. The New York Times and major podcasts (Hard Fork) profile founder MacKenzie Price. The school reports serving 250 students with plans to expand to dozens of U.S. cities.
July-Oct 2025. Reports surface of "hallucinations" in the Incept and Sabrewing AI platforms. Critics allege that software (like IXL) is being used for hours of repetitive "click-work" rather than deep learning.
Nov 2025-Jan 2026. Investigations by WIRED and CNN reveal cases of physical and emotional distress. Parents report children being denied lunch/snacks until they met AI-set "metrics."
February 2026. 404 Media publishes an investigation revealing faulty AI-generated lessons, intrusive monitoring, and data-handling problems, and describing students as being treated like "guinea pigs."
AIAAIC Repository ID: AIAAIC2204