Mass AI cheating uncovered at top South Korean university
Occurred: October 2025
Page published: October 2025
Hundreds of students at Yonsei University, one of South Korea’s top universities, are suspected of using AI tools such as ChatGPT to cheat on a midterm exam in a large online course. The incident exposed a major academic integrity crisis and raised urgent questions about AI policy in higher education.
In a third-year course titled “Natural Language Processing & ChatGPT” at Yonsei University, about 600 students took a midterm exam online on October 15, 2025.
Despite proctoring measures that required them to record their screen, hands, and face, many students allegedly manipulated their camera angles and used multiple windows to evade detection.
The professor caught “a significant number” of cheating incidents and announced that all students found to have cheated would receive zero points for the exam.
A poll on a student community board (“Let’s Vote Honestly”) showed 190 of 353 respondents admitting to cheating — suggesting more than half of those who responded may have engaged in misconduct.
According to student reports, “most of us used ChatGPT” during the exam.
Yonsei University is planning a public hearing, hosted by its Institute for AI and Social Innovation, to discuss ethics, assessment in the AI era, and how to handle such cases.
In parallel, business school students at Seoul National University (SNU) were found to have used AI to generate code in a statistics exam despite an explicit ban; SNU is reportedly considering nullifying all the midterm scores and retesting.
Many Korean universities, including top ones, apparently lack robust, widely adopted rules or guidelines for generative AI use in coursework and exams.
Large non-face-to-face lectures (like MOOCs) or online exams create “blind spots” — even with video recording, students can manipulate camera angles or use other devices.
For some students, using AI feels almost like the default. “Not using AI feels like a disadvantage,” some SNU students said.
Before the scandal broke, many institutions may have underestimated the need for explicit enforcement mechanisms; cheating was being caught only after the fact, rather than prevented by design.
Students who cheated may face academic penalties (zeros, potential suspension) if identified. But the scandal also raises fairness issues: many students feel pressure to use AI just to keep up.
Universities must rapidly develop or strengthen AI ethics policies, redesign assessment formats, and improve monitoring or proctoring mechanisms to better detect misuse.
The incident underscores a systemic tension: legacy assessment models (e.g., multiple-choice online exams) are ill-suited for an era of powerful generative AI. There may be a broader shift toward rethinking what it means to “test understanding” in the AI age.
As AI becomes more integrated into daily life and education, we’re seeing real-world ethical and governance challenges. How institutions respond could shape norms around academic integrity, AI literacy, and responsible use.
Developer: OpenAI
Country: South Korea
Sector: Education
Purpose: Cheat during exams
Technology: Generative AI; Machine learning
Issue: Accountability; Authenticity/integrity; Dual use; Fairness
AIAAIC Repository ID: AIAAIC2115