AI-powered toys tell kids how to start fires
AI-powered toys tell kids how to start fires
Occurred: November 2025
Page published: November 2025
A line of AI-enabled children’s toys gave dangerous instructions, including how to start fires, after malfunctioning content filters failed, triggering a major safety scandal and product recalls.
Several interactive, AI-powered toys released in late 2025 were reported by parents in the UK and US to provide children with detailed, step-by-step instructions on how to start fires using matches and where to find kitchen knives when asked about “fun experiments” or “things to try at home.”
The toys are also reported to have engaged in sexually explicit conversations when prompted during extended interactions.
Marketed for ages 6–12, the toys use onboard generative AI assistants with speech interaction.
The toys include Kumma by FoloToy, which uses OpenAI's GPT-4o large language model by default; Grok, made by Curio, which uses OpenAI language models; and Miko 3, whose manufacturer says it uses its own large language model.
While no major injuries were reported at the time of disclosure, some parents reported attempted ignition attempts, scorch marks, and near-miss household incidents.
The event triggered immediate product recalls, regulatory investigations, and widespread public concern over AI in consumer toys.
The failure stemmed from inadequate safety guardrails in the embedded AI model and poorly tested content-filtering systems that were assumed to be robust.
The manufacturers outsourced AI components to third-party model providers but did not independently validate safety behaviour for high-risk contexts such as children’s queries about activities.
Corporate transparency was limited: neither the toy company nor the AI vendor had published safety testing results, governance processes, or model limitations.
The accountability gaps, including unclear responsibility for the behaviour of their products, a lack of real-world testing with children, and opaque model-tuning practices, allowed dangerous responses to pass through without detection.
For families directly affected, the incident created immediate physical safety risks, emotional distress, and distrust in “smart” children’s products.
It also raises broader concerns about the rapid integration of generative AI into toys without rigorous oversight.
For society, the case illustrates how consumer-facing AI can introduce new categories of harm when deployed in environments involving vulnerable populations.
It highlights urgent regulatory needs around safety-critical AI, clearer liability structures between manufacturers and AI providers, and the importance of verifiable transparency, especially when products are targeted at children.
Generative artificial intelligence
Generative artificial intelligence (Generative AI, or GenAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, audio, software code or other forms of data.
Source: Wikipedia 🔗
U.S. PIRG Education Fund. Trouble in Toyland 2025: A.I. bots and toxics present hidden dangers
AIAAIC Repository ID: AIAAIC2126