Inaccuracies reveal child protection worker used ChatGPT to draft court report
Occurred: December 2023
A child protection worker used ChatGPT to draft a critical report for a custody case, inadvertently disclosing sensitive information that could have endangered a child's safety.
The incident, reported to the Office of the Victorian Information Commissioner (OVIC) in December 2023, revealed that the worker had entered personal details about the child and their family into ChatGPT, and that the resulting report downplayed the risks of the child living with parents charged with sexual offences.
The investigation found multiple signs of ChatGPT's involvement in the report, including inappropriate language and sentence structures inconsistent with child protection protocols.
Notably, the report described a child's doll, allegedly used for inappropriate purposes by the father, as an "age-appropriate toy," which raised alarms about the potential misrepresentation of risks to the child.
The use of ChatGPT to draft child protection-related documents at Victoria's Department of Families, Fairness and Housing (DFFH) was widespread and not governed by any policy.
OVIC found that the worker may have used ChatGPT to draft child protection-related documents in around 100 cases, and that nearly 900 employees - almost 13 percent of the department's workforce - had accessed ChatGPT between July and December 2023.
OVIC mandated that the DFFH block access to generative AI tools like ChatGPT, with the department being given until November 5, 2024, to implement the restrictions and ensure compliance with privacy regulations.
While the outcome of the child's case remained unchanged, the incident highlights significant ethical and safety issues surrounding AI use in sensitive areas like child protection.
OVIC Commissioner Sean Morrison emphasised that AI systems can make serious errors that may lead to harmful consequences if relied upon without verification.
The incident demonstrates the dangers of using generative systems, including ChatGPT, for sensitive work.
It also highlights the need for organisations to ensure their people understand the risks of using these tools, and to put procedures in place that prevent employees from using them for sensitive work or ensure such work is properly vetted.
Operator: Department of Families, Fairness and Housing
Developer: OpenAI
Country: Australia
Sector: Govt - welfare
Purpose: Generate text
Technology: Chatbot; Generative AI; Machine learning
Issue: Confidentiality; Ethics/values; Privacy
Transparency:
Office of the Victorian Information Commissioner. Investigation into the use of ChatGPT by a Child Protection worker (pdf)
Page info
Type: Incident
Published: October 2024