Child protective services and Artificial Intelligence (AI)? Can high-powered digital tools support families and children? We think they can, when used with care and forethought. No commissioner or supervisor wants to read about a career-ending incident caused by a computer, and the best way to avoid digital mishaps and maximize the potential of AI tools is twofold: be clear about AI use policies, and offer staff professional development opportunities to try AI tools out and think critically about AI use.
Generative AI tools such as ChatGPT, Gemini, and Claude, known as Large Language Models (LLMs), may feel new, but computer scientists have been working on building computer “intelligence” since the 1950s and 1960s. These systems use masses of data (think of all the works of Shakespeare, plus all of Wikipedia, Reddit, and the New York Times, and more) to “train” the algorithms, or formulas, that make them work. In corporate and commercial fields such as retail operations, human resources, or sales and marketing, artificial intelligence has been used for decades. In child and family protection agencies, however, AI systems have not been as widely used, for all the reasons we can imagine. AI systems have been accused of making biased decisions, accelerating existing structural inequalities, and “hallucinating,” or delivering wrong answers. In our settings, where families’ and children’s safety and security is on the line, caution in adopting AI tools is appropriate and necessary.
The biases baked into predictive systems are well documented, and newer AI tools, like ChatGPT, also pose risks to workers and the families they serve. At the same time, the newer LLMs, which can process and output text, audio, images, and video, offer significant potential for improving the internal operations of child welfare agencies and systems, but they should be deployed only with great care and internal oversight. This article describes what AI is, offers examples of potential AI use cases as well as examples of unsuccessful AI deployments, and ends with suggestions for AI governance in child protection. Proactive AI use policies and practice guidelines can help leadership and staff safely manage the use of LLMs and other AI tools, in service of protecting workers and families.
What is AI?
AI is not one specific digital tool; it is a range of computational processing tools that encompass everything from computer vision (think self-driving cars) to sensing technologies (think automatic doors) to LLMs (think ChatGPT, among others). These tools use massive amounts of data of all kinds, including videos, text, photos, and songs, to make probabilistic decisions about what data comes next. Traditional, or narrow, AI uses big sets of existing data to make predictions about what will happen in the future. These kinds of systems have been used in child and family protection systems for the last ten years, most famously in Allegheny County, PA. Newer Generative AI tools, like LLMs, are designed to create new data. While the answers from predictive analytic models are designed to be reliable, returning the same response every time the model is used, Generative AI models are designed to produce new responses, so it would not be surprising for a generative tool to create a different case note every time it was given the same data. LLMs can also make mistakes. “Hallucinations” is the cute name given to the wrong answers an LLM can deliver. LLMs work by probabilistically linking sequences of words together.
Sometimes, a model can link the wrong set of words, as when the New York City Mayor’s Office released a chatbot that delivered false information about the City’s small business regulations (Offenhartz, 2024).
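For readers who want to peek under the hood, the short sketch below (a toy Python example with invented words and probabilities, not a real language model) shows how probabilistic next-word selection can produce a different continuation each time the same prompt is used, which is why a generative tool may draft a different case note from the same input.

# Toy illustration of probabilistic next-word selection (not a real LLM).
# The vocabulary and probabilities below are invented for illustration only.
import random

# Hypothetical probabilities for the word that might follow "The family was":
next_word_probs = {
    "cooperative": 0.5,
    "hesitant": 0.3,
    "unavailable": 0.2,
}

def pick_next_word():
    # Sample one word at random, weighted by its probability.
    words = list(next_word_probs.keys())
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Running the same "prompt" three times can give three different continuations.
for _ in range(3):
    print("The family was", pick_next_word())

Real LLMs make the same kind of weighted choice over tens of thousands of possible words, one word at a time, which is both the source of their fluency and the reason identical inputs can yield different outputs.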