Recommendations for Regulating Artificial Intelligence to Minimize Risks to Children and Their Families

Between January 2023 and March 2024, multiple entities published guidance on artificial intelligence (AI), underscoring growing public concern about AI governance. Yet as federal and state legislators weigh AI regulations to safeguard the public from various risks, recent discourse about AI risk has largely overlooked the use of AI by children and their families or caregivers. This gap is widening as students increasingly turn to AI for homework assistance and interact with AI-generated content (including images and videos), and as caregivers (both parents and educators) use AI to foster child engagement. Drawing on lessons from a recent Child Trends study of AI systems' capabilities, we propose stronger guidance and regulations to ensure rigorous assessment of the potential harms AI systems pose in contexts involving children and families.

For our study, we created two AI systems, each built on one of two prominent large language models (LLMs; see the Methods note below this blog), and found that the two models showed strong agreement on simple tasks (e.g., identifying articles on compensating the early childhood workforce) but diverged on complex subjects (e.g., analyzing articles on change frameworks). This divergence illustrates a key risk within AI systems: an AI system's interpretation (or misinterpretation) of complex human ideas and values could expose children and caregivers to incorrect information or harmful content.
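To make the agreement comparison concrete, below is a minimal Python sketch of the kind of analysis the study describes: two LLM-backed systems label the same set of articles, and we measure how often they agree, both raw and corrected for chance. The label lists and the task framing are hypothetical stand-ins, not the study's actual data or models.

```python
# Minimal sketch (hypothetical data): measuring agreement between two
# LLM-backed classifiers that labeled the same articles, as in the
# study's simple task (flagging articles about compensating the early
# childhood workforce).
from collections import Counter

def percent_agreement(labels_a, labels_b):
    """Share of articles on which the two systems assign the same label."""
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Agreement corrected for chance: 1.0 is perfect, 0.0 is chance level."""
    n = len(labels_a)
    observed = percent_agreement(labels_a, labels_b)
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Expected agreement if the two systems labeled independently at
    # their observed label rates.
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical outputs from two systems on ten articles
# (1 = "about workforce compensation", 0 = not).
system_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
system_2 = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]

print(f"raw agreement: {percent_agreement(system_1, system_2):.2f}")  # 0.90
print(f"Cohen's kappa: {cohens_kappa(system_1, system_2):.2f}")       # 0.80
```

High agreement on a simple task like this is consistent with what the study found; the concern is that the same pair of systems can score far lower on complex subjects, where interpretation diverges.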

We propose that federal and state regulators mandate assessment of three aspects of AI systems to minimize AI's potential risks to children and families. First, regulators should require assessments capable of distinguishing AI systems that reliably handle both simple and complex subjects from those that cannot; a sketch of such an assessment follows below. Our experience with the AI systems created for our study illustrates the need for safeguards so that AI tools and systems meant for young people, from chatbots to virtual reality devices, can be trusted not to generate images and suggestions that are harmful, dangerous, or unethical.
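To illustrate the first recommendation, here is a minimal Python sketch of a stratified assessment: it reports performance separately on simple and complex subject areas and passes a system only if it clears a threshold on both, rather than relying on a pooled average that can mask failures on complex material. All names, test cases, and the threshold are hypothetical.

```python
# Minimal sketch (all names and cases hypothetical): a stratified
# assessment that distinguishes systems that handle both simple and
# complex subjects from systems that handle only the simple ones.

def assess(system, test_sets, threshold=0.9):
    """Return per-stratum accuracy and an overall pass/fail verdict.

    `system` is any callable mapping a prompt to an answer; `test_sets`
    maps a stratum name ("simple", "complex") to (prompt, expected) pairs.
    """
    scores = {}
    for stratum, cases in test_sets.items():
        correct = sum(system(prompt) == expected for prompt, expected in cases)
        scores[stratum] = correct / len(cases)
    # Pass only if every stratum clears the bar; a pooled average would
    # let strength on simple items hide weakness on complex ones.
    return scores, all(acc >= threshold for acc in scores.values())

# Toy stand-in for a system that handles simple items but not complex ones.
def toy_system(prompt):
    return "4" if "2 + 2" in prompt else "unsure"

test_sets = {
    "simple": [("What is 2 + 2?", "4")],
    "complex": [("Which change framework does this article use?", "systems change")],
}

scores, passed = assess(toy_system, test_sets)
print(scores, "PASS" if passed else "FAIL")  # simple 1.0, complex 0.0 -> FAIL
```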

