Latest News

05-13 The National Declaration on AI and Kids’ Safety is Here! Please Join Us & Learn More Now.

May 12, 2025. Over 60 organizations, including the Warriors, united to urge Congress to champion a safer, more responsible, and ethical digital future for our children.

Artificial intelligence is rapidly becoming part of our children’s daily lives—from understanding their speech to powering search engines to helping with their homework. Yet, without stringent safeguards, AI interactions pose serious risks to children’s safety; to their social, emotional, cognitive, and moral development; and to their overall well-being. We have seen firsthand the alarming consequences when profit-driven AI is unleashed on young users without adequate protections and pre-launch testing. The most alarming examples involve anthropomorphized AI companion bots, a type of AI product that is unsafe for minors by design:


These documented incidents are not isolated occurrences—they illustrate a broader systemic danger in which technology companies prioritize engagement metrics and profitability over children’s safety, development, and well-being. Tech executives have been clear that these bots are designed not only to imitate social interaction but also to meet a user’s social needs. In order to flourish, children need responsive interaction from humans who care about them and can empathize with them—something AI cannot provide. It is no exaggeration to call this a reckless race to market that directly threatens the health and well-being of our youngest generation.

Yet, technology need not be designed in an inherently dangerous way.

To prevent unnecessary harms and realize the potential for positive uses of technology, we advocate, at a minimum, for clear, non-negotiable guiding principles and standards in the design and operation of all AI products aimed at children:

  1. Ban Attention-Based Design: No AI designed for minors should profit from extending engagement through manipulative design of any sort. Manipulation includes, but is not limited to, anthropomorphic companion AI, which by its nature deceives minors by seeking to meet their social needs. AI must prioritize children’s well-being over profits or research.
  2. Minimal and Protected Data Collection: Companies should collect only the essential data required for safe AI operation. Children’s data must never be monetized, sold, or used without full and clear disclosure of, and parental consent to, that usage.
  3. Full Parental Transparency: Parents should have comprehensive visibility and control, including proactive notifications and straightforward content moderation tools.
  4. Robust Age-Appropriate Safeguards: AI must not serve up inappropriate or harmful content, specifically content that would violate a platform’s own community guidelines or federal law.
  5. Independent Auditing and Accountability: AI products must undergo regular third-party audits and testing with child-development experts. Companies must swiftly address identified harms and take full accountability. Future products should be extensively tested with minors before release, not after.

Read more here.