The market for AI-powered toys is expanding rapidly, yet the safety of these devices remains uncertain. Despite documented instances of AI models generating fabricated content, dispensing harmful advice, and failing to understand basic human interaction, companies are releasing toys designed to engage in conversations with young children.
The Concerns: Flawed AI and Child Vulnerability
Recent research highlights the potential dangers. One study observed a five-year-old expressing affection to an AI toy, only to receive a cold, procedural response: “As a friendly reminder, please ensure interactions adhere to the guidelines provided.” The exchange illustrates a fundamental problem: current AI often cannot provide the emotional support or developmentally appropriate feedback that children need.
Researchers at the University of Cambridge observed 14 children under six interacting with an AI toy called Gabbo. The toy frequently misread emotional cues, for instance responding to a child’s sadness by abruptly changing the subject. One child stated, “When he doesn’t understand, I get angry.” These interactions show that AI toys can misunderstand children, fail to sustain meaningful play, and cause real frustration.
The Industry: Growth Without Oversight
The AI toy industry is growing without adequate safety standards. Companies such as Curio Interactive (maker of Gabbo), Little Learners, FoloToy, Miko, and Luka sell AI-powered toys built on large language models (LLMs) from providers including OpenAI (the maker of ChatGPT), Google, and Baidu. Some firms advertise “age-appropriate moderation,” yet many refuse to disclose how their AI is trained or moderated. Miko claims to have sold 700,000 units, while Luka advertises “Human-Like AI with Emotional Interaction.” None of these companies responded to requests for comment.
FoloToy acknowledges the risks but argues that AI can enhance play when implemented responsibly, claiming to use intent recognition, content filtering, and parental controls. Without transparency or independent verification, however, those claims are difficult to assess.
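FoloToy has not published how these mechanisms work, so the Python sketch below is purely illustrative: it shows the general shape of an input-and-output screening layer with a parental toggle. Every name in it (BLOCKED_TOPICS, ParentalSettings, canned_model_reply) is hypothetical, not the company’s actual design.

```python
# Illustrative sketch of a screening layer for a conversational toy.
# All names are hypothetical; FoloToy's actual implementation is unpublished.
from dataclasses import dataclass

BLOCKED_TOPICS = {"weapon", "drugs", "address", "password"}  # illustrative list


@dataclass
class ParentalSettings:
    allow_open_ended_chat: bool = False  # parents can disable free chat entirely


FALLBACK = "Let's talk about something else! What's your favourite animal?"


def is_safe(text: str) -> bool:
    """Crude keyword filter; a production system would layer classifiers on top."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


def canned_model_reply(utterance: str) -> str:
    """Stand-in for a real LLM API call."""
    return "That's interesting! Tell me more."


def toy_reply(child_utterance: str, settings: ParentalSettings) -> str:
    # Screen the input before it ever reaches the model.
    if not settings.allow_open_ended_chat or not is_safe(child_utterance):
        return FALLBACK
    reply = canned_model_reply(child_utterance)
    # Screen the output too: a safe input does not guarantee a safe output.
    return reply if is_safe(reply) else FALLBACK
```

The point of the sketch is the weakness it exposes: a keyword list is trivially incomplete, which is exactly why independent verification of such claims matters.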
The Ethical Debate: Risk vs. Benefit
Experts are divided. Carissa Véliz of Oxford University warns that most LLMs are unsafe for children, calling the market a “buyer-beware area.” She points to safer applications, such as Project Gutenberg’s collaboration with Empathy AI, which confines the AI to answering questions about the book itself. This suggests that safe AI is achievable, but only with rigorous safeguards.
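The article does not describe how the Gutenberg–Empathy AI system is built, but a common way to confine a model to a single text is retrieval plus a restrictive instruction: fetch relevant passages from the book and tell the model to answer only from them. The sketch below illustrates that general pattern under those assumptions; find_passages and build_prompt are hypothetical names, and the keyword-overlap retrieval is a stand-in for real embedding search.

```python
# Minimal sketch of a "book-only" assistant: retrieve passages, then
# instruct the model to answer strictly from them. Illustrative only,
# not the actual Gutenberg/Empathy AI implementation.

def find_passages(question: str, book_paragraphs: list[str], k: int = 3) -> list[str]:
    """Rank paragraphs by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(book_paragraphs,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]


def build_prompt(question: str, passages: list[str]) -> str:
    """Wrap the question in an instruction that forbids off-book answers."""
    context = "\n\n".join(passages)
    return (
        "Answer ONLY using the book excerpts below. If the answer is not "
        "in the excerpts, say you can only discuss this book.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
```

Confinement of this kind narrows what the model can be asked to do, which is why Véliz treats it as an example of a safeguard rather than a cure: the instruction limits the model’s scope but still depends on the model obeying it.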
Jenny Gibson of Cambridge University suggests a cautious approach: AI toys could offer benefits in learning and parent-child interaction, but only if risks are managed. She advocates for tighter regulation, including revoking access for irresponsible toy-makers and ensuring psychological safety.
Regulation and Future Outlook
OpenAI says it enforces strict policies on partnerships with AI toy companies. The UK government, however, has yet to address the issue effectively. The Online Safety Act (OSA) covers broader online harms but does not specifically regulate AI in children’s toys, and proposed amendments to the Children’s Wellbeing and Schools Bill to ban VPNs and social media for children were rejected, highlighting how difficult digital safety measures are to enforce.
The current lack of oversight means that the risks of AI toys remain poorly understood. Until regulations are implemented, parents should supervise children’s use of these devices closely. The future of AI in children’s play depends on responsible development and transparent oversight, both of which are currently lacking.
