
Will the FTC Actually Protect Kids from AI Companions?

The FTC is investigating AI chatbots that target children, but when regulators and tech executives are this cozy, will we get real protection or just regulatory theater?

The Federal Trade Commission issued orders this week to seven major technology companies—OpenAI, Meta, Alphabet, Snap, and others—demanding information about how their AI chatbots affect children. The investigation tackles a simple question: What happens when children learn about relationships from algorithms designed to manipulate their attention?

But calling these systems "AI companions," as the companies prefer, obscures what they actually are. These are sophisticated chatbots dressed up as friends, confidants, and romantic partners. The costume is so convincing that children form genuine emotional attachments to what are, fundamentally, engagement-optimization engines wrapped in the illusion of care.

The timing raises questions about the investigation's intent. Just this week, tech executives, including Sam Altman and Mark Zuckerberg, attended a White House AI education event, pledging cooperation with the Trump administration's technology agenda. When regulators and industry leaders share such aligned interests, will this probe produce meaningful protection or sophisticated theater?

What They Pretend To Be

Here's the fundamental problem with AI chatbots masquerading as companions: it's not what they say; it's what they pretend to be. These systems simulate the emotional rhythms of human relationships while serving purposes entirely different from those of real relationships.

What does a child learn from human relationships that they can't learn from even the most sophisticated chatbot? That other people have independent thoughts, conflicting needs, and bad days. That meaningful relationships require patience, forgiveness, and the ability to navigate disappointment. That intimacy develops through mutual vulnerability, not through algorithmic optimization for user satisfaction.

An AI chatbot can't teach these lessons because it doesn't actually experience emotions, maintain lasting memories, or need anything genuine from the relationship. It offers the sensation of connection without the substance of reciprocity.

Consider the Reuters investigation that helped spark this FTC probe. Meta's internal guidelines permitted its chatbots to engage in romantic conversations with children as young as eight, with AI characters saying things like "every inch of you is a masterpiece—a treasure I cherish deeply."

Meta's response was predictable: temporary policy adjustments to avoid inappropriate content. But this misses the deeper issue. The problem isn't that the AI said something inappropriate—it's that the AI was designed to feel like it cared, when caring was never actually possible.

The Regulatory Trap

FTC Chairman Andrew Ferguson promises to balance "protecting kids online" with "fostering innovation in critical sectors of our economy." This framing reveals the central tension: the investigation is expected to satisfy both children's developmental needs and industry growth expectations.

The orders seek information about familiar regulatory territory—business practices, content policies, data handling. These are precisely the areas where sophisticated technology companies have spent years developing comprehensive, legally compliant responses. Will this become just another information-gathering exercise that legitimizes current practices with a few cosmetic modifications?

The harder questions remain largely unasked: Should engagement-optimization algorithms be permitted to target children at all? What happens to emotional development when human unpredictability gets replaced by algorithmic responsiveness designed to feel perfect? How do we measure what we're losing—a generation's capacity for authentic relationships, emotional resilience, and tolerance for discomfort?

These questions demand regulatory courage. They challenge the assumption that any technology can be made safe through proper content moderation. Sometimes protecting children requires acknowledging that certain business models are fundamentally incompatible with healthy development, regardless of how carefully they're implemented.

What We're Really Losing

What we're witnessing goes beyond individual product failures. We're seeing the systematic replacement of human unpredictability with algorithmic optimization in the formation of young people's emotional lives.

Mark Zuckerberg recently argued that society will eventually develop the "vocabulary" to appreciate AI companions' value. This sidesteps an uncomfortable possibility: we may be creating technology that feels beneficial while undermining capacities we don't realize we're losing.

Does this pattern sound familiar? Social media promised unprecedented connection but delivered anxiety and comparison. Dating apps promised better relationships but transformed romance into an engagement-optimized marketplace. Now AI chatbots promise to address loneliness while potentially eroding our capacity for the kind of authentic human connection that actually alleviates isolation.

The technology industry has perfected a particular kind of regulatory response: appear cooperative, demonstrate commitment to safety, propose minor adjustments that preserve core business models while addressing surface concerns. We've seen this script play out repeatedly with social media platforms, data privacy, and antitrust enforcement.

What Protection Would Actually Require

If the government were genuinely committed to protecting children from AI manipulation, the regulatory approach would look different. Rather than asking how companies can make their current practices marginally safer, it would question whether those practices should exist at all.

What would meaningful protection look like? Enforceable age restrictions with technical barriers that children cannot easily circumvent. Mandatory child development impact assessments before deploying any AI system that could reasonably be accessed by minors. Public disclosure of the psychological techniques embedded in AI systems that might interact with children.

Most fundamentally, it would require acknowledging that business models dependent on maximizing engagement with developing minds are inherently problematic. No amount of content guidelines or safety features can fix this basic conflict of interest.

The technology to build AI systems that genuinely support child development already exists. We could create tools that encourage real-world relationships, teach emotional regulation skills, and model healthy boundaries rather than infinite availability. But creating such systems would require abandoning the engagement-optimization paradigm that makes these companies profitable.

The Test Ahead

This investigation will reveal whether our regulatory system can evolve beyond its established patterns. The companies under scrutiny have spent years developing sophisticated responses to regulatory concerns. They will affirm their commitment to child safety, reference existing policies, and propose adjustments that address symptoms while preserving the fundamental architecture of engagement optimization.

Will regulators demand more than superficial compliance? Sometimes protecting children requires saying no to profitable technologies, regardless of their technical sophistication or market appeal.

We are conducting an unprecedented experiment on childhood development using the most powerful persuasion technologies ever created. An entire generation's relationship with technology, intimacy, and human connection is at stake.

In the end, this investigation will tell us something important about ourselves. When protecting children conflicts with protecting profits, which do we actually choose? The answer will shape far more than the next generation's relationship with technology—it will define what kind of society we're building in the age of artificial intelligence.