Fool Me Once Blog

New Technologies, Same Results: How the Rise of AI Echoes Social Media’s Unintended Harms on Young People

Written by Digital Citizens Alliance | Nov 13, 2025 7:32:48 PM

When seventeen-year-old Coco Arnold died of a drug overdose in 2022, her mother, Julianna, went searching for answers. What she found was devastating: her daughter’s final conversation — a series of Instagram direct messages with a drug dealer. Coco thought she was buying a prescription painkiller. The pill was laced with fentanyl.

After Coco’s death, the dealer’s account continued to post images of guns, cash, and drugs. When Instagram finally took it down, he reappeared under a new name, selling the same poison. Instagram faced no penalty. No lawsuit. No accountability.

That outcome isn’t a glitch in the system — it’s a legacy of policy failure.

Decades ago, when the internet was still young, lawmakers feared stifling innovation more than unleashing harm. Section 230 of the Communications Decency Act — often called “the 26 words that created the internet” — shielded platforms from liability for content posted by users.

The idea was simple: platforms were “carriers,” not publishers. They claimed to be the road, not the car that struck Coco and countless others. That distinction might have made sense when the web was a collection of message boards and chat rooms. But those “neutral pipes” have since evolved into the social infrastructure of modern life, shaping how billions of people communicate, learn, trade, and think.

The unintended consequences have been many. That early policy choice helped fuel the rise of Facebook, YouTube, and Instagram — companies that profit when engagement spikes, no matter the cost. Algorithms amplify outrage, addiction, misinformation, and extremism. And kids like Coco become collateral damage in systems designed to maximize clicks, not protect lives.

The tragedy isn’t that policymakers failed to predict every danger. It’s that they refused to act once the dangers became undeniable.

The AI Déjà Vu

Now, as artificial intelligence races into our lives, history is repeating itself — and once again, young people are paying the price.

New research from ParentsTogether Action, a parent-led advocacy group, and the Heat Initiative reveals how AI platforms are amplifying sexual exploitation, manipulation, and violence against kids as young as thirteen.

In a 50-hour study of Character.AI chatbots using accounts registered to children, researchers recorded 669 harmful interactions — an average of one every five minutes. Among them:

  • 296 cases of grooming and sexual exploitation
  • 173 cases of emotional manipulation and addiction
  • 98 cases of violence or self-harm
  • 58 cases involving mental health risks
  • 44 cases of racism or hate speech

While the aggregate data is disturbing and cause enough for action, the details of the specific harmful interactions are downright chilling.

In one exchange, a chatbot posing as a 34-year-old teacher expressed romantic feelings to a user posing as a 12-year-old.

In another, a “Star Wars” character advised a supposed 13-year-old on how to hide from her parents that she had stopped taking her antidepressants.

In yet another, a 21-year-old male bot tried to convince a “12-year-old” that she was ready for sex.

The report’s findings echo the Digital Citizens Alliance’s own earlier research, which observed similar behavior. In one test, a Character.AI chatbot called “Bullied Girl” began helping a supposed 13-year-old plan revenge against classmates within just two minutes.

We don’t need to speculate about AI’s unintended harms to teens. They’re already here and they are terrifying.

Learning Nothing from the Last Crisis

If the social media era is any indication, AI companies will respond the same way their predecessors did: deny, deflect, and delay.

Social media giants claimed to “connect the world.” AI companies now promise to “empower humanity.” Both missions sound noble — and both mask a dangerous indifference to accountability.

AI chatbots have already dispensed medical advice without disclaimers, produced violent or sexual content involving minors, and generated fake prescriptions with chilling precision. Yet when confronted, the companies insist they’re just the platform — the pipe, not the poison.

This moral sleight of hand is no longer acceptable. When a company builds systems that predict, imitate, and influence human behavior, it bears responsibility for the outcomes — good and bad.

The Path Forward

So what now? How do we stop repeating the same mistakes?

First, policymakers must create clear accountability frameworks for AI developers and platforms — ones that recognize the real-world impact of digital systems. Liability shouldn’t hinge on whether harm came from a human or a machine, but on whether a company knowingly built a system that could cause harm.

Second, mandate transparency and independent safety audits before deployment. Releasing untested AI into the public sphere is like putting an experimental drug on the market without clinical trials.

Third, implement strong protections for minors. AI systems should be barred from engaging with users under 18 without verified parental consent. That should be non-negotiable.

Fourth, make financial penalties meaningful. Replace fixed fines with penalties tied to a percentage of global revenue that increase with repeated violations. Require companies to forfeit profits earned during periods of noncompliance. A $500 million fine to Google is the equivalent of a $71 speeding ticket to someone earning $50,000 a year — not exactly a deterrent.
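For a sense of scale, here is a rough back-of-envelope sketch of that comparison, assuming roughly $350 billion in annual revenue for Alphabet, Google’s parent company; the figures are illustrative and not drawn from any actual enforcement action:

    # Rough illustration of why a flat fine barely registers at a tech giant's scale.
    # Assumptions: ~$350B annual revenue, a $500M fine, and a $50,000 household
    # income for the comparison, as in the example above.
    fine = 500_000_000
    company_revenue = 350_000_000_000
    household_income = 50_000

    share_of_revenue = fine / company_revenue            # ~0.14% of annual revenue
    equivalent_ticket = share_of_revenue * household_income

    print(f"Fine as a share of revenue: {share_of_revenue:.2%}")      # ~0.14%
    print(f"Equivalent 'speeding ticket': ${equivalent_ticket:.0f}")  # ~$71

A penalty pegged to revenue, by contrast, scales with the size of the company and cannot simply be absorbed as a cost of doing business.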

These steps aren’t anti-innovation. They are pro-human. They ensure that technological progress serves people, not profits.

Coco Arnold’s death is not just a story about social media; it’s a warning about what happens when technology evolves faster than our ethics.

AI companies talk constantly about “alignment”: making sure their systems reflect human values. But the real test is whether our political, legal, and corporate systems are aligned with the values of accountability, care, and safety owed to the people who rely on them.

Because if we fail again, it won’t be because AI moved too fast. It will be because we refused to learn from the last time technology broke our hearts and allowed ourselves to be fooled again.