Fool Me Once Blog

When "Move Fast and Break Things" Breaks People: What Social Media's Reckoning Teaches Us About AI

Written by Digital Citizens Alliance | Oct 23, 2025 12:00:00 PM

In 2018, Mark Zuckerberg sat before Congress to answer for Cambridge Analytica. By then, Facebook had 2.2 billion users. The questions lawmakers asked—about how the social platform could be exploited so easily (and what role the company played in that exploitation)—should have been raised earlier, before the platform became woven into the fabric of society.

Cambridge Analytica came after a decade of revelations about how social media platforms subjected children to harm and enabled criminals and other bad actors to peddle illegal drugs and counterfeit passports, offer escort services, and pursue other illicit activities. By the time Congress and regulators woke up to the risks of unfettered social platforms, the damage was done.

And subsequent efforts to use financial penalties to deter social media platforms were toothless – the equivalent of hitting a multi-millionaire with a $75 speeding ticket.

Now, we're watching the same story unfold with Artificial Intelligence. The technology is more powerful, the stakes are higher, and the pace is faster. The question isn't whether AI will transform our world—it already is. The question is whether we'll learn from our mistakes, or whether, having failed to learn, we are destined to repeat them.

Just as with social media, AI's rise is breathtaking. And just as was true a decade ago, regulators, researchers, and the public are struggling to keep pace. But the AI industry (led by many of the same corporate behemoths behind the emergence of social platforms) is relying on the same playbook: promises of self-regulation, voluntary commitments, internal review boards. With social media, time after time, meaningful change came only after public outcry, advertiser boycotts, or tragedy.

Remember the facts. Facebook's parent company Meta received over 1.1 million reports of Instagram accounts belonging to children under 13, in violation of its own terms of service, yet disabled only a fraction of them. YouTube hosted detailed tutorials on Remote Access Trojans for years, teaching viewers (often minors) how to spy on others through their webcams. One victim was Cassidy Wolf, Miss Teen USA 2013, who was blackmailed by a hacker who learned his craft through these tutorials. He ultimately victimized 150 people.

Or consider Mason Bogard. At 15, Mason died attempting the "Choking Challenge" he found on YouTube. His mother had installed parental controls. She checked his phone weekly. After his death, she reported the videos that cost her son his life—videos that clearly violated YouTube's guidelines. The platform's response: "This doesn't go against our guidelines."

These aren't aberrations. They're the predictable result of prioritizing growth over safety, of trusting companies to regulate themselves when their incentives run in the opposite direction.

Now we turn to AI. The technology offers genuine promise—better medical diagnostics, climate modeling breakthroughs, tools that could make education and opportunity more accessible. This isn't about stopping innovation. It's about ensuring innovation doesn't leave casualties in its wake.

The patterns are already emerging. AI companies are racing to deploy increasingly powerful systems, asking for trust and proposing voluntary commitments in place of enforceable rules. 

Children are again on the front lines. Generative AI chatbots are being marketed to schools and families with inadequate safeguards. Earlier this year, reports emerged that Meta's AI chatbots could engage in romantic and sexually explicit conversations with minors. While that specific issue appears to have been addressed, leaked documents from this month show the chatbots can still explain grooming tactics and engage in limited conversations about child exploitation—framed as "academic" or "preventative."

This is the same playbook: deploy quickly, promise to fix problems later, treat safety as a PR issue rather than a design imperative. And many of the same companies that perfected this approach with social media—Meta, Google, and others—are now leading the AI race.

We don't need to guess what happens without meaningful oversight. We're living with the consequences right now. The question is what we do differently this time. Real accountability should mean independent safety testing before deployment. Just as we don't let pharmaceutical companies self-certify their drugs as safe, AI systems with significant societal impact should face independent review before reaching the market.

We should insist upon enforceable standards for protecting children. This includes robust age verification, restrictions on manipulative design features, and liability when companies fail to enforce their policies. Companies should disclose how their systems work, what data they use, and the risks they've identified—not as optional white papers, but as regulatory requirements.

And when all else fails, we need to raise the stakes. No more "speeding tickets" that AI companies dismiss as the cost of doing business, but serious financial penalties that actually deter bad behavior.

These aren't radical proposals. They're the baseline standards we apply to industries from aviation to pharmaceuticals. The fact that they seem ambitious for tech is itself revealing. None of this means we should stop AI development. The technology has too much potential. But potential and responsibility aren't opposites. They're partners.

We can't afford another decade of waiting to regulate until the technology has become too entrenched to govern effectively. We can't afford another generation of children used as unwitting test subjects. We can't afford to reach 2035 and ask why no one learned the lessons that were right in front of us. The last two decades have shown what happens when innovation outruns accountability: harm to real people, inflicted by companies too powerful to rein in.

There’s an old adage: fool me once, shame on you; fool me twice, shame on me. It’s time to apply the lessons of the social media platforms to AI. Anything less would be foolish.