Fool Me Once Blog

As Tech Giants Weigh AI Decisions That Could Harm Users, Billions of Dollars in Fines Is Nothing More Than 'The Cost of Doing Business'

Written by Digital Citizens Alliance | Oct 31, 2025 3:11:39 PM

If you want to understand how tech companies are weighing "profits now vs. future trouble" decisions when it comes to artificial intelligence (AI), consider this: Google forfeited half a billion dollars to settle federal charges for knowingly accepting advertising from Canadian online pharmacies illegally selling controlled substances and other drugs to U.S. customers. The story of how Google advised an online drug dealer on how to skirt laws is a remarkable whodunit worth reading.

A $500 million fine for endangering consumers would feel like a five-alarm fire for most companies. That financial hit could sink some corporations, and even if it didn't, the reputational damage alone could take years to repair. But for the last decade-plus, for tech companies such as Google and Facebook, large fines have been just another day at the office.

Why? Because billions of dollars in fines are a drop in the bucket to them. Did Google's half-billion-dollar hit send shock waves through the industry or send the company's stock plummeting? Not even a little bit. The news didn't dent its stock price, and Google plowed forward to become the $350-billion-a-year juggernaut it is today.

Years later, Facebook faced a reckoning for allowing the data of users and their Facebook friends (87 million people in total) to be harvested and sold to Cambridge Analytica. The Federal Trade Commission in 2019 fined the company $5 billion, which sent its stock... upward. That's right: investors' reaction to the record fine raised the company's market value by $10 billion.

All told, Google and Facebook have faced $25 billion in fines over the last decade for exploiting their size to harm competitors and their own users. But when you make a collective $164 billion a year in profits (as Google's and Facebook's parent companies do), a few billion dollars a year is not a serious deterrent.

Users, especially young ones, have paid the real price. Former Miss Teen USA Cassidy Wolf fended off a sextortion attempt by an attacker who victimized her and dozens of others. Gavin Guffey was 17 when he took his own life after scammers targeted him in an Instagram-initiated sextortion scheme. Brandon Guffey, Gavin's father and a South Carolina State Representative, said, "These companies are making this much money; they need no protection. And I'm a little bit more libertarian minded; I'm the guy who normally wants to rip out regulations. I literally want to stop these algorithms."

Remember, to Facebook these aren't unforeseen events. In a 2016 memo, company executive Andrew Bosworth said, "So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still, we connect people."

Which brings us to the future of AI. In the race to lead on AI, it's hard to imagine these companies approaching it any differently than they approached the race to become dominant social media platforms. Train models on user data without consent, sell behavioral insights to third parties, or harvest conversations to build detailed user profiles for monetization? All doable. And just as with social media platforms, it's not hard to envision AI assistants purposefully designed to be artificially engaging or emotionally compelling to maximize usage.

If you have any doubt, consider what we're already seeing with kids using AI. Families are coming forward, telling their stories in Congressional testimony or court cases about children who died by suicide after interactions with AI. In one instance, ChatGPT allegedly mentioned suicide 1,275 times to 16-year-old Adam Raine before he took his life. His parents say that ChatGPT even offered to draft a suicide note for him.

Even as the consequences mount, expect AI companies to run the same "move fast" playbook they used for social media: ship quickly, promise fixes later, treat safety like a PR problem. Then pay billions in fines down the road if it protects tens of billions in revenue now.

Policymakers have a choice. They can ignore the lessons of the social platform era, grant AI companies wide latitude through self-regulation, and later impose ineffectual fines. Or they can learn from the lessons of the last two decades and approach the AI era differently.

Responsible voices are speaking out. "We can, and must, do better. It begins with accountability and transparency," said Carmen Marsh, founder of the Global Council for Responsible AI. "A global framework that is cross-sector and cross-industry must be embedded not only in the development but also in the implementation and use of AI systems. Such a framework integrates practical safeguards that preserve the freedom to innovate while ensuring that technology serves humanity, enhancing our digital world rather than destabilizing it with job loss, failed deployments, and escalating risks. These measures are not barriers to innovation; they are the foundations of trustworthy innovation." 

There are specific steps that policymakers can take:

  • Make financial penalties an actual deterrent. Replace fixed penalties with fines calculated as a percentage of global annual revenue that escalate with repeated violations. As part of that, requiring companies to forfeit all profits generated during the period in which they knowingly violated the rules would be a real deterrent. Consider this: a $500 million penalty for Google is equivalent to a $71 speeding ticket for a person making $50,000 a year (the short sketch after this list walks through the math).
  • Require safety before deployment. AI systems should be required to pass independent safety evaluations before public release. These tests should include risk assessments for foreseeable harms, especially to vulnerable populations. 
  • Protect minors. AI systems should be prohibited from engaging with users under 18 without parental consent. This seems so basic that it doesn't require explanation.
  • Build regulatory technical expertise. One of the biggest weaknesses of the social media era was the lack of technical knowledge among lawmakers and regulators. Government must prioritize the recruitment of engineers, ethicists, and technologists who understand these systems, not just lawyers and policy generalists who can't keep up.
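
To make the proportionality argument concrete, here is a minimal back-of-the-envelope sketch in Python. The inputs are the figures cited in this post (a $500 million fine, roughly $350 billion in annual revenue, a $50,000 personal income); the 4% rate is borrowed from GDPR's ceiling purely for illustration, and the escalation formula and function names are hypothetical, not drawn from any proposed statute.

    # Illustrative arithmetic only; figures are the ones cited in this post.
    def equivalent_personal_fine(fine, company_revenue, personal_income):
        """Scale a corporate fine down to a personal income at the same share of revenue."""
        return fine / company_revenue * personal_income

    def revenue_based_fine(company_revenue, base_rate, prior_violations):
        """Hypothetical escalating penalty: a percentage of global revenue that grows with repeat offenses."""
        return company_revenue * base_rate * (1 + prior_violations)

    fine, revenue, income = 500e6, 350e9, 50_000
    print(round(equivalent_personal_fine(fine, revenue, income), 2))  # ~71.43 -- the "$71 speeding ticket"
    print(revenue_based_fine(revenue, 0.04, 0) / 1e9)                 # 14.0 -- first offense, in billions of dollars
    print(revenue_based_fine(revenue, 0.04, 1) / 1e9)                 # 28.0 -- a repeat offense doubles it

Under these illustrative numbers, a single repeat violation would cost more than the entire $25 billion in fines the two companies have absorbed over the past decade.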

The Internet was not built for safety. Shame on tech companies for not making that a priority once they realized the harms. We can build AI differently, but only if we act now with rules that drive good behavior, remedies that change direction, and enforcement that truly protects children and consumers. If we don't, shame on us.