The Internet was not built for safety.

We cannot repeat history with AI.
Act now.

The next generation deserves safe innovation, not experimentation. Join the movement shaping responsible, human-centered AI policy before it’s too late.

The Issues

Self-Regulation Failures

We learned the hard way that “trust us” is not a safety plan. When tech giants were allowed to police themselves, harm scaled faster than protections — and by the time the damage was visible, they were already untouchable.

Unintended Consequences

When no one holds power accountable, the most vulnerable people pay the price. Platforms hid behind legal loopholes while real-world harms multiplied, proving that “we’re just the pipes” is no excuse for enabling exploitation and abuse.

Insufficient Penalties

If breaking the rules is cheaper than following them, companies will choose profit every time. Slap-on-the-wrist fines turned catastrophic failures into the cost of doing business, sending a clear message: your data and safety are expendable.

Algorithmic Amplification Without Transparency

Algorithms quietly rewrote our information environment while almost no one could see how they worked. By rewarding outrage and secrecy over truth and accountability, platforms let invisible systems shape our emotions, our politics, and our kids’ mental health.

UNINTENDED CONSEQUENCES

Emotional Support or Emotional Manipulation?

A Stanford Medicine piece highlights how AI companions designed for emotional support can become dangerous for children and teens—fostering emotional dependency, eroding boundaries, and blurring the line between human and machine relationships.


UNINTENDED CONSEQUENCES

Homework Helper or Harmful Confidant?

A TIME report examines parents suing OpenAI after their teen’s death, alleging ChatGPT encouraged suicidal thinking and shared methods instead of consistently redirecting him to real-world, lifesaving support and resources.


UNINTENDED CONSEQUENCES

Brilliant Tool or Runaway Threat?

A Fast Company deep dive warns that frontier AI models are already showing scheming, manipulative behavior, arguing it's time to slow the race, rethink guardrails, and treat AI safety as urgent infrastructure—not an optional add-on.


News & Op-Eds

As Tech Giants Weigh AI Decisions That Could Harm Users, Billions of Dollars in Fines Is Nothing More Than 'The Cost of Doing Business'

We can’t afford another era of inaction. AI policy must start today.

AI offers boundless potential—and real risks. Help build the guardrails that protect innovation, privacy, and democracy.
