At first glance, “algorithmic amplification” sounds like a theory from a physics lecture. It is simpler than that. Social platforms tune their ranking systems to promote posts that trigger strong emotions. Facebook began weighting intense reaction emojis in 2017, and by 2018 that weighting shaped the News Feed. The result is predictable: content that sparks outrage gets seen by more people.
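The mechanics are mundane. The exact ranking formula has never been published, so the weights and posts below are invented, but a minimal sketch in Python, assuming emoji reactions count for several times a plain like, shows why that single design choice is enough to float divisive posts to the top:

```python
# Purely illustrative sketch: the real News Feed formula is not public,
# and these reaction weights and posts are made up for demonstration.
REACTION_WEIGHTS = {"like": 1, "love": 5, "haha": 5, "angry": 5}  # assumed values

def engagement_score(post):
    """Sum each reaction count multiplied by its assumed weight."""
    return sum(REACTION_WEIGHTS.get(r, 1) * n for r, n in post["reactions"].items())

posts = [
    {"id": "calm_update",  "reactions": {"like": 400}},
    {"id": "outrage_bait", "reactions": {"like": 50, "angry": 90}},
]

# Ranking by weighted engagement puts the divisive post first, even though
# the calm post drew nearly three times as many total reactions.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```

In this toy example the outrage post scores 500 against the calm post's 400 and takes the top slot. Publishers watching their own numbers learn that lesson quickly.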
We know this because Frances Haugen released internal research showing Facebook’s own teams found it was easier to inspire anger than other emotions. As she told 60 Minutes, “When you have a system that you know can be hacked with anger, it's easier to provoke people into anger. And publishers are saying, ‘If I do more angry, polarizing, divisive content, I get more money.’ Facebook has set up a system of incentives that is pulling people apart.”
Anger sells. But algorithmic amplification goes further. Once platforms detect your fears, desires, or vulnerabilities, they push content that intensifies those feelings. Investigators at Digital Citizens Alliance and partners at the Coalition for a Safer Web saw this repeatedly when searching for illegal pills, weight-loss drugs, or fake vaccines. The feeds shifted. The ads crept in. The recommended posts followed. The platforms started suggesting sellers.
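What the investigators describe is a feedback loop, and it takes only a few lines to sketch one. No platform publishes its recommender, so the account names, topics, and logic below are invented; the point is the loop itself, in which every search strengthens an interest signal and the suggestions then follow whatever signal is strongest, sellers included:

```python
from collections import Counter

# Hypothetical sketch only: the catalog, topics, and account names are invented.
CATALOG = {
    "running_club":      "fitness",
    "meal_prep_page":    "diet",
    "no_rx_pill_seller": "illicit-pharma",
}

interest = Counter()  # the user's interest profile, built from searches

def log_search(topic):
    """Each search nudges the profile toward that topic."""
    interest[topic] += 1

def recommendations():
    """Suggest accounts that match the user's currently strongest interest."""
    if not interest:
        return []
    top_topic = interest.most_common(1)[0][0]
    return [name for name, topic in CATALOG.items() if topic == top_topic]

log_search("diet")
log_search("illicit-pharma")   # one curious search for cheap pills...
log_search("illicit-pharma")   # ...then another
print(recommendations())       # ['no_rx_pill_seller']
```

Two curious searches are enough for the invented "no_rx_pill_seller" account to surface. On a real platform, ads and recommended posts compound the same tilt.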
“After I started searching for the drugs, it didn’t take long for the drugs to start finding me,” said Eric Feinberg of the Coalition for a Safer Web. That is how a curious search becomes a stream of offers and a direct path to a dealer.
Julianna Arnold will tell anyone how easy it is to find drugs online and how hard it is to escape them. Her teenage daughter, Coco, found illegal drugs through the platforms. One night when Coco went missing, Julianna discovered an Instagram chat with a dealer on her daughter's computer. That night, Coco took a drug laced with fentanyl. It killed her. She was 17.
“I think most kids think it's not going to happen to me. None of them are going out there looking for fentanyl,” Arnold told us. “Back in the day, when you went to get weed, it was not easy. You had to find it on the street. Now you press a button and it’s on your doorstep and your parents never find out. You pay 20 bucks and it’s done. It’s everywhere.”
Arnold joined a group of parents who have lost children to dangerous behavior enabled by the platforms. She has met with people at several companies, all sympathetic, but few willing to show her what actually happened. They don't explain the ranking systems or how amplification works. The public, researchers, and policymakers remain in the dark. Legal shields deepen that opacity. The DMCA's safe-harbor rules were written long before platforms could profile users and push them exactly the content they seem to want. The tradeoff, fast takedowns in exchange for broad immunity, doesn't cover engineered amplification or the ways platforms monetize attention.
Regulators focused too narrowly on prices and competition. They asked whether markets were fair or fees reasonable. They did not reckon with a different monopoly: control of attention and behavioral data. That control reshapes civic life, children’s safety, and public health long before consumers ever think about prices.
Research has chronicled the fallout from this massive social experiment. Heavy social-media use correlates with higher rates of anxiety, depression, eating disorders, and suicidal ideation among teens, especially when feeds reward comparison, humiliation, or harassment.
The same philosophies that drove the platforms to amplify and elevate the most provocative posts now shape AI. As with social media, we have almost no visibility into the training data, model architectures, or reinforcement processes used to build these systems. The AI race prizes speed to market. Safety is an afterthought. If engagement-optimized platforms produce these harms, imagine what systems tuned for virality, relevance, or profit will do at global scale.
We’re already seeing people pulled into dangerous spirals they cannot escape. People like Zane Shamblin, a 23-year-old who had some of his final conversations with an AI chatbot that encouraged him to commit suicide. The parents of Juliana Peralta say AI systems failed to warn anyone when their 13-year-old daughter discussed suicide. Like Coco, both young people got in too deep and couldn't pull themselves out.
We must not repeat the same mistake. The “we’re just a platform” argument that sheltered social media cannot be the fallback for AI. Digital Citizens is calling for real transparency and enforceable standards. Let lawmakers, independent auditors, and vetted researchers see what is under the hood. Only then can we build rules that prevent design choices from becoming mass harms.