London, UK – In 2022, the world of Cai, a 16-year-old boy, looked much like that of millions of other young boys across the globe. In the glow of his phone screen he found company, curiosity and, he thought, a bit of joy. A video of a cute puppy played before his eyes, but the tenderness was soon drowned out by something darker, colder and full of harm. Without warning the algorithm turned, and his screen became a battlefield: car crashes, street fights, and a voice full of disdain for women, coming from a figure whose face he barely knew.
“Why me?” he whispered to no one but the night.
Unseen Forces at Play: The Algorithm’s Power to Shape Minds
For many boys like Cai, these experiences are not accidents. The algorithms that drive platforms such as TikTok and Instagram have a language of their own, one that often speaks louder than the voices of their users. Behind the innocent façade of funny dances and friendly faces lurks an intelligence so powerful that it listens not to words but to time: the seconds you linger on a video, the quiet moments when your gaze catches something unexpected. And for boys, especially those at the fragile age where questions about the world and the self are so delicately intertwined, these algorithms can pull them into a whirlpool of violent and misogynistic content.
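What does "listening to time" look like in practice? Here is a minimal sketch, in Python, of the kind of implicit signal such systems are widely reported to use. Nothing here is TikTok's or Instagram's actual code; the function, fields and weights are invented for illustration. The point is the blind spot: a horrified pause and a delighted pause produce the same number.

```python
# A minimal, illustrative sketch of an implicit "dwell" signal.
# The field names and weights are assumptions, not any platform's real code.

def engagement_score(watch_seconds: float, video_seconds: float,
                     rewatched: bool, shared: bool) -> float:
    """Score one view purely from behaviour, never from stated preference."""
    completion = min(watch_seconds / max(video_seconds, 1e-6), 1.0)
    score = completion                 # lingering is the core signal
    if rewatched:
        score += 0.5                   # looping a clip counts heavily
    if shared:
        score += 1.0                   # sharing is the strongest vote
    return score

# A horrified pause and a delighted pause look identical to the model:
print(engagement_score(watch_seconds=28, video_seconds=30,
                       rewatched=True, shared=False))   # about 1.43
```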
Andrew Kaung knows this too well. He sat at his desk in Dublin, working for TikTok, scanning endless hours of footage recommended to teenagers, particularly boys around Cai’s age. What he saw disturbed him—videos that glorified violence, dehumanized women, and promoted a dangerous world view. What hurt more was that these weren’t anomalies. These were patterns.
“They aren’t just watching,” Andrew explained to the BBC. “They are learning.”
The Seeds of Violence: Why Boys Are Targeted More
For the young and curious mind of a teenage boy, what he sees shapes what he believes. But the algorithm doesn't care about growth; it craves engagement. It was designed to learn, trained to maximize the time spent watching, liking, and sharing. And in this pursuit, the content it shows, be it joyful or harmful, becomes fuel for a greater fire.
Andrew explained that boys, more than girls, were being shown violent and misogynistic content. “It wasn’t about who they were, but what other boys their age had been interested in,” he said. It was as if the algorithms had decided—somewhere in the quiet code that spun behind the screen—that what grabs the attention of one boy must surely catch the eye of another.
For girls, the experience was different. Makeup tutorials, pop songs, and life advice filled their feeds, free from the shadows of violence that haunted their brothers and friends.
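To make that amplification concrete, here is a hedged sketch of cohort-based recommendation, the pattern Andrew describes. The cohorts, videos and scores are invented, and real ranking systems are vastly more complex, but the logic, pushing to each viewer what similar viewers lingered on, is the same in spirit.

```python
# An illustrative sketch of cohort-based recommendation. All data invented.
from collections import defaultdict

# (cohort, video_id, engagement_score): a tiny made-up watch history
views = [
    ("boys_13_17",  "puppy_clip",      0.4),
    ("boys_13_17",  "street_fight",    1.4),  # a few boys lingered here
    ("boys_13_17",  "street_fight",    1.3),
    ("girls_13_17", "makeup_tutorial", 1.2),
]

def rank_for(cohort: str) -> list[str]:
    """Rank videos by the total engagement of the viewer's cohort."""
    totals = defaultdict(float)
    for c, video, score in views:
        if c == cohort:
            totals[video] += score
    return sorted(totals, key=totals.get, reverse=True)

# A brand-new 16-year-old boy inherits other boys' watch history:
print(rank_for("boys_13_17"))   # ['street_fight', 'puppy_clip']
```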
The Cost of Looking Away: Social Media’s Failure to Protect
But TikTok and Meta, the giants of this new digital landscape, have their defenses. They point to systems of artificial intelligence (AI) that are in place to remove harmful content, claiming that 99% of the most violent and inappropriate material is taken down before it ever reaches 10,000 views. Yet, for many like Cai, those AI safeguards felt like little more than whispers in the wind. He, along with others, was still seeing the worst.
“I tried everything,” Cai admits. He clicked the “not interested” button and flagged content that disturbed him, hoping with all his heart that the algorithms would adjust and the darkness would lift from his feed. But no matter how hard he tried, the content kept coming back, relentless.
“It stains your mind,” he says softly, “and no matter how hard you try to forget, you carry it with you.”
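Why would a “not interested” click fail so completely? One plausible mechanism, offered here purely as an assumption rather than a description of any real platform, is that a single user's explicit feedback is one small term set against the pooled watch time of everyone like him. The weights below are invented to show the imbalance.

```python
# A hedged, invented sketch of why one user's feedback can be drowned out.

def ranking_score(cohort_engagement: float,
                  my_not_interested: bool, my_reports: int) -> float:
    """One plausible (invented) blend of crowd signal and personal feedback."""
    penalty = 0.0
    if my_not_interested:
        penalty += 2.0                  # one person's click...
    penalty += 1.0 * my_reports         # ...plus a handful of reports
    return cohort_engagement - penalty  # against a crowd's pooled watch time

# Cai flags a clip and marks it "not interested"; the cohort keeps watching:
print(ranking_score(cohort_engagement=5000.0,
                    my_not_interested=True, my_reports=3))   # still 4995.0
```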
An Industry Slow to Change: Profit over Protection
Andrew Kaung saw this from the inside, both during his time at TikTok and earlier at Meta, the parent company of Instagram. Though there were systems in place, he recalls a stark truth: “The cost of making changes was too high, so nothing really changed.” Despite raising concerns, it seemed that the engine driving these companies wasn’t the safety of the users, but the engagement that brought in profit.
“It’s like asking a tiger not to hunt,” Andrew remarked. “We are asking these companies to moderate themselves, but their very survival depends on keeping you engaged—no matter the cost.”
The algorithms, fueled by a mixture of data, guesses, and reinforcement learning, don’t know the harm they cause. They know only what keeps you watching, what brings you back.
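To see how blind that learning is, consider a toy epsilon-greedy bandit, a minimal stand-in for the reinforcement learning described above, not any platform's real system. Its only reward is watch time; harm is not a variable it can even represent.

```python
# A toy epsilon-greedy bandit whose sole reward is watch time. Illustrative only.
import random

videos = ["puppy_clip", "street_fight", "misogynist_rant"]
estimated_reward = {v: 0.0 for v in videos}   # learned average watch time
plays = {v: 0 for v in videos}

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit what held attention before; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(videos)
    return max(videos, key=estimated_reward.get)

def update(video: str, watch_seconds: float) -> None:
    """Track a running average of watch time, the loop's only notion of 'good'."""
    plays[video] += 1
    estimated_reward[video] += (watch_seconds - estimated_reward[video]) / plays[video]

# Simulate a world where disturbing clips happen to hold the eye longer:
mean_watch = {"puppy_clip": 8.0, "street_fight": 12.0, "misogynist_rant": 14.0}
for _ in range(5000):
    v = choose()
    update(v, random.gauss(mean_watch[v], 2.0))

print(max(estimated_reward, key=estimated_reward.get))  # almost always the harshest clip
```

The loop never asks why a clip holds the eye; it only learns that it does, and serves more of it.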
Hope on the Horizon: A Call for Greater Accountability
For Cai, though, the solution isn’t about shutting down the screens or banning the platforms. His phone is his lifeline, his connection to friends, to music, to laughter. But he wishes that the people behind these powerful systems would listen more closely to what young people like him truly want.
“I want them to respect us,” he pleads, “to take us seriously when we say we don’t want to see these things anymore.”
Change may yet come. In the UK, the Online Safety Act is set to be enforced from 2025, requiring social media platforms to verify the ages of young users and to stop them from seeing inappropriate content. The regulator, Ofcom, will oversee compliance, ready to impose penalties on companies that continue to put young minds at risk.
But for Andrew Kaung and the millions of boys like Cai, this is a battle that should have been fought long ago. And while TikTok and Meta now boast about the measures they have in place, the damage for many has already been done.
A Future Worth Fighting For
We cannot go back and erase what Cai has seen, what his friends have been exposed to, or how their hearts and minds have been shaped by the violence and hate that slipped through the cracks. But there is a future still to be written, one where algorithms don’t dictate the shaping of young souls, where boys and girls alike are free to grow, to dream, to become more than the clicks and views they leave behind.
And perhaps, if the world listens closely enough, that future will come soon enough to save the ones still watching.