Spring of 2009, New York. At the time, the beauty standards in the city were impossible to escape. As a young female student, I felt pressure to conform: I worked out excessively and was strict with my diet. One Saturday morning, like every week, I went to the gym. But this time something felt off; I wasn't myself. Still, I pushed through, convinced that missing one session wasn't an option, until I nearly fainted. That's when my instructor stopped me and told me, plainly and firmly, that I needed to take care of myself before coming back.
Struggling with an eating disorder is hard enough in the offline world; in today's digital landscape, the pressure is even harder to escape. 70% of girls aged 12 to 19 report that social media harms their mental health, and they are exposed to an estimated 6,000 to 10,000 ads every day, many of them promoting unattainable beauty standards.
The data-driven business model of social media doesn’t just reflect these societal pressures—it amplifies them by deploying powerful predictive algorithms, trapping users in an echo chamber where harmful content becomes the norm, not the exception. Today, the EU has a strong legislative framework provided by the Digital Services Act (DSA), which sets clear rules for all very large online platforms operating in our EU single market, preventing them from using these techniques. But the critical question is: are the platforms complying?
Online platforms are designed to keep users engaged, no matter the cost. Algorithms feed content based on past interactions, meaning that a single click on a video about dieting or weight loss can quickly spiral into a feed dominated by extreme and harmful messaging. The Center for Countering Digital Hate (CCDH) found that YouTube actively pushes eating disorder content: one in three videos recommended to a simulated 13-year-old girl after watching just one eating disorder-related video contains harmful content. This isn’t accidental – it’s by design.
I conducted my own small social experiment. I followed a social media account with over 200,000 followers that subtly promotes eating disorders, using coded language like “skinni” instead of “skinny” and glorifying unhealthy weight loss.
I reported the account multiple times, yet the platform ruled that it did not violate its Community Standards on eating disorders. Officially, that may be true, but for young people who have suffered from an eating disorder, seeing this content can immediately set them back. The reality is clear: despite the promises of social media companies and the existence of the DSA, harmful content is still slipping through the net.
The DSA was meant to be a turning point. It recognized that platforms have a responsibility to ensure that their recommender systems—those algorithms that decide what content gets shown to whom—do not impair users’ ability to make free and informed choices. It explicitly requires platforms to protect minors from harm. Yet, we continue to see addictive design practices that exploit vulnerabilities, from amplifying toxic content to creating “rabbit hole” effects that reinforce dangerous behaviors.
The European Commission is investigating TikTok for its role in fostering digital addiction, and France has launched a parliamentary inquiry into whether TikTok pushes young users toward self-harm and hypersexualized content. And TikTok is just one example.
So, what’s missing? Enforcement is one issue, but the quest for safer social media must go further. In July 2024, European Commission President Ursula von der Leyen announced the first-ever EU-wide inquiry into the impact of social media on the wellbeing of young people, with a focus on addressing the addictive design of online platforms. I have since asked the Commission for further details. But whatever the outcome, one thing is clear: this inquiry must feed directly into the upcoming Digital Fairness Act (DFA).
The EU is at the forefront of regulating the online space, with ground-breaking legislation such as the DSA and the AI Act. The DFA must put children and young people at its centre, prioritise their mental health, and directly address the addictive design of platforms. We have a unique opportunity to introduce clear, enforceable rules against harmful algorithmic amplification and digital addiction, including a concrete list of banned practices. The DFA should also promote transparency and accountability in how platforms prioritise content, ensuring that users, especially minors, are not manipulated into engaging with content that threatens their well-being.
Sixteen years ago, I was lucky that someone noticed my excessive exercising was not okay. And there was an easy solution: I stopped physically going to the gym. When the promotion of unhealthy eating habits happens online, there is no physical barrier to stop it; harmful content follows you everywhere. That is why we must ensure that no more children grow up in a world where algorithms dictate their self-worth.