DSA vs. Reality: Are children safer online?

Article
26.02.2026
10 min. read
Image
Photo of participants in the EP Hearing

What needs to be done to make social media a safer space for people of all ages? Together with MEPs from three major political groups and officials from the European Commission, civil society experts led by Panoptykon debated possible solutions during a hearing held in the European Parliament on 24 February. Independent data paints a very different picture from the claims made by VLOPs, yet the platforms dodged having to answer difficult questions: YouTube, TikTok, and Meta all declined the invitation to the hearing.

The Digital Services Act – the EU regulation adopted in 2022 – was meant as a remedy for the multiple risks social media platforms pose to children, but also to adults. For the first time, the EU forced the biggest platforms to assess systemic risks, limit harmful design, and protect children. On paper, it was a real breakthrough. In practice, everybody, including the Commission, knew that it would take years of investigations, public pressure, and bold political moves to force VLOPs to comply.

How far are we in this fight in 2026?

Image
EP hearing graphic: Protecting Children Online. Europe at a Crossroads

At the invitation of Kamila Gasiuk-Pihowicz (EPP) and Irena Joveva (Renew), Panoptykon co-hosted a hearing in the European Parliament – Protecting children online: Europe at a crossroads. At the hearing we confronted the world described in VLOPs’ official DSA reports with the world documented by independent researchers and civil society organisations, and experienced by young people themselves.

These two worlds do not match.

We invited representatives of Meta, TikTok and YouTube to join the discussion and answer questions directly, yet none of them showed up.

As the white paper Artificial Intelligence in Education: State of Knowledge, Competency Gaps and Systemic Recommendations for Poland – shared with us by our colleagues just yesterday – makes clear, children need “the ability to understand the impact of AI systems on democratic processes, the phenomenon of post-truth and deepfakes” and this capacity, the paper argues, is “the very foundation of digital citizenship.”

Platforms that exploit rather than cultivate that capacity are failing our children – and failing the law. It is deeply concerning that companies whose representatives and lobbyists are frequently present in this house and across Brussels decided not to take part in today’s discussion with the Commission, Members of Parliament and civil society. Closed-door meetings are not enough. We are asking very basic questions – questions that come directly from the platforms’ own reports. Civil society is not demanding confidential business information. We want clear data and measurable indicators. We need to know whether the measures platforms describe on paper are actually working in real life.

MEP Kamila Gasiuk-Pihowicz, EPP

Image
Photo of Kamila Gasiuk-Pihowicz

MEP Kamila Gasiuk-Pihowicz, EPP

What platforms claim – and what the research shows

Across their DSA risk assessments, platforms insist they are providing “the strongest safeguards” for minors, especially by:

  • reducing addictive design,
  • introducing prompts such as “go to bed” and time limits (for minors even “by default”),
  • moderating harmful content effectively,
  • preventing algorithmic amplification of illegal, harmful and age-inappropriate content,
  • verifying users’ age.

We would be among the first ones to applaud these policies. But independent research paints a very different picture. During the hearing, researchers and civil society experts focused on three areas where the gap between claims made by VLOPs and reality is most striking.

1. Addictive design: platforms are still designed to keep children hooked

VLOPs highlight that they all have screen‑time management tools and nudges to take a break. Yet research consistently shows that:

  • infinite scroll, autoplay, and hyper‑personalised feeds still work exactly as intended – to maximise time spent on the platform,
  • young people describe feeling “stuck” in endless sessions,
  • design choices are optimised for attention extraction, not wellbeing.

This is not a bug. It is their business model.

The business model of big tech giants is unacceptable towards our kids. One thing is what adults can do and understand – even though it is difficult even for us. But kids come online in their very formative years. They spend more time online than with their teachers and parents. And yet we have not been able to fix the situation.

The aim of this legislation is crystal clear: we wanted to protect our kids. Unfortunately, it is a daily struggle with online platforms because they are not doing it voluntarily. We have to push them over and over again. It would be so nice if we could count on the platforms to be more proactive and do what is right: protect our kids.

In the Parliament we made a report on minors, where we call for a harmonized EU age limit. In an ideal world it would not be necessary. (...) But in this world, with the tools we have now, social media are simply not safe enough.

MEP Christel Schaldemose, S&D

2. Harmful content: algorithms still recommend what hurts

Platforms claim their moderation systems are effective – even if not perfect, they “result in a significantly lower residual risk profile” (quote from YouTube’s Risk Assessment). But according to experiments by Amnesty International, Reset Tech, and others:

  • minors are still exposed to pro‑eating‑disorder, self‑harm, and depressive content,
  • recommender systems actively push harmful material,
  • reporting mechanisms remove only 6–15% of flagged content.

This is not (just) a content moderation failure. It is a flaw in the design of the algorithmic systems responsible for this task.

3. Age verification: a leaky barrier that shifts responsibility from platforms to their users

Platforms proudly present age‑control tools as a solution. In reality:

  • young people bypass them with ease (usually self-declaration is enough),
  • age-estimation methods, when used by the platform, raise serious privacy concerns,
  • and – crucially – even perfect age verification does not fix harmful design.

As youth activist Leandra Voss put it: “age verification only delays the harm – it does not remove it.”

Image
Photo of Michiel Von Hulten

MEP Christel Schaldemose, S&D (left), Michiel Von Hulten, Reset Tech (right)

A recent survey of European 16- and 17-year-olds conducted by Reset Tech in partnership with YouGov revealed that 93% of children surveyed feel “stuck” on social media at least once a week. 63.5% said they experienced this feeling every single day. One in five of these teens reported losing sleep every night. The most addictive platforms, according to these findings, are TikTok, Instagram and YouTube. 75% of teen TikTok users, 69% of Instagram users and 68% of YouTube users reported staying on these platforms longer than they had intended.

The users know exactly what is trapping them: over 80% point directly to endless streams of content and hyper-personalised feeds as the cause. Shifting the burden to “user choice” is a proven failure. Our survey shows that most teens do not use time-management tools, often because they simply do not know they exist. For instance, 41% of teenagers surveyed didn't know YouTube had these tools.

And you don’t need to take our word for it.

Michiel Von Hulten, Reset Tech

A rare consensus: three political groups, one diagnosis

The alignment among MEPs from EPP, S&D, and Renew – three major political groups – was a sign of the urgency of the task European institutions are facing. The business model of the platforms is incompatible with children’s safety. The EU has the legal and political tools to demand fundamental changes, not superficial corrections. 2026 is the time to use them!

Platforms profit from attention. Their incentives are to keep users scrolling, not to protect their wellbeing. This cross‑party consensus matters: it signals that the political centre of gravity in Europe is shifting.

Infinite scroll is not just a feature but a design for one more dopamine hit. It has always been so, and it will remain so, because this is the business model. Maybe one cannot blame the tech companies for this, because it is their work, but it is clear that allowing them to do as they wish to maximize engagement is not an option, and the regulators are clearly the ones who have to draw the lines. We need to stop asking whether the kids are disciplined enough, and start asking whether the product is safe enough. And we should be drawing the lines for the dealers, not the innocent users.

Safety by design is not a limitation of freedom – it is a legal necessity for a functioning society.

MEP Irena Joveva, Renew

Image
Photo of MEPs: Joveva, Gasiuk-Pihowicz and Schaldemose

From the left: MEP Irena Joveva (Renew), MEP Kamila Gasiuk-Pihowicz (EPP) and MEP Christel Schaldemose (S&D) 

The Commission was in the room and acknowledged both the problem and its readiness to act on it

Even more important was the presence of the European Commission, including officials responsible for reviewing platforms’ DSA reports and drafting the upcoming Digital Fairness Act.

The EC officials made several crucial commitments.

1. Platforms must ensure that their reports are “comprehensible”

Commission officials recognised what civil society has been saying for months:

  • current risk assessments provided by platforms are vague,
  • their methodologies are unclear,
  • and key claims are unsupported by data.

As Eike Graef from DG Connect put it:

It must be clear how the services are working, how the risks come about and how the assessment is arrived at. This is also the expectation we absolutely have towards regulated entities.

Eike Graef, European Commission – DG Connect

This echoed our own message: it cannot be that we only have their claims, while our data shows something entirely different. Platforms must justify the statements in their reports.

2. Researchers and CSOs are invited to share their evidence with the Commission

The Commission announced the launch of a new tool enabling:

  • researchers,
  • civil society organisations,
  • and independent experts

to submit evidence and access data relevant to DSA enforcement.

3. The Commission is ready to go further than the DSA

Officials confirmed that the upcoming Digital Fairness Act may introduce:

  • stricter rules on addictive design,
  • obligations to switch off certain manipulative features,
  • ‘user empowerment’ requirements for recommender systems,
  • and specific, higher protections for children as consumers.

Our takeaway from the discussion was that the question of a minimum digital age will be a political decision – informed by a high-level expert panel convened by President von der Leyen – and that legislation will follow. In the meantime, DG Justice and Consumers will carry out a thorough impact assessment for the Digital Fairness Act, and this is where they need reliable data:

We share your pain when it comes to the lack of data. So at this stage of our impact assessment [for the DFA] it is very important for us to have reliable data, good metrics. So again, an invitation to provide us with any research you have. I am also saying this to colleagues from industry if they are listening somewhere.

Maria-Myrto Kanellopoulou, European Commission – DG Just

Image
Photo of Maria-Myrto Kanellopoulou

Maria-Myrto Kanellopoulou, European Commission – DG Just

Why this hearing mattered

The stakes are too high to rely on self‑reporting by companies whose profits depend on keeping children online. That was the main reason we organised the hearing and invited all stakeholders (well in advance!): we wanted platforms to hear our questions, explain how their mitigation measures are meant to work, and share relevant metrics (certainly more representative than any research outsiders can do). They did not.

But everyone else came:

  • independent researchers with their evidence,
  • youth activists with their lived experience,
  • MEPs from across the political spectrum,
  • Commission officials responsible for the DSA enforcement and DFA proposal.

And together, they delivered a clear message:

  • two years into DSA enforcement, declarations are not enough,
  • transparency about how risk assessments are carried out is not optional,
  • mitigating online risks for children requires confronting VLOPs’ business model, which is at the heart of the problem.

Adoption of the DSA in 2022 was a crucial first step. Uncompromising enforcement is what needs to happen now. As CSOs, we will continue to support the Commission by providing independent evidence. We will also make sure that the upcoming DFA explicitly prohibits pervasive dark patterns (still) used by VLOPs and (at least) the most obvious forms of addictive design.

Maria Wróblewska
Cooperation