Report
Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads
A case study conducted by Panoptykon Foundation and showcased by the Financial Times demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users’ mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits targeting options for advertisers, but does not affect Facebook’s own profiling and ad delivery practices. While much has been written about the disinformation and risks to democracy generated by social media’s data-hungry algorithms, the threat to people’s mental health has not yet received enough attention.
28.09.2021
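To make the reported behaviour concrete, the toy Python sketch below models it under stated assumptions: the UserProfile class, its targetable_interests and delivery_profile fields, and the sample ads are hypothetical illustrations, not Facebook’s actual code or API. It shows how removing an interest in user-facing ad settings can narrow advertisers’ targeting options while leaving an internal delivery profile, and the ads selected through it, unchanged.

```python
# Hypothetical toy model (not Facebook's actual code) of the behaviour the
# case study describes: disabling a "sensitive interest" in ad settings only
# narrows what advertisers can explicitly target, while the platform's
# internal profile used to optimise ad delivery is left untouched.

from dataclasses import dataclass, field


@dataclass
class UserProfile:
    # Interests advertisers may explicitly target (user-editable in settings).
    targetable_interests: set = field(default_factory=set)
    # Interests inferred and used internally for ad delivery optimisation
    # (not exposed for editing in this model).
    delivery_profile: set = field(default_factory=set)

    def disable_interest(self, interest: str) -> None:
        """Simulates the user removing an interest in ad settings."""
        self.targetable_interests.discard(interest)
        # delivery_profile is deliberately NOT modified here, mirroring the
        # asymmetry the experiment reports.

    def eligible_ads(self, ads: list) -> list:
        """Ads are matched against the internal delivery profile, so ads
        related to a 'disabled' interest can still be delivered."""
        return [ad for ad in ads if ad["topic"] in self.delivery_profile]


user = UserProfile(
    targetable_interests={"fitness", "anxiety"},
    delivery_profile={"fitness", "anxiety"},
)
user.disable_interest("anxiety")

ads = [
    {"topic": "anxiety", "text": "Struggling to cope? ..."},
    {"topic": "fitness", "text": "New running shoes"},
]

print(user.targetable_interests)  # {'fitness'} – advertisers can no longer target 'anxiety'
print(user.eligible_ads(ads))     # still includes the 'anxiety' ad via delivery optimisation
```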
Other
Safe by Default – Panoptykon Foundation and People vs BigTech’s Briefing
Moving away from engagement-based rankings towards safe, rights-respecting, and human-centric recommender systems.
05.03.2024
Other
Joint Submission on the Commission’s Guidelines for Providers of VLOPs and VLOSEs on the Mitigation of Systemic Risks for Electoral Processes
Part 1 introduces how recommender systems contribute to systemic risks. Part 2 responds to the Commission’s proposals to moderate the virality of content that threatens the integrity of the electoral process.
07.03.2024
Other
For Algorithmic Pluralism!
Following the conclusions of the États Généraux de l’information, and on the occasion of the Numérique en commun(s) national event, a collective of 50 public figures, associations, and French and international companies...
25.09.2024
Other
Not only content moderation: Creating rules for targeting content in the Digital Services Act or ancillary regulations (brief on the DSA)
01.05.2020