Report Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads A case study conducted by Panoptykon Foundation and showcased by the Financial Times demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users’ mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits the targeting options available to advertisers, but does not affect Facebook’s own profiling and ad-delivery practices. While much has been written about the disinformation and risks to democracy generated by social media’s data-hungry algorithms, the threat to people’s mental health has not yet received enough attention. 28.09.2021
Article AI Act: we call on MEPs to put our fundamental rights first As the European Parliament gets ready to vote on the AI Act, we call on MEPs to put our fundamental rights first and protect the people affected by AI systems. 28.04.2023
other Safe by Default – Panoptykon Foundation and People vs BigTech’s Briefing Moving away from engagement-based rankings towards safe, rights-respecting, and human-centric recommender systems. 05.03.2024
other Joint Submission on the Commission’s Guidelines for Providers of VLOPs and VLOSEs on the Mitigation of Systemic Risks for Electoral Processes Part 1 explains how recommender systems contribute to systemic risks. Part 2 responds to the Commission’s proposals to moderate the virality of content that threatens the integrity of the electoral process. 07.03.2024
Article Monologue of the Algorithm: how Facebook turns users’ data into its profit. Video explained Does Facebook identify and manipulate your feelings? Is it able to recognize your personality type, habits, interests, political views, or level of income? Does it use all this information to reach you with personalized ads or sponsored content? You bet! 13.01.2018