Report, 28.09.2021
Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads
A case study by Panoptykon Foundation, showcased by the Financial Times, demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users’ mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits targeting options for advertisers, but does not affect Facebook’s own profiling and ad delivery practices. While much has been written about the disinformation and risks to democracy generated by social media’s data-hungry algorithms, the threat to people’s mental health has not yet received enough attention.
Article, 13.01.2022
Limits to harmful surveillance in online advertising? Joint statement ahead of the vote in the European Parliament next week
“We don’t have to manipulate our customers or exploit their vulnerabilities to scale up”: European entrepreneurs and social organizations appeal to MEPs to put an end to invasive, privacy-hostile practices of surveillance-based advertising, and thus open the market to ethical and innovative online ads that respect users’ rights and choices. On the opposite bench, the Big Tech lobby fights to preserve the status quo, despite the well-documented social and individual harms caused by the current ads ecosystem.
Article, 16.02.2022
Belgian authority finds IAB Europe’s consent pop-ups incompatible with the GDPR
Following a number of complaints filed in 2018 and 2019, including by Panoptykon and Bits of Freedom, and coordinated by the Irish Council for Civil Liberties, the Belgian Data Protection Authority has found that the consent system developed and managed by the adtech industry body IAB Europe, and used by many websites in the EU, is illegal under the GDPR.
Other, 05.03.2024
Safe by Default – Panoptykon Foundation and People vs BigTech’s Briefing
Moving away from engagement-based rankings towards safe, rights-respecting, and human-centric recommender systems.
Other, 07.03.2024
Joint Submission on the Commission’s Guidelines for Providers of VLOPs and VLOSEs on the Mitigation of Systemic Risks for Electoral Processes
Part 1 introduces how recommender systems contribute to systemic risks. Part 2 responds to the Commission’s proposals to moderate the virality of content that threatens the integrity of the electoral process.