Report Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads
A case study conducted by Panoptykon Foundation and showcased by the Financial Times demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users’ mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits the targeting options available to advertisers, but does not affect Facebook’s own profiling and ad-delivery practices. While much has been written about the disinformation and risks to democracy generated by social media’s data-hungry algorithms, the threat to people’s mental health has not yet received enough attention. 28.09.2021
Article Limits to harmful surveillance in online advertising? Joint statement ahead of the vote in the European Parliament next week
“We don’t have to manipulate our customers or exploit their vulnerabilities to scale up”: European entrepreneurs and social organizations appeal to MEPs to put an end to invasive and privacy-hostile practices of surveillance-based advertising and thus open the market to ethical and innovative online ads that respect users’ rights and choices. On the opposite bench, the Big Tech lobby fights to preserve the status quo, despite the well-documented social and individual harms caused by the current ads ecosystem. 13.01.2022
Other Safe by Default – Panoptykon Foundation and People vs BigTech’s Briefing
Moving away from engagement-based rankings towards safe, rights-respecting, and human-centric recommender systems. 05.03.2024
Article Monologue of the Algorithm: how Facebook turns users’ data into profit. Video explained
Does Facebook identify and manipulate your feelings? Is it able to recognize your personality type, habits, interests, political views, or level of income? Does it use all this information to reach you with personalized ads or sponsored content? You bet! 13.01.2018
Article IGF 2020: Aiming for AI explainability: lessons from the field. Summary of the session
AI systems will soon determine our rights and freedoms, shape our economic situation and physical wellbeing, and affect market behaviour and the natural environment. Amid the hype around ‘problem-solving’ AI, calls for (more) accountability in this field are gaining urgency. Summary of the IGF 2020 session “Aiming for AI explainability: lessons from the field”. 04.01.2021