Article
The right to explanation of creditworthiness assessment – first such law in Europe
Thanks to Panoptykon's initiative, bank customers in Poland will have the right to receive an explanation of their creditworthiness assessment. It is the first right of this kind in Europe and a higher standard than the one envisioned in the GDPR.
12.06.2019
Other
CNAF's discriminatory scoring algorithm: 10 new organisations join the case before the Conseil d'État in France
Today, our coalition is proud to welcome 10 new organisations to this litigation. We are now 25 organisations asking for a ban on the CNAF's scoring algorithm.
20.01.2026
Article
Discrimination in a datafied world
Data-driven technologies are not neutral. A decision to collect, analyse and process a specific kind of information is structured and motivated by social, economic and political factors. These data operations may not only violate the right to privacy but also lead to discrimination and oppression of socially marginalised communities. Discriminatory data processes and algorithms are a massive challenge for the modern human rights movement, one that requires non-standard solutions. The report "Between Anti-discrimination and Data" tries to shed light on this problem from the perspective of European civil society organisations.
10.07.2018
Article
GDPR Today – for better data protection tomorrow
Let us introduce you to GDPR Today – your online hub for staying tuned to the (real) life of EU data protection law. Every two months we will publish statistics showing how the GDPR is being applied across Europe. More often, we will share relevant news – from legal guidelines and decisions to data breaches, new codes of conduct, important business developments, and memes.
25.10.2018
Report
Algorithms of trauma: new case study shows that Facebook doesn't give users real control over disturbing surveillance ads
A case study examined by Panoptykon Foundation and showcased by the Financial Times demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users' mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits targeting options for advertisers, but does not affect Facebook's own profiling and ad delivery practices. While much has been written about the disinformation and risks to democracy generated by social media's data-hungry algorithms, the threat to people's mental health has not yet received enough attention.
28.09.2021