Report
Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads
A case study examined by Panoptykon Foundation and showcased by the Financial Times demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users’ mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits the targeting options available to advertisers, but does not affect Facebook’s own profiling and ad delivery practices. While much has been written about the disinformation and risks to democracy generated by social media’s data-hungry algorithms, the threat to people’s mental health has not yet received enough attention.
28.09.2021
Article
Poland adopted a controversial anti-terrorism law
On 22 June, the Polish president signed a new anti-terrorism law. The law contains measures that are inconsistent with the Polish Constitution and with the European Convention on Human Rights. The list of controversies is long: foreigners’ phone calls might be wiretapped without a court order, and police might collect their fingerprints, biometric photos and DNA if their identity is “doubtful”. Online content might be blocked and citizens’ freedom of assembly limited, while secret services are given free access to all public databases.
22.06.2016
Other
Panoptykon Foundation’s submission to the consultation on the ‘White Paper on Artificial Intelligence’
16.06.2020
Other
Not only content moderation: Creating rules for targeting content in the Digital Services Act or ancillary regulations (brief on DSA)
01.05.2020