A case study examined by Panoptykon Foundation and showcased by the Financial Times demonstrates how Facebook uses algorithms to deliver personalised ads that may exploit users’ mental vulnerabilities. The experiment shows that users are unable to get rid of disturbing content: disabling sensitive interests in ad settings limits targeting options for advertisers, but does not affect Facebook’s own profiling and ad delivery practices. While much has been written about the disinformation and risks to democracy generated by social media’s data-hungry algorithms, the threat to people’s mental health has not yet received enough attention.
The list of negative consequences of how dominant online platforms shape our experience online is neither short nor trivial: exploiting users’ vulnerabilities, triggering psychological trauma, depriving people of job opportunities, and pushing disturbing content are just some examples. While members of the European Parliament debate their position on the Digital Services Act, Panoptykon Foundation, together with 49 civil society organisations from all over Europe, urges them to ensure protection from the harms caused by platforms’ algorithms.
Algorithmic decision-making can carry more weight than you might expect. While algorithms do innocuous or helpful things like changing the traffic signals when you approach an intersection, they also decide what content to show in your social media feed. There are also algorithms that assist real people in deciding whether you can get a mortgage, get into a particular university or qualify for insurance. How does it work? Katarzyna Szymielewicz explains the three layers of a digital profile in the video.
A progressive report on the Digital Services Act (DSA), adopted in July by the Committee on Civil Liberties, Justice and Home Affairs (LIBE) in the European Parliament, is the first major improvement of the draft law presented by the European Commission in December. MEPs expressed support for default protections from tracking and profiling for the purposes of advertising and recommending or ranking content. Now the ball is in the court of the leading Committee on the Internal Market and Consumer Protection (IMCO), which received 1313 pages of amendments to be voted on in November. Panoptykon Foundation explores whether the Parliament will succeed in adopting a position that contests the power of dominant online platforms, which shape the digital public sphere in line with their commercial interests, at the expense of individuals and societies.
The District Court in Warsaw (Appellate Division) upheld its 2019 interim measures ruling in which it temporarily prohibited Facebook from removing fan pages run by the Polish NGO “SIN” on Facebook and Instagram, as well as from blocking individual posts. This means that – until the case is decided – SIN’s activists may carry out their drugs-related education on the platform without fear of suddenly losing the ability to communicate with their audience. The decision is now final. What does it mean on the broader scale?
We can finally see the light at the end of the tunnel when it comes to the ePrivacy Regulation.
On 22 March 2021, a group of journalists and activists published on their social media profiles a letter they had received from the non-existent “Agency of National Security”, informing them that they were subject to surveillance. The letter was accompanied by a message explaining that it was part of a campaign launched by EDRi member Panoptykon Foundation, which aims to demonstrate the problem of unscrutinised powers of intelligence agencies.
The Digital Freedom Fund prepared an animation explaining the SIN versus Facebook case, in which Panoptykon Foundation fights against private censorship online.
The Polish government’s proposal for a new law on “protecting free speech of social media users” introduces data retention, a new and questionable definition of “unlawful content”, and an oversight body (the Free Speech Council) that is likely to be politically compromised. In this context, “Surveillance and Censorship Act” would be a more accurate name.
AI systems will soon determine our rights and freedoms, shape our economic situation and physical wellbeing, and affect market behaviour and the natural environment. Amid the hype around ‘problem-solving’ AI, calls for (more) accountability in this field are gaining urgency. A summary of the IGF 2020 session “Aiming for AI explainability: lessons from the field”.