The Digital Services Act is a regulatory package announced by the European Commission as part of the European Digital Strategy, aimed at addressing risks – for citizens, society at large, and markets – related to the development of new digital services, in particular large online platforms.
In our response to the public consultation organised by the Commission, we argue that large online platforms, in particular those providing social media and content-related services, have developed business models that, while generating high commercial gains, come with troubling societal and individual costs.
Global online platforms are shaped by advertisers', not users', interests. In order to accumulate vast amounts of users' personal data, large online platforms integrate different services and link behavioural data coming from different sources. The collected and generated masses of personal data are then analysed with advanced algorithms in search of meaningful statistical correlations. The task of these algorithms is to infer users' hidden characteristics that they have never consciously revealed, such as their psychometric profiles, IQ, family situation, addictions, illnesses or beliefs, thus creating detailed profiles, which are then offered to advertisers and used for content personalisation.
Large online platforms have the technical ability to control both the content that users won't see (the result of content moderation) and the content – paid or not – that users will see (the result of content targeting and personalisation). This ability to effectively control content monetisation and dissemination in global information networks is the very source of what the Commission calls the "gatekeeper power" of large platforms. The way this power is currently used by the biggest players leads to negative effects for both individual users and society as a whole.
Many of the negative examples of large platforms' power involve the mistreatment of their users' personal data. Therefore, in theory, mere enforcement of the GDPR should be the right tool to curb these detrimental practices. In practice, however, certain aspects of platforms' power, such as their algorithm-driven targeting abilities based on statistical correlations (which are not classified as personal data), are not adequately addressed by the GDPR.
In this context, we put forward the following (groups of) recommendations for the DSA package:
- Enhanced transparency of targeting and personalisation: the new regulation should impose high standards of transparency on all online platforms that engage in content moderation, curation (personalisation), and targeting. At the same time, there should be different transparency mechanisms for the general public (incl. researchers and watchdogs) and for individual users.
- Effective tools to control the use of data and algorithms: building on individual data protection rights, as defined by the GDPR, the new regulation should equip users of large platforms with more effective tools to control both the use of their personal data and the use of big data and algorithms that affect their online experience. These tools should include both default protections and granular data management settings (including but not restricted to personal data).
- Minimum standard of interoperability: we recommend introducing data and protocol interoperability in the DSA package, as a precondition that will enable the development and provision of effective (i.e. independent from platforms’ own business interests and their proprietary interfaces) tools for users to manage their data and shape their online experience (e.g. set their own parameters for content personalisation).
We encourage you to read the full text of our submission in which we describe these recommendations in detail.
We have also joined 28 civil society organisations, coordinated by the European Partnership for Democracy, in calling on the EU to legislate for full advertising transparency online, which is in line with our recommendations for the DSA: read the statement.