Anxious about your health? Facebook won’t let you forget

Article
07.12.2023
12 min. read

There is little point in telling Facebook which posts you do not want to see – it will not listen. A new study by the Panoptykon Foundation entitled The Algorithms of Trauma 2 has shown that the digital giant competes for its users’ attention by exploiting its knowledge of their fears and weaknesses, and that the buttons which are supposed to hide unwanted content do not work as promised.

Social media annoy their users in many ways. Some people are irked by weight loss and plastic surgery ads, others by political polarization and hate. There are also those overwhelmed by the deluge of unsettling content in which they are steeped. As they scroll through their feeds, their gaze inadvertently lands on content that aggravates anxiety and fuels fears.

This is the story of one such user.

Joanna’s* story

Joanna lives in Poland and is the mother of a toddler. Before she had her baby, a loved one of Joanna’s had suddenly fallen ill and died. It was then that she began to suffer from intense anxiety over the life and health of those closest to her, and her own. The problem has only gotten worse since the baby’s birth. Is the little one developing normally? What would happen to her baby if Joanna herself succumbed to a terminal disease?

Joanna is plagued by those fears: she is living under constant stress, and has noticed some psychosomatic symptoms. She has spent hours online researching the ailments that afflict her, seen one doctor after another, and undergone test after test (including invasive ones). She is sacrificing more and more of her time and money.

The anxiety has finally pushed her to seek psychological help. She has now started therapy and joined Facebook support groups for sufferers of chronic anxiety. Still, she is feeling increasingly overwhelmed when using the platform.

What is it that Joanna sees on Facebook?

Feed = organic content + ads + suggested content

Each Facebook user sees three groups of content:

  • organic content: posts by Facebook friends and the fan pages and groups that the user has joined or followed;
  • advertising (sponsored posts): paid content targeted at people who, according to Facebook’s algorithm, are the most likely to react in a particular way;
  • content described as “suggested for you”: non-sponsored posts by unfamiliar people and pages that the user has not subscribed to, but which Facebook believes may attract their attention.

The specific content users find in their feeds is decided by the algorithm based on the traits and interests attributed to them by Facebook. Users cannot independently define the subject matter in which they are interested.
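
For readers who prefer to see the three categories spelled out, here is a minimal, purely illustrative sketch in Python. The names (ContentType, FeedItem) and fields are our own invention for the sake of the example; they do not reflect Facebook’s actual data model.

    # Illustrative only: a toy model of the three content categories described
    # above. The type and field names are hypothetical, not Facebook's.
    from dataclasses import dataclass
    from enum import Enum


    class ContentType(Enum):
        ORGANIC = "organic"      # posts from friends, followed pages and joined groups
        SPONSORED = "sponsored"  # paid ads targeted by the platform's algorithm
        SUGGESTED = "suggested"  # "suggested for you" posts from unfamiliar sources


    @dataclass
    class FeedItem:
        post_id: str
        content_type: ContentType
        headline: str


    def is_user_chosen_source(item: FeedItem) -> bool:
        """Only organic content comes from sources the user explicitly chose."""
        return item.content_type is ContentType.ORGANIC

The point of the distinction: only the first category comes from sources the user picked; the other two are selected for them by the platform.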

They made her believe it was depression

“They made her believe it was depression. The orange-sized tumor had already spread from the colon to the liver.” “She noticed strange symptoms on her daughter’s face. They were the signs of a rare cancer.” “A 2-year-old died after a bath in a hot spring. The boy was killed by a brain-eating amoeba.” “She was hospitalized with a stomach ache. Three limbs were amputated.”

These are just a few headlines selected from the flood of unsettling content that was “suggested” to Joanna. Apparently, based on her previous online behavior (such as reading about disease symptoms and belonging to support groups whose members discuss health issues), Facebook determined that her attention would be attracted by posts about medical cases, deaths and accidents. The platform started feeding Joanna a steady diet of those posts.

Unfortunately, it was right on target! Joanna cannot help but click on posts like these, even though she already knows that reading them worsens her health anxieties. She would like to cut herself off from them.

 

Technically, one could limit the incidence of undesirable content on Facebook by:

  • unfollowing a person or page;
  • hiding the content published by a person or page;
  • changing advertising preferences;
  • finally, in the case of suggested content, by selecting the option “Hide post – See fewer posts like this.”

But is this enough to purge Joanna’s feed of the unwanted content? Read on.

The unsettling feed: trapped between ads and suggested posts

We have studied Joanna’s feed before. In 2021, we analyzed the ads she was seeing, which largely involved distressing health information. For example, crowdfunding campaign ads for gravely ill children alternated with offers of paid screening for the genetic diseases from which the children suffered. At the time, our aim was to check whether Facebook’s ad preference tools do in fact allow users to shape the content of the ads they see.

The short answer is: they do not (for more details, see Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads).

As time passed, apart from the unsettling ads, Joanna observed an increasing number of suggested posts with similarly upsetting themes. Therefore, in this second, updated part of our case study, we focused on this category of content. We investigated whether the tools Facebook offers its users can effectively limit the presence of unwanted suggested posts in the feed.

About The Algorithms of Trauma 2

The data collection phase of the study was conducted in June and July 2023. The Panoptykon Foundation collaborated with Piotr Sapieżyński, PhD, from Northeastern University in Boston, Mass., a specialist in Internet platform research and algorithm auditing.

The stages of research:

  1. Scrolling. For two weeks (from June 7 to June 22, 2023), we monitored Joanna’s feed when she used Facebook on her cell phone (in the browser). This allowed us to establish how frequently the posts that she labeled as unwanted appeared in her feed as suggested content.
  2. Hiding. For another week (from June 23 to July 2, 2023), Joanna clicked the button “Hide post – See fewer posts like this” beside the suggested content she did not want to view. Throughout the week, she did this as many as 122 times.
  3. Monitoring the feed. For another 4 weeks (from July 3 to July 28), Joanna’s feed was monitored while she used Facebook in her regular manner, without hiding any posts. The aim was to test whether her earlier signals that she did not want to see certain content had any impact on the composition of the feed (a simplified sketch of this kind of per-day tally follows the list).
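
To make the three stages concrete, here is a minimal sketch of the kind of per-day tally such feed monitoring produces. It is a simplified, hypothetical reconstruction: the record format and field names are our own, and this is not the study’s actual tooling.

    # Hypothetical sketch: tallying how many suggested posts per day were
    # labeled as unwanted during feed monitoring. Not the study's real pipeline.
    from collections import defaultdict
    from datetime import date


    def daily_unwanted_share(feed_log):
        """feed_log: iterable of dicts such as
        {"day": date(2023, 6, 7), "suggested": True, "unwanted": True}.
        Returns {day: share of that day's suggested posts labeled unwanted}."""
        suggested = defaultdict(int)
        unwanted = defaultdict(int)
        for post in feed_log:
            if post["suggested"]:
                suggested[post["day"]] += 1
                if post["unwanted"]:
                    unwanted[post["day"]] += 1
        return {day: unwanted[day] / suggested[day] for day in suggested}


    # Toy example: two suggested posts on one day, one of them unwanted -> 0.5
    log = [
        {"day": date(2023, 6, 7), "suggested": True, "unwanted": True},
        {"day": date(2023, 6, 7), "suggested": True, "unwanted": False},
    ]
    print(daily_unwanted_share(log))  # {datetime.date(2023, 6, 7): 0.5}

Averaging such daily shares over a rolling 7-day window gives the kind of curve shown in Chart 1 further below.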

Over almost two months of the study, more than 2500 suggested posts appeared in the user’s feed, which constituted 22% of all posts viewed. Over 56% (1416 posts) of suggested content included themes that the user labeled as unwanted:

  • the sudden death of young people (“They came back from their dream vacation. They died within 4 hours of each other”; “He fell to the pavement and perished. Young man dead”);
  • seemingly trivial symptoms of terminal diseases (“She noticed a scratch on her breast. A fight for her life began”; “This 29-year-old complained of back pain. It was a symptom of inoperable lung cancer”);
  • alarming physical signs (“Itchy armpits? One cause is especially dangerous”; “This ache may be a sign of an aneurysm. Watch out, it’s a ticking time bomb”);
  • the illness and death of celebrities (“This actress has been battling cancer for 8 years. Another tumor means a grim prognosis”; “Famous soccer player dies of cancer. The disease spread like wildfire”);
  • accidents and deaths involving young children (“4-month-old infant killed. Chilling findings”; “Family tragedy strikes on vacation. The 5-year-old died in front of her parents’ eyes”);
  • dramatic incidents and murders (“Honor student of Wrocław high school slays step-father with axe. He struck five times”; “They burned alive, tied to hospital beds. A nightmare in Częstochowa”);
  • morbid stories (“Man cuts off own scrotum in lawn-mower accident”; “Shocking... The crying mother attempted to attach the child’s head back to the body”).

There were days when 8 out of 10 suggested posts looked like that. Over the whole study, almost one in eight posts in Joanna’s newsfeed (and more than one in two suggested posts) was categorized as unwanted. This amounts to an average of around 27 such posts per day!
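
A quick back-of-the-envelope check of those figures, using the counts quoted above (the 52-day span is our own estimate of the June 7 to July 28 study window, and 12.9% is the whole-feed share reported in Table 1 below):

    # Rough sanity check of the summary figures quoted in the text; the 52-day
    # span is our estimate of the June 7 to July 28 study window.
    unwanted_posts = 1416   # suggested posts labeled as unwanted
    study_days = 52

    print(round(unwanted_posts / study_days))  # -> 27 unwanted suggested posts per day
    print(round(1 / 0.129))                    # 12.9% of the feed -> roughly 1 post in 8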

Keep in mind that this was not the only content shown to Joanna by Facebook that worsened her anxiety. The usual upsetting ads also kept undermining her sense of security.

The “Hide post” button does not improve the feed

In the second stage of the study, Joanna was tasked with tagging the “suggested for you” posts she did not want to view. In just one week, she clicked the button “Hide post – See fewer posts like this” 122 times. Can you imagine? Well, now imagine the frustration of a person who concludes that their actions change nothing.

Over the week that the subject clicked the “Hide post” button, the overall number of suggested posts did marginally decrease. However, at the same time, the ratio of unwanted content to all suggested posts increased (i.e., there was a greater concentration of unwanted posts). Within just a few days, the total number of suggested posts returned to its former level, and then continued to grow, eventually exceeding the level recorded at the start of the study. Ultimately, the ratio of unwanted content did not diminish. In fact, the opposite happened: it slightly increased (both relative to the suggested content category and to the entire feed; see the table and the chart below).

                                                  Before tagging            After tagging
                                                  (Research Stage 1)        (Research Stage 3)
Ratio of unwanted content to the entire feed      12.9%                     13.8%
Ratio of unwanted content to suggested content    53.7%                     56.6%

Table 1. Unwanted posts before and after tagging

Chart 1. Ratio of unwanted content in the suggested content category (7-day average)

The conclusion? The buttons that are supposed to shape the content of the feed and rid it of unwanted posts do not work. Apparently, the corporation finds it more profitable to supply Joanna – even against her will! – with engaging content that the algorithm selects on the basis of her behavior on Facebook and other websites than to abide by her explicitly stated preferences.

Why is that? This question can only be answered by Meta (the company that runs Facebook). We can only suspect that a business model based on maximizing user engagement is to blame. People are supposed to scroll and click whether they really want to or not, regardless of their wellbeing and satisfaction. And negative emotions can often engage people more strongly than positive ones.

The faulty tools of newsfeed control

This is not the only study to question the control tools that VLOPs (Very Large Online Platforms) provide to their users with the promise that they can “impact their feed”. Facebook in particular offers many different options to refuse (or request) content. However, they all seem to be mere façades.

  • A 2021 study by the Panoptykon Foundation indicates that, contrary to Facebook’s assurances, changes in advertising preferences do not allow users to permanently limit the ads related to the interests attributed to them by the algorithm. The platform replaces deleted interests with other, similar ones, and matches the ads to them.

Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads, The Panoptykon Foundation and Piotr Sapieżyński, 2021.

  • A 2022 study by the Mozilla Foundation leads to the conclusion that users have no real influence over the recommendations YouTube serves them, and that using the feedback tools offered by the platform is simply frustrating.

Does this button work? Investigating YouTube’s ineffective user controls, Mozilla Foundation, 2022.

  • A report published in 2023 by a research team from the University of Minnesota Twin Cities shows that, according to many TikTok users, the buttons that purportedly allow them to refuse certain content (such as the “not interested” feature) have no real impact on their feed (specifically, the For You page).

“I See Me Here”: Mental Health Content, Community, and Algorithmic Curation on TikTok, Milton et al., 2023.

Is quitting Facebook the only way out?

Over 3 billion people around the world use Facebook: more than a third of humanity. In Poland, a country with a population of over 37.5 million, 26 million have an account. They log in for various purposes: to contact friends, comment on current events, keep up with local and global news, follow events at their school or university, participate in cultural life, collaborate, exchange information, trade, work, or run a business. We are pointing this out to preclude the dismissive suggestion that Joanna can simply quit Facebook for the sake of her mental health.

Do users really have no other choice but to quit or accept the unwanted content in their feeds?

On the one hand, Facebook provides Joanna with access to support groups for people with anxiety issues; on the other hand, it plants destructive posts on her feed. It is as though a person with an alcohol addiction was being served a glass of wine at each AA meeting.

Joanna is not the only user whose weaknesses and sensitivities are routinely exploited by the algorithm. Some may not even realize that this is happening. Therefore, systemic solutions are needed to provide a basic level of safety on online platforms.

* the name has been changed for privacy

Authors: Anna Obem, Maria Wróblewska

Written in collaboration with: Dorota Głowacka, Piotr Sapieżyński, Małgorzata Szumańska

Translated from Polish by Aleksandra Paszkowska
