From Meta to X, most major social media companies are failing LGBTQ+ users: GLAAD
GLAAD has released its 2024 Social Media Safety Index report, revealing what it says are significant failures by social media platforms to protect LGBTQ+ users from hate speech and harassment. According to the report, most major platforms continue to receive failing grades for their handling of LGBTQ+ safety. TikTok was the only platform to improve, moving from an F to a D+.
The report paints a grim picture of the online landscape for LGBTQ+ people in an environment rife with disinformation. The 2024 SMSI report evaluates the performance of six major platforms: Facebook, Instagram, TikTok, YouTube, X (formerly Twitter), and Threads. TikTok received a D+ with a score of 67 percent, while Facebook, Instagram, and YouTube each scored 58 percent, and X scored 41 percent. Threads, in its first evaluation, scored 51 percent. Despite these platforms’ stated policies against hate speech, the report finds a significant gap in enforcement, leaving LGBTQ+ users vulnerable to harmful content.
The SMSI highlights how social media platforms are increasingly being used to amplify hate speech and disinformation, with algorithms often prioritizing engaging content over accurate and safe information. According to GLAAD, this creates a breeding ground for harmful rhetoric, which can escalate into real-world violence and discrimination against LGBTQ+ people.
Some of these actions, according to experts, can be categorized as stochastic terrorism, in which hostile rhetoric demonizing a group inspires acts of violence that are statistically predictable even though no individual attack can be foreseen.
The report cites over 700 incidents of anti-LGBTQ+ hate and extremism documented between November 2022 and November 2023, including homicides, assaults, bomb threats, and acts of vandalism. These incidents are linked to the pervasive spread of harmful narratives and disinformation online, according to the report.
GLAAD president and CEO Sarah Kate Ellis emphasized the urgency of the situation, calling on social media companies to take immediate action to enhance the safety of their platforms.
“Leaders of social media companies are failing at their responsibility to make safe products. When it comes to anti-LGBTQ hate and disinformation, the industry is dangerously lacking on enforcement of current policies,” Ellis said in a press release. “There is a direct relationship between online harms and the hundreds of anti-LGBTQ legislative attacks, rising rates of real-world anti-LGBTQ violence and threats of violence, that social media platforms are responsible for and should act with urgency to address.”
A December 2022 study by Media Matters and GLAAD revealed a significant surge in the use of the slur “groomer” on Twitter following Elon Musk’s acquisition of the platform. This term, falsely linking LGBTQ+ individuals to pedophilia, was used by prominent far-right influencers to incite hostility and provoke harassment against the LGBTQ+ community. The study found that retweets of “groomer” slur tweets from nine prominent anti-LGBTQ+ accounts increased by 1,200 percent after Musk’s takeover. Mentions of the slur from right-wing media accounts rose by over 1,100 percent. Anti-LGBTQ+ extremist influencer Chaya Raichik’s Libs of TikTok account saw mentions grow from about 2,000 to almost 14,000. Additionally, tweets mentioning prominent LGBTQ+ accounts alongside the “groomer” slur increased by more than 225,000 percent.
It’s not all bad news, though. Since the first SMSI was released in 2021, GLAAD’s Social Media Safety Program has advocated for platforms to update their policies with additional protections for LGBTQ+ safety. In 2022, GLAAD and UltraViolet worked with TikTok to add protections against targeted misgendering and deadnaming and to ban the promotion of the harmful and discredited practice of conversion therapy. GLAAD notes that Social Media Safety Program staff have worked extensively with many platforms, apps, and companies over the past year on these critical policy areas. Companies recently adopting one or both policies include Snapchat, Discord, Post, Spoutible, Grindr, IFTAS, and Mastodon.
GLAAD’s report also explains that the lack of effective moderation and enforcement of community guidelines allows harmful content to proliferate: despite having policies against hate speech, platforms often fail to implement them adequately, letting harmful content spread. The report emphasizes the need for social media companies to enforce their own policies more rigorously and transparently.
Additionally, financial incentives drive the spread of hate speech and disinformation, with social media platforms profiting from increased engagement regardless of the content’s nature, according to the report. GLAAD slams this business model for prioritizing sensational and divisive posts over accurate and respectful discourse, arguing that it undermines the safety of LGBTQ+ users and threatens the integrity of democratic processes by allowing misinformation to sway public opinion and political outcomes.
Jenni Olson, GLAAD’s senior director of social media safety, stressed the broader implications of moderation failures, singling out Meta in particular.
“In addition to these egregious levels of inadequately moderated anti-LGBTQ hate and disinformation, we also see a corollary problem of over-moderation of legitimate LGBTQ expression — including wrongful takedowns of LGBTQ accounts and creators, shadowbanning, and similar suppression of LGBTQ content,” said Olson. “Meta’s recent policy change limiting algorithmic eligibility of so-called ‘political content,’ which the company partly defines as ‘social topics that affect a group of people [or] society at large,’ is especially concerning.”