Meta's New Content Moderation Practices Raise Concerns Over Misinformation, Disinformation and the Safety of Online Health Information
Melbourne: Social media giant Meta has announced significant changes to its content moderation practices, shifting towards a user-sourced "community notes" model. The move has sparked concern among experts, advocacy groups, and health organizations over its potential impact on online misinformation, disinformation, and the safety of sensitive health information.
Effective immediately, Meta will no longer employ professional fact-checkers and moderation teams, relying instead on community notes written by users to police its platforms, including Facebook, Instagram, and Threads. The company says the new approach is intended to reduce bias, but critics warn that it could lead to a rise in abusive and demeaning statements as well as disinformation.
Health experts have long been concerned about the spread of misinformation related to COVID-19 and other health issues on social media platforms. However, there has been limited discussion about the potential impact of Meta's new policies on sexual and reproductive health information online.
"The town square" for health information has often relied on public health organizations sharing factual information about sensitive topics such as pregnancy, HIV, and unplanned pregnancy. Facebook and Instagram have served as spaces where these organizations can reach diverse audiences, particularly in rural and regional areas.
However, under the community notes model, users may no longer feel safe sharing accurate and reliable sexual health information online, putting vulnerable groups at risk. Leaked internal Meta training materials reportedly show that comments containing hate speech targeting marginalized communities are now permissible under the new policy.
Additionally, Meta's Oversight Board has acknowledged that over-censorship has been a problem in the past, including the blocking or shadow-banning of content related to sexual and reproductive health.
Analysts warn that community notes could be weaponized for "user-generated warfare", in which malicious users attack and sabotage content creators and organizations on social media platforms. Tactics include falsely reporting images and coordinated pile-ons of hate speech in the comments.
Health service providers are already taking precautions: many LGBTQIA+ and women's health organizations have closed their accounts on X (formerly Twitter) or are self-censoring sensitive information on other platforms.
The future of social media for sexual and reproductive health services looks uncertain. While some experts are exploring newer platforms such as Bluesky, others warn that no platform offers a perfect answer to a rapidly evolving global political environment in which business as usual is not an option.