Conspiracies are welcome in the Facebook metaverse

A study of metaverses shows that they are still unsafe places for their users. Their moderation methods do not prevent extreme and dangerous speech.

We already knew that the metaverse could be a dangerous place. In December 2021, a user of Horizon Worlds, the metaverse of Meta, the parent company of Facebook, reported that she had been virtually sexually assaulted through her avatar, which was allegedly groped. After spending a week in the metaverse, our reporter Nicolas Lellouche also witnessed bad behavior in a number of alternate worlds, including insults, acts of violence, and attempted sexual assaults.

These issues are not isolated: the Sum Of Us association published a full report on May 31, 2022 listing the many moderation failures of metaverse platforms, especially Meta's. In addition to safety risks for users, the authors write that conspiracy theories are poorly moderated in the metaverse.

The metaverse is not a safe place // Source: Canva

A QAnon server in Horizon Worlds

The report’s authors explain that extremist content is very common in the metaverse. In particular, they cite the example of Buzzfeed journalists who built a private server in Horizon Worlds entirely devoted to fake news. Nicknamed “Qniverse”, in reference to the US conspiracy theory QAnon, the group welcomed extremist comments.

On this server, Buzzfeed journalists were able to freely publish a very large amount of fake news about the alleged “theft” of the 2020 US elections, and even about the origin of the Covid pandemic, claiming it had been fabricated from scratch. Messages from Alex Jones, one of the best-known US conspiracy theorists, claiming that Joe Biden is a pedophile and that a caste of reptilians secretly rules the world, were also shared without any problem.

The reporters purposely used terms commonly flagged by Facebook, including references to QAnon, which are normally quickly removed from the platform. Yet for 36 hours, the server went undetected by Horizon’s moderation teams. The journalists then had to report certain posts multiple times before moderators responded, only to be told that they had found nothing that violated the platform’s terms of use.

Meta failed to effectively moderate a group sharing extreme content in the Horizon World metaverse //Source: Canva

“Unable” to moderate the metaverse

How could such a decision have been made, when the comments should normally have gotten the group banned? In their article, the journalists propose various hypotheses, such as the server’s small size, or the fact that they did not interact with content outside the group. Nevertheless, their findings show the limits of the moderation currently applied in Horizon Worlds, and these problems are likely to only worsen as the number of users grows.

“Instead of learning from its mistakes, Meta persists in the metaverse,” conclude the authors of the Sum Of Us report. Meta does not have a specific plan for moderating dangerous content and behavior, such as hate speech and misinformation. They also recall that Andrew Bosworth, Meta’s Chief Technology Officer, himself admitted in an internal message that moderating the metaverse was “practically impossible”. A message suggesting that conspiratorial speech is certainly not going to be moderated anytime soon.
