The Facebook metaverse is already one of the most toxic places on the internet

An investigation shows that Facebook (Meta) was extremely ill-prepared to fulfill its dream of creating a shared online space for millions of users. After days of trying out the Metaverse and its virtual chat room Horizon Worlds, researchers say it’s already one of the worst places on the web, completely devoid of moderation.

© SumOfUs

Who would have believed it? The business watchdog group SumOfUs published a notable report last week on the Facebook empire’s transition to Meta. While Meta’s VR platforms already enjoy more than 300,000 users, the group also documents how Horizon Worlds, a virtual metaverse chat room, hosts some of the web’s worst behavior: racism, sexual harassment, homophobia and conspiracy theories.

Less than an hour to be “virtually assaulted”: a total absence of moderation

Meta has two main VR apps. The first is Horizon Worlds, a social networking application that lets users create and interact in unique digital rooms called “worlds”. As of February 2022, some 10,000 individual worlds had already been designed, with an estimated user base of over 300,000. The second is Horizon Venues, a separate application dedicated to hosting live events in the metaverse.

The explicit promise of the metaverse is to occupy a digital domain as if you were really there, interacting with other people (as VRChat already does, for example). And although online harassment is nothing new, everything becomes more visceral once a VR headset is on your head.

Horizon Worlds © Meta

In their study, the researchers share many examples of toxic behavior, alongside the near-total absence of moderation in the game space offered by Horizon Worlds. Examples are not lacking: they report being rushed by users across different worlds in Meta’s product, fake drug deals staged on tables, and users constantly hurling racist or homophobic insults at one another.

One of the researchers explains that he was “virtually assaulted” by a user less than an hour into trying the metaverse for the very first time. SumOfUs also included a link to a video of what they consider a “virtual assault”. Another video shows racist behavior, gun violence, and more.

The watchdog group counts dozens of such virtual sexual assaults, particularly against female avatars. No wonder: a metaverse tester had already been sexually harassed last year, while Horizon Worlds was still in beta.

Nothing planned to fix the problem

Meta is moving forward with the metaverse, but “without a clear plan on how it will reduce harmful content and behavior, disinformation and hate speech”, says the report. The researchers even cite an internal memo from last March, shared by the Financial Times, in which Meta Vice President Andrew Bosworth admitted that “user moderation at any scale is virtually impossible”.

In February, however, Meta introduced a feature that prevents other avatars from getting too close to a player’s body, similar to what other virtual chat rooms like VRChat offer. The problem: the researchers say they were constantly “harassed” and urged to disable the personal boundary setting, whether by the game itself or by other users.


Horizon Worlds © Meta

Worse, when another user tries to touch or communicate with you, your VR controllers vibrate, “creating a very disorienting and even disturbing physical experience, especially during a virtual assault”, as the study points out. A great idea, just as Zuckerberg is testing gloves to better feel virtual objects…

Horizon Worlds does include parental supervision and the ability to block other users. But the platform is still a big draw for younger people, many of whom already use it. And unlike social networks, which can deploy automated systems to monitor written content or even videos, VR chat rooms rely solely on individual users reporting bad behavior.

“Meta has repeatedly shown that it is unable to adequately monitor and respond to malicious content on Facebook, Instagram, and WhatsApp – so it’s no surprise it’s already failing in the metaverse”, say the SumOfUs researchers.

Source: SumOfUs
