- The safety issue in the metaverse is nuanced and obscure.
- Moderators must take steps to ensure a secure and friendly user environment.
- Safety measures are necessary to ensure users are able to report and capture bad behavior.
When asked whether it plans to hire more content moderators in Horizon Worlds in light of the new age restrictions, Meta declined to provide an estimate of the number of moderators it currently employs or contracts for that platform. The change nonetheless draws attention to the people charged with enforcing the rules in these new online spaces.
The metaverse is a virtual world still in its infancy, and content moderation within it is an equally young and developing field. For undercover content moderators in particular, policing content in the metaverse clearly calls for a special set of skills and approaches.
The Security Issue In The Metaverse
The safety issue in the metaverse is nuanced and difficult to pin down. Journalists have documented incidents of sexual assault, scams, and hateful comments carried out on Meta’s Oculus platform.
Meta declined to comment on the matter on the record, but it did send a list of the tools and policies it has in place and highlighted that it has trained safety specialists working within Horizon Worlds. A Roblox representative says the company employs “a team of thousands of moderators who monitor for inappropriate content 24/7 and investigate reports submitted by our community,” in addition to using machine learning to assess text, image, and audio content.
Because the metaverse is so immersive, many of the tools designed to deal with the billions of potentially harmful words and images on the two-dimensional web do not function well in VR. Human content moderators are emerging as one of the most important answers.
Another significant issue is grooming, in which adults with ulterior motives attempt to build relationships of trust with young people. When businesses fail to filter out and prevent this abuse proactively, users themselves are entrusted with reporting and capturing the bad behavior.
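To make “report and capture” a little more concrete, here is a minimal sketch in Python of what a hypothetical in-world abuse report could look like: it attaches a rolling window of recent interactions as evidence. The class names, fields, and buffer size are assumptions made purely for illustration; real platforms define their own report schemas and capture mechanisms.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from collections import deque

# Hypothetical sketch: names and fields are illustrative, not any platform's real API.

@dataclass
class InteractionEvent:
    timestamp: datetime
    actor_id: str
    kind: str          # e.g. "voice", "text", "gesture"
    content_ref: str   # pointer to a stored transcript or audio clip

@dataclass
class AbuseReport:
    reporter_id: str
    reported_id: str
    world_id: str
    reason: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    evidence: list[InteractionEvent] = field(default_factory=list)

class InteractionBuffer:
    """Rolling buffer of recent interactions so a report can 'capture' context."""

    def __init__(self, max_events: int = 200):
        self._events: deque[InteractionEvent] = deque(maxlen=max_events)

    def record(self, event: InteractionEvent) -> None:
        self._events.append(event)

    def snapshot(self) -> list[InteractionEvent]:
        # Copy the current window; attaching it to the report preserves evidence
        # even if the offending user later deletes messages or leaves the world.
        return list(self._events)

def file_report(buffer: InteractionBuffer, reporter_id: str,
                reported_id: str, world_id: str, reason: str) -> AbuseReport:
    return AbuseReport(reporter_id, reported_id, world_id, reason,
                       evidence=buffer.snapshot())
```

The design choice worth noting is the rolling buffer: because harassment in VR is often spoken and ephemeral, a report is far more useful to a human moderator if it arrives with the surrounding context already captured rather than relying on the victim to reconstruct it afterward.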
Rules That Could Be Enforced
Establishing a presence: The first step for an undercover content moderator would be establishing a presence in the metaverse, which means creating an avatar and participating in the community in a variety of ways. To avoid drawing attention to themselves, the moderator would need an avatar that is difficult to identify as belonging to a moderator.
Monitoring conversations: The moderator would be responsible for keeping an eye on user interactions and conversations within the metaverse. This can mean listening in on voice chats or watching text-based discussions. Any content that violates the platform’s standards, such as hate speech, harassment, or sexual content, must be quickly recognized and flagged by the moderator (see the sketch after this list).
Investigating complaints: The moderator would also have to look into reports of rule infractions. This can mean following up on user complaints or carrying out their own investigations in response to questionable behavior. The moderator would have to be able to gather evidence and decide, based on that evidence, whether or not to take action.
Keeping their cover: The moderator would have to be especially careful not to give away their identity. Maintaining a cover identity while handling potentially difficult situations would require some acting ability.
Working closely with other moderators: To keep the metaverse secure and friendly for all users, the moderator would need to coordinate with other moderators and platform administrators. This might mean exchanging information and coordinating actions to deal with particular problems or problematic users.
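The monitoring and investigation duties above amount to a simple flag-then-decide workflow. The sketch below, again in Python, shows one hypothetical way to triage a flag raised by an undercover moderator. The violation categories, the Flag fields, and the decision rule are illustrative assumptions, not any platform’s actual policy engine.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch of a flag -> investigate -> act workflow.
# Categories, thresholds, and escalation rules are illustrative only.

class Violation(Enum):
    HATE_SPEECH = auto()
    HARASSMENT = auto()
    SEXUAL_CONTENT = auto()
    GROOMING = auto()

class Action(Enum):
    NO_ACTION = auto()
    WARN = auto()
    MUTE = auto()
    SUSPEND = auto()
    ESCALATE_TO_ADMIN = auto()

@dataclass
class Flag:
    user_id: str
    violation: Violation
    evidence_refs: list[str]   # transcripts, audio clips, screenshots
    flagged_by: str            # moderator's internal id, never shown in-world

def decide_action(flag: Flag, prior_violations: int) -> Action:
    """Toy decision rule: severity plus history determines the response."""
    if flag.violation is Violation.GROOMING:
        # Child-safety cases go straight to platform administrators rather
        # than being handled by a single undercover moderator.
        return Action.ESCALATE_TO_ADMIN
    if not flag.evidence_refs:
        return Action.NO_ACTION        # investigate further before acting
    if prior_violations >= 2:
        return Action.SUSPEND
    if flag.violation is Violation.HARASSMENT:
        return Action.MUTE
    return Action.WARN
```

The point of separating the flag from the decision is coordination: the undercover moderator only records what they observed and the evidence behind it, while the action taken on the account can be reviewed or escalated by other moderators and administrators, keeping the undercover identity out of the enforcement step.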