A 21-year-old researcher's avatar was allegedly sexually assaulted in Meta's Horizon Worlds VR game. The incident underscores the need for clear rules and laws governing conduct in VR and AR spaces, especially as those spaces become more sophisticated and life-like.
Footage of the incident was shared with the BBC. People may be tempted to brush it off because it happened in a virtual space, but Vicky Wyatt, campaigns director at corporate accountability group SumOfUs, said, "It still counts, it still has a real impact on users."
Wyatt paraphrased the researcher's reaction: "part of them was really shocked, part of them thought, 'OK, this isn't my real body, this is an avatar' and another part of them thought 'this is really, really important research that I'm doing, I need to capture this footage.'"
The researcher also claims they heard homophobic slurs in Horizon Worlds. Wyatt is hoping Meta will take these matters seriously and look more closely at harm prevention. "Rather than Facebook rushing headlong into building this metaverse, we're saying 'look, you need to stop, look at all the harms that are happening on your platforms right now that you can't even deal with.' Let's not repeat and replicate those in the metaverse. We need a better plan here on how to mitigate online harms in the metaverse."
Meta's president of global affairs and former UK Deputy Prime Minister Nick Clegg recently wrote a blog post detailing Meta's stance on moderation and safety within the metaverse.
He notes that in Horizon Worlds, "a rolling buffer" stores audio data for a short period on users' headsets so that they can submit it along with any complaints they wish to make.
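Meta hasn't published how this buffer is built, but the general idea of keeping only the most recent stretch of audio on-device, with older data discarded automatically, can be sketched roughly like this (all names and parameters here are hypothetical, chosen only for illustration):

```python
from collections import deque

class RollingAudioBuffer:
    """Illustrative sketch of a fixed-duration rolling buffer.

    Hypothetical: this is not Meta's implementation, just the
    general pattern of retaining only the last N seconds of audio
    frames locally, so a user can attach that window to a report.
    """

    def __init__(self, seconds=60, frames_per_second=50):
        # A deque with maxlen silently drops the oldest frame
        # once the buffer is full, so nothing older is retained.
        self.frames = deque(maxlen=seconds * frames_per_second)

    def push(self, frame):
        self.frames.append(frame)

    def snapshot(self):
        # Copy the current window, e.g. to submit with a complaint.
        return list(self.frames)

# Tiny demo: a 2-second window at 3 frames/second keeps 6 frames.
buf = RollingAudioBuffer(seconds=2, frames_per_second=3)
for i in range(10):
    buf.push(i)
print(buf.snapshot())  # only the 6 most recent frames remain
```

The appeal of this design for privacy is that the device never stores more than the fixed window, and nothing leaves the headset unless the user chooses to submit it.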
He also states, "in the US, we wouldn’t hold a bar manager responsible for real-time speech moderation in their bar, as if they should stand over your table, listen intently to your conversation, and silence you if they hear things they don’t like."
It seems that while the company broadly favours enabling people to block and report others who cause harm in online spaces, it doesn't want to police what people say within those spaces. Hopefully, the company realises that hate speech falls squarely under the causing-harm category.