Meta committed $50 million to researching the practical and ethical implications of the metaverse.
In an internal memo, Meta's CTO, Andrew Bosworth, told staff that developing safe virtual experiences was a critical part of the company's business model. According to the Financial Times, Bosworth wants Meta's virtual worlds to have nearly "Disney levels of safety", despite the risk that third-party developers will create lower-quality content.
According to Bosworth, Meta could use a stronger version of its existing community guidelines to govern spaces such as its Horizon Worlds VR platform. In a blog post, he stressed the need for moderation in Meta's virtual spaces.
VR moderation tools include the ability to block other users and a comprehensive Horizon surveillance system that monitors for inappropriate behaviour. "There are tough societal and technical problems at play, and we grapple with them daily," Bosworth noted.