I recently took part in a session entitled Regulation and Policing of Harm in the Metaverse, part of a Society for Computers and Law and Queen Mary University of London policy forum on the metaverse, alongside Benson Egwuonwu from DAC Beachcroft and Professor Julia Hornle, Chair of Internet Law at the Centre for Commercial Law Studies at Queen Mary.

This is what I said in my introduction.

This is what two recent adverts from Meta said:

  • “In the metaverse farmers will optimize crop yields with real time data”
  • “In the metaverse students will learn astronomy by orbiting Saturn’s rings”

Both end with the message “The metaverse may be virtual but the impact is real”.

This is an important message. The first advert is a rather baffling use of the metaverse; the second could be quite exciting. Both adverts are designed to make us think about the opportunities the metaverse presents.

But as we all know, alongside the opportunities there are always risks. This is true of Artificial Intelligence, a subject I speak on regularly, and it is particularly true of the metaverse.

The metaverse opens up new forms and means of visualisation and communication, but I don’t believe there is yet a proper recognition that the metaverse, in the form of immersive games which use avatars and metaverse chat rooms, can cause harm, or of the potential extent of that harm.

I suspect this could be because, although we now recognise that there are harms in the online world, the virtual world is even further removed from reality, and we see a familiar pattern repeating itself: at first we don’t recognise the potential harms that a new and developing technology such as this presents until we are confronted with the stark consequences.

The example of the tragic death of Molly Russell, and what it revealed about harm on social media, springs to mind.

So in the face of that lack of recognition it is really important to understand the nature of this potential harm, how it can be addressed, and how to prevent what might otherwise become the normalisation of harm in the metaverse.


The Sunday Times, in a piece earlier this year on metaverse harms rather luridly headlined “My journey into the metaverse — already a home to sex predators”, asserted: “….academics, VR experts and children’s charities say it is already a poorly regulated “Wild West” and “a tragedy waiting to happen” with legislation and safeguards woefully behind the technology. It is a place where adults and children, using their real voices, are able to mingle freely and chat, their headsets obscuring their activities from those around them.”

It went on: “Its immersive nature makes children particularly vulnerable, according to the National Society for the Prevention of Cruelty to Children (NSPCC) charity.”

This is supported by the Center for Countering Digital Hate’s investigation last year into Facebook’s VR metaverse, which found children exposed to sexual content, bullying and threats of violence.

And there are other potential and actual harms too, not involving children. Women and girls report being harassed and sexually assaulted; there is also fraudulent activity and racial abuse.

It is clear that, because of the very nature of the metaverse and the impact of its hyper-realistic environment, there are specific and distinct harms the metaverse can cause that are different from those on other online platforms.

These include harms that may as yet be unquantified, which makes regulation difficult. There is insufficient knowledge and understanding of harms such as the potentially addictive impact of the metaverse and the other behavioural and cognitive effects it may have.

Policy and enforcement are made more difficult by the fact that the metaverse is intended to allow real-time conversations. Inadequate data storage of activity in the metaverse could mean a lack of evidence to prove harm and to track perpetrators, but this in turn raises conflicting privacy questions.

So what does the Online Safety Bill do?

It is important that the metaverse is included within the platform responsibilities proposed by the bill. The focus of the bill is on systems and risk assessment relating to published content, but metaverse platforms are about activity happening in real time, and we need to appreciate and deal with this difference. It also shows the importance of having a future-proofing mechanism within the bill, but one that is not reliant on the decision of the Secretary of State for Culture, Media and Sport.

There is the question of whether the metaverse currently falls within the bill’s definition of regulated services. This was raised by my colleagues in the Commons, and ministerial reassurance was given in relation to children, but we have had two ministerial changes since then!

Architects of the Bill such as CarnegieUK are optimistic that the metaverse, and the tech companies who create it, will not escape regulation in the UK because of the way that user-generated content is defined in clause 50 and the reference there to “encountered”.

It is very likely that harms to children in the metaverse on these services will be caught.

As regards adults, however, the OSB now very much focuses on harmful illegal content. Query whether it will, or should, capture analogous crimes within the metaverse: for instance, is ‘virtual rape and sexual assault’ considered criminal in the metaverse?

As regards content outside this, the changes recently announced to the bill, which focus on Terms of Service rather than ‘legal but harmful’ content, create uncertainty.

It seems the idea is to give users the power to exclude other participants who are causing or threatening harm, but how is this practical in the context of the virtual reality of the metaverse?

A better approach might be to regulate clearly to drive Safety by Design. Given the difficulties which will be encountered in policing and enforcement, I believe the emphasis needs to be placed on the design of metaverse platforms, considering at the very outset how platform design contributes to harm or delivers safety.

Furthermore, at present there is no proper independent complaints or redress mechanism, such as an Ombudsman, proposed for any of these platforms. In the view of many this is a gaping hole in the governance of social media, which includes the metaverse.

In a recent report the Center for Countering Digital Hate recorded 100 potential violations of Meta’s policies in 11 hours on Facebook’s VR chat. CCDH researchers found that users, including minors, are exposed to abusive behaviour every seven minutes. Yet the evidence is also that Meta is already unresponsive to reports of abuse. It seems that of those 100 potential violations, only 51 met Facebook’s criteria for reporting offending content, as the platform rejects reports if it cannot match them to a username in its database.

Well, we are expecting the Bill in the Lords in the early New Year. We’ll see what we can do to improve it!