How much privacy is there in the age of AI and the metaverse?

By Jessica Werb

Vulnerable people like children or those who have an addiction could be at risk in an unregulated metaverse. Photo: iStock/UBC

UBC experts say we don’t yet have a handle on how much artificial intelligence should be allowed to influence consumer behaviour

A ripple of concern reverberated among privacy advocates earlier this year when the Financial Times reported that Meta (the company formerly known as Facebook) had accumulated a series of patents. Depending on your perspective, the patents suggest one of two scenarios. One, Meta is creating an immersive virtual world that is as hyper-realistic, interactive and engaging as possible. Or two, it’s building a dystopian future in which consumers’ biometrics are analyzed to flood them with hyper-targeted advertising and sponsored content.

One particular patent detailed a system in a headset that would adapt media content based on users’ facial expressions and eye movements; another looked at how to present ads to users in augmented reality based on a range of factors including age, gender and their interactions with social media.

How Mark Zuckerberg ultimately wields these patents remains to be seen. But, according to UBC experts Dr. Robert Xiao, assistant professor in the Department of Computer Science, and Jon Festinger, adjunct professor in the Peter A. Allard School of Law, it won’t be up to the tech billionaire alone to decide. Rather, it will be the public, and the policies enacted by the governments they elect, that determine how much consumer privacy exists in the future of our virtual, online, interconnected world known as the “metaverse”.

Predictive algorithms may cross privacy lines when people search for things like health-related issues online. Photo: iStock

Yes, you are being watched

Just about everyone’s had the unnerving experience of searching for something like shoes online, only to find themselves besieged by related ads popping up throughout their online experience — from sponsored content on social media feeds to Google ads peppered throughout the websites they visit.

For some, this may seem like an acceptable trade: free content in exchange for ads. But, as Dr. Xiao points out, this mechanism can start to feel uncomfortable in certain situations. “Shoes might be something that’s fairly benign, but what happens when I search for something health-care-related?” he observes. “I’ll probably get a half-dozen advertisements saying, ‘Hey, do you want to solve your health-related problem?’ Which is pretty creepy, because I don’t want random companies knowing that I’m searching for things in that regard.”

Take, for example, the famous case of Target using predictive algorithms—or AI—to pinpoint which customers were pregnant for marketing purposes, and ending up knowing a teen was pregnant before she had shared the news with her family. “There’s a big, big, big, big chasm between respecting your attention,” says Dr. Xiao, “and going off into the uncanny valley of creepiness.”

The European Commission has proposed regulating machine learning, including a ban on AI systems using subliminal techniques that could cause physical or psychological harm. Photo: iStock

Who’s manipulating whom?

Festinger, who specializes in justice and evolving technologies, points out that as AI becomes more attuned and adept at predicting and reacting to consumer behaviours, the lines between decider and manipulator can become blurred.

“If you can imagine an AI that knows who you are and constantly knows what you do, as it becomes better at understanding your triggers and reactions, the issue becomes, ‘Who is making the decision here? Do I really have freedom of thought? I’ve been compelled to buy this thing that I don’t really want, and I’m actually acting against my own interests.’”


This becomes an even greater concern, Festinger points out, in the case of vulnerable people. “Think about children, or people who might have an addiction to gambling, or a food addiction. The better the AI gets, the more it’s programmed, the more it gets to know you, the more it’ll know what your buttons are.” Festinger, an avid gamer himself, even shares his own experience of downloading a mobile game that, though initially free, had him hooked on buying in-game features.

“I’ve never been addicted to anything before that I know of,” he says. “And I found myself in this pretty horrible state, and I had to remove the game two or three times. Games have an addictive quality, and when you add a sales mechanism that takes advantage of that addictive quality, and then add more mechanisms that we’re not aware of, based on our data and based on our psychology, that is potentially dangerous.”

The better AI gets, the more it knows what buttons to push to get you to buy things you might not necessarily need. Photo: iStock

Where do we go from here?

Even with all his concerns, Festinger says he remains an optimist about the future of privacy in the metaverse, likening our current technological transition to other moments of technological upheaval in human history. “We’re going through a very severe period of change,” he says. “We’ve seen this with the printing press, we’ve seen this with the invention of writing. All of these things have created cataclysmic change.”

While he acknowledges that “our privacy is without any question breached on any traditional standard,” he also maintains that “we know what the building blocks are to put it back together. I don’t see the future as bleak. I just think we’re having a really hard time because we don’t have a handle on the technology yet.”

Just as the advertising industry did decades ago around subliminal messaging in ads, Festinger says industries operating in the metaverse must develop and maintain ethical guidelines.

We’re in a period of technological transition and regulators are realizing that they need to protect those who are vulnerable. Photo: iStock

“There’s a history of self-regulation for advertising, and countries regulate advertising in politics, as examples,” Festinger notes.

Dr. Xiao also points to the role of governmental regulations, like the EU’s fairly stringent General Data Protection Regulation (GDPR), adopted in 2016 and aimed at preserving individual privacy and control of personal data. “The EU values very much that their citizens should have access to data privacy online, the same way they have privacy in the real world,” he says. “A lot of companies are going to tell you that if they want to operate in Europe, the GDPR makes life difficult for them. Well, that’s by design.”

More recently, the European Commission turned its attention toward machine learning in its 2021 proposal for a legal framework for AI, which includes a ban on AI systems that use subliminal techniques that could cause physical or psychological harm, or that exploit the vulnerabilities of specific groups, among other measures.

Canada is also moving along the same lines. The Digital Charter Implementation Act of 2020, not yet passed into law, would give individual consumers more control over their personal data, and also includes regulations around “algorithmic transparency”, including for AI systems.

Dr. Xiao, who conducts research in developing tools for VR and AR, says he welcomes the coming regulations. “I think that privacy is really going to be a key obstacle in adoption,” he says. “If we can align the incentives of privacy, it will mean we actually can deploy these products. The future development of these kinds of technologies is going to be predicated on whether they can get privacy right. Because it’s going to be too creepy otherwise, and it’s not going to be a future that anyone really wants.”

Learn more about the metaverse: what it is and why it matters.


Jessica Werb is a freelance writer for UBC Brand and Marketing. This article was published on June 20, 2022. Feel free to republish the text of this article, but please follow our guidelines for attribution and seek any necessary permissions before doing so. Please note that images are not included in this blanket licence.
