Meta’s AI Glasses — why they’re a disaster for data privacy

The newest Meta glasses are a privacy nightmare. Here’s why.

Meta Ray-Ban Wayfarer © Meta.


In this day and age, we need to be vigilant about our privacy: AI systems increasingly process our data for training, and even supermarket loyalty cards quietly collect our shopping habits.
It feels like only a few weeks ago (although it has been months) that Adobe changed its terms to state that user creations could be used for machine learning, to train its AI, Firefly. The change sparked backlash from many creators, and Adobe later issued a statement clarifying that user data would not be used to train its AI, amending its Terms of Service accordingly.


The most recent product under scrutiny is Meta’s Ray-Ban glasses. They can record video, livestream to Instagram and Facebook, and do a fair amount more. Meta has also added Meta AI, an assistant activated by saying “Hey Meta” while wearing the glasses. It can help with almost anything, and it’s particularly useful for people who are visually impaired or who struggle to use a phone because of a disability.

They seem like normal glasses that can help out. What’s the issue?

One major feature, arguably the biggest, is the ability to ask Meta, “What’s in front of me?” I tested this out, and it works remarkably well: it tells you exactly what’s in front of you, reads text, and can describe the scene in more detail on request.
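Meta hasn’t documented how this works internally, but a voice-triggered scene description almost certainly means the camera frame leaves the device for a cloud model. Here is a minimal sketch of that round trip; the endpoint, payload shape, and field names are all hypothetical, invented purely to show the data flow:

```python
# Hypothetical sketch of a "What's in front of me?" round trip.
# Meta has not published its implementation; the endpoint, payload
# shape, and field names below are invented to show the data flow.
import base64

import requests

VISION_ENDPOINT = "https://vision.example.com/describe"  # hypothetical

def describe_scene(jpeg_frame: bytes) -> str:
    """Upload one camera frame and return the assistant's description."""
    payload = {
        # The whole frame leaves the device -- including any bystanders
        # who happen to be in view and were never asked.
        "image_b64": base64.b64encode(jpeg_frame).decode("ascii"),
        "prompt": "What's in front of me?",
    }
    resp = requests.post(VISION_ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["description"]
```

The privacy-relevant detail is in the payload: the entire frame is uploaded, not just the parts the wearer cares about.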

There’s a massive privacy issue here, though.

Meta is known for making most of its revenue through advertising, which poses a massive problem for people who may not want their photo taken. Under the GDPR, personal data may only be processed on a lawful basis, and the most obvious one here is consent: the “data subject” needs to agree to their data being processed. If I’m sitting in front of you and ask my glasses, “What’s in front of me?”, they’ll take a picture of you and process your data. Facial images are typically treated as biometric data when they’re processed to identify someone, because a face can uniquely identify you. There may be 10 million John Does, but only one of them has your face.
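To make the problem concrete, here is a toy sketch (deliberately oversimplified, and not legal advice) of the consent check that consent-based processing would imply. The Frame fields are invented; the catch is that one of them can never be set to True for a stranger standing in front of the wearer:

```python
# Toy illustration of the GDPR consent problem -- not legal advice.
# The Frame fields are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Frame:
    contains_identifiable_faces: bool
    bystander_consent_obtained: bool  # no mechanism exists to collect this

def may_process(frame: Frame) -> bool:
    """Allow processing only with a lawful basis (here: consent)."""
    if not frame.contains_identifiable_faces:
        return True  # no biometric data at stake
    return frame.bystander_consent_obtained

# A stranger in front of the wearer: identifiable, never asked.
print(may_process(Frame(contains_identifiable_faces=True,
                        bystander_consent_obtained=False)))  # -> False
```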

Meta has also confirmed that the content of these images will be retained to help train its AI products.


The Impacts

It’s important to note that these glasses exist in a grey area of data protection law. Countries are likely to begin cracking down on AI (the EU’s AI Act is one example), but it could take a while before such regulations are fully in force.

Processing data this way essentially erodes privacy in public spaces, and the potential for misuse is massive. Because the glasses are discreet, the risk of abuse only grows. For example, students at Harvard paired the glasses with facial-recognition search tools and managed to pull up personal information, such as home addresses, about strangers they walked past.

This introduces yet another issue: advertising. As mentioned earlier, advertising is Meta’s main source of revenue. Much like with supermarket loyalty cards, the ads you receive could be influenced by the objects and brands you interact with, and other businesses could use the data (if shared) to track behavior, raising even more privacy concerns.
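Nothing below is a documented Meta pipeline, but it’s easy to see why retained frames would be valuable to an advertising business. A hypothetical sketch: run object detection over frames, fold the labels into an interest profile, and the top entries become ad signals, exactly as loyalty-card purchase histories do:

```python
# Hypothetical sketch of image-derived ad targeting -- not a documented
# Meta pipeline. The labels stand in for an object-detection model's output.
from collections import Counter

def update_profile(profile: Counter, frame_labels: list[str]) -> None:
    """Fold one frame's detected objects/brands into an interest profile."""
    profile.update(frame_labels)

profile: Counter = Counter()
for labels in [["espresso machine", "running shoes"],
               ["running shoes", "energy drink"]]:  # two everyday frames
    update_profile(profile, labels)

print(profile.most_common(2))
# [('running shoes', 2), ('espresso machine', 1)] -> ready-made ad signals
```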


So What Can Meta Do to Make This More Privacy-Focused?

Firstly, Meta could delete all data sent to the cloud once an output has been produced. Any image sent to the cloud would be processed and then erased rather than stored. However, this still wouldn’t prevent the images from being processed in the first place.
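A minimal sketch of that “process, answer, then delete” policy might look like the following, with a placeholder standing in for the vision model. The comment marks the design decision that matters: no persistence path exists.

```python
# Sketch of the "process, answer, then delete" policy proposed above.
# The model call is a hypothetical placeholder; the point is that the
# frame lives only in memory for the lifetime of a single request.

def run_vision_model(jpeg_frame: bytes) -> str:
    """Placeholder for a cloud vision model (hypothetical)."""
    return "a person sitting across the table from you"

def handle_describe_request(jpeg_frame: bytes) -> str:
    description = run_vision_model(jpeg_frame)
    # Deliberately no persistence: the frame is never written to disk,
    # logged, or appended to a training set. Dropping the last reference
    # leaves nothing behind for later reuse.
    del jpeg_frame
    return description

print(handle_describe_request(b"<jpeg bytes>"))
```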

In this scenario, it’s impossible to stop the processing of content entirely, since processing is what makes the feature work, and some information will inevitably end up stored somewhere along the way.


I’m not going to say these glasses are bad; they have massive advantages for those who need them, as I mentioned earlier. However, it’s important to realize the effect products like these are having on our privacy.