Ban Ray
* Non-Swedish versions of this text are AI-translated but proofread by humans.
In 2025, Meta sold over seven million pairs of camera-equipped glasses that look like regular Ray-Bans.[1] The person wearing them looks like anyone else. But that person is now a product, as is everyone they interact with.
1. You don't know what happens to the data
A joint investigation by Svenska Dagbladet and Göteborgs-Posten revealed that footage recorded by Meta's Ray-Ban glasses is sent to Sama, a subcontractor in Nairobi, Kenya.[2] Workers describe reviewing videos of people undressing, using the toilet, having sex, and entering credit card details. "We see everything," as one worker put it.[3]
The AI feature that makes this possible cannot be turned off. If you use the voice assistant, your video and audio are processed on Meta's servers, where the material may be forwarded for human review.[4] It's in the terms of service, the same terms a Sama worker said most users never read.
Optical store employees have said that "everything stays locally in the app," but when journalists analysed the network traffic they found constant communication with Meta's servers.[5]
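The kind of check the journalists ran can be reproduced in outline: capture the phone's connections on a test network (for example with tcpdump), then match the destination addresses against the blocks Meta announces under its network, AS32934. A minimal sketch, using an illustrative subset of those blocks and a hypothetical list of captured IPs:

```python
import ipaddress

# CIDR blocks announced by Meta's network (AS32934).
# Illustrative subset only; the full list is public BGP/WHOIS data.
META_BLOCKS = [ipaddress.ip_network(c) for c in (
    "31.13.24.0/21",
    "157.240.0.0/16",
    "179.60.192.0/22",
)]

def is_meta(ip: str) -> bool:
    """True if the address falls inside a known Meta-announced block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in block for block in META_BLOCKS)

# Destination IPs pulled from a packet capture (hypothetical sample).
captured = ["157.240.201.35", "8.8.8.8", "31.13.29.1"]

# Which connections went to Meta's network?
hits = [ip for ip in captured if is_meta(ip)]
print(hits)
```

Any address in `hits` went to Meta's servers, regardless of what the app's marketing or a store employee claims about data staying "local".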
Meta has been repeatedly asked where the footage goes, whether recordings from countries like Sweden are reviewed by workers abroad, what safeguards exist, and how long recordings are stored. In response, they point to their privacy policy.[6]
The product page still says, in bold: "Designed for privacy, controlled by you."
2. Private spaces no longer exist
When someone wearing these glasses walks into your kitchen, your bedroom, your doctor's office, your place of worship, a protest: every person in range becomes raw material for AI training. The people being recorded never consented, and have no way of knowing whether their faces are being used in a dataset on the other side of the world.
A Sama worker described how a man placed his glasses on the bedside table. His wife came in and undressed. She had no idea.[7]
Former Meta employees have stated that the anonymisation protocols fail under certain lighting conditions, meaning faces remain identifiable despite claimed privacy protections.[8] Data protection lawyer Kleanthi Sardeli summarised it: once the material has been fed into the models, the user effectively loses control over how it is used.[9]
Your living room. Your bathroom. Your children's faces. Your friends' faces. None of them agreed to this.
3. They are using assistive devices as a Trojan horse
Meta markets the glasses as assistive technology. In marketing materials, they emphasise hands-free convenience for people with low vision, live translation, turn-by-turn navigation.[10] Internal documents show they planned to launch the facial recognition feature "Name Tag" at a conference for the blind, before rolling it out to the general public.[11]
This is the plan: wrap a surveillance infrastructure in a genuine accessibility feature, launch it through people with disabilities to build goodwill, then deploy it everywhere. Every photo you tagged on Facebook since 2010, every public Instagram post, every time the app suggested a friend's name over a face and you confirmed it: you were tagging training data for a facial recognition model. Without thinking about it, you were training Meta's AI models and working for them for free.
GDPR requires a legal basis for processing personal data. Kenya has no EU adequacy decision.[12] Italian MEPs have written to Ireland's Data Protection Commission (which has primary jurisdiction over Meta in the EU) asking under what legal basis this processing occurs.[13] No one has received a clear answer. That in itself is an answer. The tech companies hope you don't think about this.
4. Meta cannot be trusted with this
In October 2024, two Harvard students showed what the glasses make possible. Using ordinary Ray-Ban Meta glasses connected to PimEyes, a commercial facial recognition engine, they identified strangers on the Boston subway and retrieved names, home addresses, phone numbers, and social security numbers in seconds. They approached a woman on the street, said they had met at a Cambridge event, and she believed them.[27] They did this using publicly available systems, but these are capabilities Meta has always had access to, and with Name Tag this becomes a built-in feature.
Meta has been developing Name Tag internally since 2025: glasses that identify unknown people in real time.[14] An internal memo laid out the plan to launch the feature "during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns."[15]
Read that again. They planned to wait until civil society was distracted. This is not a company making a mistake. This is a company making a calculation. Why do you think they want to exploit the turbulent times we live in to launch this?
They are also building "super sensing" glasses that run cameras and sensors continuously to record the user's entire day.[28] In April 2025, Meta quietly updated the privacy policy for the glasses, expanding its right to use all captured photos, videos, and audio for AI training, with no clear notification to existing owners.[29] The product page still says "designed for privacy, controlled by you." A US class action lawsuit filed in March 2026 calls it what it is: fraud.[20]
The EFF published a warning in March 2026 urging consumers to think twice before buying Meta's Ray-Ban glasses.[16] EPIC wrote to the FTC and state enforcers demanding that Name Tag be blocked before it reaches the market.[17] Sweden's Civil Minister Erik Slottner has demanded answers, warning that the combination of location data and intimate images creates serious safety risks.[18] The UK's ICO has formally written to Meta requesting compliance information.[19] US Senators have demanded answers on how bystanders would avoid being scanned.[21]
Meta paid $650 million to settle an Illinois class action over collecting facial geometry without consent.[22] In 2021 they shut down their facial recognition system on Facebook, claiming they wanted to "find the right balance." The photos and the human-verified labels attached to them, they kept. So did DeepFace, the model that can regenerate face templates from those photos.[30] Now they want cameras on millions of faces, with the same database underneath.
The fines that Meta and other tech companies have received so far for their privacy-violating practices are a drop in the ocean compared to their revenues, and everyone involved knows it. This is not a company that made a mistake and learned from it. This is a company that factors the fines into its business model and keeps going, because they know that in the long run they can earn many times over by continuing to break laws and act against the public interest.
Zuckerberg treats your face as a product. Your kitchen is a data source. Your partner undressing is training material for an AI and business model they will never have control over, reviewed by a worker you will never meet, for a product no one consented to. Zuckerberg should be stripped of the right to sell consumer products. He won't be. So we do what we can: we make the glasses unwelcome everywhere they appear. Inform or shame those who wear them. Ban them from your home. Sabotage the systems where you can, destroy them if you can get away with it.
5. It's not just Meta
Meta is not alone. Spotify is already building smart glasses support into its app.[23][24] More companies on the platform means more data flowing through more lenses.
Unbranded camera glasses are available on AliExpress for under €30.[25] No recording indicator. No privacy policy. No terms of service. No oversight. No Zuckerberg to blame. Just a lens, a chip, and your face in someone's pocket and on the tech companies' servers, forever. When the technology is cheap enough to be disposable, the policy debate becomes irrelevant. This is normalisation by saturation, and it is the goal of every tech company that wants to outrun legislation: make something mundane enough and regulating it becomes politically unpopular. It's cynical, but it has worked before, and the tech companies are betting it will work again.
Apple, Google, and Samsung are all developing competing smart glasses for 2026 and beyond.[26] This is not one company's product. It is an entire industry converging on the idea that your face is a surface to capture, index, and monetise. The question is not whether this technology will spread. It already has. The question is whether you accept it.
Ban them. From your spaces, your events, your workplaces. Demand policies. Make it socially unacceptable to point the tech companies' cameras at your face.
What you can do
Ask your workplace, your gym, your school, your bar to adopt a policy against camera glasses. It only takes one person asking for it to become a conversation. Print the sticker and put it on the door. If your municipality, company, or venue has a policy for CCTV, it should have a policy for this too. Write to your MEP, your MP, or your local council and ask what they are doing about facial recognition and personal data storage in consumer products. Delete or learn to restrict your Meta accounts, and be alert to how companies try to sneak this technology into your everyday life. They are not your friends, they are not on your side. But there are so many more of us, and we can stop this abuse.
Share this page.
Download the sticker
No printer? Order stickers here. Price covers printing and shipping.
CC BY-SA 4.0 — Free to share and adapt with attribution