Growing adoption of AI-powered wearable devices is raising new concerns about privacy and data handling. Reports now indicate that footage captured through Ray‑Ban Meta Smart Glasses is sometimes reviewed by external contractors as part of AI training processes.
The smart glasses, developed by Meta in partnership with Ray‑Ban, allow users to capture first-person video and audio through built-in cameras and microphones. The device can also analyse surroundings using Meta’s artificial intelligence systems.
The product has seen strong consumer demand. Meta reportedly sold more than 7 million pairs in 2025, a sharp increase compared with the combined 2 million units sold during 2023 and 2024.
However, the technology has also drawn criticism. Some warn that features such as facial recognition could create privacy risks, particularly amid growing debate over surveillance technologies and the use of AI in law enforcement.
A major concern involves the process used to train AI systems. Footage recorded through the glasses can be sent to external contractors for data labelling. This process requires workers to review and annotate video content so AI models can better understand real-world situations.
Contract workers based in Nairobi told journalists conducting a European investigation that they were asked to review highly personal and sensitive recordings.
“In some videos you can see someone going to the toilet, or getting undressed,” one contractor said. “I don’t think they know, because if they knew they wouldn’t be recording.”
Another data annotator described seeing private moments accidentally captured by the glasses. “I saw a video where a man puts the glasses on the bedside table and leaves the room,” the worker said. “Shortly afterwards his wife comes in and changes her clothes.”
Workers said they also encountered footage showing bank cards, people watching adult content and recordings of intimate moments. One employee explained that refusing to review such content could risk losing their job.
“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” the employee said. “You are not supposed to question it. If you start asking questions, you are gone.”
Meta’s AI terms of service state that the company may review user interactions with its AI systems either automatically or manually. The document also advises users not to share sensitive information with AI tools.
Privacy experts say that once data is uploaded to company servers and used in AI systems, users may lose control over how that information is used in the future.
When asked about the issue, a Meta spokesperson referred to the company’s existing terms of service and privacy policy, stating: “When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.”
The situation also highlights a broader industry practice. Technology companies often rely on data annotators in countries such as Kenya, Colombia and India to review large volumes of content used to train AI systems.
As wearable AI devices become more common, experts say the hidden human and privacy costs behind data labelling are becoming more visible.



