Smart glasses with AI capabilities have evolved from futuristic concept to everyday reality. The market exploded in 2024, with global smart glasses shipments surging 210% year-over-year, driven primarily by Meta’s Ray-Ban smart glasses. From the consumer-focused Meta Ray-Ban Display (featuring a built-in heads-up display announced in September 2025) to Meta’s partnership with Oakley for athletic glasses, enterprise solutions like RealWear and Vuzix for industrial use, and developer-focused options like Brilliant Labs’ Frame glasses, these devices promise to revolutionize how we interact with the world.
But with innovation comes risk. Modern AI glasses can record video and audio, process conversations in real-time with AI assistants, perform visual analysis of everything you see, generate meeting summaries, create searchable transcripts, and transmit data to cloud servers—often without obvious visual indicators. For businesses deploying these technologies and individuals using them in professional settings, the compliance landscape is treacherous.
In Part 1 of this series, we address biometric data collection.
The Risk
AI glasses increasingly incorporate biometric data collection capabilities that trigger strict privacy regulations. This includes facial recognition through camera feeds, voiceprint capture through AI transcription (see upcoming Part 2 in this series for AI-specific risks), eye tracking and gaze analysis, and even the processing of images that could be used to identify individuals. Under laws like the California Consumer Privacy Act (CCPA), the Illinois Biometric Information Privacy Act (BIPA), and the EU’s General Data Protection Regulation (GDPR), biometric data receives heightened protection.
The 2024 Charlotte Tilbury settlement established that virtual try-on features using facial geometry may constitute biometric data collection under BIPA, potentially requiring separate notifications and annual consent reaffirmation. This and other precedents extend directly to AI glasses that process visual and audio data that can constitute biometric information.
Relevant Use Cases
- Retail employees using AI glasses that analyze customer faces or body language for personalized service recommendations
- Security personnel deploying glasses with facial recognition capabilities for identification
- Healthcare providers using glasses that process patient images, potentially capturing biometric identifiers
- Any workplace use where AI processes images or voices of employees, customers, or the public
- Industrial workers whose AI glasses capture and analyze faces or voices of colleagues during recorded training sessions
Why It Matters
BIPA provides for statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation, along with attorneys’ fees. Following the Illinois Supreme Court’s 2023 Cothron decision, each scan or transmission could constitute a separate violation—though a 2024 amendment limited this to one violation per person per collection method. The $51.75 million Clearview AI settlement in 2025 demonstrates the scale of exposure: with biometric data from millions of individuals, companies face bankruptcy-level liability.
While BIPA may be the best known of the biometric laws in the United States, it is certainly not the only one. Measures to regulate the collection, use, and disclosure of biometric information exist in states such as California, Colorado, Texas, and Washington, as well as in several cities, including New York City and Portland, Oregon.
For a summary of these requirements, see our Biometrics white paper.
Practical Compliance Considerations
The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:
- Address Applicable Notice, Consent, and Policy Requirements: Organizations may need to create detailed, written policies governing when, where, and how AI glasses may be used. Address recording features, AI processing, and data transmission, and specify prohibited uses. Include clear guidance on consumer versus enterprise devices, and account for applicable notice, consent, and record retention requirements.
- Conduct Privacy Impact Assessments: Before deploying AI glasses, evaluate privacy risks specific to your industry, geography, and use cases. Consider biometric data collection, workplace surveillance, third-party AI processing, and cross-border data transfers. Note that such risk assessments may themselves be legally required; see here and here.
- Implement Technical Controls: Use device management solutions to control which features can be activated in which locations. Consider geofencing to automatically disable recording in sensitive areas like bathrooms, break rooms, confidential meeting spaces, and healthcare facilities.
- Vet Vendors and AI Services: Understand where data goes, who processes it, how long it’s retained, what security controls exist, and whether vendors will sign appropriate agreements (BAAs for HIPAA, DPAs for GDPR, etc.). Negotiate contracts that protect your organization and comply with your obligations.
- Train Rigorously: Ensure all users understand the legal implications of AI glasses, including consent requirements, prohibited uses, data handling obligations, and discovery implications. Training should be role-specific and regularly updated.
- Monitor Regulatory Developments: Regulation of biometrics, and of the AI tools that leverage biometric information for additional capabilities, is evolving rapidly. The EU AI Act took effect in 2024, California expanded its AI regulations in 2024-2025, and federal AI legislation is under consideration. State workplace surveillance laws are proliferating. Stay current with legal developments.
- Establish Clear Lines of Responsibility: Designate who is responsible for AI glasses compliance, including legal review, privacy assessment, security controls, HR considerations, policy enforcement, and incident response.
- Consult Legal Counsel: Given the complexity and variability of the regulatory environment, work with attorneys familiar with privacy, employment, biometric, and AI regulations before rolling out these wearables.
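For technical teams tasked with the geofencing control mentioned above, the logic is straightforward to prototype. The sketch below is a minimal illustration, assuming the device exposes GPS coordinates to a management agent; the zone list, coordinates, radii, and function names are hypothetical, and a production deployment would rely on the device platform's own geofencing and MDM APIs.

```python
import math

# Hypothetical restricted zones: (latitude, longitude, radius in meters).
# Coordinates and radii below are illustrative only.
RESTRICTED_ZONES = [
    (41.8781, -87.6298, 50.0),   # e.g., a break room
    (41.8790, -87.6310, 100.0),  # e.g., a healthcare facility
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters (haversine)."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recording_allowed(lat, lon):
    """Return False if the device is inside any restricted zone."""
    return all(
        distance_m(lat, lon, zlat, zlon) > radius
        for zlat, zlon, radius in RESTRICTED_ZONES
    )

# Inside the first zone: recording should be blocked.
print(recording_allowed(41.8781, -87.6298))  # False
# Well away from both zones: recording permitted.
print(recording_allowed(41.9000, -87.7000))  # True
```

A policy engine like this should fail closed: if location data is unavailable or stale, the safer default is to disable recording rather than permit it.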
Conclusion
AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.
Organizations that fail to address these compliance concerns face not just regulatory penalties, but class action litigation (BIPA damages alone can reach millions), reputational harm, loss of customer trust, and the erosion of employee confidence.
The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.
