When Technology Sees More Than We Consent To

  • Writer: Carolina Milanesi

Why smarter glasses are exposing a growing gap between innovation, consent, and human decency


There is a striking report from BBC News that should make all of us uncomfortable. It tells the story of women who were unknowingly filmed by men wearing smart glasses, and of how those interactions were then shared widely on social media without consent. For some, the footage drew millions of views and a barrage of harmful, derogatory comments.

It is tempting to frame this as a story about technology gone wrong. Smart glasses are the villain. Cameras are the problem. Innovation has outpaced ethics. But that framing is too easy, and ultimately misleading. The uncomfortable truth is that technology is getting better, while humans do not seem to be.


Privacy Didn’t Disappear With Glasses


Smart glasses feel invasive because they collapse the distance between seeing and recording. A glance becomes documentation. A conversation becomes content. But the privacy erosion they represent did not start with eyewear, and it is not new.

We have been here before.


When smartphones first added cameras and people were not yet used to them, social norms simply did not exist. Phones were pulled out in gyms, on public transport, and in changing rooms. Women were photographed under skirts or caught in vulnerable and embarrassing moments without their knowledge or consent. The technology was new, the etiquette was undefined, and the harm fell disproportionately on women.


Over time, awareness caught up. People learned to recognize when a phone was being used as a camera. Social pressure, platform rules, and regulation slowly followed. The behavior did not disappear, but it became more visible — and therefore more accountable.

At the same time, camera technology itself kept improving.


Today’s smartphone cameras can zoom with astonishing clarity. Computational photography, image stabilization, and AI enhancement make it possible to capture usable images from distances that would once have seemed implausible. From across the street, it is now possible to photograph someone inside their home or office without them ever realizing it. None of this requires smart glasses. It requires capability — and intent.


Smart glasses sit at the intersection of these two dynamics. Like early camera phones, they arrive before social norms are fully formed. Like modern smartphone cameras, they benefit from vastly improved optics and AI. What they add is seamlessness. The camera is always there, always aligned with the wearer’s gaze, and socially invisible.


The glasses do not introduce a new privacy problem so much as accelerate an old one. They remove the cues we have learned to rely on. When someone raises a phone, we have a chance to notice. When a camera is embedded in something as ordinary as eyewear, that moment disappears — and with it, the possibility of informed consent.


What feels unsettling is not just the technology itself, but how efficiently it exploits the gap between what cameras can now do and what our social norms are prepared to handle.


Filming for Clicks, Not Connection


What the BBC article and video footage reveal so clearly is not curiosity or documentation for memory’s sake. It is exploitation. Social encounters are being secretly filmed not to tell stories, but to generate clicks. The interaction itself becomes raw material. The person being filmed becomes a prop.


This is a crucial distinction. Photography has always involved strangers. Street photography, documentary work, and candid images have a long and complicated history. But there has traditionally been an implicit social contract: the photographer is visible, the camera is visible, and the risk of being seen taking the photo creates a kind of accountability.


Smart glasses break that contract. They allow the recorder to remain socially present while ethically absent.


The Dog in the Park Principle


There is a useful contrast that helps clarify where the real line should be.

On social media today, there are creators who have built entire followings by photographing dogs in parks. They approach owners, compliment the dog, ask for permission, and take photos that capture something joyful and spontaneous. The exchange is explicit. The camera is visible. Consent is part of the interaction.

Now imagine the same creator wearing smart glasses.


From a creative perspective, the possibilities are obvious. A first-person view as a dog runs up to greet you. A wet nose filling the frame. A candid moment that feels more natural than anything shot at arm’s length with a phone. The technology could genuinely improve the storytelling.


But the premise should not change.


The interaction should still begin with a question: May I take a picture of your dog? The fact that the camera sits on your face rather than in your hand does not remove the obligation to ask. It does not turn consent into an optional extra. It simply changes the form factor.

This is where smart glasses expose a broader problem. They make it easier to blur, or bypass, the moment where permission is requested. They tempt people to treat recording as a default state rather than a deliberate act. And once that line is crossed, it becomes easy to justify far more problematic behavior.


The issue is not that glasses enable better, more candid photography. The issue is what happens when creators decide that candid means unconsented.


If asking permission remains the starting point, smart glasses can be a creative tool like any other. If it does not, they become a way to quietly strip agency from the people, or owners, on the other side of the lens.


And that distinction has nothing to do with dogs, glasses, or cameras. It has everything to do with whether consent is treated as foundational or inconvenient.


Design Choices and Manufacturer Responsibility


That said, technology companies are not off the hook.


Smart glasses have been deliberately designed to look like regular glasses, unlike earlier models that were unmistakably devices. That design choice was not accidental. It was necessary for adoption. People will only wear technology that blends in.

But blending in has consequences.


Manufacturers often point to small indicator lights as safeguards: a tiny glow that signals recording. In practice, these are inadequate. Most people are not familiar enough with smart glasses to know what to look for, especially in a fast-moving social interaction. A pinprick of light is not meaningful consent.


Interestingly, awareness does grow. Over the summer, while on holiday, I found myself starting to recognize a gesture: a hand moving to the side of the glasses, a subtle tap, a pause. Once you know what to look for, you see it. But that kind of awareness cannot be a prerequisite for privacy.


If recording is meant to be ethical, it must be obvious, not technically discoverable.


What We Risk Losing If We Get This Wrong


What makes this moment so consequential is not just what smart glasses can do when misused, but what they could do when used well.


As AI improves, cameras on glasses have the potential to become powerful tools for accessibility and learning. For people who are blind or visually impaired, a camera that sees what the wearer sees can describe the world in real time, reading signs, identifying objects, helping with navigation. For others, it can support memory, provide step-by-step guidance, or enable contextual learning where the agent understands the environment because it shares the user’s perspective. This is not surveillance; it is assistance. The technology is not watching the user, it is accompanying them.


That first-person perspective is transformative. It grounds AI in lived experience. It turns abstraction into context. It has the potential to make the world more navigable, more inclusive, and more humane.


And that is precisely why it would be such a shame if human behavior were to derail it.

When cameras on glasses become synonymous with secret filming, exploitation, and consent violations, trust collapses. Public backlash grows. Regulation tightens. And the people who stand to benefit most from this technology, those who need it for accessibility, independence, and learning, end up paying the price for behavior that has nothing to do with them.


So yes, manufacturers have a responsibility. Recording must be obvious. Indicators must be unmistakable. Design cannot optimize invisibility at the expense of consent. But this conversation cannot stop there.


Because this is not, and never has been, a technology-only problem.


Smart glasses do not create a lack of decency. They expose it. Cameras do not erase consent. People do. As our tools become more powerful, the ethical burden does not disappear; it increases.


Technology will keep getting better. That is inevitable. The open question is whether we are willing to get better alongside it: more aware, more accountable, and more respectful of the fact that not everything we can capture should become content.


That, more than any device, will determine whether this future is worth seeing.

©2023 by The Heart of Tech