Through Meta Glasses, Darkly

Jacobin

It’s 2026, and we’re building a surveillance society in earnest. A few years ago, I rewatched Enemy of the State. Released in 1998, just before the calendar rolled over into the new millennium, the film captured the late-century anxieties of a population worried that the government might be watching us. That the state would adopt invasive surveillance laws at any price, even tracking its own citizens with satellites and digital networks, was plausible but perhaps not inevitable. Three years later came the attacks of 9/11, followed shortly by the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act. Suddenly, the movie looked equal parts prescient and quaint.

Two and a half decades into the new century, we don’t need the state to surveil us, though it does. We’re perfectly happy to do it ourselves. Our enthusiasm didn’t stop the National Security Agency from collecting internet and cellular data from around the world without a warrant, and when Edward Snowden broke the story, we at least feigned outrage for a time. But when the dust settled, we continued about our business: flying consumer drones, carrying cell phones with us into the washroom, sharing our data with every app and social media site on Al Gore’s internet, setting up cameras outside the entrances to our homes, and networking anything with a circuit in it. Now, in a bid to push the frontiers of voluntary surveillance, we’re putting cameras on our faces.

Who’s Watching Whom?

When Meta rolled out its “smart” or “AI” glasses, it was betting against a history of ignominious defeat — the same history that saw Google Glass go down in flames, its wearers marked as “glassholes” sporting the preferred accessory of perverts. Wearers could talk to the glasses, issuing voice commands, which, contrary to corporate hopes, didn’t help make the case for the technology.

The glasses were simply uncool, and more than a little creepy. It didn’t help that Google had a hard time explaining what the glasses were for. To everyone who thought about it for a second, the implicit — or perhaps explicit — purpose of the glasses was voyeurism, an obvious and odious use case. Today Meta’s partnership with Ray-Ban offers a more stylish fit with a better battery life, though the core concern remains: What would a better-looking, better-performing set of artificial-intelligence specs mean for privacy? Nothing good.

The core concern around smart glasses remains that the cameras embedded in them will always pose a risk to the privacy of anyone who passes by the wearer, not to mention the wearer themselves. No green or white LED light or other small indicator will solve this problem, and the chance of the technology intersecting with law enforcement and security state surveillance is so high that it ought to be considered less a risk than a fact waiting to come to life — or already coming to life.

There are reports of federal law enforcement agents wearing AI glasses. Meta is now planning to add facial recognition capability to its line. In Kenya, Meta contractors report reviewing footage recorded by the glasses of everything from people on the toilet to couples having sex. The company says the reviews are in service of improving the user’s “experience.” One is inclined to let the juxtaposition speak for itself.

Copyrighting Privacy

Surveillance glasses breach privacy by default in a way that one’s eyes, or even one’s phone, don’t. The glasses record discreetly, making them a bigger threat than phone recording, which may also be a breach of privacy but is much more conspicuous. The recorded material becomes the property of the person recording, who may then circulate it. That potential for circulation is a particularly disturbing risk in an era when social media and internet-forums-gone-rabid have become sites for bullying and harassment, or worse.

These same materials may also be reviewed and otherwise used by technology companies with atrocious track records on user privacy, entities that view data as a commodity to be extracted, bought, sold, traded, and leveraged. It’s one thing if someone chooses to buy and wear Meta’s AI glasses and give the company access to their private world, moment to moment. But those who wear the glasses may now also share the worlds of others, without consent or remuneration.

In Denmark, the government is moving to protect individuals from AI-driven exploitation — particularly deepfakes — by extending copyright to cover the likeness and voice of ordinary people, not just celebrities. Social media and other companies would risk penalties for hosting materials that violate the law, but users would not. The proposed changes recognize that we’ve entered a brave new world in which everyone is subject to manipulation and exploitation at scale, requiring little effort on the part of the exploiter. In sum, the cost of being a scammer or a creep has never been lower.

The Danish copyright amendments are regrettably necessary, and other countries should follow suit, holding social media and other companies to account for enabling and hosting exploitative and invasive materials. Countries might also consider restrictions on the manufacturing and use of smart glasses and similar technologies, such as requirements for much larger LED lights and aural warnings, which would help anyone unable to see that the glasses are recording. Such warnings would further pollute public spaces with visual and audio clutter, but that clutter would be a necessary evil if we’re to have these devices in our midst.

Social Sanctions and Mass Politics

More robust manufacturing regulations would be a start, but there’s much more to do. Governments ought to restrict their officials and law enforcement officers from using these devices. Past uses should be publicly disclosed. On-duty public employees shouldn’t even be allowed to set foot on state property wearing the things. The risks are too high.

For the rest of us, we might collectively decide to establish a norm that places a high, ideally prohibitive, social cost on those who wear smart glasses: we ought to judge, mock, and shun them into making better, pro-social and pro-privacy decisions. One person’s choice to wear smart glasses that record images and videos in public is a shared concern, because the rest of us become a lasting part of their recorded view of the world, ripe for sharing or use, private or public, personal or commercial. However, these sorts of developments are never easy to fight with individual disdain alone. Norms may change, but at the end of the day, we face a collective action problem created by powerful companies that profit by enclosing the commons — and by keeping an eye on us.

We may be a long way from the enclosure of shared pastureland, but the effect on the public good is the same: everyday life, the stuff of our social world, is transformed into data that can be captured, commodified, and sold. This is not a problem we can opt out of individually. Like common lands privatized for profit or industrial pollution socializing its costs, surveillance glasses turn our public life — parks, pubs, cafés, the outside world — into a negative externality. Private gain is built on the degradation of a shared commons. We may try to shame the early adopters. Indeed, we may, in time, spurn smart glasses altogether as a society. But as with our slow reckoning with the age appropriateness of social media, where legislation increasingly requires users to be at least sixteen years old, the answer must be political.

That politics ought to dictate a return to a commons held in common in which everyone retains meaningful control of how they are seen and recorded.