Study of the Week: 'Academia: Now Serving Mass Surveillance with Extra Citations and a Side of Denial'


Remember when we thought academia was the conscience of society? The wise old owl watching over ethics while Silicon Valley played with its shiny new toys? Well, it turns out that owl was actually busy writing code to help cameras track your face across a crowded street.

This week’s study, published in Nature, no less, lays it all bare: computer vision research has been powering surveillance systems for decades, and not as some accidental byproduct. It’s a pipeline. A system. A deeply normalized practice. And it’s coming straight from the ivory tower.

Researchers analyzed more than 19,000 academic papers and 23,000 patents to trace how ideas from top computer vision conferences (CVPR, specifically) end up in real-world tech. The numbers are jaw-dropping: 90% of papers involve human data. 70% focus on human bodies or body parts. And by the 2010s, nearly 80% of the papers with patents were tied to surveillance-enabling tech.

This isn’t just facial recognition anymore. It’s movement tracking, behavior analysis, monitoring “human spaces” like streets and homes. It’s about making the entire public sphere observable and classifiable, and calling it innovation.

And let’s talk about language. The study points out a disturbing trend: the way researchers describe their work often avoids the word “human” altogether. They write about “objects,” “scenes,” “semantic categories.” Then quietly slip in images of people walking, sitting, shopping, being watched. It’s like writing a cookbook and pretending you’ve never heard of chickens.

This isn’t a careless oversight. It’s strategic ambiguity, designed to make it easier to distance the work from its consequences. No messy ethics to handle if you never admit the tech is meant to follow human beings through space and time.

What makes this worse is that it’s not just a few bad apples or misused inventions. The study shows this is the default setting across top institutions, nations, and subfields. When elite universities like Stanford, MIT, Oxford, and Berkeley publish computer vision work, the majority of it ends up in surveillance-enabling patents. This isn’t a bug in the system; it is the system.

So yes, academia helped build the surveillance state. With conferences, publications, grants, and citations, all wrapped in the comforting blanket of “scientific neutrality.” But the truth is, the field didn’t drift toward surveillance; it was always aimed there. The roots of computer vision lie in military and carceral technology. What we’re seeing today is just a slicker, more automated version of that legacy.

We’ve reached a point where research that turns people into trackable data points is celebrated, cited, and commercialized, while questions about privacy, consent, and human rights are dismissed as afterthoughts, or worse, activism.

The study doesn’t pretend there’s a quick fix. The authors suggest conscientious objection, ethical reorientation, even collective resistance. But real change would require shaking the foundation of how we think about science and success. Less chasing patents. More asking: what will this be used for? Who benefits? Who’s harmed?

Until then, next time you see a glowing press release about an AI breakthrough that “understands human movement,” just remember: someone’s body had to be studied for that. And odds are, they never even knew it.

References:

Kalluri, P. R., Agnew, W., Cheng, M., Owens, K., Soldaini, L., & Birhane, A. (2025). Computer-vision research powers surveillance technology. Nature. https://doi.org/10.1038/s41586-025-08972-6
