AI surveillance and algorithmic management threaten worker autonomy and dignity. It’s time for a rethinking of rights. Part of “AI and the Future of the American Worker,” a series on how artificial intelligence is impacting labor, power, and the meaning of work.
“Someone must have been telling lies about Joseph K., for without having done anything wrong he was arrested one morning.” ~ The Trial (1925), Franz Kafka
There you are, staring at your screen. The cursor freezes, so you nudge it — just to be safe. Did you just look idle? Nothing’s wrong. Still, you act as if it is.
A silent software program is watching constantly. The company calls it “help.” An AI “partner” to make you faster, smarter, more productive — even happier. Yet you sense something has shifted under the gaze of this digital inquisitor. Maybe you don’t even know what’s being watched or measured, or where all that information goes.
This is fast becoming the new normal in offices, where algorithmic eyes never blink. The shift puts real pressure on how we think about rights. Some are already on the books, if not easily enforceable: limits on intrusive surveillance, privacy protections, and due process in evaluation and dismissal. Others are harder to name, even if people feel them every day: the need for a margin of opacity, and the understanding that we have a sense of self that isn’t reducible to data. These rights are the foundation of dignity and meaningful work, vital to a successful workforce, thriving businesses, and a prosperous society.
Employers have always sought control; workers have always fought for autonomy and dignity. AI is the latest chapter, and perhaps the most intense yet — more intimate and pervasive than any monitoring that came before. Where the story goes is unclear, but without swift, deliberate intervention, the arc bends toward the normalization of unanswerable systems that are downright Kafkaesque.
Eyes Without a Face
Today’s AI monitoring systems come in two varieties: tools that track your behavior and systems that make automated decisions about it. Together, they’re often called “bossware,” a term uncomfortably close to the already established “spyware.”
Not content to watch and measure, bossware predicts, nudges, and intervenes: Krista, stay within approved applications during work hours. Dave, take corrective action to improve your engagement score.
Delivering your assignments is no longer enough. Work can feel like you’re forced to play a game on a board you can’t see. The more you type, the more the algorithm learns. It alone truly knows the score.
Many full-time, part-time, and gig workers are now facing what some describe as a “dehumanizing” level of surveillance. And most, especially white-collar workers and contractors with little organized protection or clear agreements, are drifting in a legal gray zone. Even for those lucky enough to have unions, the old safeguards lag behind the technology. Employees are left improvising, trying to navigate systems of spycraft they don’t fully understand.
Increasingly, the watching is driven by “task mining,” software that records how employees interact with their computers and workflows to map how work gets done and where it can be optimized or automated. Your everyday digital behavior becomes a continuous dataset about productivity.
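To make the mechanism concrete, here is a minimal sketch in Python of the kind of event stream a task-mining tool might assemble. The field names, the sampling approach, and the 60-second idle threshold are illustrative assumptions, not any vendor’s actual schema; the point is how easily silence in the log becomes an “idle” flag.

```python
# Illustrative sketch only: the kind of activity log a task-mining tool
# might collect. Field names and the idle threshold are assumptions for
# illustration, not any vendor's actual schema.
import time
from dataclasses import dataclass, asdict

IDLE_THRESHOLD_SECS = 60  # hypothetical cutoff for flagging "idle" time


@dataclass
class ActivityEvent:
    timestamp: float   # when the sample was taken (epoch seconds)
    app_name: str      # foreground application at sampling time
    keystrokes: int    # keystrokes counted since the last sample
    mouse_moves: int   # cursor movements since the last sample


def flag_idle_gaps(events: list[ActivityEvent]) -> list[dict]:
    """Mark any gap between consecutive events that exceeds the threshold.

    This is the core move of "input" surveillance: silence in the event
    stream gets reinterpreted as idleness, whatever the worker was doing.
    """
    flagged = []
    for prev, curr in zip(events, events[1:]):
        gap = curr.timestamp - prev.timestamp
        if gap > IDLE_THRESHOLD_SECS:
            flagged.append({"after": asdict(prev), "idle_seconds": round(gap)})
    return flagged


# Example: two samples five minutes apart look like "idle time" to the model.
events = [
    ActivityEvent(time.time() - 300, "Excel", keystrokes=42, mouse_moves=12),
    ActivityEvent(time.time(), "Excel", keystrokes=3, mouse_moves=1),
]
print(flag_idle_gaps(events))
```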
This is, effectively, Taylorism for the 21st century. Employers pitch it as efficiency, but workers often experience it as exposure and humiliation. For some, it’s not so much the tracking of outputs, like packages delivered or sales closed; that, they could live with. It’s the scrutiny of inputs: idle minutes flagged, bathroom breaks timed, tone and cadence picked apart on calls. Add in opaque policies, the always-on expectation, and constant screenshots that make you afraid to look up a recipe for steamed fish, and whatever was gained in productivity gets lost in a mounting wave of stress.
Consider a Starbucks barista, or any number of office or gig workers, who may find themselves under the gaze of a particular AI called “Aware,” which scans Slack, Teams, and Zoom for engagement, sentiment, or whatever qualifies as “risk behavior,” then pushes its assessments to managers’ dashboards. Workers see only the results, not the logic that produced them. It’s Kafka’s logic, updated for software.
Did the barista consent to this system? Not in any ordinary sense. The algorithm isn’t optional, and ordinary contracts and labor protections didn’t anticipate a supervisor this opaque and embedded.
With more sophisticated tools at their disposal, employers seek to capture not only what you do, but how you feel while doing it. AI systems interpret facial expressions, eye movements, even posture, turning your mood into a metric. What was already creeping into the workplace as biosurveillance has morphed into Emotional Artificial Intelligence, showing up everywhere from call centers to finance offices.
AI programs purport to use data from wearables, text, and computer activity to detect how you feel, but in reality it’s only inference: never what you actually experience. A wide range of employers are already using Emotional AI, even though scholars warn the science behind it is dubious at best. Was that raised eyebrow skepticism or interest? Tools like Microsoft’s Azure Vision may seem to know, but researchers at the University of Western Ontario put it bluntly: “We should not take computer scientists at their word that the paradigms for human emotions they have developed… can produce ground truth about human emotions.”
Part of the reason is that machines are biased. Women, older employees, neurodiverse workers, and people of color are far more likely to be misread and mismeasured. What the algorithm flags as “disengagement” may simply be fatigue, cultural difference, or, god forbid, a moment of quiet reflection. Yet those misreadings can influence performance reviews, promotions, and layoffs.
What, Me Worry?
Even at its best, AI surveillance can backfire, leaving workers with more precarity, worse conditions, unfair pay and scheduling, and more discrimination — all the while pushing inequality deeper. Yet some employers are casting AI surveillance as a wellness tool instead of Big Brother at your desk.
At JPMorgan Chase, for example, junior bankers’ every digital move is now tracked to catch “overwork.” The firm claims it’s all about “awareness” and “well-being.” But even when oversight is framed as helpful, algorithmic monitoring and management can breed trouble. In one experiment, participants tackled the same task under two conditions: watched by a human or by an AI system called “AI Technology Feed.” Even when the feedback was the same, the AI group felt stressed, powerless, and less creative, and they pushed back more against the AI than the human observer. A 2025 study found that ramped-up digital surveillance erodes trust and keeps workers on edge.
Katharina Klug, a business psychology researcher at the University of Bremen, warns that AI-driven workplace surveillance “could have demotivating effects…if it’s done in a way that’s not transparent—you don’t know what data is being collected, or what your employer does with it.” She notes that the result could be a shift in motivation toward extrinsic rewards and a situation in which employees feel pressured and anxious. Economist Nadia Garbellini of the University of Modena in Italy has warned that AI could decrease the quality of jobs, consigning workers to an “ever-decreasing degree of autonomy.”
It’s also a health issue. AI watching generates anticipatory stress: you worry about how every action might be interpreted in the future. This can lead to burnout, weakened imaginative capacity, and even physical symptoms. Alex Rosenblat has written of AI bosses that, in addition to enabling problems like wage theft, sometimes encourage risky behavior, like nudging Uber drivers to keep going when they’re tired.
Even when employees understand the metrics, unintended problems show up. For example, people game what’s being measured, often at the expense of the bigger picture, leading to surface-level compliance and metric shenanigans. In some workplaces, staff are driven to countermeasures like the famous “mouse jigglers” that simulate slight cursor movement so employees can take a smoke break without being flagged. Wells Fargo fired more than a dozen employees after detecting such tactics.
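For the curious, a mouse jiggler is barely a program at all. The sketch below, which assumes the third-party pyautogui package, nudges the cursor one pixel and back every 30 seconds, enough to reset most idle timers; the interval is an arbitrary choice for illustration.

```python
# A toy "mouse jiggler" of the sort described above: every 30 seconds it
# nudges the cursor one pixel and back, enough to reset an idle timer.
# Sketch for illustration only; requires the third-party pyautogui package.
import time

import pyautogui


def jiggle(interval_secs: int = 30) -> None:
    """Nudge the cursor indefinitely; stop with Ctrl-C."""
    while True:
        pyautogui.moveRel(1, 0)   # one pixel right...
        pyautogui.moveRel(-1, 0)  # ...and back, invisibly to a human
        time.sleep(interval_secs)


if __name__ == "__main__":
    jiggle()
```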
And just what happens to employee data once it’s collected? It may not stay inside the workplace. Employers can pass it to vendors, cloud services, and analytics firms, while tools like Slack or Zoom generate streams of behavioral data that move through multiple third parties under broad service agreements.
Workplace policies increasingly allow wide collection and reuse of productivity metrics, communications metadata, and other digital traces, often expanded through updates that offer little clarity on downstream use. In some jurisdictions, laws like the California Consumer Privacy Act (CCPA) offer limited rights to access or opt out of certain data uses, including sales of personal information, though enforcement is uneven and opt-outs are rarely straightforward. Elsewhere, good luck.
Even if a company doesn’t intend to sell your data, it can still slip through its fingers. The surveillance app WorkComposer left more than 21 million employee screenshots exposed in an unsecured Amazon S3 bucket. Sensitive images of employee activity leaked, putting workers at risk of identity theft and other harms. Oops!
AI has also amplified a workplace hazard we might call “shadow evaluation.” Your manager calls it a “training program,” but behind the scenes, AI is reviewing your weaknesses. By the time you finish the session, the algorithm has already made its recommendation about you, leaving the human manager to simply click “approve.”
In the meantime, the system builds a fuller picture of your performance, sharpening judgments about whether the firm can ultimately do without you. INET Research Director Thomas Ferguson notes that industry analysts privately caution that employees trying to familiarize themselves with AI tools may be wise to experiment with the software on their own time rather than share the knowledge with their employers. “U.S. labor practices treat most workers as casual, disposable tools,” he comments, “with predictably disastrous effects on how fast social benefit from AI can spread in many industries.” Ferguson expects that introducing AI will be easier in northern European states with stronger labor protections.
Without such protections, we end up with Kafka’s court at its most efficient: invisible charges processed in real time by invisible bureaucrats.
Bossware can also trim your paycheck through what is known as “surveillance pay.” A report from the Washington Center for Equitable Growth, which examined 500 AI vendors, finds that under AI systems, “different people may be paid different wages for largely the same work, and individual workers cannot predict their incomes over time.” The result is an “uncoupling of hard work and secure, fair pay” — a dynamic that first hit gig workers like ride-hail and food delivery drivers and is now spreading into other industries and jobs.
Finally — and we’ve barely scratched the surface of AI surveillance risks — companies are deploying AI to keep unions at bay. Tools built for the military are patrolling cubicles and warehouse aisles, making sure organizing never gets a foot in the door. Some companies even use AI to stalk social media outside the workplace to find out who has a mind to unionize. Employees at Amazon, and even Boston University, have gotten a taste of AI-powered union-busting. The anti-union deployment of AI has become a sinister feature of what has been called the “Amazonification of the American workforce.”
More insidiously, algorithmic management tends to shift the workplace from a shared political space into an isolation tank where people compete against their own data shadows. The experience becomes one of individual metrics rather than collective conditions, eroding any sense of agency.
Designing for Dignity
In the age of AI, we’re confronted with issues both regulatory and conceptual. There’s a pressing need to spell out the human stakes with more precision and insist that efficiency, however useful, doesn’t define the purpose of work or the full scope of workers’ rights.
The deeper issue is that work is increasingly run through systems that don’t just gather information about people but turn that information into judgments, often without explanation and with very little room, and virtually no legal right, to argue back.
Some governments are starting to respond. In the European Union, the Artificial Intelligence Act treats workplace AI used in hiring, firing, pay, and performance review as “high-risk,” which means companies have to document how these systems work, test them for bias, and keep a human in the loop. The General Data Protection Regulation also gives workers some basic rights to access their data and challenge fully automated decisions that materially affect them.
The United States, by contrast, is still muddling through with a patchwork. Existing labor and privacy laws can sometimes be stretched to cover workplace surveillance or algorithmic scoring, but enforcement is uneven and the rules weren’t made for systems that outsource judgment to models. Most of the legal framework still assumes there’s an actual person somewhere making a decision you can point to. Increasingly, there’s nobody human there.
That gap matters, because law on its own is not going to rebalance this. Workers need unions, bargaining agreements, and organizing capacity that can actually shape how these systems get used in practice. Where unions are robust, they’ve started to push back: demanding transparency around monitoring tools, limiting the use of algorithmic scores in discipline and pay, and insisting on human review when automated systems flag or rank workers.
None of this is abstract. It’s the real difference between having a voice in how you’re evaluated and discovering, after the fact, that you were evaluated at all.
It matters very much who sees the data and who controls it, and it matters what it does to people to feel constantly interpreted by systems that don’t really understand context. Most human work isn’t a series of clean, measurable outputs. It’s messy: bad days, recovery, learning curves, distraction, improvisation, and judgment calls that don’t translate neatly into data points.
Ultimately, there’s a kind of right to indeterminacy at stake: the right not to be pinned down by systems that are always trying to infer what kind of worker you are from whatever trace you leave behind. Nobody expects to be free of measurement altogether—that’s not realistic—but we can and should expect a limit on what those measurements are allowed to mean, and how much authority they get to carry.
Without those limits, work starts to feel less like something you do and more like something you’re constantly being translated into. Once that happens, a worker is reduced to a mere profile that’s continuously updated and eternally scored.
This isn’t to argue that AI doesn’t belong in the workplace. That ship has sailed, and many tools can be useful if applied thoughtfully and transparently, with plenty of worker input. The question is whether they will be tools that support human beings and shared prosperity, or whether we will allow them to become the newest means by which management extracts more control and leaves less room for resistance.
Only one of those paths makes room for dignity.