Citing a paper by Lisanne Bainbridge from the early 1980s, Carl Hendrick describes a paradox of automation.  Those who automate systems tend to view people as the weak link, and thus replace humans with automation wherever possible.  This leaves a problem:

“The designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate. What remains after automation is not a simplified role but an arbitrary residue of the most demanding, most ambiguous, and least supported work in the entire system. The human is not replaced. In other words, the human is paradoxically left with the hardest parts, and given almost no preparation for them”

As he immediately notes:

“Forty years on, Bainbridge’s paper reads less like a historical document than a prophecy. It describes, with an accuracy that might trouble us, exactly what is now happening to knowledge workers across every sector in which AI has taken hold. And it raises, with particular accuracy, a question that education has barely begun to confront.”

Here, I want to pursue the thought that AI changes the architectural constraints on cognitive labor.  I’ll explain what that means after more from Hendrick.

Hendrick describes two recent studies published in Harvard Business Review.  The first showed that AI didn’t make anybody’s work easier.  Instead, it intensified it.  This happens in three ways: AI enables (a) task expansion, (b) blurred boundaries between work and non-work, and (c) more multitasking. Hendrick suggests that this is enabled by changes in the difficulty of doing work: “the scope of work expands not because anyone demands it but because the tools make expansion feel possible, even effortless. What disappears is not effort but the friction that once gave effort its shape.”

This is a phenomenon that’s been familiar to internet governance ever since Lawrence Lessig’s Code book from over 25 years ago, in which he described governance as located at the intersection of four variables.  The first is law: legal systems constrain what I can do.  The second is markets: what I can do is meaningfully regulated by what I can afford.  The third is social norms: if everybody will hate me for doing something, I’m less likely to do it.  The final one, and the relevant one here, is what Lessig calls “code,” or what one might also call “architecture.”  If I want to go outside right now, I can’t just walk forward, because I’ll hit a wall.  Changes in architecture can regulate what people do by making some things harder and other things easier (put on your economic reasoning hat; think about nudges).  This is the gist of Langdon Winner’s famous example of the low bridges over the Long Island parkways, which he argues were built to make it impossible for buses to pass under them.  If buses can’t get down the road, it suddenly becomes harder for the poor to get down the road too, and so fewer poor people visit the rich people’s beaches.  A less sinister example (but more annoying on a daily basis, for sure) is speed bumps: they make drivers slow down not by threat of expensive tickets, but by making it physically harder to drive fast.
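
To put the economic-reasoning point in miniature: here’s a toy sketch (in Python; every number is invented and purely illustrative, not an empirical model) that treats each of Lessig’s four variables as a cost imposed on an action, and lets the rate of the action fall off logistically as the total cost rises.  Add a speed bump to the architecture term and speeding collapses, with no change to law or norms.

```python
import math

def action_rate(law, market, norms, architecture, benefit=5.0):
    """Toy logistic choice model: the share of people who take an action
    falls as the combined cost imposed by Lessig's four modalities rises.
    All units are arbitrary; only the comparison matters."""
    net = benefit - (law + market + norms + architecture)
    return 1 / (1 + math.exp(-net))

# Speeding on a residential street, before and after a speed bump.
before = action_rate(law=1.0, market=0.0, norms=0.5, architecture=0.0)
after = action_rate(law=1.0, market=0.0, norms=0.5, architecture=6.0)
print(f"speeding rate before: {before:.2f}, after: {after:.2f}")
# -> roughly 0.97 before, 0.08 after: same law, same norms, but the
#    physical environment now does the regulating.
```

The same arithmetic runs the plagiarism story below: zero out the architecture, market, and (partially) law terms, and the rate jumps.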

If not Lessig, then at least Winner’s bridge example and Latour’s development of the speed bump idea (which he discusses as a form of normativity, alongside a number of other examples; it’s perhaps my favorite STS paper of all time!) are standard fare in STS.  I think what Hendrick shows us is that this question about architectural regulation helps us characterize what AI is doing: it has changed the architectural constraints on cognitive labor.  To see the point, consider an analogy to music.  The internet removed a constraint on copying music by making copying essentially cost-free (no more fiddly mix tapes).  Once that constraint went away, people started copying a lot more.  It turned out that copyright compliance was secured not so much by norms against copying, or even by law, but by the difficulty of copying at scale.  Once that difficulty went away, the music industry tried to regulate entirely by law, an effort that didn’t work very well.

Consider now student plagiarism.  It is well known that AI is destroying the college essay as we have known it.  Advocates say the essay is a bad proxy for critical thinking and writing skills and we should bid it goodbye (though we should pause: the ability to focus, to develop an argument over time, and to write it down coherently is itself a skill.  It’s a little hard to think of an artifact that requires that other than… checks notes… a coherent paper that develops an argument.  Perhaps the AI advocates don’t think we should have that skill?  Amanda Marcotte on Salon points out that conservatives don’t like critical thinking, and have often used technological innovations to try to rid themselves of it. I digress).  Set that aside for a moment and notice what AI has done.  Cheating used to be less common because it was more difficult.  Students didn’t refrain from offloading their essay-writing onto devices because they didn’t want to, or because they valued their education as opposed to their grade or credential (norms).  They refrained because it was hard (code/architecture), because bespoke plagiarism cost money (market), and because internet plagiarism could be at least semi-reliably detected and penalized (law).  AI removes all three of these difficulties, and so now lots of people cheat (this explains why bluebooks are back in fashion: they restore the difficulty of cheating.  Of course, they also can’t accommodate a process as complex as a 4-6 page paper).  Other technological changes could shift the governance environment and the resulting incentives again, such as getting AI companies to watermark AI-produced content (the EU is making progress on this), which would re-enable law-like sanctions.
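
For the watermarking idea, here is a minimal sketch of how detection could work, loosely following the “green list” statistical watermark proposed in the research literature (Kirchenbauer et al.).  It is not the EU’s scheme or any vendor’s actual implementation; the hash trick, GAMMA, and the flagging threshold are all illustrative assumptions.

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary "greenlisted" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (previous token, token) pair; a watermarking generator
    # would have biased its sampling toward tokens that land "green".
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GAMMA * 256

def watermark_z_score(tokens: list[str]) -> float:
    """z-score for the green-token count: human text should hover near
    zero, watermarked text should score high."""
    n = len(tokens) - 1
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# A grader's tool might flag essays with, say, z > 4 for human review.
essay = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(essay):.2f}")
```

The point of the design is that detection becomes a cheap statistical test rather than a judgment call, which is what would make law-like sanctions enforceable again.  (Real schemes operate over model tokens rather than words, and can be weakened by paraphrasing.)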

Hendrick’s use of “friction” suggests that this architectural lens can help explain what AI is doing more generally.  In making certain kinds of cognitive labor easier, it’s causing a lot more of it to get done, while simultaneously depreciating its value.  That generates a new task for workers: slogging through all the slop, not unlike the impossible job teachers now have of trying to police AI essays. Telling teachers not to assign the kind of work that can be plagiarized is kind of like telling bosses not to assign reports.  You can definitely go that route, and maybe a lot of what white-collar workers produce is bullshit, but you’re talking about huge structural changes in either case.

Hendrick describes the second study:

“Surveying nearly fifteen hundred full-time workers across industries, roles, and seniority levels, the researchers found that intensive oversight of AI tools was the single most mentally taxing form of engagement their participants described. Workers required to monitor AI agents closely reported fourteen percent more mental effort, twelve percent more mental fatigue, and nineteen percent greater information overload than those whose AI engagement was less demanding”

This is because AI workslop makes work a lot easier for the person producing it, but a lot harder for the person who has to deal with it.  Hendrick continues that workslop is:

“AI-generated content that masquerades as good work but lacks the substance to meaningfully advance a given task. The particular cruelty of workslop is that it doesn’t announce its own inadequacy; it arrives fluent, confident, and formatted, offloading the cognitive labour of detecting its failures entirely onto the recipient. In other words, the person who did the least thinking ends up doing the least work, and the person who receives it ends up doing the most.”

That’s intended as a description of the workplace, but it works pretty well for a college classroom, with the primary difference being that the cost of producing workslop is externalized: students don’t have to read each other’s AI workslop reports.  In the short term the cost is displaced onto their teachers and strains the system; in the long term, it devalues their credentials and leaves them unprepared for work.  These are negative externalities, which are notoriously hard to deal with, as the climate crisis shows us.

The underlying pedagogical problem is, of course, that education requires the student’s own work, and so its goal is actually different from industry’s.  Careless cognitive offloading to AI hurts education for that reason.  What’s interesting is that the student and the low-end knowledge worker end up with the same incentive structure: produce more, and more cheaply.  In both cases, this incentive structure and the resulting AI slop risk breaking the entire system.  AI doesn’t so much reduce cognitive labor as move it around, often piling huge amounts of it on those who have to manage the AI output, to the detriment of their actual performance.

The workplace analog is the intensification noted in the first study.  Again, Hendrick:

“The natural tendency of AI-assisted work is not contraction but intensification, and the costs of that intensification accumulate quietly, in the degradation of judgement, the erosion of skill, the slow compression of recovery time, and the rising frequency of errors made by people who are producing more than ever and thinking less carefully than they know”

Hendrick talks about better ways to use AI in education, in particular in pursuit of greater equity. I don’t want to pursue the education and equity point here (Hendrick has lots of interesting thoughts), except to note that AI right now is making it hard for teachers (at least, those of us who teach things like critical thinking, broadly construed) to do their jobs.  It makes it very easy for students to cognitively offload, which (per Lessig) means they’ll do it a lot more.  That overproduction of AI slop not only prevents them from being educated; it also means that their teachers have to work harder, not smarter.  Everybody loses.

Hendrick suggests that a final irony of automation is that it leaves automated systems extremely fragile.  They work really well – until they don’t.  As he puts it, citing Bainbridge, “the longer the machine runs without incident, the more degraded the human backup; and yet it is in those rare, high-stakes moments of failure that the human is most needed and least prepared.” It is very hard not to think (part one, part two) about automated vehicles in this context, and the idea that they will pass off control to drivers in an emergency.  People can’t actually do that, and so they end up holding the bag. Here, we risk producing workers who are even less up to the task.
