• By Gordon Hull

    “Factory work exhausts the nervous system to the uttermost; at the same time, it does away with the many-sided play of the muscles, and confiscates every atom of freedom, both in bodily and in intellectual activity” (Marx, Capital I [Penguin Ed.], 548).

    A recent piece by Josh Dzieza in The Verge about the working conditions of those subject to discipline by AI in the workplace is wrenching.  It tells the story of Amazon workers, call center employees, and others whose work lives have become an exhausting monotony of ever-intensifying robot-like demands.  The “AI boss” becomes ever more demanding, filling every moment at work with the requirement to be more productive and with absurd metrics for deciding when its demands are met.  Amazon workers report that they are basically ground down by the unsustainable pace of productivity; they are either fired or forced to quit, often with injuries.  Call center employees are subject to Orwellian monitoring by a machine that purports (often laughably inaccurately) to monitor and tweak their affect when speaking to customers.  Folks working from home find it impossible even to go to the bathroom, because they are forced to sit in front of computers that take frequent pictures of them and report on their productivity, as measured by lines of code or keystrokes per unit time.

    All of this optimized misery reminds us of something important: AI systems are being deployed under capitalism.  A worker subject to an “AI Boss” is not some sort of new-fangled social form; they are fundamentally a worker subject to capitalist exploitation.  What AI has done is enable capitalism to be more intensively exploitative.  The stories in the article echo the stories Marx reports in Capital in his discussion of machinery, as does a lot of the logic.  This suggests that two popular narratives about AI need to be displaced.  The first is the narrative that AI will someday swoop in and take away everyone’s jobs.  As Dzieza notes, this narrative deflects attention from the pressing reality of deteriorating workplace conditions under AI supervision.  Second, Marx’s work on machines and technology is frequently read (as by the autonomists) through the Grundrisse “Fragment on Machines,” and in a way that emphasizes affective labor and cognitive capital.  The reality of AI today serves as an important reminder that the treatment in Capital is still very important.  Let me explain.

    (more…)

  • You might have heard that minorities are hesitant about getting a Covid vaccine?  Well, about that.  According to polling reported by Axios, the group least likely to want a vaccine is White Republicans… to the point that "White Americans are now less likely than Black and Latino Americans to say they plan to get the vaccine."


  • This time Google has fired Margaret Mitchell, one of the other authors of the fabulous "Stochastic Parrots" paper on natural language processing (that's my post on it; the paper is here).  This was obviously coming, since they'd suspended her email account weeks ago.  In case you haven't read the paper (and you really should!), it's worth mentioning that it does not mention Google, and its goal is to point to strategies for making AI better, i.e., less likely to entrench racism, sexism, and so forth while destroying the planet in climate catastrophe.  It turns out that those are related, because bigger and bigger datasets are both carbon-intensive and likely to pick up on and magnify hegemonic and toxic speech.  Attention to efficiency and curation moves the needle on both.

    No commitment to even looking like they care about ethical AI.

  • By Gordon Hull

    Not long ago, Google summarily dumped Timnit Gebru, one of its lead AI researchers and one of the few Black women working in AI.  Her coauthor Emily Bender has now posted the paper (to be presented this spring) that apparently caused all the trouble.  It should be required reading for anybody who cares about the details of how AI and data systems can perpetuate racism, or who cares more generally about the social implications of brute-force approaches to AI.  Bender and Gebru take up a common approach to natural-language processing (NLP), which involves an AI system learning how to anticipate what speech is likely to follow a given unit of speech.  If, for example, I say “Hello, how are,” the system learns by studying a dataset of existing phrases and text snippets that the next word is likely to be “you,” but almost certainly will not be “ice cream.”  How good the computer gets at this game is going to be substantially determined by the quantity and quality of its training data, i.e., the text that it examines.
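    The next-word guessing game can be sketched with a toy bigram model, a minimal stand-in for the large language models Bender and Gebru discuss (the four-sentence corpus here is invented for illustration; real systems train on billions of words scraped from the web):

```python
from collections import Counter, defaultdict

# Tiny invented training corpus, standing in for web-scale text.
corpus = [
    "hello how are you",
    "hello how are you doing",
    "how are you today",
    "hello there how are you",
]

# For each word, count which words follow it (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("are"))   # "you": every training sentence has "you" after "are"
print(predict_next("ice"))   # None: "ice" never appeared in training
```

    The quality-and-quantity point falls out immediately: the model can only ever echo what its training text contains, so whatever is over-represented in the data is over-represented in the predictions.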

    Bender and Gebru outline the social costs of one approach to this problem, which is basically brute force.  As processing power increases, it’s possible to train computers with larger and larger datasets, and the use of larger datasets reliably improves system performance.  But should we be doing that?  Bender and Gebru detail several kinds of problems.  The first is environmental justice: all that processing power uses a lot of energy.  Although some of it may come from carbon-neutral sources, the net climate cost is significant.  Worse, the NLP systems being produced don’t benefit the people that are going to suffer the most from climate change.  As they memorably put it:

    “Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100) or the 800,000 people in Sudan affected by drastic floods, pay the environmental price of training and deploying ever larger English LMs, when similar large-scale models aren’t being produced for Dhivehi or Sudanese Arabic?”

    (more…)

  • By Gordon Hull

    As of this writing, approximately 421,000 people in the United States have officially died of Covid-19.  We also know that this number is lower than the number who have actually died of Covid, for a variety of reasons.  For example, early in the pandemic there was nowhere near enough testing, and so many people who died of Covid-19 never received an official diagnosis.  For that sort of reason, measuring “excess deaths” during a given period is one way to try to get a handle on how many people actually died of Covid-19.  If n more people die during a given period this year than died in the same period in a typical year, that number can be a useful indicator of how many deaths can be attributed to the pandemic.
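    The arithmetic behind excess deaths is simple, and a minimal sketch makes the logic concrete (all counts below are invented for illustration; in practice the baseline is modeled from several years of historical data, not a single prior year):

```python
# Invented weekly death counts for illustration.
expected = [1000, 1005, 998, 1010]   # baseline: what a typical year would show
observed = [1200, 1350, 1300, 1250]  # deaths actually recorded this year

# Excess deaths: observed minus expected, summed over the period.
excess = sum(o - e for o, e in zip(observed, expected))
print(excess)  # 1087 excess deaths over these four weeks
```

    The whole interpretive difficulty lives in the "expected" column: choosing and modeling the baseline is where the statistics, and the politics, come in.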

    Some of these will of course be indirect: people who didn’t go to the ER for a heart attack because they were afraid of contracting Covid, for example.  The isolation of Covid may be driving an increase in opioid overdose deaths.  Others will be direct, deaths where Covid was the underlying cause.  STAT News is now reporting on the results of a recent study that dives into this question, concluding that the overall direct Covid death toll is 31% higher than official figures.  That’s the top line, and it means that the national number of Covid deaths is approaching 550,000.  What should surprise no one (but is surprising, because deaths are so much more reliable for understanding infection rates than reported cases) is that the mortality data is deeply shaped by politics: excess deaths are a window into the sociology of the pandemic.

    (more…)

  • Now up on SSRN.  This paper uses Foucault's works on disciplinary power to develop a typology for understanding different models of Internet governance.  Here is the abstract:

    Following Foucault’s remarks on the importance of architecture to disciplinary power, this paper offers a typology of power relations expressed in different models of Internet governance. Infrastructure governance understands the Internet as a common pool or public resource, on the model of traditional infrastructures like roads and bridges. Modulation, which I study by way of Net Neutrality debates in the U.S., understands Internet governance as traffic shaping. Portal governance, which I study by way of data collection policies of dominant platform companies, understands the Internet as creating a user experience that facilitates data mining. The latter two are forms of architectural disciplinary power that undermine the first. I then argue that the rise of portal and modulation governance primarily serves to remake parts of civil society by fostering market norms of consumption and entrepreneurialism. In that sense, efforts to shape Internet architecture need to be understood as techniques of subjectification.


  • Knee pain is common and debilitating, and it’s often caused by osteoarthritis in the knee.  Treatment options range from analgesics (including opioids) to knee-replacement surgery.  If you go to the doctor with arthritic knee pain, you can get an x-ray, which can then be interpreted using standard rubrics like the Kellgren–Lawrence Grade (KLG) to quantify damage to your knee and guide treatment options.  The KLG isn’t perfect: the correlation between pain and objective measures of knee damage is imperfect.  Some people’s knees are a wreck and they report no pain; others have pain beyond what their KLG score indicates.  But here’s the thing: Black patients consistently report more knee pain than white patients.  They also tend to have more knee damage on the KLG – but even when you factor that in, Black patients report much more knee pain than white patients with comparable KLG scores.  What’s going on?

    One possibility is that factors external to the knee – stress, for example – explain the higher pain.  If that’s the case, then patients need less knee treatment.  But what if their knees really are in worse shape?  To answer that question, you’d have to ask what in an x-ray indicates poor knee condition.

    Disease is often measured through indicators, and we know that these indicators can lead to all sorts of complexity.  In the context of Covid, for example, there are all sorts of questions about testing and sensitivity that I’ve talked about before.  Along the way, I referred to a fantastic paper on malaria testing in sub-Saharan Africa – suffice it to say that “cases of malaria” reported to donor organizations is a difficult number to parse for reasons having to do with vagaries in testing and diagnosis.

    In a new paper in Nature Medicine, a team led by Emma Pierson makes ingenious use of artificial intelligence to tackle the problem of racial disparities in knee pain.  Since algorithms and data are so often implicated in increasing or magnifying racial disparities (see, for example, Safiya Noble on Google, or Timnit Gebru on facial recognition, or Margaret Hu’s chilling “Algorithmic Jim Crow”), it’s encouraging to learn about machine learning working to undermine racial disparities.  Ordinarily, you train an algorithm to perform like an excellent clinician.  In this case, that would mean training it to look at radiographs and determine the correct KLG score.  The trick here was to instead train it to look at pain: to determine what features of the x-ray predicted that the patient would report pain.  It turns out that the algorithm’s diagnoses reduced racial disparities in diagnosis by a jaw-dropping 47%.
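    The core methodological move (keep the x-ray features, swap the training target) can be sketched in miniature.  All numbers below are invented, and a toy nearest-neighbor predictor stands in for the deep network Pierson's team actually trained on raw radiographs:

```python
# Each invented record: (x-ray feature vector, clinician's KLG grade, patient-reported pain 0-10)
records = [
    ((0.1, 0.2), 0, 1),
    ((0.4, 0.5), 1, 3),
    ((0.7, 0.6), 2, 8),   # pain well beyond what the grade suggests
    ((0.9, 0.9), 3, 9),
]

def nearest_label(features, training_pairs):
    """1-nearest-neighbor: return the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_pairs, key=lambda pair: dist(pair[0], features))[1]

# Same inputs, two different targets: that is the whole trick.
klg_model  = [(f, klg)  for f, klg, pain in records]   # standard: mimic the clinician
pain_model = [(f, pain) for f, klg, pain in records]   # Pierson et al.: predict reported pain

x = (0.72, 0.58)  # a new x-ray's features
print(nearest_label(x, klg_model), nearest_label(x, pain_model))  # 2 8
```

    A model trained to reproduce KLG can only ever inherit the rubric's blind spots; a model trained on reported pain can surface features of the x-ray that the rubric never learned to score.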

    (more…)

  • By Gordon Hull

    I’ve written about the importance of Illinois’ Biometric Information Privacy Act (BIPA) before (see also here). Briefly, BIPA is the most important and powerful of the (relatively few) state laws designed to protect biometric privacy. The statute establishes a notice-and-consent regime (sigh; better than nothing, though N&C doesn’t work well, and is disturbing as a norm) for private parties that collect biometric information like face scans, requires data retention policies, creates a private right of action (individuals can sue; other states make you go through the state attorney general), and establishes a statutory harm – as underlined by the Illinois Supreme Court, violating the statute is enough to collect damages.

    Companies like Facebook have been fighting BIPA hard, because it’s bad for their business model, and some of the main litigation has been around Facebook’s photo-tagging feature. A lot of the issue has been about standing – whether aggrieved parties have the right to sue. Standing sounds simple: in order to have standing under Article III of the Constitution, three conditions need to be met. Going through them, however, will indicate why this is harder than it looks.

    • First, the plaintiff must have suffered an “injury in fact,” an invasion of a legally protected interest which is (a) concrete and particularized, and (b) actual or imminent, not “conjectural” or “hypothetical.”
    • Second, there must be a causal connection between the injury and the conduct complained of, the injury has to be fairly traceable to the challenged action of the defendant, and not the result of the independent action of some third party not before the court.
    • Third, it must be "likely," as opposed to merely "speculative," that the injury will be "redressed by a favorable decision."

    (more…)

  • To review the issue: The Oxford/AstraZeneca vaccine uses a modified adenovirus, as do several other vaccines in development, most notably the Russian Gamaleya Institute one.  Early, puzzling results suggested that the Oxford vaccine was about 70% effective overall, but that number obscured a disparity between two groups: a two-dose group of all ages showed efficacy in the 62% range, while a group (which included no elderly people) that had received a half-strength first dose showed about 90%.  Eh?  Even more strangely, the half-dose group seemed to have been given that dose… by mistake?  What happened?

    Reuters does a deep dive.  There are a lot of threads, but it appears that the problem began when the Oxford team didn't trust measurements of the strength of a batch of vaccine from an Italian manufacturer.  The Oxford team then measured it using a different technique, concluded it was more potent than the manufacturer said, and trusted its own measurement.  Guess who was right? [Time to scream at the void: WHY on earth, given discrepancies between two measurements, both supposedly reliable but using different techniques, WOULD YOU NOT MEASURE AGAIN UNTIL YOU FELT VERY GOOD ABOUT THE DISCREPANCY AND HOW IT HAPPENED?]

    "Oxford’s measurement showed that the batch was more potent than the Italian manufacturer had found, the documents show. Oxford trusted its own result and wanted to remain consistent with a measuring tool it had used throughout an earlier trial phase. So it asked Britain’s drugs regulator for permission to reduce the volume of vaccine injected into trial participants from the K.0011 batch. Permission was granted …. The documents published in The Lancet confirm that the error lay with the Oxford researchers. A common emulsifier, polysorbate 80, used in vaccines to facilitate mixing, had interfered with the ultraviolet-light meter that measures the quantity of viral material, according to the documents. As a result, the vaccine’s viral concentration was overstated and Oxford ended up administering half doses of vaccine, believing they were full doses."

    This vaccine matters because it is one that developing countries are depending on: it is cheap, and it is stable at refrigerator temperatures, which makes the logistics a lot easier.  There is a hypothesis that explains the half-dose/full-dose discrepancy: it's possible that the lower first dose primes the immune system better than the higher one; adenovirus variants also circulate in some human populations, and so it's possible that some combination of prior infection and the first dose generated an immune response sufficiently robust that participants' own immune systems destroyed the second dose's virus before it could generate a further immune response.  The Russians report a 90% efficacy, and that vaccine uses two different adenovirus vectors across the two doses to avoid precisely this risk.  These questions need more data, and we're still waiting for the full results of ongoing trials.  Adenovirus vaccines matter too: in addition to the Oxford, Gamaleya, and a Chinese vaccine, the single-dose Johnson & Johnson vaccine, with trial results expected in January, is also adenovirus-based.  In the meantime, Gamaleya is sharing their adenovirus vector, in order to enable a combined trial: one dose of the Russian vaccine, and one of the Oxford/AstraZeneca one.


  • By Gordon Hull

    In an important recent article, Robin Kar and Margaret Radin propose a way to interpret the volumes of boilerplate that accompany pretty much any electronically-mediated consumer transaction.  More precisely, they propose a way to interpret the phenomenon of the deluge of such boilerplate.  We all know the scenario: you decide to buy a song for $0.99 on a site called SketchyFiles.com, and at some point in the process, you click to indicate your acceptance of the “terms and conditions.”  Did you read those terms and conditions?  Of course you did not: if you were to print them, they’d probably run in excess of 30 pages, most of them using language that you don’t understand.  It’s not rational to slog through and try to understand all of that for a 99-cent purchase!

    But the flip side is that SketchyFiles is very much going to interpret this as a contract when it suits them.  Say, for example, that you think that your purchase included a virus that destroyed a couple of files on your computer.  You spend a few minutes online, and discover that there are 50,000 people to whom this exact thing happened!  You lawyer up and file suit against SketchyFiles, including a request for class certification, since your files aren’t individually worth a lot, but the aggregate of them is, and you think SketchyFiles ought to have to own the problem and scan their songs for viruses.  You will promptly discover in court that the “contract you signed” included a “mandatory arbitration clause,” which says that you agree that any and all disputes involving your purchase are to be settled out of court, using an arbitration procedure and a venue chosen by SketchyFiles.  Not only that, you’ve probably also contractually agreed that no class certification is possible: all users’ claims must be adjudicated one at a time.

    (more…)