• By Gordon Hull

    Foucault thinks Marxism is bossy.  In Society Must Be Defended, he throws down the gauntlet clearly enough: totalizing theories get in the way of useful things at the local level.  As he notes, one should beware of:

    “the inhibiting effect specific to totalitarian theories, or at least – what I mean is – all-encompassing and global theories.  Not that all-encompassing and global theories haven’t, in fairly constant fashion, provided – and don’t continue to provide – tools that can be used at the local level; Marxism and psychoanalysis are living proof that they can. But they have, I think, provided tools that can be used at the local level only when, and this is the real point, the theoretical unity of their discourse is, so to speak, suspended, or at least cut up, ripped up, torn to shreds, turned inside out, displaced, caricatured, dramatized, theatricalized, and so on. Or at least that the totalizing approach always has the effect of putting the brakes on” (SMD 6).

    That is, when you insist on your theoretical unities, you get in the way of actually doing anything.  What we need is to unearth “subjugated knowledges” and specific histories, and such activity requires the “removal of the tyranny of overall discourses” (SMD 8).  In his 1978 interviews with the Italian Communist Duccio Trombadori, Foucault underlines that “I absolutely will not play the part of one who prescribes solutions.  I hold that the notion of the intellectual today is not that of establishing laws or proposing solutions or prophesying, since by doing that one can only contribute to the functioning of a determinate situation of power that to my mind must be criticized” (Remarks on Marx, 157).

    He similarly says at the end of an earlier 1978 interview in Japan that “I think that the role of intellectuals, in reality, absolutely does not consist in playing [the role of] prophets or legislators” (D&E #236; Vol II, 264-5 (2 vol, 2001 ed.)).  Marxism is theological, as he says in his 1979-80 Collège de France Lectures:

    “With Marxism, it’s the same thing.  You have the model of the fall, alienation and dis-alienation.  You have the model of the two ways: Mao Zedong.  And you have, of course, the problem of the stain of those who are originally soiled and must be purified: Stalinism.  Marx, Mao, Stalin; the three models of the two ways, the fall, and the stain” (Government of the Living, 108).

    At one level this is clear enough.  But Foucault also of course is an advocate of social change, and he wants his works to be picked up and used in local struggles, as he also says repeatedly.  Here I want to add a little specificity to the question about Foucault’s relation to Marxism (at least as he understands it around 1978) by picking up on his remarks in the Japan interview.  Immediately after saying that intellectuals should not be prophets or legislators, he adds that: “for two thousand years, philosophers have always spoken of what we should have done [de ce qu’on devait faire].  But this always led to a tragic end.  What is important is that philosophers speak of what is currently happening, but not of what could happen” (624).  The first thing to note is the implicit reference to Lenin, at least in the context of a discussion of Marxism: Lenin’s What is to be Done is translated into French as Que Faire (the French rendering is correct: the book is Что делать, literally “What to Do”).  More significantly, the connection to prophetic discourse is something Foucault repeats in his slightly earlier interview with Yoshimoto, where he says that:

    (more…)

  • Via Foucault News:

    “Paul Rabinow, UC Berkeley professor emeritus of anthropology and world-renowned anthropologist, died April 6 at the age of 76 in his Berkeley home.

    Rabinow spent about 41 years at UC Berkeley, from 1978 to 2019, serving as the director of anthropology for the Contemporary Research Collaboratory and as the former director of human practices for the Synthetic Biology Engineering Research Center.”

    Rabinow was of course a vital conduit for Foucault’s work into English; my introduction to Foucault in grad school was by way of the Foucault Reader he edited.

    His work was also important to me as I worked to understand biopolitics beyond Foucault’s own texts, extending the inquiry into topics like the human genome project and medical risk, and offering cautions against appropriations of the concept by Negri and Agamben. 

    He writes at the end of his Anthropos Today that “my diagnosis is that worldviews concerned with progress and decadence as essential elements of a totalizing figure should be allowed to retire into the past, to take their place as historical memories.  By relinquishing them we will enable reason to better confront contemporary problems” (133-4), and concludes:

    “Jean Starobinski advocates a criticism that seeks neither “the totality (as with the gaze from above), nor [. . .] intimacy (as does a self-identificatory intuition).” The critical practice is one that finds the means to navigate these relations of distance and closeness. There is no “quasi-divinity” present here, only a disciplined human curiosity. Let us agree with Starobinski that method requires motion. A movement that goes on “inlassablement,” tirelessly, steadfastly, persistently. I advocate pursuing in our thought and writing something like the motion, through different scales and different subject positions, that Starobinski proposes in the quote and exemplifies in his criticism. Such movement is easy to initiate and hard to master. Yet I firmly believe that in the actual conjuncture of things, it is a paramount challenge for philosophy and the human sciences to experiment with forms that will be, if not fully adequate to, at least cognizant of, the need for such movement through scale and subjectivity. Such motion might help us to leave notions like progress behind and even to help us to take better care of things, ourselves, and others” (136).

  • I've been commenting off and on about the vagaries of Covid data – for example, in knowing what "covid cases" refers to (and here); states' early conflation of PCR and antibody tests; the quirks of different testing technologies; or the ways that even death certificates can mislead about mortality.  This reflective piece by the co-founders of the Covid Tracking Project at the Atlantic – which became the de facto national source for Covid information as the federal government fell flat on its face – is absolutely worth the read.  Here's one short paragraph that says everything you need to know about data in general:

    "Data are just a bunch of qualitative conclusions arranged in a countable way. Data-driven thinking isn’t necessarily more accurate than other forms of reasoning, and if you do not understand how data are made, their seams and scars, they might even be more likely to mislead you."

  • By Gordon Hull

    “Factory work exhausts the nervous system to the uttermost; at the same time, it does away with the many-sided play of the muscles, and confiscates every atom of freedom, both in bodily and in intellectual activity” (Marx, Capital I [Penguin Ed.], 548).

    A recent piece by Josh Dzieza in The Verge about the working conditions of those subject to discipline by AI in the workplace is wrenching.  It tells the story of Amazon workers and call center employees and others whose work lives have become an exhausting monotony of ever-intensifying robot-like demands.  The “AI boss” becomes ever more demanding, filling every moment at work with the requirement to be more productive and with absurd metrics for deciding when its demands are met.  Amazon workers report that they are basically ground down by the unsustainability of the pace of productivity; they are either fired or forced to quit, often with injuries.  Call center employees are subject to Orwellian monitoring by a machine that purports (often laughably inaccurately) to monitor and tweak their affect when speaking to customers.  Folks working from home find it impossible to go to the bathroom, because they are forced to sit in front of their computers all the time, while software takes frequent pictures of them and reports on their productivity, as measured by lines of code or keystrokes per unit time.

    All of this optimized misery reminds us of something important: AI systems are being deployed under capitalism.  A worker subject to an “AI Boss” is not some sort of new-fangled social form; they are fundamentally a worker subject to capitalist exploitation.  What AI has done is enable capitalism to be more intensively exploitative.  The stories in the article echo the stories Marx reports in Capital in his discussion of machinery, as does a lot of the logic.  This suggests that two popular narratives about AI need to be displaced.  The first is the prediction that AI will someday swoop in and take away everyone’s jobs.  As Dzieza notes, this narrative deflects attention from the pressing reality of deteriorating workplace conditions under AI supervision.  The second reads Marx’s work on machines and technology (as the autonomists do) through the Grundrisse “Fragment on Machines,” in a way that emphasizes affective labor and cognitive capital.  The reality of AI today serves as an important reminder that the treatment in Capital is still very important.  Let me explain.

    (more…)

  • You might have heard that minorities are hesitant about getting a Covid vaccine?  Well, about that.  According to polling reported by Axios, the group least likely to want a vaccine is White Republicans… to the point that "White Americans are now less likely than Black and Latino Americans to say they plan to get the vaccine."

  • This time it's Margaret Mitchell, one of the other authors of the fabulous "Stochastic Parrots" paper on natural language processing (that's my post on it; the paper is here).  This was obviously coming, since they'd suspended her email account weeks ago.  In case you haven't read the paper (and you really should!), it's worth mentioning that the paper does not mention Google, and that its goal is to point to strategies to make AI better, i.e., less likely to entrench racism, sexism, and so forth while destroying the planet in climate catastrophe.  It turns out that those are related, because bigger and bigger datasets are both carbon intensive and likely to pick up on and magnify hegemonic and toxic speech.  Attention to efficiency and curation moves the needle on both.

    No commitment to even looking like they care about ethical AI.

  • By Gordon Hull

    Not long ago, Google summarily dumped Timnit Gebru, one of its lead AI researchers and one of the few Black women working in AI.  Her coauthor Emily Bender has now posted the paper (to be presented this spring) that apparently caused all the trouble.  It should be required reading for anybody who cares about the details of how AI and data systems can perpetuate racism, or who cares more generally about the social implications of brute-force approaches to AI.  Bender and Gebru take up a common approach to natural-language processing (NLP), which involves an AI system learning how to anticipate what speech is likely to follow a given unit of speech.  If, for example, I say “Hello, how are,” the system learns by studying a dataset of existing phrases and text snippets that the next word is likely to be “you,” but almost certainly will not be “ice cream.”  How good the computer gets at this game is going to be substantially determined by the quantity and quality of its training data, i.e., the text that it examines.
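    To make the prediction game concrete, here is a minimal sketch of next-word prediction from co-occurrence counts.  The toy corpus and the function name are my own illustration, not anything from Bender and Gebru's paper, and real large language models use neural networks trained on vastly more text rather than raw counts – but the underlying game is the same.

```python
from collections import Counter, defaultdict

# A toy training corpus (hypothetical; real models train on billions of words).
corpus = [
    "hello how are you",
    "hello how are you doing",
    "how are you today",
    "i like ice cream",
]

# Count which word follows each word across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("are"))  # "you" -- it has seen "are you" three times
```

    Bender and Gebru's point concerns what happens when the counting step above is replaced by neural models trained on web-scale text: performance improves, but so do the energy costs and the risk of absorbing and reproducing toxic speech.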

    Bender and Gebru outline the social costs of one approach to this problem, which is basically brute force.  As processing power increases, it’s possible to train computers with larger and larger datasets, and the use of larger datasets reliably improves system performance.  But should we be doing that?  Bender and Gebru detail several kinds of problems.  The first is environmental justice: all that processing power uses a lot of energy.  Although some of it may come from carbon-neutral sources, the net climate cost is significant.  Worse, the NLP systems being produced don’t benefit the people that are going to suffer the most from climate change.  As they memorably put it:

    “Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100) or the 800,000 people in Sudan affected by drastic floods, pay the environmental price of training and deploying ever larger English LMs, when similar large-scale models aren’t being produced for Dhivehi or Sudanese Arabic?”

    (more…)

  • By Gordon Hull

    As of this writing, approximately 421,000 people in the United States have officially died of Covid-19.  We also know that this number understates the number who have actually died of Covid, for a variety of reasons.  For example, early in the pandemic, there was nowhere near enough testing, and so many people who died of Covid-19 never received an official diagnosis.  For that sort of reason, measuring “excess deaths” during a given period is one way to try to get a handle on how many people actually died of Covid-19.  If n more people die during a given period this year than died in the same period in a typical year, that number can be a useful indicator of how many deaths can be attributed to the pandemic.
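    The arithmetic behind an excess-death estimate is simple.  The numbers below are entirely made up, just to show the calculation; real studies replace the naive average of prior years with statistical models of expected mortality that account for seasonality and population change.

```python
# Made-up weekly death counts for one jurisdiction, purely to show the arithmetic.
# Each inner list covers the same four weeks in a prior, pre-pandemic year.
prior_years = [
    [1000, 1020, 980, 1010],
    [1010, 1000, 990, 1000],
]
this_year = [1200, 1300, 1250, 1400]

# Expected deaths per week: the average across prior years for that week.
expected = [sum(week) / len(week) for week in zip(*prior_years)]

# Excess deaths: observed minus expected, summed over the period.
excess = sum(obs - exp for obs, exp in zip(this_year, expected))
print(round(excess))  # 1145 excess deaths over these four weeks
```

    However the expected baseline is modeled, the core quantity is this observed-minus-expected difference.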

    Some of these will of course be indirect: people who didn’t go to the ER with a heart attack because they were afraid of contracting Covid, for example.  The isolation of Covid may be driving an increase in opioid overdose deaths.  Others will be direct: deaths where Covid was the underlying cause.  STAT News is now reporting on the results of a recent study that takes a deep dive into this question, concluding that the overall direct Covid death toll is 31% higher than official figures.  That’s the top line, and it means that the national number of Covid deaths is approaching 550,000.  What should surprise no one (but is surprising, because deaths are so much more reliable for understanding infection rates than reported cases) is that the mortality data are deeply shaped by politics: excess deaths are a window into the sociology of the pandemic.

    (more…)

  • Now up on SSRN.  This paper uses Foucault's works on disciplinary power to develop a typology for understanding different models of Internet governance.  Here is the abstract:

    Following Foucault’s remarks on the importance of architecture to disciplinary power, this paper offers a typology of power relations expressed in different models of Internet governance. Infrastructure governance understands the Internet as a common pool or public resource, on the model of traditional infrastructures like roads and bridges. Modulation, which I study by way of Net Neutrality debates in the U.S., understands Internet governance as traffic shaping. Portal governance, which I study by way of data collection policies of dominant platform companies, understands the Internet as creating a user experience that facilitates data mining. The latter two are forms of architectural disciplinary power that undermine the first. I then argue that the rise of portal and modulation governance primarily serves to remake parts of civil society by fostering market norms of consumption and entrepreneurialism. In that sense, efforts to shape Internet architecture need to be understood as techniques of subjectification.

  • Knee pain is common and debilitating, and it’s often caused by osteoarthritis in the knee.  Treatment options range from analgesics (including opioids) to knee-replacement surgery.  If you go to the doctor with arthritic knee pain, you can get an x-ray, which can then be interpreted using standard rubrics like the Kellgren–Lawrence Grade (KLG) to quantify damage to your knee and guide treatment options.  The KLG isn’t perfect: the correlation between pain and objective scores of knee damage is imperfect.  Some people’s knees are a wreck and they report no pain; others have pain beyond what their KLG scores indicate.  But here’s the thing: Black patients consistently report more knee pain than white patients.  They also tend to have more knee damage on the KLG – but even when you factor that in, Black patients report much more knee pain than white patients with comparable KLG scores.  What’s going on?

    One possibility is that factors external to the knee – stress, for example – explain the higher pain.  If that’s the case, then patients need less knee treatment.  But what if their knees really are in worse shape?  To answer that question, you’d have to ask what in an x-ray indicates poor knee condition.

    Disease is often measured through indicators, and we know that these indicators can lead to all sorts of complexity.  In the context of Covid, for example, there are all sorts of questions about testing and sensitivity that I’ve talked about before.  Along the way, I referred to a fantastic paper on malaria testing in sub-Saharan Africa – suffice it to say that “cases of malaria” reported to donor organizations is a difficult number to parse for reasons having to do with vagaries in testing and diagnosis.

    In a new paper in Nature Medicine, a team led by Emma Pierson makes ingenious use of artificial intelligence to tackle the problem of racial disparities in knee pain.  Since algorithms and data are so often implicated in increasing or magnifying racial disparities (see, for example, Safiya Noble on Google, or Timnit Gebru on facial recognition, or Margaret Hu’s chilling “Algorithmic Jim Crow”), it’s encouraging to learn about machine learning working to undermine racial disparities.  Ordinarily, you train an algorithm to perform like an excellent clinician.  In this case, that would mean training it to look at radiography and determine the correct KLG score.  The trick here was to instead train it to look at pain: to determine what features of the x-ray predicted that the patient would report pain.  It turns out that the algorithm’s diagnoses reduced racial disparities in diagnosis by a jaw-dropping 47%.
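    The change in training target can be sketched schematically.  Everything below is a toy: the features, the linear model, and the relationships among them are my own stand-ins, not the Nature Medicine pipeline (which trains a deep network on the radiographs themselves).  The point is only to show how swapping the label from the clinician's score to the patient's reported pain lets a model use signal the rubric ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 5))  # stand-ins for features read off an x-ray

# Toy ground truth: the rubric scores features 0 and 1, but reported
# pain also tracks feature 2 -- something the rubric does not look at.
klg = features[:, 0] + features[:, 1]
pain = features[:, 0] + features[:, 1] + 2.0 * features[:, 2]

def fit_linear(X, y):
    """Ordinary least squares; returns one coefficient per feature."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

coef_klg = fit_linear(features, klg)    # standard: imitate the clinician
coef_pain = fit_linear(features, pain)  # the paper's move: predict pain

# The pain-trained model assigns weight to feature 2; the KLG-trained
# model does not, because its labels never reflected that signal.
print(abs(coef_klg[2]) < 1e-6, abs(coef_pain[2] - 2.0) < 1e-6)
```

    If a group's x-rays contain pain-relevant features the rubric overlooks, only the pain-trained model can credit them – which is how this approach can narrow the unexplained racial disparity in diagnosis.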

    (more…)