• In the previous two posts (first, second), I took up the invitation provided by a recent paper by Daniele Lorenzini to develop some thoughts on the relationship between Foucault’s thought and theorizing around epistemic injustice.  In particular, Miranda Fricker’s account both draws heavily on Foucault and pushes back against his historicism to advocate a more ahistorical normative ground for the theory: testimonial injustice “distorts” who someone is.  Last time, I looked at some of Foucault’s own work in the lectures leading up to Discipline and Punish to develop a sense of how both “truth” and “power” are relevant – and distinguishable – in that work, even as they both are historical constructs. In particular, following Lorenzini, we can distinguish between “x is your diagnosis” and “therefore, you ought to do y.” Here I begin with the complexity introduced into Foucault’s work by his addition of an embryonic genealogy of truth practices.

    Let’s begin with the Psychiatric Power lectures, where Foucault had been talking about the strange role of science (and its personification in the doctor) in the governance of asylums.  There, when speaking of the historical contingency of the modern scientific enterprise, Foucault writes:


  • Now published in Critical Review.  Here's the abstract:

    Foucault distanced himself from Marxism even though he worked in an environment—left French theory of the 1960s and 1970s—where Marxism was the dominant frame of reference. By viewing Foucault in the context of French Marxist theoretical debates of his day, we can connect his criticisms of Marxism to his discussions of the status of intellectuals. Foucault viewed standard Marxist approaches to the role of intellectuals as a problem of power and knowledge applicable to the Communist party. Marxist party intellectuals, in his view, had developed rigid and universal theories and had used them to prescribe action, which prevented work on the sorts of problems that he uncovered—even though these problems were central to the development of capitalism.

    The paper is an attempt to cut a path through some (mostly 1970s) texts to get a handle on what Foucault is doing with his inconsistent references to Marx and Marxism.  There's a complex tangle of issues here, many related to the vicissitudes of the reception of Marx, and I hope that others will be able to add to our understanding of them and the period.

    A huge thanks to Shterna Friedman, whose editorial work resulted in a much better article.  Also, my paper is going to be part of a special issue of Critical Review on Foucault – the other papers should be appearing relatively soon.

  • By Gordon Hull

Last time, I took the opportunity provided by a recent paper by Daniele Lorenzini to develop some thoughts on the relationship between Foucault’s thought and theorizing around epistemic injustice.  Lorenzini’s initial point, with which I agree fully, is that Fricker’s development of epistemic injustice is, on her own terms, incompatible with Foucault, because she wants to maintain a less historicized normative standpoint than Foucauldian genealogy allows.  Epistemic injustice, on Fricker’s reading, involves a distortion of someone’s true identity.  Lorenzini also suggests that Foucault’s late work, which distinguishes between an epistemic “game of truth” and a normative/political “regime of truth,” offers the distinction Fricker’s theory needs, by allowing one to critique the regime of truth that depends on a game of truth.  In terms of Foucault’s earlier writings, he does not fully reduce knowledge to power, in the sense that it can be useful to analytically separate them. Here I want to look at a couple of examples of how that plays out in the context of disciplinary power.

    Consider the case of delinquency, and what Foucault calls the double mode of disciplinary power (Discipline and Punish, 199): a binary division into two categories (sane/mad, etc.) and then the coercive assignment of individuals into one group or the other.  The core modern division is between normal and abnormal, and we have a whole “set of techniques and institutions for measuring, supervising and correcting the abnormal” (199).  The delinquent, then, is defined epistemically or juridically (in other words, as a matter of science or law; as I will suggest below, Foucault thinks that one of the ways that psychology instituted itself as a science was by successfully blurring the science/law distinction), and then things are done to her.  This is the sort of gap that epistemic injustice theory, at least in its testimonial version, needs: in Fricker’s trial example, there is the epistemic apparatus of “scientific” racism, and then there is the set of techniques that work during the trial.  Both of these can be targets of critique, but testimonial injustice most obviously works within the second of the two.


  • By Gordon Hull

Those of us who have both made extensive use of Foucault and made a foray into questions of epistemic injustice have tended to sweep the question of the relation between the two theoretical approaches under the rug.  Miranda Fricker’s book, which has basically set the agenda for work on epistemic injustice, acknowledges a substantial debt to Foucault, but in later work she backs away from the ultimate implications of his account of power on the grounds that his historicism undermines the ability to make normative claims.  In this, her argument makes a fairly standard criticism of Foucault, whose “refusal to separate power and truth” she aligns with Lyotard’s critique of metanarratives (Routledge Handbook of Epistemic Injustice, 55).  As she describes her own project:

    “What I hoped for from the concept of epistemic injustice and its cognates was to mark out a delimited space in which to observe some key intersections of knowledge and power at one remove from the long shadows of both Marx and Foucault, by forging an on-the-ground tool of critical understanding that was called for in everyday lived experience of injustice … and which would rely neither on any metaphysically burdened theoretical narrative of an epistemically well-placed sex-class, nor on any risky flirtation with a reduction of truth or knowledge to de facto social power” (Routledge Handbook, 56).

On this reading, then, Marxism relies too much on ideology-critique, on the one hand, and on privileging the position of women/the proletariat (or some other singular subject position), on the other.  Foucault goes too far in the opposite direction and eliminates the normative dimension altogether.

In a new paper, Daniele Lorenzini addresses the Foucault/Fricker question head-on, centrally focusing on the critique of Foucault’s supposed excessive historicism.  Lorenzini’s contribution, to which I will return later, is to suggest that Foucault’s later writings (1980 and forward) distinguish between “games” of truth and “regimes” of truth. The distinction is basically illustrated in the following sentence: “I accept that x and y are true, therefore I ought to do z.”  The game of truth is the epistemic first half of the sentence, and the “regime” of truth – the part that governs human behavior – is the second half, the “therefore I ought…”  On this reading, genealogy is about bringing to light the tendency of the “therefore” to disappear as we are governed by its regime, and about unpacking the power structures that make it operate.  In other words, genealogy doesn’t collapse questions of truth and power; rather, it allows us to separate them by showing that a given game of truth does not entail the regime of truth that goes with it.


• I wrote a piece a little more than a year ago about how intellectual property rights are getting in the way of global vaccine equity, condemning a lot of people in lower-income countries to die.  There have been various initiatives to address the situation.  A piece in Nature by Amy Maxmen today highlights somewhat encouraging news on this front in the form of a consortium of sites across Africa, Asia and South America working to develop mRNA vaccines and manufacturing capacity.  Part of the goal is to deal with Covid – but it’s also to redress the fact that many places are suffering not just from Covid, but from tuberculosis, HIV, malaria and other problems.  Developing local capacity will help get vaccines to those who need them, as well as prepare researchers in these countries to respond to new pandemics.

Of course, this is a tremendous challenge, as Maxmen details.  It should surprise no one that patents are getting in the way:

“Despite the hub’s efforts, next-generation mRNA vaccines might still be entangled in patent thickets if some components of the technology have been claimed by others. An astounding number of patents — estimated at more than 80 — surround the mRNA vaccines, according to one analysis. A thorny IP landscape isn’t as daunting for big companies with the capital to litigate, explains Tahir Amin, a lawyer and co-founder of the Initiative for Medicines, Access & Knowledge (I-MAK), a non-profit group based in New York City. Amin says that the hub could boldly move forward, too, and harness public condemnation if Moderna or other companies file a lawsuit. But this option is off the table because the Medicines Patent Pool vows not to infringe on patents. Indeed, the agency’s model relies on persuading pharmaceutical companies to voluntarily license their technologies to alternative manufacturers, often in exchange for royalty fees.”

    Moderna appears to be a particularly bad offender:

    “Moderna did not respond to requests from Nature for comment. But in an interview with The Wall Street Journal, Moderna’s chief executive Stéphane Bancel said that the company won’t impede Afrigen’s work in South Africa; he made no mention of the 15 larger companies working with the hub. He added, “I don’t understand why, once we’re in an endemic setting when there’s plenty of vaccine and there’s no issue to supply vaccines, why we should not get rewarded for the things we invented.””

Maybe it’s time to remind everyone of the amount of public money that supported Moderna’s vaccine development, or that the company is on track to record $19 billion in revenue this year (Maxmen does both in the article).  Bancel’s language invokes tired and slippery uses of “endemic” to minimize Covid.  All “endemic” means is that a disease spreads at a more-or-less constant (and deemed acceptable) rate, without big surges.  Since the world is still suffering from big surges of Covid, Bancel’s claim is straightforwardly false.  It also ignores the moral and medical issue: if large numbers of people are dying from something (Covid, tuberculosis), that’s not acceptable, even if the deaths come at a predictably steady rate.

But even if the claim about endemicity were true, the rest of the sentence would still be false.  Very few people in low-income countries have been vaccinated, which means that the claim that there’s “plenty of vaccine” is wrong.  It might be the case that there is plenty of vaccine here in the U.S., but if it’s not getting into the arms of Africans, then there are by definition not “plenty” of vaccines there, and it is also an issue to supply them: the measure of success is vaccinations, not vaccines in a warehouse.  Finally, Bancel pulls the oldest trick in the IP maximalist’s arsenal: equating the ability to extract monopoly rents with the ability to profit at all.  If you want to achieve social welfare, then pharma should get just enough IP protection to incentivize product development and not a penny more.

    Overcompensating Pharma mints billionaires, but it’s still social murder.

  • From the Department of Shameless Self-Promotion, here is the abstract for my new paper, "Dirty Data Labeled Dirt Cheap: Epistemic Injustice in Machine Learning Systems:"

    "Artificial Intelligence (AI) and Machine Learning (ML) systems increasingly purport to deliver knowledge about people and the world or to assist people in doing so.  Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities, suggesting that they are “unfair” to members of those groups.  However, critique based on formal concepts of fairness seems increasingly unable to account for these problems, partly because it may well be impossible to simultaneously satisfy intuitively plausible operationalizations of the concept and partly because fairness fails to capture structural power asymmetries underlying the data AI systems learn from.  This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice.  I argue that (1) pretrial detention systems and physiognomic AI systems commit testimonial injustice because their target variables reflect inaccurate and unjust proxies for what they claim to measure; (2) classification systems, such as facial recognition, commit hermeneutic injustice because their classification taxonomies, almost no matter how they are derived, reflect and perpetuate racial and other stereotypes; and (3) epistemic injustice better explains what is going wrong in these types of situations than does (un)fairness."

The path from idea to paper here was slow, but I hope the paper is convincing on the point that the literature on epistemic injustice can offer some needed resources for understanding harms caused by (some kinds of) AI/algorithmic systems.

  • By Gordon Hull

    UPDATE: 6/14: Here's a nice takedown ("Nonsense on Stilts") of the idea that AI can be sentient.

I don’t remember where I read about an early text-based chatbot named JULIA, but it was likely about 20 years ago. JULIA played the flirt, and managed to keep a college student in Florida flirting back for something like three days.  The comment in whatever I read was that it wasn’t clear if JULIA had passed a Turing test, or if the student had failed one.  I suppose this was inevitable, but it appears now that Google engineer Blake Lemoine is failing a Turing test, having convinced himself that the natural language processing (NLP) system LaMDA is “sentient.”

The WaPo article linked above includes discussion with Emily Bender and Margaret Mitchell, which is exactly the right call, as they’re two of the lead authors (along with Timnit Gebru) on a paper (recall here) that reminds everyone that NLP is basically a string prediction task: it scrapes a ton of text from the Internet and whatever other sources are readily available, and gets good at predicting what is likely to come next, given a particular input text.  This is why there’s such concern about bias being built into NLP systems: if you get your text from Reddit, then for any given bit of text, what’s likely to come next is racist or sexist (or both).  The system may sound real, but it’s basically a stochastic parrot, as Bender, Gebru and Mitchell put it.
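    To make “string prediction” concrete, here is a minimal bigram sketch of my own – a toy next to anything like LaMDA, and not code from the Bender et al. paper – that does nothing but count which word follows which in a training corpus and emit the most frequent continuation:

    ```python
    from collections import Counter, defaultdict

    # Toy "language model": count which word follows which in the
    # training text, then predict the most frequent continuation.
    corpus = "the nurse said she was tired . the doctor said he was busy".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Emits whatever continuation the training data made most frequent;
        # any regularity (or prejudice) in the corpus comes along for free.
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))   # "nurse" or "doctor" -- whichever the corpus favors
    print(predict_next("said"))  # "she" or "he" -- purely a matter of corpus counts
    ```

    The point scales up: if the training text routinely pairs certain groups with slurs or stereotypes, the predictor reproduces the pairing, because nothing in the objective distinguishes a fact from a prejudice.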

    So point one: LaMDA is not sentient, any more than ELIZA and JULIA were sentient, but chatbots are getting pretty good at convincing people they are.  Still, it’s disturbing that the belief is spreading to people like Lemoine who really, really ought to know better.


  • By Gordon Hull

As a criterion for algorithmic assessment, “fairness” has encountered numerous problems.  Many of these emerged in the wake of ProPublica’s argument that Broward County’s pretrial detention system, COMPAS, was unfair to black suspects.  To recall: in 2016, ProPublica published an investigative piece criticizing Broward County, Florida’s use of a software program called COMPAS in its pretrial detention system.  COMPAS produced a recidivism risk score for each suspect, which could then be used in deciding whether someone should be detained prior to trial.  ProPublica’s investigation found that, among suspects who were not rearrested prior to trial, black suspects were much more likely to have been rated “high risk” for rearrest than white suspects.  Conversely, among suspects who were arrested a second time, white suspects were more likely to have been labeled “low risk” than black ones.  The system thus appeared to be discriminating against black suspects.  The story led to an extensive debate (for an accessible summary with cites, see Ben Green’s discussion here) over how fairness should be understood in a machine learning context.

The debate basically showed that ProPublica focused on outcomes and demonstrated that the system failed to achieve separation fairness, which is met when all groups subject to the algorithm’s decisions receive the same false negative/positive rates.  The system failed because “high-risk” black suspects were much more likely than white suspects to be false positives.  In response, the software vendor argued that the system made fair predictions because, among those classified in the same way (high or low risk), both racial groups exhibited the predicted outcome at the same rate.  In other words, among those classified as “high risk,” there was no racial difference in how likely they were to actually be rearrested.  The algorithm thus satisfied the criterion of sufficiency fairness.  In the ensuing debate, computer scientists arrived at a proof that, except in very limited cases, it is impossible to simultaneously satisfy both the separation and sufficiency fairness criteria.
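    To make the two criteria concrete, here is a minimal sketch with made-up numbers (mine, not COMPAS data or anyone’s actual audit code).  Separation conditions on the actual outcome and asks whether error rates match across groups; sufficiency conditions on the assigned label and asks whether outcome rates match:

    ```python
    import numpy as np

    # Made-up data: y = actually rearrested, yhat = labeled "high risk",
    # group = racial group. Chosen so sufficiency holds but separation fails.
    y     = np.array([1, 1, 0, 0, 0, 1,  1, 0, 0, 0, 0, 1])
    yhat  = np.array([1, 1, 1, 1, 0, 0,  1, 1, 0, 0, 0, 0])
    group = np.array(list("bbbbbbwwwwww"))

    for g in ("b", "w"):
        m = group == g
        # Separation: false positive rate, conditioning on the true outcome.
        fpr = ((yhat == 1) & (y == 0) & m).sum() / ((y == 0) & m).sum()
        # Sufficiency: rearrest rate among those labeled "high risk."
        ppv = ((yhat == 1) & (y == 1) & m).sum() / ((yhat == 1) & m).sum()
        print(f"group {g}: FPR = {fpr:.2f}, rearrest rate given 'high risk' = {ppv:.2f}")
    # group b: FPR = 0.67, rearrest rate given 'high risk' = 0.50
    # group w: FPR = 0.25, rearrest rate given 'high risk' = 0.50
    ```

    In this toy version the vendor’s defense holds (both groups labeled “high risk” are rearrested at the same rate) while ProPublica’s complaint also holds (black suspects are false positives far more often); the impossibility proof says that whenever base rates differ across groups, no non-trivial classifier can satisfy both criteria at once.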

In the meantime, on the philosophy side, Brian Hedden has argued that a provably fair algorithm can nonetheless violate 11 of 12 possible fairness conditions.  In a response piece, Benjamin Eva showed the limits of the twelfth with a different test and proposed a new criterion:


• Luke Stark argues that facial recognition should be treated as the “plutonium of AI” – something so dangerous that its use should be carefully controlled and limited.  If you follow the news, you’ll know that we’re currently treating it as the carbon dioxide of AI, a byproduct of profit-making that doesn’t look too awful on its own until you realize its buildup could very well cause something catastrophic to happen.  Activists have worried about this pending catastrophe for a while, but lots of big money supports facial recognition, so its backers have thrown up a smokescreen of distractions – in one case, Facebook denied that its phototagging software in fact recognized faces (!) – in order to lull everyone into accepting it.

One of the worst offenders is a secretive company called Clearview, whose business model is to scrape the web of all the pictures it can find and then sell the technology to law enforcement.  The company even has an international presence: in one disturbing instance, the Washington Post documents the use of its technology by Ukrainians to identify dead Russian soldiers by way of their Instagram and other social media accounts, and then sometimes to contact their families.  More generally, the Post revealed internal documents showing that the company's database is nearing 100 billion images and that "almost everyone in the world will be identifiable."  They're going all-in; the Post reports that "the company wants to expand beyond scanning faces for the police, saying in the presentation [obtained by the WP] that it could monitor 'gig economy' workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar."

Clearview is also one of a cohort of companies that have been sued for violating Illinois’ Biometric Information Privacy Act (BIPA).  BIPA, uniquely among American laws, requires opt-in assent for companies to use people’s biometric information (the Facebook case is central to my argument in this paper (preprint here); for some blog-level discussion see here and here).  Of course, BIPA is a state-level law, so its protections do not automatically extend to anyone who lives outside of Illinois.  That’s why yesterday’s settlement with the ACLU is really good news.  The Guardian reports:

    Facial recognition startup Clearview AI has agreed to restrict the use of its massive collection of face images to settle allegations that it collected people’s photos without their consent.  The company in a legal filing Monday agreed to permanently stop selling access to its face database to private businesses or individuals around the US, putting a limit on what it can do with its ever-growing trove of billions of images pulled from social media and elsewhere on the internet. The settlement, which must be approved by a federal judge in Chicago, will end a lawsuit brought by the American Civil Liberties Union and other groups in 2020 over alleged violations of an Illinois digital privacy law. Clearview is also agreeing to stop making its database available to Illinois state government and local police departments for five years. The New York-based company will continue offering its services to federal agencies, such as US Immigration and Customs Enforcement, and to other law enforcement agencies and government contractors outside Illinois.

    Of course, the company denies the allegations in the lawsuit, and insists that it was just in the process of rolling out a “consent-based” product.  Ok, sure!  This is still a win for privacy and for one of the very few pieces of legislation in the U.S. that has any chance of limiting the use of biometric data.

• People make snap judgments about those they see for the first time – mentally categorizing someone as friendly, threatening, trustworthy, etc.  Most of us know that those impressions are idiosyncratic, and suffused with cultural biases along race, gender and other lines.  So obviously I know what you’re thinking… we need an AI that does that, right?  At least that’s what this new PNAS paper seems to think (h/t Nico Osaka for the link).  The authors start right in with the significance:

    “We quickly and irresistibly form impressions of what other people are like based solely on how their faces look. These impressions have real-life consequences ranging from hiring decisions to sentencing decisions. We model and visualize the perceptual bases of facial impressions in the most comprehensive fashion to date, producing photorealistic models of 34 perceived social and physical attributes (e.g., trustworthiness and age). These models leverage and demonstrate the utility of deep learning in face evaluation, allowing for 1) generation of an infinite number of faces that vary along these perceived attribute dimensions, 2) manipulation of any face photograph along these dimensions, and 3) prediction of the impressions any face image may evoke in the general (mostly White, North American) population”

    Let’s maybe think for a minute, yes?  Because we know that people make these impressions on unsound bases!

First, adversarial networks are already able to produce fake faces that are indistinguishable from real ones.  Those fake faces can now be manipulated to appear more or less trustworthy, hostile, friendly, etc.  That’s going to be useful when, for example, you make fake political ads.  Already six years ago, one “Melvin Redick of Harrisburg, Pa., a friendly-looking American with a backward baseball cap and a young daughter, posted on Facebook a link to a brand-new website,” saying on June 8, 2016 that “these guys show hidden truth about Hillary Clinton, George Soros and other leaders of the US. Visit #DCLeaks website. It’s really interesting!” Of course, both Melvin Redick and the site he pointed to were complete fabrications by the Russians.  Now we can make Melvin look trustworthy, and Clinton less so.
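    Mechanically, none of this is exotic.  Here is a minimal sketch of the general technique on purely synthetic data – the names, numbers, and the plain linear fit are my illustration, not the paper’s actual pipeline: learn a linear “trustworthiness” direction in a generator’s latent space from human ratings, then nudge a face’s latent code along it before decoding:

    ```python
    import numpy as np

    # Synthetic stand-in for the real setup: in an actual pipeline,
    # "latents" would be codes for a generator like StyleGAN2 and
    # "ratings" would be crowd-sourced impressions of the generated faces.
    rng = np.random.default_rng(0)
    latents = rng.normal(size=(500, 64))      # fake latent codes
    hidden = rng.normal(size=64)              # ground truth for the demo only
    ratings = latents @ hidden + rng.normal(scale=0.1, size=500)  # fake ratings

    # A least-squares fit recovers a direction in latent space that
    # tracks the rated attribute.
    direction, *_ = np.linalg.lstsq(latents, ratings, rcond=None)
    direction /= np.linalg.norm(direction)

    def shift(z, alpha):
        # Move a latent code alpha units along the learned attribute axis;
        # decoding the shifted code would yield a face raters perceive as
        # more (alpha > 0) or less (alpha < 0) "trustworthy."
        return z + alpha * direction

    z = rng.normal(size=64)
    more_trustworthy = shift(z, +3.0)
    less_trustworthy = shift(z, -3.0)
    ```

    Everything hard is in the generator and the ratings; the manipulation itself is a vector addition, which is why it can be applied to any face the model can encode.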

Second, the ability to manipulate existing face photos is a disaster-in-waiting.  Again, we saw crude efforts at this before – making Obama appear darker than he is, for example.  But here news photos could be altered to make Vladimir Putin appear trustworthy, or Mr. Rogers untrustworthy.  This goes nowhere good, especially when combined with deepfake technology that already takes people out of their contexts and puts them in other ones (so far, disproportionately, women pasted into porn videos; the Russians also recently tried to produce a deepfake of Zelensky surrendering, though fortunately that one was done sloppily).

    Third, and I think this one is possibly the scariest, what about scanning images to see whether someone will be assessed as trustworthy?  AI-based hiring is already under-regulated!  Now employers will run your photo through the software and make customer-service hiring decisions based on who customers will perceive as trustworthy.  What could go wrong?

All of this of course assumes that this sort of software actually works.  The history of physiognomic AI, which uses all sorts of supposedly objective cues to determine personality and which is basically complete (usually racist) bunk, suggests that the science is probably not as solid as the article makes it seem.  So maybe we’re lucky and this algorithm does not actually work as advertised.  Of course, the fact that AI software is garbage doesn't preclude its being used to make people's lives miserable.  Just consider the bizarre case of VibraImage.

    But don’t worry.  The PNAS authors are aware of ethics, noting that “the framework developed here adds significantly to the ethical concerns that already enshroud image manipulation software:”

    “Our model can induce (perceived) changes within the individual’s face itself and may be difficult to detect when applied subtly enough. We argue that such methods (as well as their implementations and supporting data) should be made transparent from the start, such that the community can develop robust detection and defense protocols to accompany the technology, as they have done, for example, in developing highly accurate image forensics techniques to detect synthetic faces generated by SG2. More generally, to the extent that improper use of the image manipulation techniques described here is not covered by existing defamation law, it is appropriate to consider ways to limit use of these technologies through regulatory frameworks proposed in the broader context of face-recognition technologies.” 

    Yes, the very effective American regulation of privacy does inspire confidence!  Also, “There is also potential for our data and models to perpetuate the biases they measure, which are first impressions of the population under study and have no necessary correspondence to the actual identities, attitudes, or competencies of people whom the images resemble or depict.”

Do you think?  As Luke Stark put it, facial recognition is the “plutonium of AI:” very dangerous and with very few legitimate uses.  This algorithm belongs in the same category, and should similarly be regulated like nuclear waste.  For example, as Ari Waldman and Mary Anne Franks have written, one of the problems with deepfakes is that the fake version gets out there on the internet, and it is nearly impossible to make it go away (if you even know about it).  Forensic software gets there too late, and those without resources aren’t going to be able to deploy it anyway.  Lawsuits are even less useful, since they're time-consuming and expensive to pursue, and lots of defendants won't be jurisdictionally available or have pockets deep enough to make the chase worth it.  In other words, not everybody is going to be able to defend themselves like Zelensky, who both warned about deepfakes and was able to produce video of himself not surrendering.  In the meantime, faked and shocking things generally spread faster and farther than real news.  After all, “engagement” is the business model of social media.  Further, to the extent that people stay inside filter bubbles (like Fox News), they may never see the forensic corrections, and they probably won’t believe the real one is real, even if they do.

    And as for reinforcing existing biases, Safiya Noble already wrote a whole book on how algorithms that guess what you’re probably thinking about someone can do just that.