• You know how sometimes your students don't do the reading?  And then how, when you give them a writing prompt based on it, they try to guess their way to a good answer from the everyday meaning of the words in the prompt?  And how, sometimes, the outcome is spectacularly, wonderfully wrong?

    Well, I don't know what else ChatGPT can do, but it can do an uncannily good imitation of such a student!

    (oh, and like that student, it blew through the word count, apparently on the theory that a lot of words would make up for a lack of reading)

    This was a prompt from my existentialism class (the instructions also tell them they have to quote the text, but I omitted that here, because we already know ChatGPT can't do that).  It's two images because I'm too technically incompetent to capture the longer-than-a-screen answer in one image:

    (more…)

  • The MA program at UNC Charlotte has a number of funded lines for our two-year MA in philosophy.  We're an eclectic, practically oriented department that emphasizes working across disciplines and philosophical traditions.  If that sounds like you, or like a student you know – get in touch!  You can email me (ghull@uncc.edu), though for a lot of questions I'll pass you along to our grad director, Andrea Pitts (apitts5@uncc.edu).  Or, there's a QR code in the flyer below.

    MA Flyer

    (more…)

  • By Gordon Hull

    Large Language Models (LLMs) like ChatGPT burst into public consciousness sometime in the second half of last year, and ChatGPT’s impressive results have led to a wave of concern about the future viability of any profession that depends on writing, or on teaching writing in education.  A lot of this is hype, but one issue that is emerging is the role of AI authorship in academic and other publications; there’s already a handful of submissions that list AI co-authors.  An editorial in Nature published on Feb. 3 outlines the scope of the issues at hand:

    “This technology has far-reaching consequences for science and society. Researchers and others have already used ChatGPT and other large language models to write essays and talks, summarize literature, draft and improve papers, as well as identify research gaps and write computer code, including statistical analyses. Soon this technology will evolve to the point that it can design experiments, write and complete manuscripts, conduct peer review and support editorial decisions to accept or reject manuscripts”

    As a result:

    “Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication and, by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives. However, it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers. ChatGPT and other LLMs produce text that is convincing, but often wrong, so their use can distort scientific facts and spread misinformation.”

    The editorial then gives examples of LLM-based problems with incomplete results, bad generalizations, inaccurate summaries, and other easily generated errors.  It emphasizes accountability for the content of the material (the use of AI should be clearly documented) and the need to develop truly open AI products as part of a push toward transparency.

    (more…)

  • By Gordon Hull

    Last time, I introduced a number of philosophy of law examples in the context of ML systems and suggested that they might be helpful in thinking differently, and more productively, about holding ML systems accountable.  Here I want to make the application specific.

    So: how do these examples translate to ML and AI?  I think one lesson is that we need to specify what exactly we are holding the algorithm accountable for.  For example, if we suspect an algorithm of unfairness or bias, it is necessary to specify precisely what the nature of that bias or unfairness is – for example, that it is more likely to assign high-risk status to Black defendants (for pretrial detention purposes) than to white ones.  Even specifying fairness in this sense can be hard, because there are conflicting accounts of fairness at play.  But assuming that one can settle that question, we don’t need to specify tokens or individual acts of unfairness (or demand that each of them rise to the level where it would individually create liability) to demand accountability of the algorithm or the system that deploys it – we know that the system will have treated defendants unfairly, even if we don’t know which ones (this is basically a disparate impact standard; recall that one of the original and most cited pieces on how data can be unfair was framed precisely in terms of disparate impact).
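
    To make the group-level point concrete, here is a minimal sketch in Python (with invented numbers, not data from any real risk tool; the "four-fifths" threshold is the familiar rule of thumb from US employment law).  It shows how a disparate impact check works: we can see that one group is flagged high-risk at a markedly higher rate than another without being able to say which individual assessments were the unfair ones.

    ```python
    # Illustrative only: hypothetical risk labels, not data from any real system.
    # The point: disparate impact is visible at the group level even though no
    # single prediction can be singled out as "the" unfair one.

    def high_risk_rate(labels):
        """Fraction of a group flagged as high-risk (1 = high-risk, 0 = low-risk)."""
        return sum(labels) / len(labels)

    # Hypothetical outputs of a pretrial risk tool for two groups of defendants.
    group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]  # e.g., Black defendants
    group_b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # e.g., white defendants

    rate_a = high_risk_rate(group_a)  # 0.7
    rate_b = high_risk_rate(group_b)  # 0.3

    # A disparate-impact style comparison: the ratio of selection rates.
    # (The "four-fifths rule" treats a ratio below 0.8 as a red flag.)
    impact_ratio = rate_b / rate_a
    print(f"high-risk rate, group A: {rate_a:.2f}")
    print(f"high-risk rate, group B: {rate_b:.2f}")
    print(f"impact ratio (B/A): {impact_ratio:.2f}")  # ~0.43, well below 0.8
    ```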

    Further, given the difficulties of individual actions in such cases (litigation costs, as well as getting access to the algorithms, which defendants will claim as trade secrets), it seems wrong to channel accountability through tort liability and demand that individuals prove the algorithm discriminated against them (how could they?  The situation is like the blue bus: if a group of people is 80% likely to reoffend or skip bail, we know that 20% of that group will not, and there is no “error” for which the system can be held accountable).  Policymakers need to conduct regular audits or other supervisory activity designed to ferret out this sort of problem, and to demand accountability at the systemic level.

    (more…)

  • By Gordon Hull

    AI systems are notoriously opaque black boxes.  In a now-standard paper, Jenna Burrell dissects this notion of opacity into three versions.  The first is when companies deliberately hide information about their algorithms to avoid competition, maintain trade secrets, and guard against gaming of their algorithms, as happens with Search Engine Optimization techniques.  The second is when reading and understanding code is an esoteric skill, so the systems will remain opaque to all but a very small number of specially trained individuals.  The third form is unique to ML systems, and boils down to the argument that ML systems generate internal networks of connections that don’t reason like people: looking into the mechanics of a system for recognizing handwritten numbers, or even a spam detection filter, wouldn’t produce anything that a human could understand.  This form of opacity is also the least tractable, and there is a lot of work trying to establish how ML decisions could be made either more transparent or at least more explicable.

    Joshua Kroll argues instead that the quest for potentially impossible transparency distracts from what we might more plausibly expect from our ML systems: accountability.  After all, they are designed to do something, and we could begin to assess them according to the internal processes by which they are developed to achieve their design goals, as well as by empirical evidence of what happens when they are employed.  In other words, we don’t need to know exactly how the system can tell a ‘2’ from a ‘3’ as long as we can assess whether it does, and whether that objective is serving nefarious purposes.

    I’ve thought for a while that there’s potential help for understanding what accountability means in the philosophy of law literature.  For example, a famous thought experiment features a traffic accident caused by a bus.  We have two sources of information about this accident.  One is an eyewitness who is 70% reliable and says that the bus was blue.  The other is the knowledge that 70% of the buses that were in the area at the time were blue.  Epistemically, these ought to be equal – in both cases, you can say with 70% confidence that the blue bus company is liable for the accident.  But we don’t treat them as the same: as David Enoch and Talia Fisher elaborate, most people prefer the witness to the statistical number.  This is presumably because when the witness is wrong, we can inquire what went wrong.  When the statistic is wrong, it’s not clear that anything like a mistake even happened: the statistics operate at a population level; when applied to individuals, the use of statistical probability will be wrong 30% of the time, and so we have to expect that.  It seems to me that our desire for what amounts to an auditable result is the sort of thing that Kroll is pointing to.
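
    A minimal sketch of the arithmetic (purely illustrative, with invented numbers) may help: a rule that always blames the blue bus company on the 70% base rate alone performs about as well, over many accidents, as the 70%-reliable witness, but its errors are built into the rule rather than traceable to any identifiable mistake.

    ```python
    import random

    random.seed(0)
    N = 1000  # hypothetical accidents, purely illustrative

    # In each accident, the culpable bus is blue with probability 0.7 (the base rate).
    actual_blue = [random.random() < 0.7 for _ in range(N)]

    # Rule 1: statistical evidence alone -- always blame the blue bus company.
    # Every non-blue accident is an error, and that is expected by construction.
    base_rate_errors = sum(1 for blue in actual_blue if not blue)

    # Rule 2: a witness who is right 70% of the time.  Each error here is a
    # specific, auditable event: we can ask what went wrong with *that* report.
    witness_errors = sum(1 for _ in actual_blue if random.random() >= 0.7)

    print(f"errors from the base-rate rule: {base_rate_errors} of {N}")       # ~300
    print(f"errors from the 70%-reliable witness: {witness_errors} of {N}")   # ~300
    # Epistemically the two rules come out roughly even; the asymmetry lies in
    # whether an error is an individual mistake we can interrogate.
    ```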

    (more…)

    In the previous two posts (first, second), I took up the invitation provided by a recent paper by Daniele Lorenzini to develop some thoughts on the relationship between Foucault’s thought and theorizing around epistemic injustice.  In particular, Miranda Fricker’s account both draws heavily from Foucault and pushes back against his historicism to advocate for a more ahistorical normative ground for the theory: testimonial injustice “distorts” who someone is.  Last time, I looked at some of Foucault’s own work in the lectures leading up to Discipline and Punish to develop a sense of how both “truth” and “power” are relevant – and distinguishable – in that work, even as they both are historical constructs.  In particular, following Lorenzini, we can distinguish between “x is your diagnosis” and “therefore, you ought to do y.”  Here I begin with the complexity introduced in Foucault’s work by his addition of an embryonic genealogy of truth practices.

    Let’s begin with the Psychiatric Power lectures, where Foucault had been talking about the strange role of science (and its personification in the doctor) in the governance of asylums.  There, when speaking of the historical contingency of the modern scientific enterprise, Foucault writes:

    (more…)

  • Now published in Critical Review.  Here's the abstract:

    Foucault distanced himself from Marxism even though he worked in an environment—left French theory of the 1960s and 1970s—where Marxism was the dominant frame of reference. By viewing Foucault in the context of French Marxist theoretical debates of his day, we can connect his criticisms of Marxism to his discussions of the status of intellectuals. Foucault viewed standard Marxist approaches to the role of intellectuals as a problem of power and knowledge applicable to the Communist party. Marxist party intellectuals, in his view, had developed rigid and universal theories and had used them to prescribe action, which prevented work on the sorts of problems that he uncovered—even though these problems were central to the development of capitalism.

    The paper is an attempt to cut a path through some (mostly 1970s) texts to get a handle on what Foucault is doing with his inconsistent references to Marx and Marxism.  There's a complex tangle of issues here, many related to the vicissitudes of the reception of Marx, and I hope that others will be able to add to our understanding of them and the period.

    A huge thanks to Shterna Friedman, whose editorial work resulted in a much better article.  Also, my paper is going to be part of a special issue of Critical Review on Foucault – the other papers should be appearing relatively soon.

  • By Gordon Hull

    Last time, I took the opportunity provided by a recent paper by Daniele Lorenzini to develop some thoughts on the relationship between Foucault’s thought and theorizing around epistemic injustice.  Lorenzini’s initial point, with which I agree fully, is that Fricker’s development of epistemic injustice is, on her own terms, incompatible with Foucault, because she wants to maintain a less historicized normative standpoint than Foucauldian genealogy allows.  Epistemic injustice, on Fricker’s reading, involves a distortion of someone’s true identity.  Lorenzini also suggests that Foucault’s late work, which distinguishes between an epistemic “game of truth” and a normative/political “regime of truth,” offers the distinction Fricker’s theory needs, by allowing one to critique the regime of truth that depends on a given game of truth.  In terms of Foucault’s earlier writings, he does not fully reduce knowledge to power, in the sense that it can be useful to analytically separate them.  Here I want to look at a couple of examples of how that plays out in the context of disciplinary power.

    Consider the case of delinquency, and what Foucault calls the double mode of disciplinary power (Discipline and Punish, 199): a binary division into two categories (sane/mad, etc.) and then the coercive assignment of individuals into one group or the other.  The core modern division is between normal and abnormal, and we have a whole “set of techniques and institutions for measuring, supervising and correcting the abnormal” (199).  The delinquent, then, is defined epistemically or juridically (in other words, as a matter of science or law; as I will suggest below, Foucault thinks that one of the ways that psychology instituted itself as a science was by successfully blurring the science/law distinction), and then things are done to her.  This is the sort of gap that epistemic injustice theory, at least in its testimonial version, needs: in Fricker’s trial example, there is the epistemic apparatus of “scientific” racism, and then there is the set of techniques that work during the trial.  Both of these can be targets of critique, but testimonial injustice most obviously works within the second of the two.

    (more…)

  • By Gordon Hull

    Those of us who have both made extensive use of Foucault and made a foray into questions of epistemic injustice have tended to sweep the question of the relation between the two theoretical approaches under the rug.  Miranda Fricker’s book, which has basically set the agenda for work on epistemic injustice, acknowledges a substantial debt to Foucault, but in later work she backs away from the ultimate implications of his account of power on the grounds that his historicism undermines the ability to make normative claims.  In this, her argument makes a fairly standard criticism of Foucault, whose “refusal to separate power and truth” she aligns with Lyotard’s critique of metanarratives (Routledge Handbook of Epistemic Injustice, 55).  As she describes her own project:

    “What I hoped for from the concept of epistemic injustice and its cognates was to mark out a delimited space in which to observe some key intersections of knowledge and power at one remove from the long shadows of both Marx and Foucault, by forging an on-the-ground tool of critical understanding that was called for in everyday lived experience of injustice … and which would rely neither on any metaphysically burdened theoretical narrative of an epistemically well-placed sex-class, nor on any risky flirtation with a reduction of truth or knowledge to de facto social power” (Routledge Handbook, 56).

    On this reading, then, Marxism relies too much on ideology-critique, on the one hand, and on privileging the position of women/the proletariat (or some other singular subject position), on the other.  Foucault, for his part, goes too far and reduces away the normative dimension altogether.

    In a new paper, Daniele Lorenzini addresses the Foucault/Fricker question head-on, centrally focusing on the critique of Foucault’s supposed excessive historicism.  Lorenzini’s contribution, to which I will return later, is to suggest that Foucault’s later writings (1980 and forward) distinguish between “games” of truth and “regimes” of truth.  The distinction is basically illustrated in the following sentence: “I accept that x and y are true, therefore I ought to do z.”  The game of truth is the epistemic first half of the sentence, and the “regime” of truth – the part that governs human behavior – is the second half, the “therefore I ought…”  On this reading, genealogy is about unpacking and bringing to light the tendency of the “therefore” to disappear as we are governed by its regime, and about unpacking the power structures that make it operate.  In other words, genealogy doesn’t collapse questions of truth and power; rather, it allows us to separate them by showing that a given game of truth does not entail the regime of truth that goes with it.

    (more…)

    I wrote a piece a little more than a year ago about how intellectual property rights are getting in the way of global vaccine equity, condemning a lot of people in lower-income countries to die.  There have been various initiatives to address the situation.  A piece in Nature by Amy Maxmen today highlights somewhat encouraging news on this front, in the form of a consortium of sites across Africa, Asia and South America working to develop mRNA vaccines and manufacturing capacity.  Part of the goal is to deal with Covid – but it’s also to redress the fact that many places are suffering not just from Covid, but from tuberculosis, HIV, malaria and other problems.  Developing local capacity will help get vaccines to those who need them, as well as prepare researchers in these countries to respond to new pandemics.

    Of course, this is a tremendous challenge, as Maxmen details.  It should surprise no one that patents are getting in the way:

    “Despite the hub’s efforts, next-generation mRNA vaccines might still be entangled in patent thickets if some components of the technology have been claimed by others. An astounding number of patents — estimated at more than 80 — surround the mRNA vaccines, according to one analysis. A thorny IP landscape isn’t as daunting for big companies with the capital to litigate, explains Tahir Amin, a lawyer and co-founder of the Initiative for Medicines, Access & Knowledge (I-MAK), a non-profit group based in New York City. Amin says that the hub could boldly move forward, too, and harness public condemnation if Moderna or other companies file a lawsuit. But this option is off the table because the Medicines Patent Pool vows not to infringe on patents. Indeed, the agency’s model relies on persuading pharmaceutical companies to voluntarily license their technologies to alternative manufacturers, often in exchange for royalty fees”

    Moderna appears to be a particularly bad offender:

    “Moderna did not respond to requests from Nature for comment. But in an interview with The Wall Street Journal, Moderna’s chief executive Stéphane Bancel said that the company won’t impede Afrigen’s work in South Africa; he made no mention of the 15 larger companies working with the hub. He added, “I don’t understand why, once we’re in an endemic setting when there’s plenty of vaccine and there’s no issue to supply vaccines, why we should not get rewarded for the things we invented.””

    Maybe it’s time to remind everyone of the amount of public money that supported Moderna’s vaccine development, or that the company is on track to record $19 billion in revenue this year (Maxmen does both in the article).  Bancel’s language invokes tired and slippery uses of “endemic” to minimize Covid.  All “endemic” means is that a disease spreads at a more-or-less constant (and deemed acceptable) rate, without big surges.  Since the world is still suffering from big surges of Covid, Bancel’s claim is straightforwardly false.  It also ignores the moral and medical issue: if large numbers of people are dying from something (Covid, tuberculosis), that is not acceptable, even if it comes at a predictably steady rate.

    But even if the claim about endemicity were true, the rest of the sentence would still be false.  Very few people in low-income countries have been vaccinated, which means that the claim that there’s “plenty of vaccine” is wrong.  It might be the case that there is plenty of vaccine here in the U.S., but if it’s not getting into the arms of Africans, then there is by definition not “plenty” of vaccine there, and it is also an issue to supply it: the measure of success is vaccinations, not vaccines in a warehouse.  Finally, Bancel pulls the oldest trick in the IP maximalist’s arsenal: equating the ability to extract monopoly rents with the ability to profit at all.  If you want to achieve social welfare, then pharma should get just enough IP protection to incentivize product development and not a penny more.

    Overcompensating Pharma mints billionaires, but it’s still social murder.