• By Gordon Hull

    Last time, I offered a quick synopsis of Bernard Dionysius Geoghegan’s excellent new book Code.  Here, I’d like to track one specific Foucault reference in it.  Geoghegan takes Lévi-Strauss’s Savage Mind as a central text in the ambivalence French theorists came to feel about American communication theories, and he notes that the book “occasioned a broader reassessment of the human sciences marked by a new ascent of ‘coding’ as a key concept poised to dislocate and perhaps dissolve, existing scientific hierarchies” (152).  He adds:

    “Learning to code – that is, to cast cultural objects in terms of codes, relays, patterns, and systems – did more than reframe existing knowledge in cybernetic jargon. It also reflected a growing cynicism toward existing cultural and scientific nodes. From the 1960s onward, the semiotic task of deciphering obscure ‘codes’ in culture, politics, and science overtook the structuralist project. This crypto-structuralism shifted emphasis from the neutral connotations of ‘communication’ to antagonistic notions of code …. If these terms furthered the technocratic project of US foundations, they also set in motion a radical critique of scientific neutrality. Beneath the neutral science, something ‘savage’ lurked.” (152-3).

    Geoghegan cites Lacan, Barthes and the Tel Quel group (on which see Danielle Marx-Scouras’s excellent study).  He also quietly footnotes Foucault’s “Message ou bruit [message or noise],” citing it “for a critical discussion of these same terms by Foucault” (215n81).

    (more…)

  • By Gordon Hull

    I made myself wait until I was settled into the summer to read Bernard Dionysius Geoghegan’s Code: From Information Theory to French Theory.  It was absolutely worth the wait. Code offers a look into the role of cybernetic theory in the development of postwar French theory, especially structuralism and what Geoghegan calls “crypto-structuralism.”  The story starts in the Progressive Era U.S., with the emergence of technocratic forms of government and expertise “against perceived threats of anarchy and communism” and the “progressive hopes to submit divisive political issues for neutral technical analysis” (25).  This governance as depoliticization then generates the postwar emphasis on cybernetics and information theory.  Along the way, it picks up and reorganizes psychology and anthropology in figures like Margaret Mead and Gregory Bateson, as the emerging information theory disciplines are given extensive funding by “Robber Baron philanthropies” (and later, covertly of course, by the CIA).  This then sets the stage for postwar cybernetic theory and the careful cultivation (again, substantially by philanthropies and the CIA) of intellectuals like Roman Jakobson and Lévi-Strauss.

    This is not a story I’d heard before – and I get the impression that almost no one has, at least not in philosophy, which is why this book is so important – and the details are fascinating.  It makes a compelling case that those of us who work on postwar French theory need to get a handle on cybernetic theory in particular, especially because of the link to structuralism (more on that in a moment).  It calls to mind some of Katherine Hayles’s work – I’m thinking of How We Became Posthuman and My Mother Was a Computer – that probably needs rereading in this context.

    (more…)

  • In the face of the general disaster that is the Supreme Court Republican majority’s ongoing power grab in the student loan case, I worry that the damage from the LGBTQ wedding website decision, 303 Creative LLC v. Elenis, will get overlooked.  It seems to me, based mainly on a reading of Justice Sotomayor’s dissent, that the real forerunner of 303 Creative is a case mentioned nowhere in the decision or dissent: Burwell v. Hobby Lobby (2014).  Recall that in Burwell, the Court ruled that the Hobby Lobby Corporation could not be compelled by the Affordable Care Act to provide contraceptive coverage as part of its employees’ healthcare coverage, on account of the corporation’s religious beliefs.  At the time, I noted that Hobby Lobby seemed very happy to avail itself of things like police and fire protection.  I don’t usually quote myself in blog posts, but here’s what I said at the time:

    “Hobby Lobby is a large, big-box retail chain that employs over 13,000 people.  If those people (or others like them) didn’t exist or refused to work for Hobby Lobby, the corporation would go out of business immediately and the owners would have to find something else to do.  Hobby Lobby, Inc. takes advantage of the publicly-provided roads that its employees, managers, and customers take to get to its stores and that its owners use to get to their corporate offices.  Those offices were erected with the protection of enforceable building codes that make sure they don’t fall down, and that try to make sure that everyone can evacuate them in the event of a fire.  Hobby Lobby, Inc. also takes advantage of municipally provided services, including the installation of stormwater systems that deal with the massive runoff caused by big-box stores’ parking lots.  Hobby Lobby, Inc. also takes advantage of local police and fire services that protect their investment in their stores.  All of these things are provided substantially by property taxes paid by everyone living in the municipalities where the owners exercise their freedom to open a store.  Hobby Lobby, Inc. also freely avails itself of services provided by state and federal taxes, such as the Interstate highways on which it can transport its goods (highways which have to be widened at great public expense when suburbanization creates new local markets for its stores).  Hobby Lobby, Inc. also has no moral objections to taking advantage of the national defense system that keeps its stores safe from foreign intervention, or the publicly funded legal system that allowed them to challenge the ACA and that enables them to recover money from those who owe them.  No, in general, it seems that Hobby Lobby, Inc. depends quite a lot on the society in which it does business, even as its owners seek to excuse themselves from its rules.  In the meantime, Hobby Lobby’s owners also take advantage of the legal structure governing corporations (Hobby Lobby, Inc. isn’t a sole proprietorship!), such as the fact that they aren’t personally liable for any bad things that their corporation might do.  In other words, Hobby Lobby’s owners get to identify with the corporation when it’s a matter of religious belief, but not when doing so is inconvenient.”

    It was this line of thought that I most remembered when reading Justice Sotomayor’s dissent in 303 Creative.  She notes that:

    (more…)

  • Large Language Models (LLMs) like ChatGPT are well known to hallucinate – to make up answers that sound pretty plausible, but have no relation to reality.  That of course is because they’re designed to produce text that sounds about right given a prompt.  What sounds kind of right may or may not be right, however.  ChatGPT-3 made up a hilariously bad answer to a Kierkegaard prompt I gave it and put a bunch of words into Sartre’s mouth.  It also fabricated a medical journal article to support a fabricated risk of oral contraceptives. ChatGPT-4 kept right on making up cites for me.  It has also defamed an Australian mayor and an American law professor.  Let’s call this a known problem.  You might even suggest, following Harry Frankfurt, that it’s not so much hallucinating as it is bullshitting.

    Microsoft’s Bing chatbot-assisted search puts footnotes in its answers.  So it makes sense to wonder if it also hallucinates, or if it does better.  I started with ChatGPT today and asked it to name some articles by “Gordon Hull the philosopher.”  I’ll spare you the details, but suffice it to say it produced a list of six things that I did not write.  When I asked it where I might read one of them, it gave me a reference to an issue of TCS that included neither an article by me nor an article of that title.

    So Bing doesn’t have to be spectacular to do better!  I asked Bing the same question and got the following:

    (more…)

  • Recall that a couple of months ago ChatGPT did a total face plant on distinguishing Kierkegaard's knight of faith from the knight of infinite resignation.  Well, with the fullness of time and an upgrade, it's a lot better now: (screen grabs below the fold)

    (more…)

  • By Gordon Hull

    In the previous two posts (here and here) I’ve developed a political account of authorship (according to which whether we should treat an AI as an author for journal articles and the like is a political question, not one about what the AI is, or whether its output resembles human output), and argued that AIs can’t be properly held accountable.  Here I want to argue that AI authorship raises social justice concerns.

    That is, there are social justice reasons to expand human authorship that do not apply to AI.  As I mentioned in the original post, researchers like Liboiron are trying to make sure that the humans whose effort makes papers possible get credit.  In a comment on that post, Michael Muller underlines that authorship interacts with precarity in complex ways.  For example, “some academic papers have been written by collectives. Some academic papers have been written by anonymous authors, who fear retribution for what they have said.”  Many authors have precarious employment or political circumstances, and sometimes works are sufficiently communal that entire communities are listed as authors. There are thus very good reasons to use authorship strategically when minoritized or precarious people are in question.  My reference to Liboiron is meant only to indicate the sort of issue at stake in the strategic use of authorship to protect minoritized or precarious individuals, and to gesture at the more complex versions of the problem that Muller points to.  The claim I want to make here is that, as a general matter, AI authorship isn’t going to help those minoritized people, and might well make matters worse.

    If anything, there’s a plausible case that elevating an AI to author status will make social justice issues worse.  There are at least two ways to get to that result, one specific to AI and one more generally applicable to cognitive labor.

    (more…)

  • As if Sartre didn't produce enough words all by himself!

    ChatGPT's response to the following prompt is instructive for those of us who are concerned about ChatGPT being used to cheat.  Read past the content of the answer to notice the made-up citations.  The "consciousness is a question…" line is in fact in the Barnes translation of Being and Nothingness, but it actually appears in the glossary provided by the translator (so it's not on p. 60 – it's on p. 629).  Where did the AI find this?  I'm guessing on the Wikipedia page for the book, which has a "special terms" section that includes the quote (and attributes it to Barnes.  I should add as an aside that Barnes puts it in quote marks, but doesn't reference any source).  The "separation" quote is, as far as I can tell, made up out of whole cloth.  It does sound vaguely Sartrean, but it doesn't appear to be in the Barnes translation, and I can't find it on Google.  It's also worth pointing out that neither quote is from the section about the cafe – both page numbers are from the bad faith discussion.

    I don't doubt that LLMs will get better (etc etc etc) but for now, bogus citations are a well-known hallmark of ChatGPT.  Watch it make up quotes from Foucault (and generally cause him to turn over in his grave) here.

    (more…)

  • By Gordon Hull

    As I argued last time, authorship is a political function, and we should apply that construction of it when asking whether an AI should be considered an author.  Here is a first reason for doing so: AI can’t really be “accountable.”

    (a) Research accountability: The various journal editors all emphasize accountability.  This seems fundamentally correct to me.  First, it is unclear what it would mean to hold AI accountable.  Suppose the AI fabricates some evidence, or cites a non-existent study, or otherwise commits something that, were a human to do it, would count as egregious research misconduct.  For the human, we have some remedies that ought, at least in principle, to discourage such behavior.  A person’s reputation can be ruined, their position at a lab or employer terminated, and so on.  None of those incentives would make the slightest difference to the AI.  The only remedy that seems obviously available is retracting the study.  But there are at least two reasons that’s not enough.  First, as is frequently mentioned, retracted studies still get cited.  A lot.  Retraction Watch even keeps a list of the top-10 most cited papers that have been retracted.  The top one right now is an NEJM paper published in 2013 and retracted in 2018; it had 1905 cites before retraction and 950 after.  The second-place paper is a little older, published in 1998 and retracted in 2010, and has been cited more times since its retraction than before.  In other words, papers that are bad enough to be actually retracted cause ongoing harm; a retraction is not a sufficient remedy for research misconduct.  If nothing else, whatever AI comes along is going to find and cite them.  And all of this assumes something we know to be false, which is that all papers with false data (etc.) get retracted.  Second, it’s not clear how retraction disincentivizes an AI any more than any other penalty does.  In the meantime, there is at least one good argument in favor of making humans accountable for the output of an AI: it incentivizes them to check its work.

    (more…)

  • You know how sometimes your students don't do the reading?  And then how, when you give them a writing prompt based on it, they try to guess their way to a good answer from the everyday meaning of the words in the prompt?  And how, sometimes, the outcome is spectacularly, wonderfully wrong?

    Well, I don't know what else ChatGPT can do, but it can do an uncannily good imitation of such a student!

    (oh, and like that student, it blew through the wordcount, apparently on the theory that a lot of words would make up for a lack of reading)

    This was a prompt from my existentialism class (the instructions also tell them they have to quote the text, but I omitted that here, because we already know ChatGPT can't do that).  It's two images because I am too technically incompetent to capture the longer-than-a-screen answer in one image:

    (more…)

  • The philosophy MA program at UNC Charlotte has a number of funded lines for our two-year degree.  We're an eclectic, practically-oriented department that emphasizes working across disciplines and philosophical traditions.  If that sounds like you, or a student you know – get in touch!  You can email me (ghull@uncc.edu), though for a lot of questions I'll pass you along to our grad director, Andrea Pitts (apitts5@uncc.edu).  Or there's a QR code in the flyer below.

     

    MA Flyer

    (more…)