Category: Uncategorized
- By Gordon Hull. As part of thinking through the implications of Lydia Liu’s papers (here and here) demonstrating a Wittgensteinian influence on the development of large language models, I’ve made a detour into Derrida’s critique of writing (my earlier parts: one, two, three). My initial suggestion last time was that Derrida’s discussion is designed to…
- By Gordon Hull. I’ve been looking (part 1, part 2) at a couple of articles by Lydia Liu (here and here) demonstrating a Wittgensteinian influence on the development of large language models. Specifically, Wittgenstein’s emphasis on the meaning of words as determined by their contexts and placement relative to other rules gets picked up by…
- I’ve been loosely tracking the AI and copyright cases, most notably the Thaler litigation, where Thaler keeps losing the argument that a work created solely by an AI should get copyright protection. To summarize: every court that has ruled on the question has held that only work involving humans can get copyright protection. As I said at the time, I…
- By Gordon Hull. There’s an emerging literature on Large Language Models (LLMs, like ChatGPT) that basically argues that they undermine a bunch of our existing assumptions about how language works. As I argued in a paper a year and a half ago, there’s an underlying Cartesianism in a lot of our reflections on AI, which…
- The NSF had attempted to reduce indirect costs (F&A) on all future grants to 15%, a somewhat more coherent version of the NIH's effort to do so for all ongoing and future grants. A federal court today enjoined the rate cut and vacated the new rule, finding that "National Science Foundation’s 15% Indirect Cost Rate…
- I wish I’d come up with that title, but it actually belongs to a new study led by Natalia Kosmyna of the MIT Media Lab. The study integrates brain imaging with questions and behavioral data to explore what happens when people write essays using large language models (LLMs) like ChatGPT. I haven’t absorbed it all…
- In a recent paper, Brett Frischmann and Paul Ohm introduce the idea of “governance seams,” which are frictions and inefficiencies that can be designed into technological systems for policy ends. In this regard, “Governance seams maintain separation and mediate interactions among components of sociotechnical systems and between different parties and contexts” (1117). Their first example…
- I desperately and truly wish that I'd made this up. Alas, the Verge reports: "Economist James Surowiecki quickly reverse-engineered a possible explanation for the tariff pricing. He found you could recreate each of the White House’s numbers by simply taking a given country’s trade deficit with the US and dividing it by their total exports…
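The reverse-engineered calculation Surowiecki describes can be sketched in a few lines. This is a minimal illustration of the ratio as the post reports it (trade deficit divided by total exports); the function name and the figures are hypothetical, chosen only to make the arithmetic concrete.

```python
def surowiecki_tariff_estimate(trade_deficit, total_exports):
    """Reverse-engineered ratio described in the post:
    a country's trade deficit with the US divided by its
    total exports to the US. Inputs in the same currency unit."""
    return trade_deficit / total_exports

# Hypothetical figures (billions USD), for illustration only:
# a $50B deficit against $100B of exports yields a ratio of 0.5.
ratio = surowiecki_tariff_estimate(50.0, 100.0)
print(ratio)  # 0.5
```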
- Here I want to complete my review of federal legal precedents for the Supreme Court’s sudden invocation of “injury in fact” language to understand judicial standing in its 1970 Data Processing decision (recall the earlier installments: first, second, third). The first one explains the issue; if you want to escape my rummaging through the archive,…
- I want to take a break from judicial standing doctrine to note a recent and helpful paper by Emily Sullivan and Atoosa Kasirzadeh about explainable AI. Explainable AI is a research agenda – there are a lot of papers and techniques (for a current lit review, see here) – that is designed to get at a…
