Category: Gordon Hull
-
By Gordon Hull
AI (machine learning) and people reach conclusions in different ways. This basic point has ramifications across the board, as plenty of people have said. I’m increasingly convinced that the gap between how legal reasoning works and how ML works is a good place both to tease out the differences, and to think…
-
By Gordon Hull
I have argued in various contexts that when we think about AI and authorship, we need to resist the urge to say that AI is the author of something. Authorship should be reserved for humans, because authorship is a way of assigning responsibility, and we want humans to be responsible for the…
-
By Gordon Hull
In a recent paper in Ethics and Information Technology, Paula Helm and Gábor Bella argue that current large language models (LLMs) exhibit what they call language modeling bias, a series of structural and design issues that serve as a significant and underappreciated form of epistemic injustice. As they explain the concept, “A…
-
By Gordon Hull
In previous posts (one, two, three), I’ve been exploring the issue of what I’m calling the implicit normativity in language models, especially those that have been trained with RLHF (reinforcement learning from human feedback). In the most recent one, I argued that LLMs are dependent on what Derrida called iterability in language,…
-
By Gordon Hull
Large Language Models (LLMs) are well known to “hallucinate,” which is to say that they generate text that is plausible-sounding but completely made up. These difficulties are persistent, well-documented, and well-publicized. The basic issue is that the model is indifferent to the relation between its output and any sort of referential truth. In other…
-
By Gordon Hull
There’s been a lot of concern about the role of language models in research. I had some initial thoughts on some of that based around Foucault and authorial responsibility (part 1, part 2, part 3). A lot of those concerns have to do with the role of ChatGPT or other LLM-based product…
-
By Gordon Hull
Last time, I followed a reading of Kathleen Creel’s recent “Transparency in Complex Computational Systems” to think about the ways that RLHF (Reinforcement Learning from Human Feedback) in Large Language Models (LLMs) like ChatGPT necessarily involves an opaque, implicit normativity. To recap: RLHF improves the models by involving actual humans (usually gig…
-
By Gordon Hull
This is somewhat circuitous – but I want to approach the question of Reinforcement Learning from Human Feedback (RLHF) by way of recent work on algorithmic transparency. So bear with me… RLHF is currently all the rage in improving large language models (LLMs). Basically, it’s a way to try to deal with…
-
Another case percolating through the system, this one about Westlaw headnotes. The judge basically ruled against a series of motions for summary judgment, which means that the case is going to a jury. Discussion here (link via Copyhype).
-
This article from Gizmodo reports on research done over at Mozilla. Newer cars – the ones that connect to the internet and have lots of cameras – are privacy disasters. Here’s a paragraph to give you a sense of the epic scope of the disaster: “The worst offender was Nissan, Mozilla said. The carmaker’s privacy…
