Category: Big Data

  • Here's the current draft of a new paper – "Translating Privacy for Data Subjects."  And here's the abstract: This essay offers a theoretical account of one reason that current privacy regulation fails.  I argue that existing privacy laws inherit a focus on juridical subjects, using language about torts and abstract rights.  Current threats to privacy,…

  • By Gordon Hull Last time, I looked at the Laurence Tribe article that was the original source of the blue bus thought experiment.  Tribe’s article is notable for its defense of legal reasoning and processes against the introduction of statistical evidence in trials.  He particularly emphasizes the need for legal proceedings to advance causal accounts,…

  • By Gordon Hull AI (machine learning) and people reach conclusions in different ways.  This basic point has ramifications across the board, as plenty of people have said.  I’m increasingly convinced that the gap between how legal reasoning works and how ML works is a good place both to tease out the differences and to think…

  • By Gordon Hull Last time, I followed a reading of Kathleen Creel’s recent “Transparency in Complex Computational Systems” to think about the ways that RLHF (Reinforcement Learning from Human Feedback) in Large Language Models (LLMs) like ChatGPT necessarily involves an opaque, implicit normativity.  To recap: RLHF improves the models by involving actual humans (usually gig…

  • By Gordon Hull This is somewhat circuitous – but I want to approach the question of Reinforcement Learning from Human Feedback (RLHF) by way of recent work on algorithmic transparency.  So bear with me… RLHF is currently all the rage in improving large language models (LLMs).  Basically, it’s a way to try to deal with…

  • This article from Gizmodo reports on research done over at Mozilla.  Newer cars – the ones that connect to the internet and have lots of cameras – are privacy disasters.  Here’s a paragraph to give you a sense of the epic scope of the disaster: “The worst offender was Nissan, Mozilla said. The carmaker’s privacy…

  • By Gordon Hull Large Language Models (LLMs) like ChatGPT burst into public consciousness sometime in the second half of last year, and ChatGPT’s impressive results have led to a wave of concern about the future viability of any profession that depends on writing, or on teaching writing in education.  A lot of this is hype,…

  • By Gordon Hull Last time, I introduced a number of philosophy of law examples in the context of ML systems and suggested that they might be helpful in thinking differently, and more productively, about holding ML systems accountable.  Here I want to make the application specific. So: how do these examples translate to ML and…

  • By Gordon Hull AI systems are notoriously opaque black boxes.  In a now-standard paper, Jenna Burrell dissects this notion of opacity into three versions.  The first is when companies deliberately hide information about their algorithms, to avoid competition, to maintain trade secrets, and to guard against gaming of their algorithms, as happens with Search Engine Optimization…

  • By Gordon Hull As a criterion for algorithmic assessment, “fairness” has encountered numerous problems.  Many of these emerged in the wake of ProPublica’s argument that Broward County’s pretrial detention system, COMPAS, was unfair to black suspects.  To recall: In 2016, ProPublica published an investigative piece criticizing Broward County, Florida’s use of a software program called…