• Last week we heard the latest installment in the prophesied AI jobs apocalypse.  This time, it was Dario Amodei, the CEO of Anthropic, who told Axios that “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years” (italics original).  Axios adds: “Imagine an agent writing the code to power your technology, or handle finance frameworks and analysis, or customer support, or marketing, or copy editing, or content distribution, or research. The possibilities are endless — and not remotely fantastical. Many of these agents are already operating inside companies, and many more are in fast production …. Make no mistake: We've talked to scores of CEOs at companies of various sizes and across many industries. Every single one of them is working furiously to figure out when and how agents or other AI technology can displace human workers at scale. The second these technologies can operate at a human efficacy level, which could be six months to several years from now, companies will shift from humans to machines.”  The piece then argues that this will be different from previous technological disruptions because of the speed with which it will occur.

Someone should tell that to the workers put out of work all but overnight by the development of machinery in the nineteenth century, as detailed by Marx (who helpfully notes in the machinery chapter of Capital that the drive to full, steam-engine-driven automation is motivated by the inability of capitalists to extract any more surplus value from over-exploited workers).  One should also remember, with Jathan Sadowski, that these sorts of proclamations are in part designed to create their own reality, such that “the power of expectations can have a disciplining effect on what people think” and that “the capitalist system is designed to pummel us into submission, preventing us from imagining life could be any other way, let alone allowing us to go on the offensive” (The Mechanic and the Luddite, 196, 207).  When Axios adds that “this will likely juice historic growth for the winners: the big AI companies, the creators of new businesses feeding or feeding off AI, existing companies running faster and vastly more profitably, and the wealthy investors betting on this outcome,” one can thus hardly be too surprised.

Here, I want to take a slightly different angle, however, and think a little bit about the kinds of jobs that are supposed to go away.  It’s hard not to notice the parallels between the Axios list and this one:

    (more…)

  • By Gordon Hull

Over a couple of posts (first, second), I’ve used a recent paper by Brett Frischmann and Paul Ohm on “governance seams” – basically, inefficiencies (sometimes deliberate) in sociotechnical systems that are moments for governance – to think about what I called “phenomenological seams,” which are corresponding disruptions in our experience of the world.  I suggested that the combination of the two ideas could be usefully explored by way of Albert Borgmann’s criticism of the stereo as a way to listen to music, rather than direct instrumentation, against the background of Heidegger’s account of breakdowns in our phenomenological experience that occur when tools don’t work as expected.

    Borgmann’s objections are notable because of how the stereo, as opposed to the instrument, recalibrates the seams that structure the boundaries of the home.  This is obvious enough in retrospect, in a world with Spotify and wireless earbuds, but the connection shows a couple of things.

    First, as Heidegger’s examples also indicate, there is a strong connection between technological governance and phenomenology.   Frischmann and Ohm emphasize that governance seams can be there both to announce their own presence and to achieve transparency. This interruption makes transparent the phenomenological relations that govern a particular experience in the same way that a broken hammer does.  For all that’s wrong with them, the GDPR-mandated cookie notices on websites try to change the experience of websites and nudge users to think about the amount of data they are surrendering.  This example also shows the ways that phenomenological experience can limit the regulatory effect of seams: privacy is way too much work, and users are (as a result) cynical, confused, and disillusioned.  But the cookie requirement establishes a new relation to the websites, precisely because the regulatory seam has phenomenological import.

    (more…)

• The House budget bill is deeply stupid.  No, I don’t mean the massive tax cut extensions for people who don’t need them, done on the backs of food and medical care for the poor, although it also does that and it’s stupid.  I mean the provision that bans states from regulating AI.  Tucked inside is a provision that says “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”  This is making the news if you’re in the right circles, but like so much of the deluge coming out of Washington these days, it’s not getting the national attention it deserves.  Over on Lawfare, Katie Fry Hester and Gary Marcus have the details.  The key point is that the law attempts to pre-empt all state AI regulations.  It would probably also take out big chunks of state privacy laws, just as we’re starting to see them emerge.

Hester and Marcus emphasize some of the problems: the bill likely violates the 10th Amendment, it is absolutely a policy change of the sort that the Senate Parliamentarian should rule out-of-bounds for a reconciliation bill, and it’s deeply unpopular: the public is worried about AI and wants it regulated.  A standard debate about state vs. federal regulation pits the Brandeis “laboratories of democracy” against the need for uniform federal rules.  This is a reasonable debate, and which side you’d favor probably depends on the topic.  They are not necessarily exclusive choices either, as federal regulation can set floors and ceilings that states may go above/below, and sometimes federal regulations develop out of state rules.  Federal copyright law preempts state law, and that system makes obvious sense.  State laws around gambling or alcohol make sense given the diversity of local cultures.

    That debate from your civics class is however not what this is about.  The problem of course is not that we’d have federal policy instead of state.  The problem is that there is absolutely no chance that this Congress will pass meaningful AI regulation, so the choice is between a patchwork of state rules and nothing.  Congress hasn’t even passed meaningful privacy regulation yet, and right now they’re in thrall to an autocrat who issued an Executive Order in his first week in office directing agencies to “suspend, revise, or rescind such actions, or propose suspending, revising, or rescinding such actions” taken in compliance with the Biden Administration’s (bare minimum) efforts at AI regulation, in the name of “revok[ing] certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.”

    (more…)

• Last time, I set up a topic by reading Brett Frischmann and Paul Ohm’s “Governance Seams.”  Governance seams are frictions and inefficiencies that can be designed into technological systems for policy ends.  In this regard, “Governance seams maintain separation and mediate interactions among components of sociotechnical systems and between different parties and contexts” (1117).  Here I want to suggest that governance seams have a very close relation to phenomenological ones.  To get there, let me take a detour into an older philosophy of technology paper, Albert Borgmann’s “Moral Significance of the Material Culture.”  Borgmann is concerned with what he takes to be the way that moral and ethical theory ignores material culture, whether it emphasizes theory or practice.   Via a paper by Csikszentmihalyi and Rochberg-Halton, he arrives at a distinction between things that he calls “commanding” and “disposable.”  The moral complaint is about the “decline of commanding and the prominence of disposable reality” (294).  Following them, Borgmann distinguishes between a musical instrument and a stereo.

    “A traditional musical instrument is surely a commanding thing,” he writes:

    “It is such simply as a physical entity, finely crafted of wood or metal, embodying centuries of development and refinement, sometime showing the very traces of its service to many generations. An instrument particularly commands the attention of the student who, unless she is a prodigy, must through endless and painstaking practice adjust her body to the exacting requirements of this eminently sensitive thing.” (294)

    After some more similar description, emphasizing the multisensory experience of witnessing someone play an instrument, he turns to the stereo.  Certainly a “stereo produces music as well or, in fact, much better” and some stereos are big.  Nonetheless, “as a thing to be operated, a stereo is certainly not demanding. Nor do we feel indebted to its presence the way we do when we listen to a musician.  We respect a musician, we own a stereo” (295).  The stereo is on the rise, perhaps because “the history of the technology of recorded music is the history of obliging ever more fully the complaint about the burden and confinement of live music,” or “more positively” it is a “promise to provide music freely and abundantly” which is tied to “the promise of general liberty and prosperity – the promise that inaugurated the modern era” (295).

    (more…)

  • In a recent paper, Brett Frischmann and Paul Ohm introduce the idea of “governance seams,” which are frictions and inefficiencies that can be designed into technological systems for policy ends.  In this regard, “Governance seams maintain separation and mediate interactions among components of sociotechnical systems and between different parties and contexts” (1117).  Their first example is a university’s procedure for anonymous exams.  There, a number of different friction points are added to make sure that professors do not know whose exams they are grading: students receive unique identifiers from the registrar and use only these on the exams; they type but do not write answers; once the exams are scored, the registrar matches the numbers back with individual student names; and so forth.  The professor will likely not be in the room during the exam, so the university will have to provide a neutral proctor.  The exam might also take place in a specified location.  There will also be rules and penalties designed to ensure that none of the seams are crossed without permission.  Together, these governance seams design the system for fairness, or at least to eliminate one potential source of bias in grading.  They’re also pretty inefficient in that they require a bunch of resources be allocated to them, but places with that sort of anonymous grading figure it’s worth it for the fairness bump.

    As the paper goes on to argue, such governance seams are ubiquitous and important, because they enable us to design sociotechnical systems to facilitate certain outcomes that might otherwise not happen.  That is, seams open a space for governance:

    (more…)

• As a final installment of reviewing some older “injury in fact” cases, I’d like to look at a few older state libel cases, because the distinction emerges especially clearly in them.  A North Carolina case, for example, noted that “he who publishes slanderous words even as those of a third person with the intent, (to be collected from the mode, extent and circumstances of the publication,) that the charges should be believed, does an injury in fact to the person slandered and ought to answer for it” (Hampton v. Wilson, 15 N.C. 468, 470 (1834)).  Here are a few cases in more detail.  The nineteenth-century gender politics is really helpful in seeing how their minds worked on defamation per se.


    (a) Chastity in Iowa

A pair of Iowa cases are particularly clear.  In Abrams v. Foshee, the court was asked to rule on whether accusing a woman of having had an abortion was actionable as slander; its reasoning is worth quoting at length:

    “To maintain an action of slander, the consequence of the words spoken, must be to occasion some injury or loss to the plaintiff, either in law or fact. As the declaration in this case, claims no special damages, or a loss or injury, in fact, we are left to inquire whether the charges referred to in the instructions refused, was of such a character as to amount to an injury in law. To determine this, it becomes material to ascertain in what cases this action may be maintained, without proof of special damages. Starkie, in his work on Slander, page 9, lays down the rule, that such action may be maintained "when a person is charged with the commission of a crime; when an infectious disorder is imputed; and when the imputation affects the plaintiff in his office, profession, or business." In this case, we only need examine the rule so far as it relates to the charge of a crime. And what is that rule? In Cox and wife v. Bunker and wife, Morris, 269, the Supreme Court of this territory, recognized the rule laid down in Miller v. Parish, 25 Mass. 384, 8 Pick. 384, as the proper one. And in that case it is said, that " whenever an offense is charged, which if proved, may subject the party to a punishment, though not ignominious, but which brings disgrace upon the party falsely accused, such an accusation is actionable. And this is, perhaps, as correct, and at the same time as brief a statement of the general rule, as has been given. For while the rule is variously stated, by different authors and judges, yet in all of them, it is laid down as necessary that the charge shall impute a punishable offense.” (Abrams v. Foshee, 3 Iowa 274, 277-8 (1856)).

That is, if the false statement would have subjected the victim to legal punishment had it been true, it was considered slander per se – actionable in itself, independent of any damages sustained.  In 1843, “willful killing of an unborn quick child, by an injury, etc., was made manslaughter” (278).  This statute was repealed in 1851, so abortion was not a crime.  Plaintiffs urged that the fetus was a “human being” and thus subject to murder.  The Court, at length, disagreed, citing both statute and common law precedents (including Coke and Blackstone) to the effect that abortion was not “murder” even if it were a misdemeanor or otherwise bad.

    (more…)

  • I desperately and truly wish that I'd made this up.  Alas, the Verge reports:

    "Economist James Surowiecki quickly reverse-engineered a possible explanation for the tariff pricing. He found you could recreate each of the White House’s numbers by simply taking a given country’s trade deficit with the US and dividing it by their total exports to the US. Halve that number, and you get a ready-to-use “discounted reciprocal tariff.” The White House objected to this claim and published the formula it says that it used, but as Politico points out, the formula looks like a dressed-up version of Surowiecki’s method. In case you weren’t sure, Surowiecki calls this approach “extraordinary nonsense.” So why did Trump’s team use it? Well, like plenty of people who’ve realized their homework is due in three hours’ time, it seems like they may have been tempted by AI."

    Wait, what?

    "A number of X users have realized that if you ask ChatGPT, Gemini, Claude, or Grok for an “easy” way to solve trade deficits and put the US on “an even playing field”, they’ll give you a version of this “deficit divided by exports” formula with remarkable consistency. The Verge tested this with the phrasing used in those posts, as well as a question based more closely on the government’s language, asking chatbots for “an easy way for the US to calculate tariffs that should be imposed on other countries to balance bilateral trade deficits between the US and each of its trading partners, with the goal of driving bilateral trade deficits to zero.” All four platforms gave us the same fundamental suggestion.

There is some variation. Grok and Claude specifically suggested halving the tariff figure to generate what Grok calls a “reasonable” result, much like Trump’s “discount” idea. Ask for a 10 percent baseline tariff and the systems also disagree on whether that should be added to the total tariff rate or not. But answers from across the four chatbots have more similarities than differences."
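For concreteness, the reverse-engineered formula is simple enough to sketch in a few lines of Python.  This is only an illustration of the “deficit divided by exports, then halved” calculation described above; the function name, the way the 10 percent floor is applied, and the numbers are my own assumptions, not anything published by the White House.

```python
def reciprocal_tariff(trade_deficit: float, total_exports: float,
                      baseline: float = 0.10) -> float:
    """Sketch of the reported formula: halve (deficit / exports),
    subject to an assumed 10% baseline floor."""
    raw = trade_deficit / total_exports   # deficit divided by exports
    discounted = raw / 2                  # the "discount": halve it
    return max(baseline, discounted)

# Hypothetical country: $50B trade deficit on $100B of exports to the US
rate = reciprocal_tariff(50e9, 100e9)
print(f"{rate:.0%}")  # prints 25%
```

Note that, as the Verge passage says, the chatbots themselves disagreed on whether the baseline should be a floor or an add-on; the floor here is just one reading.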

    As I write this, the Dow Jones is down 3.98%. 

• I’ve been indirectly pursuing the question of the problems faced by privacy plaintiffs in data cases by looking at the origins of the Supreme Court’s standing doctrine.  Basically, plaintiffs have to show an “injury in fact,” and courts often find privacy harms not to meet this standard.  Although presented as dating from time immemorial, the injury in fact requirement was actually announced rather abruptly in 1970 (all of this is part 1).  I’ve been exploring the historical antecedents that help us understand what that language implies – in a very early Supreme Court case (part 2), in other federal case law (part 3), and in federal cases about the Administrative Procedure Act (part 4).  Here I want to extend the genealogy into some early state cases; I’ll draw a somewhat arbitrary cutoff at 1930.  This time I’ll look at a general potpourri of cases. Next time I want to specifically look at a few libel cases because the language is especially clear in them.  I don’t claim this to be exhaustive (and I’m ignoring some of the cases around trusts and deeds because the facts in them are often very confusing), but I think it collectively paints a pretty good picture of what “injury in fact” connoted in Data Processing.

On the whole, the cases point to the legal vs. non-legal harm distinction I’ve been developing.  As the New Jersey Supreme Court used the concept in an estate case, “there was no injury, in fact or in contemplation of law, to prevent in this case the merger” of the estates (Den ex dem. Wills v. Cooper, 25 N.J.L. 137, 159 (1855)).

    (more…)

The Federal Circuit has affirmed the denial of copyright protection to an AI-generated image on the grounds that copyright requires a human author.  As far as I know this was the expected outcome; I certainly think it’s correct.  I talked about the case a bit and made a couple of policy arguments against AI copyright here, when the lower-court ruling came out.

The appellate decision lists several reasons AI cannot be an author: (1) copyright authorship is premised on the capacity to hold property, which AI cannot; (2) copyright duration is tied to the author’s lifespan; (3) copyright includes inheritance conditions, and machines don’t have heirs; (4) copyright transfer requires a signature, but “machines lack signatures, as well as the legal capacity to provide an authenticating signature;” (5) authors are protected regardless of their “nationality or domicile,” but machines have neither; (6) authors have intentions whereas “Machines lack minds and do not intend anything;” (7) when the Copyright Act does talk about machines, it always talks about them as tools.

    As the court summarizes:

    “All of these statutory provisions collectively identify an “author” as a human being. Machines do not have property, traditional human lifespans, family members, domiciles, nationalities, mentes reae, or signatures. By contrast, reading the Copyright Act to require human authorship comports with the statute’s text, structure, and design because humans have all the attributes the Copyright Act treats authors as possessing. The human-authorship requirement, in short, eliminates the need to pound a square peg into a textual round hole by attributing unprecedented and mismatched meanings to common words in the Copyright Act.” (12)

    (more…)

• In a recent piece on Lawfare, Simon Goldstein and Peter N. Salib make the case that cooperation on AI is better than attempting some sort of AI race, contrary to what virtually all of the relevant policymakers in the US advocate.  Thus, in response to the Chinese DeepSeek model, US policymakers are doubling down on the idea that the US must “dominate” AI and win against its geopolitical rival.  Goldstein and Salib write:

    “In any high-stakes competition to obtain powerful military technology, the closer the game, the more sense it makes to declare a truce and cooperate. Cooperation can help to ensure that both superpowers obtain transformative AI around the same time. This preserves the current balance of power, rather than unsettling it and inviting extreme downside risks for both nations.”

    (more…)