• It seems that Trump is descending into his usual self-aggrandizement and racism – somebody must have slipped something into his Diet Coke for Monday's (semi) coherent performance.   Unfortunately, that's now worn off.  If you haven't seen it, the Washington Post skewers the blatant hypocrisy of Fox News.  Also, the Trump administration was given the opportunity to think about pandemic response during the transition from the Obama administration.  Most of the people involved are no longer with the administration, even the subset of them who managed to act interested.

    One of my arguments last time was that the U.S. social fabric was ill-equipped to handle COVID-19.  Here's a nice, short piece that condenses a lot of the reasons why the social safety net is a disaster.  For something more optimistic about the social fabric, read this.

    I also suggested that the point of suppression and curve flattening needed to be viewed as stalling for time.  I focused on the development of treatments; over in The Atlantic, Aaron E. Carroll and Ashish Jha point out that the time could be used to put in place a comprehensive testing regime that would enable more targeted suppression efforts, instead of the blanket lockdowns we have now.  They also emphasize that if we succeed in slowing the disease now, we urgently need to avoid developing a false sense of security from that fact.

    Finally, this is the first piece I've seen that details the gender dynamics of the coronavirus response.

     
  • As communal life comes screeching to a collective halt for the indefinite future (including not-quite-Italy-but-close bans on movement in San Francisco), and as public health officials go into “more lockdown is better” mode, it seems important to underline at least three things: (1) no one knows what they are doing, hence (2) these measures are a rough approximation of acting according to a maximin principle, and (3) they are not socially sustainable.  I'll conclude with what strike me as the only two currently-plausible off-ramps.

    Let’s start with (1).  John Ioannidis makes the best case I’ve seen for the uncertainty behind what we’re doing in a recent piece at STAT News.  The depth of our collective ignorance is astonishing.  Before we even get to Ioannidis’ argument, note that (a) everyone knows about the catastrophic, Trump-induced failure to get testing and surveillance going early on, and that this failure cannot ever be remedied because we will never again be in those early stages.  Trump’s and Fox News’ collective failures here should be treated as cases of criminal negligence.  But notice that this means we have no idea about basic facts like disease prevalence.  (b) Even if we had good testing, recent data suggests that lots and lots of COVID-19 is escaping detection.  How many cases?  We don’t know, of course, because these are undetected cases.  They may not be as infectious.  But whatever the number, it profoundly affects everything from the case fatality rate to how long a hypothetical “herd immunity” would take to develop.  (c) As Ioannidis points out, a lot of the data we do have isn’t worth much.  For example, the mortality data is meaningless.  I’ll let Ioannidis give you a sense of why:
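    The denominator problem in (b) is worth making concrete.  The case fatality rate we can actually compute divides deaths by detected cases, so if only a fraction of infections are ever detected, the implied infection fatality rate shrinks by that same fraction.  A toy calculation (all numbers hypothetical, chosen only to show the arithmetic):

```python
# Toy illustration of the denominator problem; all numbers are hypothetical.

def apparent_cfr(deaths, detected_cases):
    """Deaths divided by detected cases -- the only rate we can compute."""
    return deaths / detected_cases

def implied_ifr(deaths, detected_cases, detection_rate):
    """If only `detection_rate` of infections are detected, the implied
    infection fatality rate shrinks by that same factor."""
    total_infections = detected_cases / detection_rate
    return deaths / total_infections

deaths, detected = 34, 1000
print(apparent_cfr(deaths, detected))       # 0.034 -- a WHO-style 3.4%
print(implied_ifr(deaths, detected, 0.25))  # 0.0085 -- if 3 in 4 cases go undetected
```

    The same arithmetic is why nobody can currently say how long herd immunity would take: the detection rate is the unknown on which everything else depends.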

    (more…)

    We are here on the balcony because of a pandemic, a terrible pandemic that we have handled perfectly.  More testing for the virus is coming soon.  But also not everybody needs or should get testing.  Also we are going to cap off the petroleum reserve gas tank at a great price.  Many wonderful amazing people are here to tell you what an honor it is to work with The Leader.  The Leader’s ministers are now partnering with the private sector to produce a drive-through test.  Which will be available soon!  Then things will be more perfect, and more tremendousness is just around the corner.  When will the test be available?  Soon!  What about emergency legislation that the ministers have been working with the opposition party on?  The Opposition doesn’t want enough, doesn’t want what we want.  There are so many problems with the Opposition.

    The Leader would like to shake your hand.  If there were another solar eclipse, The Leader would gaze at it again.

    (more…)

  • By Gordon Hull

    The coronavirus outbreak showcases a lot of what is wrong with the Trump administration and the stupid, preening narcissist at its head (this is a president who claimed not to have known that flu kills thousands of people every year – but whose own grandfather was a victim of the 1918 pandemic).  At the Washington Post, Jennifer Rubin thinks that coronavirus is likely to bring down the entire GOP house of cards in November, as it relentlessly exposes the corrupt ineptitude of Trump; as goes Trump, so go his sycophantic enablers in the Senate.  This would be karmic.  It would also be no comfort to the many, many people that Donald Trump is about to kill.  Indeed, not just the actions of Trump’s egomaniacal incompetence but our collective need to think about it (guilty!) is itself part of why we aren’t prepared.

    That is, people will die because of the failure of the government to do anything meaningful toward containment in the early stages of the epidemic, in no small part because of Trump’s narcissistic dithering and lying.  Even now, as we’ve moved from “containment” to “mitigation,” nowhere near enough people are being tested.  This makes it much harder to know who is at risk of contracting the disease and spreading it to others, which in turn makes it more likely that they do so.  More people will get sick.  Not testing enough people also makes it impossible to know how deadly the disease is, because it denies you a sensible denominator for estimating the case fatality rate.  When Trump said he had a “hunch” that the WHO’s widely-circulated 3.4% fatality rate was too high, he was probably right (with lots of provisos) – but his own stonewalling on testing makes it impossible to substantiate the point.  Trump will also kill people through his prior stupid policy decisions, like forcing asylum-seekers to wait in Mexico until their number is someday called, in makeshift camps that lack even the running water necessary for hand-washing.

    (more…)

    Last time, I began to make the case that there is evidence of an engagement with Deleuze in Foucault’s “What is an Author.”  Specifically, I made the case that there is an implicit Platonism behind the concept of authorship as Foucault articulates it.  This time, I will look at the way that Barthes overturns authorship, and how Foucault’s language distances him from that, while of course beginning with the proposition that the author is, in fact, a fiction.  For Deleuze, the question of difference, when posed against Platonism, is substantially a question of attending to the “swarming” differences that lie outside the Platonic metaphysical schema, and which are accordingly illegible within it, insofar as they cannot be referred back to the anchoring eidos.

    This language of swarming differences that are inexpressible and illegible from within a representative schema is also found in Barthes, who is often Foucault’s presumed interlocutor in “Author.”  As Barthes  sees clearly, “writing” and “text” radically exceed this space of authorship, bringing into play indefinitely many differences and ways of thinking difference.  Writing is, to revert to Deleuzian terms, “a world of impersonal individuations and pre-individual singularities” (DR 277).   Barthes’ “Death of the Author” is the most obviously relevant contribution here, and he famously opens “Death of the Author” with praise of “writing:”

    (more…)

  • By Gordon Hull

    Toward the end of “What is an Author,” Foucault distinguishes between the “founder” and “initiator [instaurateur]” of a discourse.  Galileo is the paradigmatic example of the former, and Marx of the latter.  This is a puzzling distinction, to say the least.  Let’s begin with the terminology: Although “founder [fondateur]” is common enough, as far as I know, Foucault doesn’t use “instaurateur” anywhere else.  At least, a computer search of the text of Les Mots et Les Choses, Archéologie du Savoir and the pre-1975 Dits et Écrits didn’t turn up anything.  Other things being equal, those seem like the most likely places to find it (if I’m missing uses of the term, I’d love to learn about them!).  In particular, Order is a likely bet, because in the French seminar version (the one in D&E – see my initial thoughts here and Stuart Elden’s discussion of the textual history here) of “Author,” Foucault frames the text as partly responding to some leftover business from Order, where he admits that he both refuses to organize texts by authors and yet uses authorial names.  The noun “instauration” occurs a few times in these texts in a way that something more substantial than a blog post would need to investigate, but as far as I can tell, the term of art in “Author” – “instauration discursive,” naming somebody rather than an event – is specific to that lecture.  So something is going on here!

    It seems to me that it helps to understand this distinction by putting Foucault in conversation with Deleuze.  Specifically, it seems to me that the instaurateur is an application of what Deleuze calls difference or repetition outside the order of representation.  I’ll make an initial, obviously sketchy, case for that thought over the next few posts, with the caveat that I am not a Deleuze scholar.

    (more…)

  • By Gordon Hull

    Last time, I suggested that a recent paper by Mala Chatterjee and Jeanne Fromer is very helpful in disentangling what is at stake in Facebook’s critique of Illinois’ Biometric Information Privacy Act (BIPA). Recall that BIPA requires consent before collecting biometric identifiers, and a group of folks sued FB over phototagging. Among FB’s defenses is the claim that its software doesn’t depend on human facial features; rather it “learns for itself what distinguishes different faces and then improves itself based on its successes and failures, using unknown criteria that have yielded successful outputs in the past.” (In re Facebook Biometric Info. Privacy Litig., 2018 U.S. Dist. LEXIS 810448, p. 8). Chatterjee and Fromer apply the phenomenal/functional distinction from philosophy of mind to the question of how mental state requirements in law apply to AI, with an extended case study of liability for copyright infringement. Basically, there’s an ambiguity buried in the mental state requirements, and we need to decide – probably on a case-by-case basis – whether the law’s objective is better served by a phenomenal or functional account of the mental state in question.

    In applying the distinction, I suggested that we assume for the sake of argument that the software does not do the same thing that an embodied human being does when they identify a face. In other words, I was suggesting that we accept arguendo that the software in question does not achieve the same phenomenal state as one of us does when we recognize a face. I also said I think that assumption, while clearly correct in a literal sense, may not be able to do as much work as it needs to. Here’s why.

    It should be fairly clear that the experience of recognizing Pierre in a café is not identical between different people, or probably even for the same person at different times. For that to be true, the molecular structure and electrical activity in their respective brains would have to be identical, which isn’t going to be the case. It’s also not clear that we don’t “learn[] for [ourselves] what distinguishes different faces and then improve[] [ourselves] based on [our] successes and failures, using unknown criteria that have yielded successful outputs in the past,” just like FB. After all, if you ask me why I recognize somebody, I will produce some criteria – but if it’s somebody I know, it’s not like I consciously apply those criteria as a rule. Neither the FB system nor I am using the old-fashioned “AI” of an ELIZA program. It would therefore at least require some argument to say that I recognize the face by means of those criteria, rather than offering them as a post hoc explanation. Indeed, recognition does not appear to be a “conscious” process in the relevant sense at all. So that can’t be the issue.

    (more…)

  • By Gordon Hull

    Facial recognition technology is an upcoming privacy mess.  An early example of why is photo-tagging on Facebook.  The privacy problem was noted a while ago by Woody Hartzog and Frederic Stutzman: “once a photo is tagged with an identifier, such as a name or link to a profile, it becomes searchable …making information visible to search significantly erodes the protection of obscurity, and, consequently, threatens a user’s privacy” (47; on obscurity, recall here and here).  A while ago, I noted litigation surrounding Illinois’ Biometric Information Privacy Act (BIPA).  BIPA basically establishes notice and consent rules for companies that collect and use biometric information.  For example, it stipulates that “no private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person's or a customer's biometric identifier or biometric information, unless it first” informs the person in question of what’s happening and what the entity is doing with the data, and then obtains a written release (740 ILCS 14/15(b)).  As I suggested, this regime is subject to the obvious problems with notice and consent privacy, but companies like Facebook are resisting providing even that de minimis protection for their customers.  In a landmark ruling last year, the Illinois Supreme Court upheld the law’s statutory damages provision.

    More generally and as parallel federal litigation underscores, BIPA presents a significant threat to FB.  The issue in question is precisely photo-tagging.  As the 9th Circuit described the process:

    “In 2010, Facebook launched a feature called Tag Suggestions. If Tag Suggestions is enabled, Facebook may use facial-recognition technology to analyze whether the user’s Facebook friends are in photos uploaded by that user. When a photo is uploaded, the technology scans the photo and detects whether it contains images of faces. If so, the technology extracts the various geometric data points that make a face unique, such as the distance between the eyes, nose, and ears, to create a face signature or map. The technology then compares the face signature to faces in Facebook’s database of user face templates (i.e., face signatures that have already been matched to the user’s profiles). If there is a match between the face signature and the face template, Facebook may suggest tagging the person in the photo” (Patel v. Facebook, 6).
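    The pipeline the court describes (extract geometric features, form a signature, compare against stored templates) can be sketched minimally.  This is a hypothetical illustration, not Facebook’s actual system: the three measurements, the distance metric, and the threshold below are all invented for the example.

```python
import math

# Hypothetical sketch of template matching: a "signature" is a tuple of
# geometric measurements; matching is nearest-neighbor search with a cutoff.

def face_signature(eye_distance, nose_width, ear_span):
    # A real system would use many learned features, not three hand-picked ones.
    return (eye_distance, nose_width, ear_span)

def match(signature, templates, threshold=0.1):
    """Return the user id of the closest stored template, or None if no
    template is within `threshold` (Euclidean distance)."""
    best_id, best_dist = None, float("inf")
    for user_id, template in templates.items():
        dist = math.dist(signature, template)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= threshold else None

templates = {"alice": (0.62, 0.33, 1.10), "bob": (0.55, 0.40, 1.25)}
print(match(face_signature(0.61, 0.34, 1.11), templates))  # alice
print(match(face_signature(0.90, 0.90, 0.50), templates))  # None -- no close template
```

    Whether the features are hand-picked (as in this sketch) or learned “for itself,” the stored template database is what BIPA treats as biometric information.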

    A group of representative Illinois FB users sued the company and filed for class certification,  arguing that “Facebook violated sections 15(a) and 15(b) of BIPA by collecting, using, and storing biometric identifiers (a “scan” of “face geometry,” id. 14/10) from their photos without obtaining a written release and without establishing a compliant retention schedule” (Patel v. FB, 7).

    (more…)

  • By Gordon Hull

    Consider the following, too brief summary: following Foucault, one can say that biopolitics is about optimizing populations, or something to that effect.  This involves a lot of work on the part of the administrative state, which sets itself up to provide services, everything from sewers and other infrastructure to social safety nets.  Different places do this differently, but the goal is to provide for the general welfare.  At the same time, as Foucault noted from the get-go (see the last lecture in Society Must Be Defended), the biopolitical “make live or allow to die” generated a correlative “if you want to live, they must die” (SMD 255).  Those Others were excluded from the “population” that the state tried to optimize and so were either allowed to die, or (in cases such as the Nazis) actively killed, or (in the case of American Jim Crow) actively suppressed and marginalized and often killed.  These various forms of necropolitics are intimately related as both a matter of historical fact (the Nazis thought Jim Crow an excellent example of race management) and conceptual structure under the rubric of something like state racism.  So too, the biopolitics of optimization is historically tied to the necropolitics of state racism, as scholars like Dell McWhorter make clear in the American case, or Agamben does in the case of the German Versuchspersonen.

    But biopolitics has more than one variant, or at least so I’ve tried to argue.  I think it’s useful to distinguish an earlier phase of biopolitics (what I call “public biopolitics”) from a more recent neoliberal version.  The two can be distinguished in part by how they conceptualize the members of the population they are trying to benefit, and how they think they might do so.  For example, intellectual property on the public model is about public welfare and the benefits to everyone of encouraging inventive activity.  On the neoliberal version, the emphasis is much more on individual creators and markets. But the neoliberal version very much embeds a view of public welfare – a society of individual entrepreneurs and consumers, whose well-being is measured by various indicia of consumer welfare.

    But what about the necropolitics of neoliberal biopolitics?  One obvious avenue to pursue is that those who are excluded from markets are allowed to die.  The rhetoric of consumer choice is often then utilized to suggest that they deserve their fate because of a failure to opt-in.  Whatever the merits of that avenue, it seems clear enough that more can be said.  One component of the U.S. piece is going to be data and surveillance.  Biopolitics itself arose in the context of data governance mechanisms like the census, and the importance of data to contemporary capitalism is large and growing.  One way the surveillance state implements necropolitics is by what Margaret Hu calls “big data blacklisting:” classifying certain kinds of people as ineligible for many of the things to which other members of the population are entitled (such as employment or the ability to board a commercial aircraft), or rendering them subject to ceremony-free death by drone strike.

    In a fascinating recent paper, Michelle Gilman and Rebecca Green offer some important clues to another aspect of the story.  Most of the literature about data privacy focuses on what happens when people don’t have enough privacy.  Gilman and Green focus on the inverse problem: what happens when people have too much privacy, i.e., when they become invisible in a society predicated on visibility?  Working primarily with examples of the undocumented, the homeless, day laborers and those with felony convictions, Gilman and Green tell the story of “populations that remain outside seemingly omnipresent surveillance systems” (257), those who live in the “surveillance gap.”  If too much surveillance is bad, so is too little:

    (more…)