• Last time, I began to make the case that there is evidence of an engagement with Deleuze in Foucault’s “What is an Author.”  Specifically, I made the case that there is an implicit Platonism behind the concept of authorship as Foucault articulates it.  This time, I will look at the way that Barthes overturns authorship, and how Foucault’s language distances him from that move, while of course beginning with the proposition that the author is, in fact, a fiction.  For Deleuze, the question of difference, when posed against Platonism, is substantially a question of attending to the “swarming” differences that lie outside the Platonic metaphysical schema, and which are accordingly illegible within it, insofar as they cannot be referred back to the anchoring eidos.

    This language of swarming differences that are inexpressible and illegible from within a representative schema is also found in Barthes, who is often Foucault’s presumed interlocutor in “Author.”  As Barthes sees clearly, “writing” and “text” radically exceed this space of authorship, bringing into play indefinitely many differences and ways of thinking difference.  Writing is, to revert to Deleuzian terms, “a world of impersonal individuations and pre-individual singularities” (DR 277).  Barthes’ “Death of the Author” is the most obviously relevant contribution here, and he famously opens that essay with praise of “writing:”

    (more…)

  • By Gordon Hull

    Toward the end of “What is an Author,” Foucault distinguishes between the “founder” and “initiator [instaurateur]” of a discourse.  Galileo is the paradigmatic example of the former, and Marx of the latter.  This is a puzzling distinction, to say the least.  Let’s begin with the terminology: although “founder [fondateur]” is common enough, as far as I know, Foucault doesn’t use “instaurateur” anywhere else.  At least, a computer search of the text of Les Mots et Les Choses, Archéologie du Savoir and the pre-1975 Dits et Écrits didn’t turn up anything.  Other things being equal, those seem like the most likely places to find it (if I’m missing uses of the term, I’d love to learn about them!).  In particular, Order is a likely bet, because in the French seminar version of “Author” (the one in D&E – see my initial thoughts here and Stuart Elden’s discussion of the textual history here), Foucault frames the text as partly responding to some leftover business from Order, where he admits that he refuses to organize texts by authors but nonetheless uses authorial names.  The noun “instauration” occurs a few times in these texts in ways that something more substantial than a blog post would need to investigate, but as far as I can tell, the term of art in “Author” – “instauration discursive,” naming somebody rather than an event – is specific to that lecture.  So something is going on here!

    It seems to me that it helps to understand this distinction by putting Foucault in conversation with Deleuze.  Specifically, it seems to me that the instaurateur is an application of what Deleuze calls difference or repetition outside the order of representation.  I’ll make an initial, obviously sketchy, case for that thought over the next few posts, with the caveat that I am not a Deleuze scholar.

    (more…)

  • By Gordon Hull

    Last time, I suggested that a recent paper by Mala Chatterjee and Jeanne Fromer is very helpful in disentangling what is at stake in Facebook’s critique of Illinois’ Biometric Information Privacy Act (BIPA). Recall that BIPA requires consent before collecting biometric identifiers, and a group of folks sued FB over phototagging. Among FB’s defenses is the claim that its software doesn’t depend on human facial features; rather it “learns for itself what distinguishes different faces and then improves itself based on its successes and failures, using unknown criteria that have yielded successful outputs in the past.” (In re Facebook Biometric Info. Privacy Litig., 2018 U.S. Dist. LEXIS 810448, p. 8). Chatterjee and Fromer apply the phenomenal/functional distinction from philosophy of mind to the question of how mental state requirements in law apply to AI, with an extended case study of liability for copyright infringement. Basically, there’s an ambiguity buried in the mental state requirements, and we need to decide – probably on a case-by-case basis – whether the law’s objective is better served by a phenomenal or functional account of the mental state in question.

    In applying the distinction, I suggested that we assume for the sake of argument that the software does not do the same thing that an embodied human being does when they identify a face. In other words, I was suggesting that we accept arguendo that the software in question does not achieve the same phenomenal state as one of us does when we recognize a face. I also said that I think that assumption, while clearly correct in a literal sense, may not be able to do as much work as it needs to. Here’s why.

    It should be fairly clear that the experience of recognizing Pierre in a café is not identical between different people, or probably even for the same person at different times. For that to be true, the molecular structure and electrical activity in their respective brains would have to be identical, which isn’t going to be the case. It’s also not clear that we don’t “learn[] for [ourselves] what distinguishes different faces and then improve[] [ourselves] based on [our] successes and failures, using unknown criteria that have yielded successful outputs in the past,” just like FB. After all, if you ask me why I recognize somebody, I will produce some criteria – but if it’s somebody I know, it’s not like I consciously apply those criteria as a rule. Neither the FB system nor I am using the old-fashioned “AI” of an ELIZA program. It would therefore at least require some argument to say that I recognize the face by means of those criteria, rather than offering them as a post facto explanation. Indeed, recognition does not appear to be a “conscious” process in the relevant sense at all. So that can’t be the issue.

    (more…)

  • By Gordon Hull

    Facial recognition technology is an upcoming privacy mess.  An early example of why is photo-tagging on Facebook.  The privacy problem was noted a while ago by Woody Hartzog and Frederic Stutzman: “once a photo is tagged with an identifier, such as a name or link to a profile, it becomes searchable …making information visible to search significantly erodes the protection of obscurity, and, consequently, threatens a user’s privacy” (47; on obscurity, recall here and here).  A while ago, I noted litigation surrounding Illinois’ Biometric Information Privacy Act (BIPA).  BIPA basically establishes notice and consent rules for companies that collect and use biometric information.  For example, it stipulates that “no private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person's or a customer's biometric identifier or biometric information, unless it first” informs the person in question of what’s happening and what the entity is doing with the data, and then obtains a written release (740 ILCS 14/15(b)).  As I suggested, this regime is subject to the obvious problems with notice and consent privacy, but companies like Facebook are resisting providing even that de minimis protection for their customers.  In a landmark ruling last year, the Illinois Supreme Court upheld the law’s statutory damages provision.

    More generally and as parallel federal litigation underscores, BIPA presents a significant threat to FB.  The issue in question is precisely photo-tagging.  As the 9th Circuit described the process:

    “In 2010, Facebook launched a feature called Tag Suggestions. If Tag Suggestions is enabled, Facebook may use facial-recognition technology to analyze whether the user’s Facebook friends are in photos uploaded by that user. When a photo is uploaded, the technology scans the photo and detects whether it contains images of faces. If so, the technology extracts the various geometric data points that make a face unique, such as the distance between the eyes, nose, and ears, to create a face signature or map. The technology then compares the face signature to faces in Facebook’s database of user face templates (i.e., face signatures that have already been matched to the user’s profiles). If there is a match between the face signature and the face template, Facebook may suggest tagging the person in the photo” (Patel v. Facebook, 6).
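
    To make the process the court describes a bit more concrete, here is a minimal, purely illustrative sketch of the template-matching step.  It is emphatically not Facebook’s actual system, whose internals are not public; the “signatures,” the threshold, and all the names below are hypothetical stand-ins, and real systems derive signatures from learned models rather than a handful of hand-picked numbers.

    ```python
    # Hypothetical sketch of the face-template matching step described in Patel.
    # Signatures, threshold value, and names are illustrative assumptions only.
    from typing import Optional
    import numpy as np

    # "Face templates": stored signatures already matched to user profiles.
    face_templates = {
        "alice": np.array([0.61, 0.18, 0.93, 0.40]),
        "bob":   np.array([0.12, 0.85, 0.33, 0.72]),
    }

    MATCH_THRESHOLD = 0.25  # arbitrary cutoff for "close enough to suggest a tag"

    def suggest_tag(face_signature: np.ndarray) -> Optional[str]:
        """Compare a new face signature against stored templates and return
        the best-matching user if the distance falls within the threshold."""
        best_user, best_dist = None, float("inf")
        for user, template in face_templates.items():
            dist = np.linalg.norm(face_signature - template)
            if dist < best_dist:
                best_user, best_dist = user, dist
        return best_user if best_dist <= MATCH_THRESHOLD else None

    # A signature extracted from a newly uploaded photo (hypothetical values).
    print(suggest_tag(np.array([0.60, 0.20, 0.90, 0.41])))  # -> alice
    ```

    Nothing in a sketch like this turns on humanly meaningful features such as “the distance between the eyes, nose, and ears”; the signature is just whatever vector the extraction step produces, which is part of what makes the technology’s legal characterization contested.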

    A group of representative Illinois FB users sued the company and filed for class certification, arguing that “Facebook violated sections 15(a) and 15(b) of BIPA by collecting, using, and storing biometric identifiers (a “scan” of “face geometry,” id. 14/10) from their photos without obtaining a written release and without establishing a compliant retention schedule” (Patel v. FB, 7).

    (more…)

  • By Gordon Hull

    Consider the following, admittedly too-brief summary: following Foucault, one can say that biopolitics is about optimizing populations, or something to that effect.  This involves a lot of work on the part of the administrative state, which sets itself up to provide services, everything from sewers and other infrastructure to social safety nets.  Different places do this differently, but the goal is to provide for the general welfare.  At the same time, as Foucault noted from the get-go (see the last lecture in Society Must Be Defended), the biopolitical “make live or allow to die” generated a correlative “if you want to live, they must die” (SMD 255).  Those Others were excluded from the “population” that the state tried to optimize and so were either allowed to die, or (in cases such as the Nazis) actively killed, or (in the case of American Jim Crow) actively suppressed and marginalized and often killed.  These various forms of necropolitics are intimately related, as both a matter of historical fact (the Nazis thought Jim Crow an excellent example of race management) and of conceptual structure, under the rubric of something like state racism.  So too, the biopolitics of optimization is historically tied to the necropolitics of state racism, as scholars like Ladelle McWhorter make clear in the American case, or as Agamben does in the case of the German Versuchspersonen.

    But biopolitics has more than one variant, or at least so I’ve tried to argue.  I think it’s useful to distinguish between an earlier phase of biopolitics – what I call “public biopolitics” – and a more recent neoliberal version.  The two can be distinguished in part by how they conceptualize the members of the population they are trying to benefit, and how they think they might do so.  For example, intellectual property on the public model is about public welfare and the benefits to everyone of encouraging inventive activity.  On the neoliberal version, the emphasis is much more on individual creators and markets.  But the neoliberal version very much embeds a view of public welfare – a society of individual entrepreneurs and consumers, whose well-being is measured by various indicia of consumer welfare.

    But what about the necropolitics of neoliberal biopolitics?  One obvious avenue to pursue is that those who are excluded from markets are allowed to die.  The rhetoric of consumer choice is often then utilized to suggest that they deserve their fate because of a failure to opt-in.  Whatever the merits of that avenue, it seems clear enough that more can be said.  One component of the U.S. piece is going to be data and surveillance.  Biopolitics itself arose in the context of data governance mechanisms like the census, and the importance of data to contemporary capitalism is large and growing.  One way the surveillance state implements necropolitics is by what Margaret Hu calls “big data blacklisting:” classifying certain kinds of people as ineligible for many of the things to which other members of the population are entitled (such as employment or the ability to board a commercial aircraft), or rendering them subject to ceremony-free death by drone strike.

    In a fascinating recent paper, Michele Gilman and Rebecca Green offer some important clues to another aspect of the story.  Most of the literature about data privacy focuses on what happens when people don’t have enough privacy.  Gilman and Green focus on the inverse problem: what happens when people have too much privacy, i.e., when they become invisible in a society predicated on visibility?  Working primarily with examples of the undocumented, the homeless, day laborers, and those with felony convictions, Gilman and Green tell the story of “populations that remain outside seemingly omnipresent surveillance systems” (257), those who live in the “surveillance gap.”  If too much surveillance is bad, so is too little:

    (more…)

• In the shameless self-promotion dept., I have a new paper out – actually a review essay in Ethics & International Affairs (SSRN link here) of two recent books on privacy, Ari Ezra Waldman's Privacy as Trust and Jennifer Rothman's Right of Publicity.  Both books are well worth the read!  The essay also pushes my thesis about the difficulties of assessing privacy on purely economic grounds.  Here's the abstract:

    Most current work on privacy understands it according to an economic model: individuals trade personal information for access to desired services and websites. This sounds good in theory. In practice, it has meant that online access to almost anything requires handing over vast amounts of personal information to the service provider with little control over what happens to it next. The two books considered in this essay both work against that economic model. In Privacy as Trust, Ari Ezra Waldman argues for a new model of privacy that starts not with putatively autonomous individuals but with an awareness that managing information flows is part of how people create and navigate social boundaries with one another. Jennifer Rothman’s Right of Publicity confronts the explosive growth of publicity rights — the rights of individuals to control and profit from commercial use of their name and public image — and, in so doing, she exposes the poverty of treating information disclosure merely as a matter of economic calculation. Both books emphasize practical and doctrinal solutions to the problems they identify. In this essay, I take a step back and draw out the extent to which they converge on a fundamentally important point: the blunt application of market logic with its tools of property and contracts fails to protect the interests that lead us to turn to privacy in the first place; the tendency to economize privacy is a significant part of why we inadequately protect it.

  • By Gordon Hull

    Last time, I offered some thoughts on Woody Hartzog’s (and co-authors’) development of “obscurity” as a partial replacement for privacy.  On Hartzog’s account, privacy is subject to a number of problems, not least of which is that we tend to think in terms of an unsustainable binary: things are either “private” or “public,” which means that any information you disclose even once is permanently out there.  This doesn’t track how people live their lives: we share information all the time, for all kinds of reasons; however, we reasonably expect it to remain within certain social contexts, and we expect that it will take effort for someone to wrest it out of those contexts.  The latter of these is more or less what “obscurity” indicates.  There is a lot of information out there, but many of those who nominally have access to it don’t actually know enough to do anything with it.  Hartzog analogizes the situation to talking in a restaurant.  People at adjacent tables can likely hear the words you say, but they lack the context for them to be meaningful.  In that sense, communication, even in public, often remains obscure.  This is true both online and off; one of the reasons we need to worry about privacy now is that various technologies make it a lot easier to fill in that context, especially online.  So we don’t become less or more private online, but we do become less obscure.

    I concluded by promising a point about latent ambiguity in this context.  Recall that in Lessig’s Code, he suggests that a number of important legal concepts – “privacy” and “fair use” – embed an ambiguity in their meaning.  That is, we don’t know quite what they mean because the people who wrote them into law had never thought through situations analogous to a current one.  For fair use in copyright, for example, it used to be difficult to stop people from making personal copies of works or to meter how many times they used them. So those uses became “fair” and defensible.  If you were accused of violating copyright, you could offer fair use as a defense, and norms arose against pursuing those violations.  Now that technological developments make it easy to stop copying and meter use, we have to confront the question of whether we want fair use for normative reasons, or if we simply had it because of those disappearing inefficiencies. Should fair use protect only use that is inefficient to meter?  There is a clear analogy to obscurity: do we have obscurity because it was difficult to know enough metadata to figure out what the neighbors were gossiping about over the fence, or because we think it’s a bad idea to pry?

    (more…)

  • By Gordon Hull

    In a series of articles (and a NYT op-ed; my $.02 on that is here), Woody Hartzog and several co-authors have been developing the concept of “obscurity” as a partial replacement for “privacy.”  The gist of the argument, as explained by Hartzog and Evan Selinger in a recent anthology piece (“Obscurity and Privacy” (=OP, pagination to the SSRN version)), is that “obscurity is the idea that information is safe – at least to some degree – when it is hard to obtain or understand” (OP 2).  This is because “we should not underestimate how much of a deterrent effort can be” (OP 2), and information that is hard to understand imposes similar costs in terms of effort.  They argue that obscurity functions better as a concept than privacy, in part because it avoids the binarism associated with the public/private dichotomy:

    “Because activities that promote obscurity can limit who monitors our disclosures without being subject to explicit promises of confidentiality, the tendency to classify information in binary terms as either ‘public’ or ‘private’ is inadequate. It lacks the nuance needed to describe a range of empirically observable communicative practices that exist along a continuum” (OP 4)

    The public/private dichotomy has been the object of sustained criticism, in part because it does not track how people live their lives.  For example, U.S. privacy law tends to regard information that an individual has voluntarily disclosed once as no longer private, as if the context of disclosure doesn’t matter at all.  The obscurity argument is designed to start with this basic thought: we share information all the time, and do so with the expectation that others will manage it appropriately.  This is Helen Nissenbaum’s point about the “contextual integrity” of information; it is also where Ari Waldman starts in his recent reformulation of privacy as trust.  The general stability of these informational and contextual flows is behind Lior Strahilevitz’s “social networks” account of privacy and its violation, as well as Dan Solove’s account of how the sudden viral spread of information online occasions the need to rethink reputation.

    (more…)

• I'm very pleased to announce that my new book, The Biopolitics of Intellectual Property, is now out in print and electronically from Cambridge UP.  Here's a blurb:

    "Intellectual property is power, but what kind of power is it, and what does it do?  Building on the work of Michel Foucault, this study examines different ways of understanding power in copyright, trademark and patent policy: as law, as promotion of public welfare, and as promotion of neoliberal privatization.  It argues that intellectual property policy is moving toward neoliberalism, even as that move is broadly contested in everything from resistance movements to Supreme Court decisions.  The struggle to conceptualize IP matters, because different regimes of power imagine different kinds of subjects, from the rights-bearing citizen to the economic agent of neoliberalism.  As a central part of the regulation of contemporary economies, IP is central to all aspects of our lives.  It matters for the works we create, the brands we identify and the medicines we consume. The kind of subjects it imagines are the kinds of subjects we become"

     

    The CUP page for the book has not just the text as a whole but the chapters (the main ones are a theoretical discussion, and one each on copyright, trademark and patent), and each of the chapters has an abstract.  For now, here's a little text from the introductory chapter that should give a better idea of what I'm up to.  As you'll see, I want to say something about IP, of course, but also about how I think biopolitics works (in both what I call its earlier, "public," form, and current neoliberalism) and about the fundamental but neglected importance of including law and legal institutions in our genealogical work:

    "The core of my argument is that the kind of power expressed in IP is subtly changing. Initial evidence for this claim is that new doctrinal developments have been difficult to incorporate into traditional models of IP. For example, retroactive  copyright extension is hard to square with a theory that says copyright is about incentives to create new works. Presumably, Walt Disney will be unmotivated by any changes in IP today. Trademark dilution, which allows action against expression that damages a brand’s image in consumers’ minds, is difficult to square with the standard theory that says that trademark is about avoiding consumer confusion. And the patentability of living organisms and (until recently) isolated genetic fragments is difficult to reconcile with the traditional view that products of nature should not receive patent protections. In cases such as these, I will argue, it is necessary to recognize that IP is performing a different and new social function, one that requires a rethinking of the kind of power expressed by IP laws and regulations.

    "I take my theoretical starting point from the work of Michel Foucault, for whom modern power has operated in two basic forms. The first, associated with the social contract tradition, conceptualizes a rights-bearing, juridicial subject, for whom law operates as a system of constraint and coercion. That which law does not prohibit is allowed, and the most important questions revolve around the limits to law’s ability to prohibit. The second, associated with the modern, administrative state, Foucault calls “biopower” or “biopolitics,” and it is concerned with productively managing and even optimizing populations through such measures as public health and education programs. Biopower is thus fundamentally generative. Closely aligned with the rise of capitalism, biopower has emerged as central to the operation of the modern state, which tends to emphasize regulatory agencies and administrative law, even if it also retains a framework of judicial rights.

    (more…)

  • By Gordon Hull

    As I noted last time, the Supreme Court has decided to take up a case about copyright in state codes.  Specifically, Georgia contracts with Lexis to produce an annotated version of its code, which the state then blesses with the title “Official Code of Georgia Annotated” and claims copyright in.  The question is whether the annotations are part of the code; if they are, they are public domain because the law is public domain.  The 11th Circuit said that they are, because the legislature officially adopts them, courts refer to them, etc.  If it walks like a duck…

    One of the decisions the 11th Circuit opinion cites along the way establishes that model building codes, once incorporated into statute, lose whatever copyright protection they had.  In Veeck v. Southern Building Code Congress International (293 F.3d 791 (5th Cir. 2002)), SBCCI was “a non-profit organization consisting of approximately 14,500 members from government bodies, the construction industry, business and trade associations, students, and colleges and universities.”  SBCCI’s purpose was to develop model building codes for municipal governments to adopt, which the small north Texas towns of Anna and Savoy did.  Veeck ran a web site about northern Texas, and wanted to put the building codes online.  When he had some difficulty getting them from Anna and Savoy, he paid SBCCI $72 for the codes and then posted them online, correctly labeling them as the building codes of Anna and Savoy.  The question, then, was whether in being enacted as part of the municipal law of Anna and Savoy, the codes lost the copyright protection they enjoyed as products of SBCCI.  The 5th Circuit, relying on the premise that “law” is not copyrightable, on copyright’s idea/expression dichotomy, and on extant caselaw, ruled that the codes were no longer copyrightable.

    If the Georgia case invites us to think through the conceptual underpinnings of the thesis that law is not copyrightable, SBCCI offers a chance to think about what that thesis means in practice, and how it interacts with the more commercial IP system.  Indeed, one of SBCCI’s arguments in favor of protection was quite precisely the commercial incentives justification for copyright.  I want to approach all this somewhat elliptically.  Quite some time ago, I used Deleuze’s critique of Platonism (in Difference and Repetition and Logic of Sense) to suggest that the original/copy distinction in copyright functions like the eidos/copy distinction in Platonism.  For Deleuze this distinction isn’t about metaphysics so much as police work: it’s about knowing how to distinguish legitimate copies from illegitimate simulacra.  Deleuze writes:

    (more…)