• By Gordon Hull

    One of the things that marketers like about big data is that they can personalize ads.  That operation is getting increasingly sophisticated.  We’ve known for a while that basic personality traits (like introversion/extraversion) can be predicted from Facebook likes.  I missed this paper when it came out, but some of the same authors as the initial Facebook “likes” paper have now done the inevitable follow-up.  By targeting Facebook ads on the basis of openness and extraversion (where the correlation with likes is fairly robust), they were able to make users 1.54 times as likely to make a purchase (i.e., about 54% more likely) than with non-“psychologically-tailored” advertising.  The study size, as with most research on FB, is enormous – the study reached some 3.5 million users (a point the authors use to try to defuse some objections).  The authors duly note the murky ethical issues that emerge: on the one hand, you could proactively try to assist people who show signs of depression; on the other hand, weaknesses like susceptibility to addictive gambling could be exploited, as could dubious political targeting.  That observation is pretty obvious now, in the wake of Cambridge Analytica.  Two other takeaways stood out more to me.  First, the study predicted personality on the basis of only one like.  As the authors note, that means the study likely underestimates the potential effect of psychological targeting.  Second, the authors emphasize a point that a lot of us have been trying to make for a while: that these results undermine much of the current regulatory strategy for privacy. I’ll let them speak:
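    The conversion between the multiplier and the percentage is worth spelling out, since it is easy to garble.  A minimal sketch (the 1.54 multiplier is the figure reported above; the function itself is just illustrative arithmetic, not anything from the paper):

    ```python
    # A relative multiplier: tailored ads produced 1.54x as many
    # purchases as untailored ads.  The percentage increase is
    # (multiplier - 1) * 100, so 1.54x is a 54% lift, not 40%.
    def percent_increase(multiplier: float) -> int:
        return round((multiplier - 1) * 100)

    print(percent_increase(1.54))  # prints 54
    ```

    The same conversion run the other way shows why the two figures get confused: a "40% increase" would correspond to a 1.40 multiplier, not 1.54.
    
    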

    (more…)

  • By Gordon Hull

    We’ve all heard a version of the experiment: you set a kid down with a marshmallow, and tell him that if he can sit there and not eat it for a while, he can have two.  Some kids can do it, and others can’t.  A famous paper suggests that whether the child has the willpower to wait is predictive of his future success in life.  Apparently, not so much.  According to a piece by Jessica McCrory Calarco in the Atlantic, new research casts this finding into doubt.  It seems that the original study enrolled fewer than 90 children, all of them selected from Stanford’s lab school.  Let’s just call that an unrepresentative sample.  A new, larger and more representative study concludes that willpower isn’t what’s driving the result; it’s socioeconomic status.  As Calarco puts it, “the capacity to hold out for a second marshmallow is shaped in large part by a child’s social and economic background—and, in turn, that that background, not the ability to delay gratification, is what’s behind kids’ long-term success.”  For example, once the researchers factored in whether the mother had a college degree, the children’s ability to wait was no longer predictive:

    (more…)

  • By Gordon Hull

    In a recent paper, Karen Yeung introduces the concept of a ‘hypernudge’ as a way to capture the way Big Data intensifies design-based ‘nudges’ as a form of regulation.  Yeung’s discussion draws partly from discussions of Internet regulation, partly from literature on design, and partly from legal literature around privacy and big data.  Yeung’s basic argument is that, in the context of big data:

    “Despite the complexity and sophistication of their underlying algorithmic processes, these applications ultimately rely on a deceptively simple design-based mechanism of influence – ‘nudge.’ By configuring and thereby personalizing the user’s informational choice context, typically through algorithmic analysis of data streams from multiple sources claiming to offer predictive insights concerning the habits, preferences and interests of targeted individuals (such as those used by online consumer product recommendation engines), these nudges channel user choices in directions preferred by the choice architect through processes that are subtle, unobtrusive, yet extraordinarily powerful” (119)

    Ordinary nudging technologies – she cites the humble speed bump – are static.  In contrast, the sorts of nudges provided by data analytics are dynamic, continuously and invisibly updating the choices a user sees.  They work both to make decisions automatically based on what users have done or can be predicted to do, and by guiding decision-making by influencing what choices are available (and how they are presented).  Because of both the dynamism and invisibility, data-driven nudges can be incredibly powerful in comparison to their static cousins. 

    Yeung’s paper also enables one to advance a couple of points in the context of information ethics.

    (more…)

  • Yesterday was a big news day.  The biggest story was probably our Tinpot Dictator’s decision to unilaterally violate the Iranian nuclear deal.  In addition to alienating almost everyone not named Bibi Netanyahu or John Bolton, and making the world less safe, the main thing this proves is that Trump can’t see more than ten minutes into the future: very soon, he will supposedly be sitting down with North Korea’s Kim Jong Un to… negotiate a nuclear deal.  What a wonderful time to petulantly scream that the U.S. does not abide by its nuclear deals!  Back home, if you pay attention to local elections, you’ll have heard that in the Republican primary that includes a sizable part of Charlotte, U.S. House incumbent Robert Pittenger narrowly lost to Rev. Mark Harris, a far-right preacher who led the state’s 2012 anti-gay-marriage constitutional amendment.  Conventional wisdom is that Harris will be easier for Democratic nominee Dan McCready to challenge in the fall.

    The news that I suspect didn’t make it out of the local area is that Mecklenburg County Sheriff Irwin Carmichael just lost – badly – his re-election bid.  There were three candidates in the Democratic primary, and nobody on the Republican ticket, so last night’s winner won the office.  Carmichael had defended the County’s participation in the 287(g) program, which allowed ICE agents access to those detained in Mecklenburg County jails (the county includes Charlotte, though Charlotte police do not participate in 287(g)).  His support for that program is, by all accounts, why he lost.  Both of his opponents came out against it, and there was a strong grass-roots mobilization campaign.  The winner also plans to roll back two of Carmichael’s other policies: ending in-person visits with inmates and condoning the use of solitary confinement.

    That’s a good day for those of us who care about criminal justice reform and a reminder that local politics can matter.

  • By Gordon Hull

    The Supreme Court issued a landmark patent ruling yesterday in Oil States v. Greene’s Energy Group.  The most recent major revision to the patent statute specifies that the validity of patents – in terms of whether they meet the conditions of patentability (utility, non-obviousness, and novelty; the opinion does not directly say whether questions about patentable subject matter are included here, but it cites §101, so I think that’s probably covered too) – can be challenged and resolved through an administrative inter partes review.  This review process has a number of procedural requirements, but at the end of the day it is conducted entirely within the administrative apparatus of the PTO, and the decision reached can result in a patent revocation.  The question posed is therefore whether the government can revoke a patent without going through the courts.  The answer, delivered in a 7-2 opinion by Justice Thomas, is yes.  I haven’t digested the opinion fully, and there was another, somewhat related case yesterday that I haven’t even started on.  That said, Oil States is a very interesting decision, including the dissent authored by Justice Gorsuch.  Here are some initial reflections on it (I did some context-setting earlier: see here).  I’ll first talk about the opinion, and then end with a thought about the underlying policy problem, for which inter partes review is basically a band-aid solution.

    The basic argument of the opinion is that patents aren’t private property so much as they are a public franchise, and as such aren’t the sort of thing the Constitution is talking about when it says property claims have to run through the judicial branch.  As Thomas argues, “the decision to grant a patent is a matter involving public rights—specifically, the grant of a public franchise” (slip op., 7, his emphasis).  As 19c case law establishes, a patent “take[s] from the public rights of immense value, and bestow[s] them upon the patentee” (slip op., 8), by granting a right of exclusion (traditionally the core of property).  It does so to incentivize invention.  It then follows logically that the decision to revoke a patent is also a matter of public franchise.  Thomas cites a 1966 ruling that administrative review covers the “issuance of patents whose effects are to remove existent knowledge from the public domain” (8-9).  In other words, if the patent doesn’t cover something novel, it takes knowledge that was available to the public and privatizes it.

    The ruling strikes me as exemplifying what I call “public biopolitics,” which is basically the pre-neoliberal version that Foucault identifies (especially in Security, Territory, Population and Birth of Biopolitics) with classic liberalism.  The opinion doesn’t quote Mill, but it is the sort of thing that the Mill of the Principles of Political Economy could get on board with. For example, Mill justifies the departure from laissez faire on the grounds that inventions are of tremendous public value, but require nurturing by the state. Similar instances of justified state intervention include public funding of things like universities (p. 968).  He also explains what happens in terms of a publicly-granted patent license: “this is not making the commodity dear for [the inventor’s] benefit, but merely postponing a part of the increased cheapness which the public owe to the inventor, in order to compensate and reward him for the service” (p. 928). Thus, even in the case of patents, Mill conceives of the production of knowledge as the production of something that benefits the public generally, and which the application of laissez-faire will not supply. The point is not to internalize externalities for the sake of the inventor.  This sense of knowledge as a public good is specific to modern liberalism.  As Foucault puts it, “activity that may go beyond this pure and simple subsistence will in fact be produced, distributed, divided up, and put in circulation in such a way that the state really can draw its strength from it” (STP 326).

    (more…)

  • By Gordon Hull

    A little more than a year ago, I floated a version of the thesis that Big Data functions as a form of capitalist accumulation by dispossession.  “Accumulation by Dispossession” is David Harvey’s term for what Marx called “primitive accumulation,” and the basic idea is that capital has to extract value from individuals in a way that pushes them into its system of value extraction.  It does that by depriving them of other sources of value.  For example, the enclosure laws in 16th-century England served to dispossess commoners and small-scale farmers of the ability to subsist off the land, and so thrust them, “free,” into the urban labor pool.  Absent this initial dispossession, the formation of the urban market in “free labor” would have been impossible.  My argument focused on the dispossession of preferences: data analytics function to deprive us of the capacity to form and express preferences outside the logic of capitalist markets.  That is significant because the more our “preferences” are restricted to those things that we can buy, the more our lives are determined by market logic – and the more things that exceed market logic become invisible.  We are dispossessed of the ability to imagine life differently.

    So.  Back to Facebook.  A recent piece by Sam Biddle in the Intercept, based on a leaked internal FB document, suggests that Facebook is deeply engaged in exactly this process.  According to Biddle:

    (more…)

  • By Gordon Hull

    Surprise! Facebook is back in the news and the doghouse, this time for allowing vast amounts of user data to find their way to Cambridge Analytica, which then used them to try to elect Donald Trump.  The only surprise is that anyone is surprised.  I’ll review why that is first, then offer a term that gets at what I take to be the real problem: we live in a deliberately created hostile information architecture.  Let’s first rehearse why the FB-Cambridge Analytica nexus is not a surprise.

    (more…)

  • By Gordon Hull

    In the two previous posts, I first suggested that Thomas Merrill’s logical argument for why the right to exclude is the sine qua non of any conception of property was inconclusive.  I then offered a brief reading of the Foucauldian distinction between juridical power and biopower, applying it to Locke to suggest that in Locke’s case, both aspects of power were present, but the juridical was dominant.  In what follows, I want to argue that the opposite is the case with the contemporary Demsetzian account: here, biopower is dominant.  In other words, a look at historical endpoints of property theory suggests that the view of power underlying them has changed from Locke’s time to ours, even as the (quasi-juridical) right to exclude remains a common thread.

    To return to Merrill’s thoughts on exclusion, how might we combine exclusion with Foucauldian theories of power?  An initial argument is straightforward, and goes something like this: any conception of property that says the right to exclude is essential retains at the very least that much of a juridical understanding of property.  Since rights are an aspect of juridical power, and since juridical power is about the right to repress and to prevent, it’s easy enough to see the exclusionary right as juridical.  Merrill’s arguments about the priority of the right to exclude over the rights to use and develop suggest that any biopolitical emphasis on optimization is ultimately secondary to a basic ability to repress.  Importantly, Merrill extends his argument to commons-based regimes: any internal use privileges are secondary to the initial ability of the villagers to exclude non-members.  In short, even as we increasingly live by biopolitical regimes, juridical power retains its force at the very core of those regimes: the property right, which has been the central feature of capitalism.

    (more…)

  • By Gordon Hull

    Last time, I suggested that Thomas Merrill’s logical argument for why the right to exclude was the sine qua non of any conception of property was inconclusive.  With that space cleared, I want to focus on what I think a focus on the right to exclude does emphasize.  Merrill is right that property theory has tended to take the right to exclude as fundamental.  A historical dive into how that right is conceptualized says something important about how we have understood power.

    That is, the focus on the right to exclude does show us that something important has changed.  If we use Locke and Demsetz as points of comparison (in the sense that they present historical endpoints), it is clear that both theories of property rely on both juridical and biopolitical justifications.  However, in the Lockean case the biopolitical end is subordinate to the juridical one; in the Demsetzian case, the opposite holds.  In other words, we have moved away from a model of property in which juridical power is dominant to one in which biopolitical power is dominant.

    (more…)

  • By Gordon Hull

    In a 1998 paper, Thomas W. Merrill argues that the presence of the right to exclude others is the necessary and sufficient condition for the presence of a property right.  In this, he views himself as arguing against a “nominalist” interpretation of the right.  This nominalist interpretation, associated with legal realism and the familiar Hohfeldian “bundle of rights,” says that property is best conceived as being located in a conventionally established set of rights, the exact contours of which will vary between jurisdictions and time periods, and within which no one element is necessary.  At the risk of importing a somewhat anachronistic term, we might say that the Hohfeldian bundle leads somewhat directly to a “family resemblance” theory of property (Julie Cohen applies it to property in the paper I’m using below; the approach has also been used to good effect for privacy, and with explicit reference to Wittgenstein).  In defense of this proposition, Merrill offers three kinds of justifications: a logical one, a historical one, and one based on established legal use.  More about the first in a moment; the latter two rely on accumulated historical record and precedent.

    (more…)