• A recent paper by Ermanno Bencivenga in Philosophical Forum argues that it’s “time for philosophy to step into the conversation” (135) about big data, in particular to refute the thesis, which the article identifies in a 2008 piece in Wired, that big data will mean we no longer need theory: “with enough data, the numbers speak for themselves” (qtd. on 135).  The paper draws on concerns about spurious correlations (a toy simulation below illustrates the worry): to demonstrate that a correlation is legitimate, it “must be shown to manifest a lawlike regularity; there must be a theoretical account of it,” that laws have to cohere with one another, and so on (139).  In other words, “knowledge is constitutionally dependent on theory” (ibid.).  Bencivenga concludes:

    “Big Data enthusiasts are (unwittingly) advocating a new definition of what it is to know.  Their agenda is (unwittingly) semantical.  Except that it is not worked out, and any attempt at developing it in the semantical terms that have been current (and antagonistic) for the past two millennia is hopeless.  I will not rule out that a new set of terms might be forthcoming, but the burden is / on those enthusiasts to provide it; simply piling up data and being awed by them will not do.  What would be needed, ironically, is a new theory of knowledge, which so far I have not seen.  This is the reason why I have made an effort to get clearer about the claims being made, so that we can have a more orderly discussion of them and what it would take to make progress in it” (141-2).

    Fair enough, though I do want to note that the paper does not engage with any literature about big data other than the dated piece from Wired; and enthusiastic techno-babble from Wired is hardly surprising.  That’s what they do.
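
    Since the spurious-correlation worry is ultimately statistical, here is a minimal sketch of it in Python (my own toy illustration, assuming numpy; nothing here comes from Bencivenga’s paper): with enough candidate variables and no theory to constrain the search, pure noise reliably yields “strong” correlations.

        # Toy simulation: scan many pure-noise variables for correlation
        # with a target series; the best match looks like a "discovery".
        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_vars = 50, 2000                   # few observations, many variables
        target = rng.normal(size=n_obs)            # the series we want to "explain"
        noise = rng.normal(size=(n_vars, n_obs))   # variables with no real relation

        corrs = np.array([np.corrcoef(target, x)[0, 1] for x in noise])
        print(f"strongest |r| among {n_vars} noise variables: {np.abs(corrs).max():.2f}")
        # Typically prints a value around 0.5 -- a correlation that a
        # theory-free analyst might happily report as a finding.

    The point is not that big data analysts are this naive, but that without a theoretical account of why a correlation should hold, the numbers alone cannot distinguish this output from a real result.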

    It’s also worth pointing out that these sorts of concerns have been expressed before.  Here are danah boyd and Kate Crawford in a widely cited paper from 2012.  After noting that “Big data reframes key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and categorization of reality” (665), they caution that:

    “Interpretation is at the center of data analysis. Regardless of the size of a data, it is subject to limitation and bias. Without those biases and limitations being understood and outlined, misinterpretation is the result. Data analysis is most effective when researchers take account of the complex methodological processes that underlie the analysis of that data” (668).

    These are cherry-picked quotes – arguably, the entire paper is a response to the sort of enthusiasm in the Wired piece, and the focus on our hidden rules for interpretation is clearly directed at the view that somehow data self-interprets.  And there are certainly more papers that bring up this and analogous topics; Luciano Floridi raises similar ones here (Floridi’s worries resonate well with Amoore’s, discussed below).  That said, I think there’s something to be said for speaking of big data in Kantian terms, though not perhaps for the reasons Bencivenga advances.

    (more…)

  • Foucault reminds us that biopolitics describes a kind of power structure according to which some will be compelled to live (or have their lives as members of a favored population optimized), while others will be allowed to die.  As he puts it, “the ancient right to take life or let live was replaced by a power to foster life or disallow it to the point of death” (HS1, 138).  Although much work has focused on techniques by which biopower works to optimize a population, it is worth attending to the disallowance of life, the thanatopolitics that is the other half of biopower, because the Republican party is engaged in producing a very effective case study.

    As Foucault says a few pages later, “a power whose task is to take charge of life needs continuous regulatory and corrective mechanisms.  It is no longer a matter of bringing death into play in the field of sovereignty, but of distributing the living in the domain of value and utility” (HS1, 144).  Those who are deemed to be without sufficient value or utility are allowed to die.  Much attention here has focused on Foucault’s brief remarks on “race” war in Society Must Be Defended, where he proposes that biopower creates a “race” of those who must die, so that “we” can live.  He is thinking of Nazi Germany and the logic of Cold War deterrence, but current Republican policies show that thanatopolitics can operate at a much more granular and subtle level.

    I am of course talking about the AHCA, the mean-spirited tax cut for the wealthy, funded by gutting the (limited) ability of the Affordable Care Act (ACA) to provide for the needy.  Various theorists after Foucault – François Ewald in particular – have emphasized the role of insurance in biopolitics as it comes to regulate and define acceptable behaviors according to actuarial risk.  Here we see its role in thanatopolitics.  There’s a good analysis of the AHCA’s evolution and structure here; suffice it to say that the law, if enacted, would result in at least 24 million people losing access to insurance, largely by gutting Medicaid and ending the mandate that people have insurance (Republicans are flatly lying about this).  It also allows states to scale back guarantees of coverage for those with pre-existing conditions, and to scale back what services have to be included in health insurance; the primary target appears to be maternity and contraceptive coverage.  Apparently the old, white men who are behind this forgot that they had mothers who required medical care.  There’s a fig-leaf $8 billion to fund high-risk pools, which no one qualified thinks comes anywhere near the cost of funding those pools.  It’s not a full rollback to pre-ACA levels, but it’s about as far as things can go without requiring enough votes in the Senate to overcome a filibuster.

    (more…)

  • I know there’s a lot of material to pick from here, but the following two positions are hard to reconcile with a straight face.  Since Trump’s and his surrogates’ big mouths have been used against him in court before, perhaps some court will notice this one.  On the one hand, the mass deportation program demands that local law enforcement help federal immigration officials out.  Although (surprise!) the term has no clear meaning, Trump has targeted “sanctuary cities” for federal punishment.  The cities, like San Francisco, are fighting back, arguing (apparently with good precedent, though I stress that I don’t know much about this area of the law; for a decent discussion, start here) that it’s unconstitutional to force local law enforcement to do the work of the feds.  The entire state of California is joining in.  The substantive argument is that local law enforcement needs immigrant communities to be willing to help deal with crime, and that people will be justifiably terrified to come forward if they fear deportation.  So local complicity in mass deportation will make local crime-fighting much more difficult.  California Lt. Governor Gavin Newsom wrote on his Facebook page that “on average, 35.5 fewer crimes are committed per 10,000 people in sanctuary counties, the median household annual income is $4,353 higher, the poverty rate is 2.3% lower, and unemployment is 1.1% lower. Before you take away our funds, I suggest you take a look at the facts, Mr. Attorney General.”  The status of all of this is up in the air; San Francisco reports that the administration is backing down considerably from its earlier bluster.  But the bluster is quite clear: local law enforcement is to be part of the deportation effort.

    On the other hand, the Jeff Sessions Justice Department has decided to throw away everything the DOJ has ever done to deal with police violence in inner cities, trying especially to nullify existing consent decrees in places like Baltimore.  Local police, says the Sessions DOJ, should be allowed, even encouraged, to resume racist stop-and-frisk policies and the like.  Why?  A memo to DOJ employees offers several bullet points, including that “the safety and the protection of the public is the paramount duty and concern of law enforcement officers;” and that “local control and local accountability are necessary for effective local policing.  It is not the responsibility of the federal government to manage non-federal law enforcement agencies.”  There is a further bullet point saying that cooperation with the federal government is important, and that local authorities need to comply with DOJ grant conditions and federal law.  But the thrust of the memo is clear: the feds should butt out of local law enforcement.

    Maybe he should copy ICE on that one.

  • Apparently Burger King ran an ad that attempted to trigger Google Home by having a Burger King employee say “OK, Google: What is the Whopper burger?”  First the ad was up, then it was down; now BK says that it might come back.  The ad was supposed to trigger Google Home to read the first line of the Wikipedia entry on the Whopper.  Annoyed Google customers promptly edited that line to say the burger contained cyanide.  A handle that looked suspiciously like a BK executive then changed it back into effusive praise.  As of this moment, the Wikipedia entry reads much more neutrally.  So apparently it’s been reverted, or the BK sock puppet isn’t at work yet.
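
    To see why the stunt was so easy to hijack, note that the “first line” a voice assistant reads is typically just the openly editable lead of a Wikipedia article.  Here is a minimal sketch of the general mechanism (an illustration, not Google’s actual pipeline), assuming Python with the requests library and Wikipedia’s public REST summary endpoint:

        # Fetch the editable lead summary of a Wikipedia article -- the same
        # text anyone can rewrite to say the burger contains cyanide.
        import requests

        def wikipedia_summary(title: str) -> str:
            url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
            resp = requests.get(url, headers={"User-Agent": "summary-demo/0.1"}, timeout=10)
            resp.raise_for_status()
            return resp.json()["extract"]

        print(wikipedia_summary("Whopper"))

    Anything built on this pattern inherits Wikipedia’s edit wars in real time, which is exactly what BK discovered.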

    We don’t have a good regulatory strategy for this one.  The only one that comes immediately to mind is some sort of creative deployment of the Computer Fraud and Abuse Act (CFAA), which essentially criminalizes accessing a computer beyond authorization.  But there are reasons to be very wary of this strategy.  The CFAA was used in an ill-conceived federal cyberbullying case; the problem there was not the prosecution’s concern with the bullying itself, which resulted in a teenager’s suicide (cyberbullying is epidemic, usually misogynistic, and generally awful), but that the CFAA interpretation advocated in the case would have made using a fake name or email address online a federal crime.  That said, the CFAA might put a stop to this sort of saturation of all of life with advertising.  You probably did not authorize Burger King to access your home system, although I wouldn’t be surprised if terms of service started taking that sort of complaint off the table very soon.  In any case, don’t expect the current government to help: while the Republicans were involved in their circular firing squad trying to destroy the ACA, they did pass legislation blocking impending regulations protecting the privacy of ISP-collected data.

    (more…)

  • As you probably have heard, in a flurry of activity yesterday, the North Carolina legislature repealed and replaced its omnibus LGBT-hate law, HB 2.  The state was clearly moved to act by an NCAA deadline (repeal by Thursday, or no championships until 2022) and an AP report earlier in the week that said the law would cost the state $3.7 billion over ten years.  Almost no one is happy with the new measure, particularly LGBTQ advocates.  The replacement law indeed eliminates the restriction on trans bathroom access, though it still allows harassment of trans people under “indecency” laws.  It does not, however, as many of its advocates (who are being parroted by national media like NPR as I type this) claim, “reset” the state to its pre-HB 2 status.  The replacement bill also locks into place an explicit refusal to offer employment and other civil rights protections to LGBTQ individuals in state law, and it forbids other governmental entities in the state (cities, schools, universities) from going further either on bathroom access or on employment protections.  Two clauses of the very short bill accomplish this.  Section 2 says:

    "State agencies, boards, offices, departments, institutions, branches of government, including The University of North Carolina and the North Carolina Community College System, and political subdivisions of the State, including local boards of education, are preempted from regulation of access to multiple occupancy restrooms, showers, or changing facilities, except in accordance with an act of the General Assembly.”

    Section 3, which is staggeringly broad if you read it literally (the literal reading was apparently not intended by those who wrote it), says that “No local government in this State may enact or amend an ordinance regulating private employment practices or regulating public accommodations.”  At the very least, cities like Charlotte are prohibited from enacting LGBTQ-protective ordinances like the one HB 2 struck down.  Hence we have not returned to the status quo.  The law does sunset in 2020, but of course the legislature would be free to push that into the indefinite future.

    I don’t have a lot to say here on the bill itself, because it clearly does not put the state where it ought to be.  Refusing to protect gender non-conforming individuals from discrimination is a disgrace.  It’s also bad policy to ban municipalities from raising the minimum wage (an aspect of all this that was explicit in HB 2 and remains under section 3, despite getting almost no media attention), though I won’t make that case here.  It might or might not even bring the NCAA back.  I am also still not sure that it would survive a Romer-based challenge, since the clear intent is to freeze out employment discrimination protection for LGBTQ people.

    I do want to note one thing that’s not getting a lot of press.  In signing the bill, Democratic Governor Cooper said that “In a perfect world, with a good General Assembly, we would have repealed HB2 fully today, and added full statewide protections for L.G.B.T. North Carolinians.”  In this he is absolutely right.  The NC legislature has been on a far-right crusade for years and is desperate to hang on to political power as a way to continue that process.  They do that through gerrymandering, voter suppression, and efforts to control local election boards.  As I noted in the linked post above, more people voted Democratic than Republican in North Carolina in both 2012 and 2014.  And yet somehow the Republicans have veto-proof majorities in both houses of the legislature.  That outcome ought to be unacceptable, because it says quite clearly that the legislature lacks democratic legitimacy.  Much of December’s attempt to stack election boards with Republicans has been struck down.  There is further litigation pending: a 4th Circuit decision said the most recent legislative maps target minority voters with “surgical precision,” and a pending order would require statewide elections in 2017 under new maps (the state has appealed this to the Supreme Court).  If the courts get around to striking down all of this, things might get better.  But in many respects, the HB 2 debacle is also a symptom of a deeper failure of governance and democratic legitimacy.

  • Brands are of increasing importance to capitalism.  As an insightful book by Franck Cochoy argues, this is part of the logic of commodification, which generates a perpetual demand for product differentiation.  At the point that a product becomes a commodity – i.e., at the point that it leaves the bazaar, where individual vendors measure out products like food in bulk to individual customers, and enters the process of circulation in a capitalist market – it becomes necessary to distinguish commodities from one another.  This is because the initial process of commodification first produces a necessary standardization (where weights, notions of what a given product name signifies, etc., become more uniform – in Marxian terms, where exchange value becomes measurable and money becomes the primary means of establishing equivalence between commodities whose use value is presumed equivalent), from which producers then have to distinguish their commodities.  This demand for differentiation generates packaging, brands, trademark, and finally the detachment of brands from commodities, such that brands have value that can be applied to other commodities.  A rich literature in anthropology and cultural studies illustrates this process with such products as “quality” salmon, canola oil, and teak.

    There is currently a lively debate in the context of the internet about whether brands actually produce (surplus) value in the Marxian sense.  The pro side is represented in a recent piece by Adam Arvidsson and Elanor Colleoni, who argue that standard Marxian notions of labor apply poorly to the generation of value in places like social media, because the Marxian notion of labor is too tightly connected to time spent laboring.  Instead, and following Negri specifically and autonomist Marxism more generally, they claim that “in effect, social media platforms like Facebook function as channels by means of which affective investments on the part of the multitude can be translated into objectified forms of abstract affect that support financial valuations” (146).  That is, prosumers produce surplus value by means of affective investment in brands (it is perhaps worth pointing out that this argument is not confined to theories about information; for a similar argument from the anthropology literature, see this piece), and this unremunerated attachment is harvested by social media companies as surplus value.

    In a critique of Arvidsson and Colleoni’s claim that brands produce value, Jakob Rigi and Robert Prey argue that affect “does not produce new value but instead helps the owner of the brand to appropriate a larger portion of the surplus value produced by workers in the realm of production” (400).  They identify three primary ways this happens: (1) by allowing brand owners to increase demand for their commodity at the expense of other commodities, enabling them to sell their commodities at prices above their value; (2) by allowing brand owners to extract monopoly rents in the form of intellectual property licensing; and (3) by allowing speculative value and what Marx calls “fictitious capital” to attach to the brand via the stock market.  I want to take a closer look at the first claim here, because it strikes me as mistaken.  At the very least, it seems to me to require more argument than Rigi and Prey supply to defeat the supposition that brands create value, even if Arvidsson and Colleoni aren’t quite right to speak in terms of affective labor (though I’m going to defend at least a version of that claim below; part of my goal here is to define it more precisely).  As I’ve suggested in the context of big data (see also here), I think the surplus value discussion here is incomplete without reference to primitive accumulation.
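
    To make claim (1) concrete, here is a toy worked example in Marx’s value notation (the numbers are my own illustration, not Rigi and Prey’s):

        % Value of a commodity: constant capital c, variable capital v,
        % surplus value s, in units of socially necessary labor time.
        w = c + v + s = 2 + 1 + 1 = 4
        % Suppose the brand lets the owner realize a price corresponding to 5.
        % On Rigi and Prey's account, the premium
        p - w = 5 - 4 = 1
        % is not newly produced value but a transfer of surplus value from
        % buyers and competing producers: total value is fixed by the labor
        % actually performed.

    Whether that total really is fixed is precisely what the dispute over affective labor puts in question, and it is where I think Rigi and Prey’s first claim needs more argument.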

    (more…)

  • A recent paper by Hamid Ekbia presents an interesting Marxian theory of the relation between exploitation and computer networks.  The paper is intended as an intervention into discussions of the accumulation of value in what is now called cognitive capitalism (I’ve attempted to synthesize some of that literature here).  The most interesting part of Ekbia’s paper seems to me to be that he’s able to construct a coherent notion of class (or close to class – he acknowledges that it’s not quite a class in the strict Marxian sense) within those who are part of the networked economy.  In particular, he is able to locate those who are exploited and to roughly define them as a group: the “condensers.”  The problem of locating a specific exploited class is important and salient partly because there is no way for a Marxian theory of value to work unless somebody is exploited, but also because the behavior of prosumers in particular has been the subject of intense controversy, especially on the question of whether they produce value.  Ekbia’s contribution, it seems to me, is to show how and why some prosumers manage to be exploited.

    (more…)

  • There is a running debate in critical theory circles about the applicability of Marxian analysis to big data specifically, and to an economy dominated by immaterial goods more generally (I have blogged about this periodically, circling primarily around the concept of primitive accumulation: see here and here).  As part of working through that literature, I want to lay out some of the broad outlines of the pro side as I see it, and then offer some preliminary thoughts from Marx’s own work that address one specific objection.

    Advocates of the applicability of Marx almost invariably (as far as I can tell so far; I don’t claim to have read anywhere near all of this literature yet) base their case on Italian autonomism.  Autonomism is best known in the work of Antonio Negri (with or without Michael Hardt); also important in this context are Paolo Virno, Maurizio Lazzarato, and Franco Berardi.  Autonomism breaks with classical Marxism both by rejecting any sort of economic determinism (shifting the base from the means of production to class struggle) and by advancing two more immediately applicable ideas.  First, autonomism takes up Marx’s “Fragment on Machines,” an unpublished (by Marx) and fragmentary set of notes to the effect that capital will come to rely increasingly on accumulated science and knowledge (Virno has what is probably the best synopsis of this account of the “general intellect”).  Second, autonomist theory argues that capitalist relations have extended beyond the boundaries of the workplace to encompass all social relations.  In Negri’s terms, we now face the complete subsumption of society by capital.

    (more…)

  • One question surrounding big data – in addition to well-established worries about privacy and discrimination – that is starting to get attention is how it functions as a mode of capitalist accumulation.  There is an emerging literature on capitalist value creation and big data, but a lot of that is about the creation of surplus value, and so generates debate about whether the value that individuals freely contribute to the Internet can be described in Marxian terms as surplus labor.  In view of that discussion, I’ve suggested that we need to also think about the level of primitive accumulation, or what David Harvey calls “accumulation by dispossession.”  In the case of big data, I argued, one such method is by depriving individuals of their preferences, though accumulation practices are diverse.  A recent paper by Deborah Lupton suggests another mechanism by which this process might occur: coercive self-tracking.

    (more…)

  • One of the more perplexing things about the Trump presidency is why it exists in the first place: he took office having lost the popular vote by a wide margin, and with one of the smaller electoral college margins in memory.  The win also defied virtually all of the pre-election polling and commentary: almost no one (except Michael Moore) predicted the outcome correctly, and on his victory lap, Trump himself admitted that he thought he was going to lose.  So a lot of us have tried to figure out what happened (I continue to think the election was about white supremacy, though that doesn’t explain how Trump got the white supremacists to the ballot box; I’ve also wondered about the libertarian candidates and Clinton’s staggering failure to take the Good News about the auto bailout to the rust belt states).   Others have wondered about the Comey letter, Russian hacking, and so on.  Now there’s another possibility: the Trump campaign’s use of big data, as reported in this chilling article on Motherboard. 

    (more…)