If anyone still doubted that Agamben’s thesis – according to which biopolitics today is about the reduction of politics to biological existence (zoe), shorn of anything to do with the form (bios) of life – needs revision, along comes this story about big-data employee screening that operates with an amalgam of questionnaires and biometrics. Salon’s Andrew Leonard relates:

“Welcome to your next job interview. Because that’s exactly how I felt moments before launching into a set of “cognitive video games” devised by the “employment assessment” company Prophecy Sciences. My pulse was spiking because more was at stake than the life of my avatar. Prophecy Sciences intended to use the data I generated while gaming to determine what kind of worker I was.

Staffed by a handful of Stanford neuroscience Ph.D.s, Prophecy Sciences is an ambitious recent arrival in the fast-growing world of “people analytics.” Prophecy Sciences believes better data is the key to helping employers match the right jobs to the right workers, and even to assemble teams in which employees are guaranteed to be compatible with each other.

The notion that our mental makeup and employment suitability can be analyzed by data gathered from our physiological and behavioral responses to a video game is, to some people, a bit more creepy than a pee test. When I wrote about this phenomenon in December, I wondered if companies like Prophecy Sciences were leading society down the path to a rigid algorithmically-driven meritocracy. What happens to our lives when the machines know us better than we know ourselves?”

There’s a lot that could be said about this so-called “people analytics,” but I’ll confine myself to a few brief thoughts.

First, it is widely noted that neoliberalism involves the transfer of risk and precarity away from corporations and onto workers. This would seem an almost perfect example: workers become responsible for their physiological responses to questions, and corporations will be able to claim that they know in advance how effective an employee will be (note that I haven’t said that employers are any good at using their risk models, or that those models are themselves any good. The financial crisis should be enough to establish serious doubt in that regard, and faith in the models can look more like a moral position than an epistemological one). In this inverse-Rawlsian world, we let natural accidents dictate life as much as possible. People analytics is just the next step in the steady intensification of a process by which employers demand complete access to a worker’s life, not just during the job, but as a prior condition of employment; the arc runs from 1980s drug testing (for convenience store clerks!) to demands that prospective employees share their Facebook passwords, and now to the collection of biometric data.

Second, we can expect a parallel intensification of efforts at subjectification: workers are to embrace this new world of people analytics, and there will soon enough be a flood of self-help books designed to ease the process of making oneself conform, as much as possible, to the new demands of capital.

Third, defenders of big data like to point out that it’s “more fair” than whatever regime we have now, because data, like structural law, applies uniformly and without human intervention. While it’s certainly true that non-data-driven hiring is full of dubious practices, biases, and so forth, we should be very careful in asserting that so-called data-driven approaches are somehow (therefore) more fair. Throwing numbers around is a typical strategy of depoliticization, especially when it’s in the service of turning uncertainty into risk. The old computer science principle of “garbage in, garbage out” applies: it is easy enough to build biases into systems. For example, a policy of replacing police officers with traffic cameras looks a lot less fair if it turns out that most of those cameras get put into poor neighborhoods. Data may change where we look for bias, or how we ask questions about it, but it doesn’t eliminate it.
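To make the traffic-camera example concrete, here is a minimal sketch (in Python, with all numbers invented for illustration) of how perfectly uniform behavior plus biased sensor placement produces records that appear to confirm the bias:

    # Hypothetical illustration: the true violation rate is identical in
    # every neighborhood, but the cameras are not evenly distributed.
    import random

    random.seed(0)

    VIOLATION_RATE = 0.05                 # same true rate everywhere
    CAMERAS = {"poor": 80, "rich": 20}    # biased placement, not biased behavior
    DRIVERS_PER_CAMERA = 1000

    citations = {}
    for neighborhood, n_cameras in CAMERAS.items():
        observed = n_cameras * DRIVERS_PER_CAMERA
        # Count recorded violations: each observed driver violates at the
        # same rate, so any disparity comes from where the cameras are.
        citations[neighborhood] = sum(
            random.random() < VIOLATION_RATE for _ in range(observed)
        )

    print(citations)  # roughly {'poor': 4000, 'rich': 1000}

The recorded data “shows” a four-to-one disparity produced entirely by where the sensors were put; any model trained on such records will faithfully reproduce the placement decision rather than the behavior.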

Finally, when Leonard gets his results back, the system says he would do best at creative and entrepreneurial jobs.  Nothing in the public sector shows up at all.  Not that this is a surprise.  Maybe they just don't have that in the database yet…


8 responses to “Biopolitical Employment Screening”

  1. Curtis L

    This denies the (obviously) important work of human resources staff and project managers who have “a knack” at putting together good groups. You can’t quantify an emergent/organic order. Yet that seems to be exactly what this company is attempting to do. It’s like Moneyball writ large.
    Also, it assumes we want groups to work well together. Aren’t some of the greatest breakthroughs in group dynamics the result of groups NOT working well together?


  2. Gordon Hull

    Yeah, I think you’ve got exactly the issue – the point of all this number crunching is to reduce unpredictability: neoliberalism really doesn’t like the aleatory. I do think the reduction of chance or unpredictable events is meant to be a conversion into risk: so presumably the sophisticated HR computer will say that you’re the sort of person whose personality isn’t at all like the rest of the group, and/but that you are (or are not) likely to play well with others. So the software will attempt to quantify the likelihood that such an order will emerge. That would put the good HR people out of business (companies will sign up for the computer version on the grounds that good HR people are in short supply).
    I read somewhere about a year ago that companies are already complaining that nobody good applies for their jobs; the problem is apparently that their HR computers are screening out anybody who does not 100% totally exactly meet prior experience criteria – which of course no one does. The trick to getting hired is precisely to get the HR person to look at your c.v. The links in the OP about how these risk models do/did a terrible job in finance are pretty damning – so I would be surprised if they worked well here.
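    Something like the following sketch (in Python, criteria invented for illustration) captures that exact-match logic: a single missing keyword rejects the candidate before any human ever sees the c.v.

        # Hypothetical illustration of an exact-match screening filter.
        REQUIRED = {"python", "sql", "7 years people analytics", "phd neuroscience"}

        def passes_screen(resume_keywords):
            # 100% match required: one missing criterion is an automatic rejection
            return REQUIRED <= resume_keywords

        candidates = [
            {"python", "sql", "phd neuroscience"},           # strong, but rejected
            {"python", "sql", "7 years people analytics"},   # strong, but rejected
            set(REQUIRED),                                   # the nonexistent unicorn
        ]
        print([passes_screen(c) for c in candidates])  # [False, False, True]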
    There’s a part of me that imagines a dialectical outcome: some number of years into the future, there’s going to be a CEO who writes a blockbuster business-school bestseller: Hiring by Hand: How to Beat the Market and Get Fantastic Employees by Screening Them Yourself.


  3. Eric Winsberg

    I have to admit I have mixed feelings about this. On the one hand, it does all seem a bit, what?, Huxleyan? And of course, the shift of risk onto employees and the counter-Rawlsian aspects are alarming.
    But I wonder how much the last two are really tied, specifically, to the automation of these procedures, rather than to the procedures themselves, even when they are done “by hand.”
    And as for that, I guess I’m less sanguine than you are about the skills of HR people. Isn’t there reason to believe that those people mostly enforce their implicit and explicit biases, try to hire people they like, and hire employees who happen to most resemble the HR people themselves?


  4. Gordon Hull

    I should probably say directly that I agree the evidence that current HR is very often a mess of biases is overwhelming, and that making that practice more fair would be a really good thing to achieve. My concern here is with the way that a move to automated, big-data-based screening encourages subjectification along neoliberal lines. Insofar as it merges questionnaire, game, and biometric data, it seems to me a pretty serious intensification of that process. I’m also enough of a student of people like Latour that I worry the system will either produce its own biases or end up reinscribing, in more subtle ways, the biases we already have.


  5. Christian Marks

    The trend toward big data employee screening doesn’t go far enough. Employment decisions should be taken out of the hands of human decision makers altogether.
    The concern about in-silico neoliberal bias is understandable, but misplaced. The Bloomberg-Forbes-LinkedIn business media encourages employees to internalize market failures and to put their own interests last. HR departments will soon welcome the opportunity to internalize their automated obsolescence through big data as if it were their own fault.


  6. Ed Kazarian

    Gordon,
    First off, this is great stuff and I wanted to register that.
    Second, I think the question of whether all of this leads to the obsolescence of HR departments is an interesting one. There’s a pretty long-standing tendency within HR to rely on lots of metrics in order to ground hiring decisions. Something like Myers-Briggs or one of its competitors has become fairly ubiquitous (despite the reams of evidence showing that there is basically no there there). And of course, all the biases. The point would be, though, that ‘HR’ as a field has survived, and you could argue prospered, precisely by establishing itself as ‘knowing’ how to use all these ‘metrics’ in order to make ‘grounded’ decisions. Seen in that light, it’s hard not to see this as mostly a matter of upping the ante on that process, where the institutional response will be to integrate a layer of vendors of this kind of testing into the scope of existing HR procedures. As you suggest, the likelihood that this plays out in such a way as to do anything but encourage subjects to further neoliberalize themselves is, I would suggest, very small.
    The other part of this is just the question of the validity of it all, and I’m seeing very little that inspires any confidence that this is necessarily any more valid than what already exists. My partner, who teaches in a related field and deals with this stuff all the time, points out that pretty much all of the ‘metrics’ currently in play are basically garbage. And yet they’re still de rigueur, precisely because people have invested in the idea that employing them is the only way to ‘reduce risk.’ I could easily imagine all of this getting adopted because ‘well, we have to do everything we can,’ and because doing more justifies the existence of the institutional node that’s doing it.


  7. Gordon Hull

    Hi Ed – thanks for the comments. I think you’re probably right – and it’s a nice point – that this could be read as an intensification of the use of metrics, which have come to serve a “truth function,” or to stand as the “truth” of employability (I’m of course using those terms in the Foucauldian sense).
    It’s always astonishing to me how much faith people put into metrics, rubrics, and other techniques of quantification that just don’t tell you much or that can fairly straightforwardly “prove” what you already “know.” As far as I understand it, the big data gambit is that if somehow there’s qualitatively more data, then we’ll get a qualitatively better system of metrics. I of course doubt that. I was reading lots of articles in places like Economy and Society a couple of years ago, and the consensus (of the articles I was reading) was that all these risk management mechanisms don’t prove much of anything, and that often the people who use them have no idea what they even do.


  8. Ed Kazarian

    To your last point: exactly. My partner says basically the same thing. The people she’s encountered in the corporate HR/Training/Org Dev sphere are for the most part completely unprepared to think rigorously or critically about what these sorts of numbers are actually measuring, let alone what having that information can usefully tell you — and I have at least the impression based on that article that these ‘cog sci’ folks aren’t necessarily much more sophisticated, especially where they’re drifting into terrain that looks far more sociological than psychological. It’s hard to tell how much of that last is the article and how much is what’s really going on though. But if the article accurately represents their sales pitch, hoo boy.

