NYPD’s Domain Awareness System / Photograph by Shannon Stapleton/Reuters
In a recent article (May 15, 2015), the investigative news platform Mediapart reported that the French Ministry of the Interior has been using a crime-predicting algorithm since the end of 2014. This type of algorithm, which estimates the probability that a crime will occur at a given place and time, has been used for the past few years by various police departments in the United States, as well as in the United Kingdom. The Los Angeles Police Department and its Real-Time Analysis and Critical Response Division, in particular, rely on the private (!) company PredPol to organize the location of police patrols in the city (see a 2014 article in The Guardian). Of course, when talking about these algorithms, Philip K. Dick’s short story Minority Report (1956) and its cinematographic adaptation by Steven Spielberg (2002) are invoked as the prophecy that announced such policing practices. Minority Report indeed describes a police force able to arrest people before they commit a crime, based on predictions made by three “precogs.” The vision developed by this fiction insists on the potential fallibility of such a system and on the problem of arresting someone for a crime (s)he has not yet committed. What I would like to briefly examine here, however, is rather what the use of these algorithms implies for the way a society is structurally conceived.
The first thing that ought to be said may sound naive, but there seems to be a fundamental problem in the inevitability of criminal behavior that these algorithms imply. As I wrote in a previous article about the police and the notion of “law enforcement,” the very existence of such a thing as ‘anti-riot police’ implies that the State anticipates that demonstrations against it will be fundamentally illegitimate and should thus be suppressed by a branch of the police designed for that purpose. Crime-predicting algorithms inscribe themselves within this anticipative function of the police, which might appear obvious to us but can actually be dated historically as a recent paradigm. Anticipation requires a set of incriminating criteria that essentially rely on collective and/or individual subjectivity, not on any legal basis, since one cannot be legally accused of “already resembl[ing] his crime before he commits it,” as Michel Foucault states in his 1974-1975 lectures at the Collège de France (Abnormal, 2004). This collective and/or individual subjectivity obviously involves racism as a determining influence, and the overwhelmingly numerous cases of police officers searching, arresting, and sometimes killing non-White bodies in the Western world are here to remind us of this.
The mathematical bases of crime-predicting algorithms do not escape such incriminating subjectivity. On PredPol’s website, we find a quote from a Santa Cruz, CA police deputy affirming that “all the computer takes into effect are actual incidents that have occurred and have been reported… It really doesn’t know anything about the demographics of individuals that live in that area, what the economic statuses of these individuals are, or anything about the person. It’s all area-specific.” Similarly, in The Guardian article cited above, UCLA professor of anthropology P. Jeffrey Brantingham, who contributed to the design of the PredPol algorithm, affirms that “this is about predicting where and when crime is most likely to occur, not who will commit it.” Coming from a researcher in anthropology who complements his work on criminological patterns with algorithmic studies of “cultural transmission in a variable velocity crowd” (see his website), it is hard to be convinced that he actually believes what he affirms.
Through its regular census and the way the latter is conducted, the United States is a country that has a good sense of the economic and racial organization of its territory. This is less true for a country like France, which does not recognize race or ethnicity as an official criterion for defining a given individual or population, yet is well aware of them unofficially, especially when it comes to the police’s anticipative function. To affirm that the prediction of crime is linked only to a certain territoriality and not to the bodies located within it thus appears disingenuous. Moreover, the determinism that the algorithm implies (or that all algorithms imply, for that matter) seems awfully selective, since determinism is precisely what a majority of political representatives refuse to acknowledge when it comes to social inequality. In other words, it seems clear to a certain number of politicians that crime can be mathematically predicted, but not as clear that some citizens are structurally less able than others to access the common societal infrastructure (education, economic opportunities, justice, transportation, etc.) depending on where they live, despite the similarities between the criteria underlying both forms of determinism.
The last point of critique of crime-predicting algorithms has to do with something common to all algorithms (as far as my limited knowledge of mathematics can tell): how the action undertaken on the basis of the algorithm’s output constitutes a “re-entry” (a new input) into the same algorithm. In our specific case: how does police presence at a location where the algorithm declares the crime probability to be high influence the actual likelihood of crime at that location? The designers of PredPol would like us to believe that police presence significantly reduces the probability of crime and, we might concede, this may be true in the immediacy of the moment; yet we may strongly doubt that it remains true over a longer period of time. The ubiquitous presence of the police in a particular neighborhood antagonizes its inhabitants, and crime is likely to increase subsequently because of it. The inverse was recently proven, against its will, by the New York Police Department when it went on strike against New York’s mayor Bill de Blasio after two police officers were shot and killed, in a context of generalized defiance toward the police following the killing of Eric Garner on July 17, 2014. The NYPD then decided to undertake only “absolutely necessary” arrests, thus reducing its number of daily arrests by two-thirds. As Matt Ford cleverly asks in an article for The Atlantic (December 31, 2014), “how many unnecessary arrests was the NYPD making before?” Somehow, the NYPD has involuntarily shown us what the police could look like if it chose not to follow the subjective anticipation that crime-predicting algorithms exacerbate.
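The “re-entry” problem described above can be illustrated with a toy simulation. This is emphatically not PredPol’s actual model (which is proprietary); it is a minimal sketch, assuming only two districts with an identical underlying crime rate, a single patrol allocated wherever the recorded data look “hotter,” and the premise that incidents are only recorded where a patrol is present to observe them:

```python
import random

random.seed(42)

# Toy model of the "re-entry" (feedback) problem: two districts with the
# SAME true crime rate, but district 0 starts with a slightly larger
# historical record. The patrol goes wherever the data suggest crime is
# concentrated, and incidents can only be recorded where the patrol is.
TRUE_RATE = 0.5            # identical underlying crime probability per day
recorded = [12, 10]        # historical incident counts (slightly unequal)

for day in range(1000):
    # Allocate the single patrol to the district with more recorded incidents.
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    for district in (0, 1):
        crime_occurs = random.random() < TRUE_RATE
        # An incident enters the dataset only if a patrol observes it.
        if crime_occurs and district == patrolled:
            recorded[district] += 1

print(recorded)
```

Because district 0’s small initial lead keeps attracting the patrol, only its incidents keep being recorded: the data end up showing hundreds of incidents there against the unchanged ten in district 1, even though both districts experience crime at exactly the same rate. The output of the system, fed back as its own input, manufactures the very “hot spot” it claims to discover.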
Although algorithms seem to have recently entered the traditional, sterile technological debate about whether they are emancipatory or oppressive (in particular in light of the growing power exercised by online companies such as Google or Facebook), we might want to use them to struggle against the determinism they reveal, until the re-entries constituted by our actions based on their information finally make them obsolete, determinism thus being replaced by effective equality.