Prediction or policing? Can algorithms be racist?


By Elisa Maria Campos

The global fight against racism began centuries ago, long before we could phone a colleague and invite them to a protest. Although Black #hashtag movements play a core role in our digital era, global resistance to racism began with shared experiences in the struggle against slavery, colonialism and racial oppression, in Black movements such as those surrounding the Haitian revolution, the anticolonial revolutions and the long Black 1960s (Martin, 2005).

The historical length of the fight against racism matters not only because it demonstrates the urgency of combating racism, but also because it reveals how deeply entrenched racism is in our society. In other words, it shows how remarkably hard it is to combat it and to decolonize our perspectives, language, expressions, institutions and culture.

Will it be different with algorithms? Is technology a powerful tool for inclusion or for exclusion? Can algorithms be racist? Who develops them: white people, Black people, or both? Unless we intend to keep repeating our mistakes and widening the gaps of inequality, we must answer these questions urgently.

Data for Black Lives has been working on this. Defined as “a movement of activists, organizers, and mathematicians committed to the mission of using data science to create concrete and measurable change in the lives of Black People,” it was founded by Yeshimabeit Milner and Lucas Mason-Brown in Cambridge, Massachusetts. Milner became engaged in data-based activism while documenting fellow students’ experiences of racist policing, when she noticed patterns of racist classification by police, which result in biased data held by police departments.

“There’s a long history of data being weaponized against Black communities,” says Milner. It starts with data collection: predictive policing derives from biased records that reproduce the police’s historical prejudice against Black people. Predictive policing tools in the United States are mainly of two types: location-based algorithms, which predict where and when crimes are most likely to happen, and person-based tools, which use data such as gender, age and criminal record to predict who has a high chance of being involved in future criminal activity.

An article by Will Douglas Heaven in MIT Technology Review gives detailed information on this. “One of the most common predictive policing tools, called PredPol, which is used by dozens of cities in the US, breaks locations up into 500-by-500 foot blocks, and updates its predictions throughout the day—a kind of crime weather forecast. The person-based tools can be used either by police, to intervene before a crime takes place, or by courts, to determine during pretrial hearings or sentencing whether someone who has been arrested is likely to reoffend. For example, a tool called COMPAS, used in many jurisdictions to help make decisions about pretrial release and sentencing, issues a statistical score between 1 and 10 to quantify how likely a person is to be rearrested if released. According to US Department of Justice figures, you are more than twice as likely to be arrested if you are Black than if you are white. A Black person is five times as likely to be stopped without just cause as a white person. The mass arrest at Edison Senior High was just one example of a type of disproportionate police response that is not uncommon in Black communities” (Heaven, 2020).
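To make the location-based approach concrete, here is a minimal sketch in Python. It is not PredPol’s actual algorithm, which is proprietary; it only borrows the grid idea from the quote above (500-by-500 foot cells) and ranks cells by how many incidents police recorded there. The function names (`cell_of`, `forecast_hotspots`) and the sample coordinates are hypothetical, chosen for illustration.

```python
from collections import Counter

CELL_FT = 500  # assumed grid cell size in feet, matching the 500-by-500 foot blocks above

def cell_of(x_ft: float, y_ft: float) -> tuple[int, int]:
    """Map a coordinate (in feet from a city origin) to its grid cell."""
    return int(x_ft // CELL_FT), int(y_ft // CELL_FT)

def forecast_hotspots(recorded_incidents, top_n=3):
    """Rank cells by how many incidents police *recorded* there.

    Note the limitation: this counts recorded incidents, not actual crime.
    An over-patrolled neighborhood accumulates more records and gets
    flagged again, whatever its true crime rate.
    """
    counts = Counter(cell_of(x, y) for x, y in recorded_incidents)
    return counts.most_common(top_n)

# Hypothetical records: most incidents were logged where patrols already were.
history = [(120, 80), (130, 95), (610, 520), (140, 60), (125, 70)]
print(forecast_hotspots(history))
# -> [((0, 0), 4), ((1, 1), 1)]: the heavily recorded cell tops the "forecast"
```

The sketch makes the key point visible: the forecast is just a ranking of past records, so whatever shaped those records shapes the prediction.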

Beyond predictive tools, racism can also be reproduced in statistical modeling, risk-based sentencing, and predatory lending practices that exclude Black communities from key financial services. Though algorithms were developed to make decision-making fairer and more objective than human judgment, they are ultimately trained on data that reproduce the limited information available, because that is what police departments record, a circularity the sketch below illustrates. Moreover, the intersections between race and justice transcend any technical aspect. Police data carries generations of biased information, and diminishing racism will take more social research, a question math is incapable of answering.
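The circularity is easy to demonstrate. The following toy simulation uses entirely assumed numbers (the crime rate, the patrol counts, the 365-day horizon are all invented for illustration): two neighborhoods have the same true crime rate, but one starts out over-patrolled, and because crimes only enter the data where patrols are present, the “objective” reallocation keeps sending patrols back to the same place.

```python
import random

random.seed(0)
TRUE_CRIME_RATE = 0.1            # identical in both neighborhoods by assumption
patrols = {"A": 8, "B": 2}       # historical bias: A starts over-patrolled
records = {"A": 0, "B": 0}

for day in range(365):
    for hood in ("A", "B"):
        # A crime only enters the data if a patrol is there to record it.
        for _ in range(patrols[hood]):
            if random.random() < TRUE_CRIME_RATE:
                records[hood] += 1
    # The "prediction": tomorrow's 10 patrols, allocated proportionally
    # to past records (each neighborhood keeps at least one patrol).
    share_a = records["A"] / max(1, records["A"] + records["B"])
    patrols["A"] = min(9, max(1, round(10 * share_a)))
    patrols["B"] = 10 - patrols["A"]

print(records)  # A accumulates far more records despite the equal true rate
```

Nothing in the loop is racist in itself; the disproportion comes entirely from the starting conditions, which is precisely the argument about historical police data.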

Still, a few studies support the argument that predictive tools and algorithms are inevitable and, in the end, helpful in controlling crime rates despite the inherent racism they carry. An opposing group says there is no reasonable justification for continuing to use any tool once it is consciously recognized as racist. The discussion is more than relevant and highlights the urgency of insisting on multidisciplinary teams assessing data policies. Most software is developed by white developers, in profit-driven, white-managed firms with a technical focus, overlooking the social implications of how a simple location-based algorithm can, for instance, record someone as a criminal, stealing from them a bright future.

Heaven, Will Douglas (2020). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. Available online: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

Martin, W. (2005). Global Movements before “Globalization”: Black Movements as World-Historical Movements. Review (Fernand Braudel Center), 28(1), 7-28. Retrieved January 2, 2021, from http://www.jstor.org/stable/40241617