Ethical Issues Concerning Algorithms

Photo by Antonio Batinić on Pexels.com

I was reading El País today and came across the story of Catherine Taylor, from Arkansas. Catherine applied for a position with the Red Cross, trying to ‘save the world’, but her life was hugely damaged as a consequence of an unethical algorithmic process. The mistake was made by ChoicePoint, a data broker – one of the companies that compile digital data to sell to other companies, in this case to the Red Cross.

Unexpectedly (as happens in real life), there was another Catherine Taylor, born on the same day, who had been convicted of “trying to sell drugs” and “manufacture methamphetamine”. As you have already imagined, the Red Cross rejected her application, based on a strictly automated algorithmic screening. Suddenly, a computer decided that the “good Catherine” was a criminal and her life became a nightmare: many other companies had already bought the same database, it took her four years to get a job, she had many rental applications denied, and she had to live with her sister – you can imagine the hell she has been through. She developed a heart problem, and she would probably have had many more days of happiness supporting Red Cross projects and being helpful if she had been hired instead.
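The underlying failure is easy to reproduce: a name plus a date of birth is not a unique identifier, yet naive background-check pipelines may match records on exactly that. Here is a minimal sketch of how such a false positive arises – all names, dates, and numbers below are invented for illustration, and ChoicePoint’s actual matching logic is not public:

```python
# Two different people can share a name and a birth date.
# Matching criminal records on (name, dob) alone conflates them.

people = [
    {"id": "A", "name": "Catherine Taylor", "dob": "1976-10-03"},  # no record
    {"id": "B", "name": "Catherine Taylor", "dob": "1976-10-03"},  # has a record
]

criminal_records = [
    {"name": "Catherine Taylor", "dob": "1976-10-03", "charge": "drug offense"},
]

def naive_check(person):
    # Flawed: keys only on name + date of birth,
    # ignoring any unique identifier for the person.
    return any(r["name"] == person["name"] and r["dob"] == person["dob"]
               for r in criminal_records)

flags = [naive_check(p) for p in people]
print(flags)  # [True, True] -- both are flagged, but one is a false positive
```

A human reviewer, or a match on a genuinely unique identifier, would separate the two; a fully automated pipeline built on this join simply cannot.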

This is real life, and you might say that “it is an exception”, but there are many more common situations that exclude people and widen inequality gaps than we imagine. Hiring processes increasingly rely on algorithms that have no interest in understanding whether the bad grades you got in 2005 were the consequence of a racist teacher trying to harm you. Financial algorithms that determine an individual’s creditworthiness may underestimate how social conditions shape personal finances. The human layer of Artificial Intelligence gains relevance day after day, mistake after mistake. Many innovation labs are creating or enlarging their teams to include professionals from the social sciences (like me!) or experts in their fields, who can anticipate issues that happen in real life and improve these processes.

Like real life again – and forgive the rough comparison – it is like party rules. You cannot just get drunk, harass, beat or steal from other people and justify it with “sorry, I was drunk”. It is unethical. It was also unethical when Google processed the images of three young African Americans and labeled them as gorillas. Machines make mistakes, as humans do. Humans who rely only on statistics and algorithms to build AI tools make the mistake twice. Ethical standards for Artificial Intelligence are not only a matter of governmental regulation per se; they also imply protecting human rights, preserving mental health, saving money, and adopting the minimal crucial requirements to narrow the abyss of social gaps.

Peace is not a white dove.

By @elisamariacampos