Preprint / Working Paper
Details
Citation
Noto La Diega G (2018) Against Algorithmic Decision-Making. SSRN. https://doi.org/10.2139/ssrn.3135357
Abstract
This paper examines why algorithms cannot and should not replace human decision-makers, and presents some statutory remedies against algorithmic decisions.
In August 2013, Eric Loomis was sentenced to six years' imprisonment for attempting to flee a traffic officer and operating a vehicle without the owner's consent, a sentence that seems disproportionate to the offences. What makes the decision even more troubling is that it was reached on the basis of a COMPAS report. COMPAS is a proprietary algorithmic risk-assessment system used in the criminal justice system; it claims to augment human intuition by predicting an individual's risk of recidivism, and it rated Loomis as presenting a high risk of violence, a high risk of recidivism and a high pretrial risk. The judge at Loomis' hearing agreed with the COMPAS report, stating that the assessment identified Loomis as a high risk to the community and that a six-year sentence was therefore 'just'. Loomis' lawyers tried to challenge the algorithmic decision but could not, because the algorithm was protected as a trade secret and the rationale of the decision was therefore inaccessible. In 2016, the Supreme Court of Wisconsin upheld the circuit court's denial of the defendant's motion for post-conviction relief requesting a new sentencing hearing. Loomis had unsuccessfully argued that the circuit court's consideration of a COMPAS risk assessment at sentencing violated his right to due process. The Supreme Court held that, if used properly, a court's consideration of a COMPAS risk assessment at sentencing does not violate a defendant's right to due process.
In Milan, Marica Ricutti, a 39-year-old single mother of two, was dismissed from her job at Ikea after 17 years of working there, once again as the consequence of an algorithmic decision. Ms Ricutti had to swap her working shifts with colleagues from time to time in order to take her children to school and to the hospital, as one of her children was disabled. Although her colleagues and employers agreed to the swaps, Ikea's algorithmic workload system did not register the changes, because they had not been 'authorised' or 'programmed', and it recorded Ms Ricutti as having worked fewer than seven days per month over a period of eight months. On that basis, she was dismissed. Ikea's reliance on the algorithmic system drew heavy criticism, including from Marco Beretta, a representative of Filcams Cgil (the Italian retail workers' union), who stated that modern innovations such as algorithms have made it harder to accommodate workers' individual circumstances.
These examples show that replacing human decision-makers with algorithms is problematic. An algorithm may produce more consistent decisions, but not necessarily more appropriate ones. Appropriate decisions require several factors to be taken into account, including individual circumstances, personal characteristics, context and nature, among many other things. An algorithm can undoubtedly make decisions based on statistical analysis, but this is not always sufficient: most situations call for a holistic understanding of a number of factors, including context and human intention, which algorithms are not capable of. The next part of this paper presents some arguments against algorithmic decisions.