Tuesday 18 August 2020

You can’t blame an algorithm: A-levels and unintended consequences.

You are at a stag do, in charge of paying the bill, and you need to do long division, which literally no one knows how to do by hand. So you type £2344 divided by 24 (lads) into your calculator. But because multiple rounds of black sambuca have added to your already poor coordination, you drunkenly mash the last digit and end up typing 2344/26. Inevitably you get the wrong answer and argue with the waiter for half an hour.

Who is at fault here? Could it possibly be you? No, it is the calculator’s fault. The calculator, a sentient and telekinetic being, should have known you meant to type 4 and not 6, which is definitely what you did with your actual physical finger.

The problem with the word "algorithm" is that it makes you think of a complex mathematical equation that makes decisions for us. But this is not correct at all. An algorithm is not sentient; it does not make the decisions, it just follows orders. When you type 2344/26 into your calculator, the calculator is following an algorithm. It takes your input of 2344 and splits it into 26 equal parts for you. It does exactly what you said. It does not look around the table, notice there are only 24 people (lads) and think "ah, I know what he means here".
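A calculator's division really is just this: a rule applied blindly to whatever input it receives. A minimal sketch in Python (the function name is my own invention for illustration):

```python
def split_bill(total_pounds, number_of_lads):
    """Divide the bill into equal shares -- nothing more.

    The function follows its orders exactly: it has no way of knowing
    whether `number_of_lads` was what you meant to type.
    """
    return total_pounds / number_of_lads

# What you meant to type: roughly £97.67 each.
correct = split_bill(2344, 24)

# What the sambuca made you type -- the algorithm obliges anyway,
# giving roughly £90.15 each and a row with the waiter.
drunk = split_bill(2344, 26)
```

No amount of extra sophistication in `split_bill` would help here; the error is in the input, not the rule.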

Even the most complex of algorithms, so-called “artificial intelligence”, follow the orders we give them. We may not be able to understand how they solve problems, but that does not mean we didn’t design the rules they follow (it is quite an interesting subject really, someone should probably write a book about it).

However, the “algorithm” used to assess A-level results was nowhere near as complex as artificial intelligence. It wasn’t even a “black box”: a term for a situation where you don’t know precisely how an algorithm transforms its data (usually as a result of complexity). But it was attempting a complex task: assigning grades to students without them sitting exams.

The problem of designing algorithms is similar to that of designing laws. After all, laws are basically algorithms without numbers.

For example, I think most people would agree that stealing is wrong and that we should have a law against it. So let’s say we write a law that says “do not steal” and hope that does the job.

But imagine someone is bleeding badly in the street. I run to the nearest chemist, grab some bandages from the shelf, run out of the shop and dutifully tend to the bleeding person.

If we were to enforce our “do not steal” law here, I would have committed an illegal act despite my heroic efforts and would have been stopped by a security guard. Perhaps I could have gone back in to pay for the item, but it isn’t really clear whether this counts as stealing or not. This is because we haven’t really established what “stealing” technically is in our “do not steal” law.

Writing laws is particularly difficult because of this. We need caveats and exceptions; we need careful definitions. But even this process isn’t perfect. We still need judges and juries to interpret the law on a case-by-case basis.
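If laws are algorithms without numbers, we can make the analogy concrete. Below is a hedged sketch (the function names and the "emergency" exception are my own invention, not actual legislation): a naive "do not steal" rule, and a refined one with a caveat, which, like real law, still leaves edge cases for a judge to decide.

```python
def is_stealing_naive(took_item_without_paying):
    # The "do not steal" law, verbatim: no caveats, no definitions.
    return took_item_without_paying

def is_stealing_refined(took_item_without_paying, emergency, intends_to_pay_later):
    # A version with a carefully-worded exception, like real legislation.
    # Even this leaves questions open: who decides what counts as an
    # "emergency", and how do we verify an intention to pay later?
    if emergency and intends_to_pay_later:
        return False
    return took_item_without_paying

# Grabbing bandages for someone bleeding in the street:
is_stealing_naive(True)                        # heroic act, but illegal
is_stealing_refined(True, emergency=True,
                    intends_to_pay_later=True)  # now lawful -- by our caveat
```

Every caveat we add shrinks one set of unintended consequences and risks creating another, which is exactly why the interpretation can never be fully automated away.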

The problem with laws and algorithms is that they can have unintended consequences. The "do not steal" law did not intend for a person to bleed out on the street. The question we should then ask is: who is to blame for these unintended consequences? Can we really blame someone who did not intend for these things to happen and was acting in good faith? In the A-levels case, the answer is yes and no.

Firstly, I do not think we should blame the designers of the algorithm for its having unintended consequences. It is not as if they didn’t think things through or created an overly simple rule. They did carefully check for certain things, such as whether their algorithm was favouring particular groups.

What happened, however, is that their algorithm potentially favoured private schools, since private schools tend to have small class sizes and the model relied heavily on predicted grades for groups smaller than 15. Now you may say this is “obvious”, but we have the benefit of hindsight, and they were also dealing with a number of complicated issues. That something escaped their model is not surprising at all, which is why I do not think you can blame the designers completely for this.
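The small-class effect can be sketched roughly like this. To be clear, this is a toy simplification I have made up for illustration, not Ofqual's actual model, which was considerably more involved: the one real ingredient is that classes smaller than 15 leaned on teacher-predicted grades, which tend to be optimistic, while larger classes were standardised against historical results.

```python
def assign_grade(predicted_grade, historical_grade, class_size,
                 small_class_threshold=15):
    """Toy version of the small-class rule (NOT the real model).

    Small classes keep the (often optimistic) teacher prediction;
    large classes are pulled towards the school's historical results.
    """
    if class_size < small_class_threshold:
        return predicted_grade
    return historical_grade

# Two students with identical predictions but different class sizes:
assign_grade("A", "B", class_size=8)   # small class: prediction kept
assign_grade("A", "B", class_size=28)  # large class: standardised down
```

Since private schools are far more likely to sit below the threshold, even this crude rule systematically advantages them, without anyone ever writing "favour private schools" into the code.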

However, you can blame someone for not acknowledging the fact that their model will have unintended consequences. Because of the high likelihood that something will go wrong you need to discuss the model with as many informed people as possible to spot potential issues. The other thing you need to do is take it as given that there will be unintended consequences and work out ways to mitigate their impact. Essentially, planning to fail.

The thing is, just as we need laws, we also need algorithms to help us. Imagine a scenario where students’ grades were based solely on teacher assessment. The pressure on teachers to inflate grades would be extremely high. Even if they would not do it themselves, they might worry that other teachers would, and that not inflating grades would put their own students at an unfair disadvantage.

But when we make algorithms, we need to be sufficiently prepared to deal with unintended consequences.
