Monday, 19 April 2021

Using Bayes' theorem to figure out whether Bayes' theorem is an "obscure math theorem".

An edited extract of our book was published on Sunday and caused statisticians everywhere to lose their minds. One particular tweet has gone viral.

The issue was the headline implying that Bayes' theorem is an "obscure math theorem". The reason it produced so much heat is that the phrase is debatable in meaning, like "the dress".

What exactly we infer from this phrase is, coincidentally, also related to Bayes' theorem. Does it imply that it is obscure for a math theorem? Or that math theorems are not very well known anyway? Or perhaps "obscure" means hard to understand*?

Bayes' theorem is certainly well known for a theorem; it is taught in many introductory stats classes. But then again, what percentage of the population will have taken a stats class? Not a huge amount. And how low does that percentage have to be before something qualifies as "obscure" anyway?

Perhaps you think the meaning is clear, so let's turn it around. Let's say the headline had called Bayes' theorem a "well-known math theorem". Do you honestly think no one would reply: "Well-known! I haven't heard of it before!!!"

These phrases are lexically ambiguous and it's something I have written about before. If you hear someone say that the "priest married my sister" you may be unsure whether the priest was the person who conducted the ceremony or is now your sister's husband. Usually, you can work this out from the context (e.g. if the priest was Catholic then it's extremely unlikely to be the latter). This is how Bayesian reasoning works: we update our beliefs given new information, and we do it all the time without thinking. 

However, sometimes this reasoning can go wrong and lead to a lot of issues like the above. If you know that Bayes' theorem is well known for a theorem, it will affect how you think most people would interpret the phrase "obscure math theorem". Knowing this information biases you: it makes it much harder to imagine how others (who do not know this information) would read the phrase. 

This is also related to something called the curse of knowledge (another thing I have written about before). This happens in conversation when you mistakenly believe that the other person knows what you know. For example, some people pointed out that P(A|B) = (P(B|A)P(A))/P(B) is really simple as it's just multiplication and division. But unless you know that the "|" means "conditional on" and the "P" means probability, it's completely incomprehensible. It isn't simple unless you know what the symbols mean; it is like saying this is simple, これは簡単です.**
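To spell the formula out, here is a minimal sketch in Python applied to the priest example above. The priors and likelihoods are entirely my own made-up numbers, there purely for illustration:

```python
# A minimal sketch of the formula above, with the symbols spelled out in words.
# All the numbers are made up purely for illustration.

def bayes(prior, likelihood, evidence):
    """P(A|B) = P(B|A) * P(A) / P(B): update a prior belief given new information."""
    return likelihood * prior / evidence

# "The priest married my sister": is the priest the spouse (A) rather than the officiant?
prior_spouse = 0.01          # P(A): before any context, the priest is rarely the spouse
p_catholic_if_spouse = 0.01  # P(B|A): a Catholic priest is very unlikely to be the spouse
p_catholic_overall = 0.60    # P(B): chance the priest in question is Catholic at all

posterior = bayes(prior_spouse, p_catholic_if_spouse, p_catholic_overall)
print(f"P(spouse | Catholic) = {posterior:.4f}")  # ~0.0002: almost certainly the officiant
```

Once the symbols have names, the arithmetic really is just multiplication and division; the hard part is knowing what the names refer to in the first place.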

The irony of all this is that journalists (including Tom) often get annoyed at readers who assume they write the headlines of their articles. Journalists know that it is usually an editor who writes the headline, because they have direct experience of this. However, how common is this knowledge amongst the general population?

Having said this, I can also understand people's anxiety that the headline may be misleading. I do think it highlights a broader issue: the disconnect between headline writers and those who write the article.

Overall though, the fact that the Observer wanted to commission a piece about Bayes' theorem in the first place is fantastic. Let's build positively on this rather than arguing about it into obscurity. 


*Some people were concerned that the phrasing of the headline made the theorem sound more difficult to understand, which could put people off. Another way of looking at it is that, for many, maths already seems difficult to understand, and acknowledging this may make people feel more confident. I have no idea which way is the right way to look at this.

**This is the Japanese for "this is simple".


Wednesday, 7 April 2021

The curse of knowledge: are you being "helpful" or just patronising?

Everyone has been accused of being patronising at some point in their lives. Patronising, by the way, is when you...

Ok, so this is a terrible joke but it does point to a problem: when do you explain something to someone?

The curse of knowledge happens in conversation when you mistakenly believe that the other person knows what you know. It is more likely to happen when jargon is involved, but it is not always so easy to spot when you are doing it.

The curse of knowledge, however, is always present. It is happening right now as I am writing this. I am making an assumption about your knowledge. To do this, I have a likely reader in mind: someone who I think knows the definition of patronising and can read English, for example. The problem is that, out of the (many millions of) people who read my blog, it is likely that someone will not know the definition of patronising. What is even more likely is that I have a slightly different definition to you. 

In any conversation, the chances of a misunderstanding like this are a result of two things: me not explaining something and you not asking me to explain what I mean. The former can happen because I assume you know how I am defining something, or because I don't want to appear patronising. The latter happens because you are afraid to ask and don't want to look stupid. 

Alternatively - and this is what I believe causes most misunderstandings - you have a different definition to me and we both assume our definitions are the same. This happens all the time in debates about "capitalism" or "socialism" and is particularly pernicious with misleading words. 

So why do we get annoyed by someone explaining something to us that we already know? What we are accusing them of is assuming it is unlikely we already know it. 

The person explaining, however, could in fact think it is quite likely that you do know, but they just want to make sure. This all implies, though, that there is some level of certainty about the other person's knowledge above which we won't bother explaining, e.g. I won't explain X if I am 80% sure the other person knows. Of course, we don't actually think in probabilities, and this tolerance level will change depending on the situation. I would want to be 99.99% sure the other person knows which colour wire to cut if I were talking someone through a bomb defusal, even if afterwards the person accuses me of being patronising (it's a cross I am willing to bear).

This level is quite important because the higher we set it, the more likely we are to make the mistake of explaining something to someone that they already know. The flip side is that it becomes less likely that we fail to explain something and the person ends up not knowing. This is akin to false positives and false negatives in hypothesis testing in statistics. The level we set may be arbitrary but it has real trade-offs: decreasing the chances of false positives increases the chances of false negatives and vice versa. It is also a big part of how science works and something I think people should know more about.* 


But what about situations where someone just wants to explain helpfully and make fewer false negatives? If the person gets annoyed at you explaining it to them, is their annoyance really justified? If you are not being actively condescending by rubbing it in and saying they "should really know the answer", what's the harm?

Well, I think there is some harm caused by this. Invariably you will make judgement calls about what to explain and when - you can't explain everything to everyone (it would take forever) - and a lot of this reasoning is subconscious. Consider, for example, mansplaining. Men may think that they do not treat men and women differently when it comes to explaining things, but it is extremely difficult for a person to know this for sure. 

I don't think there is a simple solution to this. However, I do think that, rather than explaining the "correct" definition per se, it is probably better to offer personal definitions. For example, saying "my understanding" of something is very different from saying "this is what X means". At the same time, we should more often politely ask people how they are defining something. Hopefully, this will let us avoid the curse of knowledge without being overly patronising.




*You may have heard of p=0.05 before, which is often the arbitrary level set in hypothesis testing. There is nothing special about p=0.05: we could set it lower, at p=0.01, and we would get fewer false positives and more false negatives. If we set it higher, say at p=0.10, the opposite occurs. But it is not just a simple probability and has quite a specific meaning which is often misinterpreted. We explain hypothesis testing and why it often goes wrong in our new book.
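To make the trade-off concrete, here is a rough simulation of my own (not from the book); the effect size, sample size and thresholds are all arbitrary choices for illustration:

```python
# A rough simulation of the false positive / false negative trade-off.
# All the numbers (effect size, sample size, number of trials) are arbitrary.
import math
import random
import statistics

def p_value(sample, mu0=0.0):
    """Crude one-sample test: two-sided p-value using a normal approximation."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = abs(statistics.mean(sample) - mu0) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def error_rates(alpha, n_trials=2000, n=30, true_effect=0.3):
    # False positive: the effect is zero, but we "detect" one anyway
    false_pos = sum(p_value([random.gauss(0.0, 1.0) for _ in range(n)]) < alpha
                    for _ in range(n_trials)) / n_trials
    # False negative: there is a real effect, but we fail to detect it
    false_neg = sum(p_value([random.gauss(true_effect, 1.0) for _ in range(n)]) >= alpha
                    for _ in range(n_trials)) / n_trials
    return false_pos, false_neg

for alpha in (0.01, 0.05, 0.10):
    fp, fn = error_rates(alpha)
    print(f"threshold {alpha:.2f}: false positives ~{fp:.2f}, false negatives ~{fn:.2f}")
```

Lowering the threshold buys you fewer false alarms at the price of missing more real effects; there is no free lunch, only a choice about which mistake you would rather make.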


Wednesday, 31 March 2021

Why Deal or No Deal should have been cancelled

In the autumn of 2005, Noel Edmonds would return to our screens with a new show called Deal or No Deal. Somehow the show managed to drag out the process of picking random numbers from 1 to 22 and turn it into TV gold. There wasn't really any skill involved, other than perhaps convincing the banker that you were risk-loving in order to get a good deal for your box. 

Looking back, it was amazing to think people actually apologised for opening a random box that contained the £250,000 - as if they were somehow at fault. The contestants would often join hands if a crucial box was about to be opened, willing good fortune in messianic prayer. 

Perhaps the weirdest thing was that once a player had accepted an offer, they would carry on playing the game to see what "would have" happened. Noel would frequently chastise a contestant if it turned out they had a higher amount in their box: he was giving people grief for not being able to predict the future. 

My view is that Deal or No Deal should never have been broadcast, or at the very least should have been put on after the watershed. Let's imagine for a moment that there was a TV show that doubted evolution called Ape or Not Ape. My bet is that there would be thousands of complaints from the "science-minded community" and it would be driven off the air within a matter of weeks.

You are probably thinking now that most people know that Deal or No Deal is rubbish and that I am overreacting.* However, just because you know something and find it easy doesn't necessarily mean everyone else will. To quote another famous TV host: "it's only easy when you know the answer".

About 40% of the population do not get a C or above in maths at GCSE, and will probably only have had a handful of lessons devoted to statistics. Yet just a few years ago, lots of kids were getting taught Noel Edmonds' version of probability every day after school. 

OK, so Noel Edmonds may not be responsible for people believing that holding a rabbit's foot (say) will affect the distribution of events in their favour. However, I am not so sure we should have tacitly endorsed this kind of show either, as I think the percentage of people who believe in supernatural luck is probably quite high.**

You may think this all comes down to stupidity, but supernatural luck is incredibly intuitive. You wear a lucky football shirt to the match and your team wins, so you attribute the win partly to the shirt. Each time your team wins you feel it's partly down to the shirt, and you mostly ignore or forget the times your team loses. This type of cause-and-effect reasoning is how we navigate the world. You don't need scientific proof that putting your hand over a candle will burn you, but you have learned this by experience in exactly the same way.
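To see how easily this happens, here is a toy simulation of my own; the win rate and the fraction of losses that get forgotten are made-up numbers:

```python
# A toy illustration of how selective memory can make a "lucky" shirt look real.
# Assumes a team that wins 50% of matches regardless of what you wear, and that
# only a fraction of losses stick in the memory - both numbers are made up.
import random

matches = 100
wins_remembered = 0
losses_remembered = 0

for _ in range(matches):
    won = random.random() < 0.5          # the shirt has no effect at all
    if won:
        wins_remembered += 1             # every win gets chalked up to the shirt
    elif random.random() < 0.3:          # only ~30% of losses are remembered
        losses_remembered += 1

print(f"Remembered record in the lucky shirt: {wins_remembered}-{losses_remembered}")
# Roughly 50 wins to 15 losses on average, even though the shirt did nothing.
```

The shirt does nothing at all, yet the record you remember looks like evidence that it works.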

Often people think that supernatural luck is unscientific for theoretical reasons. For example, what could possibly be causing the horseshoe to give you a higher chance of winning the lottery? You would have to believe in some supernatural effect that has the power to control the outcome of the numbers. As you cannot see the mechanism, it is unlikely to exist. But try to explain the theoretical reasons why a flame burns you without it sounding equally bizarre. 

Science isn't just about theory. Of course, theory helps us explain things, but ultimately science is about proof. Just because evolution provides a plausible theory of why the beaks of finches on the Galapagos have different shapes, it doesn't necessarily mean evolution is what caused it. How we actually go about proving something scientifically is essentially a statistical claim (and if you are interested in finding out exactly how scientific proof works, we cover it extensively in our new book).

There has been quite a pushback against "science" in recent years, and I think one of the main reasons is that it focuses too much on theory and not enough on understanding scientific proof. The New Atheism movement seems to be dying away as most people now claim to be agnostic. I think one of the main reasons for this is that people feel atheism is more certain than agnosticism: how can we be so sure God definitely doesn't exist? 

The frustrating thing about all this is that the whole point of scientific proof is that we can never be certain of anything. We do not know for certain that a flame will always burn us or that evolution is real, but we think it is highly likely that both are the case. But in order to understand how we go about scientifically proving something, we need a better understanding of statistics as a society. And until we do, I am not so sure we should be indulging in shows like Deal or No Deal. Now let's all join hands and pray that it never gets recommissioned.


*The irony of all this is that a lot of "smart" people got a lot of the stats wrong about the show. I remember reading a Charlie Brooker review that argued the contestants shouldn't be cheering the other contestants on because if they win, then by the "law of averages" they are more likely to lose. Some people also argued that the person should swap the last box because of the Monty Hall problem, but it isn't a Monty Hall problem. [EDIT: I can't find the Charlie Brooker review of him saying this so I will not besmirch his good name. However, I am pretty sure someone said this.]
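For anyone who wants to check the Monty Hall point for themselves, here is a quick simulation sketch; the 22-box set-up is simplified (no banker, no offers) and the code is my own illustration, not anyone's official analysis:

```python
# Why "swap the last box" doesn't apply: in Monty Hall the host *knowingly* opens a
# losing door; in Deal or No Deal the boxes are opened blind, and we only look at
# games where the top prize happens to survive to the final two boxes.
import random

def monty_hall(switch, trials=50_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host deliberately opens a door that is neither your choice nor the prize
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

def deal_or_no_deal(switch, trials=50_000, boxes=22):
    wins = games = 0
    while games < trials:
        prize = random.randrange(boxes)
        choice = random.randrange(boxes)
        others = [b for b in range(boxes) if b != choice]
        random.shuffle(others)            # boxes are opened at random; nobody knows where the prize is
        opened, last = others[:-1], others[-1]
        if prize in opened:
            continue                      # the top prize went early, so no final swap decision arises
        games += 1
        final = last if switch else choice
        wins += (final == prize)
    return wins / trials

print("Monty Hall:      stick ~%.2f, switch ~%.2f" % (monty_hall(False), monty_hall(True)))
print("Deal or No Deal: stick ~%.2f, switch ~%.2f" % (deal_or_no_deal(False), deal_or_no_deal(True)))
```

Switching wins about two thirds of the time in Monty Hall, but in Deal or No Deal, conditional on the top prize surviving to the end, your box and the last box are equally likely to hold it, so swapping gains you nothing.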

**According to this survey by Paddy Power, it's 38% of people. Now, what percentage of people do you think know this survey is biased? But in all seriousness, this poll is likely to have surveyed gamblers, so it is probably a better reflection of people who gamble than of the general UK population. However, if this statistic is anywhere close to the true figure for gamblers, then I would worry whether gambling can really be considered an informed choice.


Saturday, 27 March 2021

Bitcoin and trust: how cryptocurrencies are "backed" by the government

If you ask most people what the £ is backed by, they will say "gold". There is a belief that somewhere in the Bank of England there is a huge vault of gold bars that you could exchange your hard-earned cash for. What gives money value is not a question that most people really think about. That is, until recently, with the advent of Bitcoin. 

The £, like the majority of global currencies, is a fiat currency, which means it is not backed by a commodity like gold but is government-issued. You could interpret this as being "backed" by the government, as people will accept £s as payment because you can always pay your taxes in them. What this effectively means, however, is that fiat currency rests on one important thing: trust.

Many Bitcoin proponents will say that because it is backed by blockchain technology (a type of transparent ledger) it is backed by something immutable. As a result, we can trust Bitcoin more than any government. 

The problem is, for Bitcoin to actually work effectively, we still need to trust the government. Let's say you have some Bitcoin burning a hole in your virtual pocket and you decide to buy a Tesla. You send over your hard-mined, virtual cash and wait for delivery. But it never arrives. You send an angry e-mail to HR and no one gets back to you. You tweet at Elon Musk but he just tweets back: "lol n00b". At this point, you are understandably irritated and you decide to take Tesla to court. 

But wait a minute, what is forcing Tesla to make good on its promise? "Well," you will quite rightly say, "it's the law; it was a legal transaction that the seller failed to honour". But who is enforcing that law? What gives the law value? Just because it is the law doesn't mean it will necessarily be enforced.*

So in order for this transaction to go smoothly with Bitcoin, you need to have trust. Trust that the government will enforce your contract (which is technically what a purchase is) and also protect your property. Although some libertarians will argue that you can get around this in all sorts of ways, I am unsure many people would be comfortable going down this route, even if hiring goons to beat up Elon is an attractive prospect.**

Put simply, if you can't trust a government to behave responsibly with its currency, then it's unlikely you are going to be able to trust the government in the many other ways needed for us to use a currency of any kind. This is why making governments trustworthy is so important and why we have mechanisms such as democracy to instil that trust. 

If we have that trust in fiat currency, it has the added bonus that central banks are able to have greater control over the booms and busts of the economy (I think this is another aspect of monetary systems that most people are not really aware of). Fiat currency also doesn't have Bitcoin's problem of being inherently deflationary. As it gets harder and harder to mine Bitcoin, supply is limited, and as the value of Bitcoin increases people will want to hold on to their coins rather than use them to purchase stuff. Sellers respond by lowering prices, which just makes Bitcoin more valuable, and the spiral continues. This is one reason why a feature of a good currency is that it is stable over time. 

Bitcoin also has the feature that it is very hard to trace.*** If you are living under a nefarious government, this is something we would think is desirable. But is it really a good feature if the government is somewhat trustworthy? I can understand that in some countries this would be beneficial, perhaps if you were raising money for a protest against a corrupt government, for example. However, under a trustworthy government, all it really lets us do is things we as a society have deemed we don't want, like buying a bag of weed, evading tax or hiring a hitman. 

If Bitcoin really became a serious candidate to replace fiat currency, then you can see why governments would want to do something about it. However, even at the level it is operating at now, the amount of carbon it is producing (from all the computational power needed to mine Bitcoin) is a serious cause for concern. This doesn't mean that certain types of research related to blockchain technology have to stop, but it would mean effectively having to ban cryptocurrencies (this is in fact what India has proposed doing).

So how would you ban Bitcoin? I am told by a legal expert friend that the High Court in the UK has recently declared Bitcoin to be property, so in effect you could ban owning it, as we do with illegal substances. However, this has not yet been upheld by the Court of Appeal or the UKSC.

Alternatively, you could make it illegal to mine Bitcoin. It may be quite difficult to catch anyone doing it, but that's not the point. What it means is that if you buy something with Bitcoin and the seller doesn't fulfil their promise, you can't take them to court. The irony of all this is that if you were then to buy something with Bitcoin, you would have to trust the seller.


*The fancy Winnie the Pooh terms here are de jure and de facto.

**Although you may have to hire more goons to beat up the goons that were meant to beat up Elon, if they didn't fulfil their contract.

***As it is an open ledger system, a government could potentially force people to register their private wallets, which would sort out some of these issues.

Sunday, 21 March 2021

Misleading Words: why we need to rename "statistical significance"

What is the definition of arachnophobia, equinophobia & hippophobia?

If you said they are all fears relating to spiders, horses and hippopotamuses, then you would be wrong. 


The correct answer is spiders, horses and horses, respectively.

Most people will be aware that "phobia" means fear, and perhaps know that "arachnid" relates to spiders and "equine" to horses. Many of us will be less familiar with the fact that "hippo" actually relates to horses, unless you have heard this before (or are a 2,000-year-old Athenian).

The way we often understand new words is through inference: a way of narrowing down the answer. We do this all the time without thinking, as with “lexically ambiguous” words. If I said “there were bats flying around my house last night”, the context helps us understand that this is referring to the animal rather than cricket bats. 


We do the same kind of thing when we learn a new word. Often words contain clues within the word itself or in the context in which we use them. For example, let’s make up a new English word called “worseimprovement”, based on the direct translation of the German word Verschlimmbesserung.


You may already have a vague idea of what this means* - and you should guess now before reading on. But if we use it in context it’s easy: “I added more spices to my dish but it ended up a worseimprovement”. We don’t really need to offer a definition, as it should be obvious that it refers to a situation where we try to make something better but end up making it worse. 


Words, however, do not need to contain any relevant information at all. We can define a word however we want, as long as people are on the same page. For example, if we were to rename “banana” as “orange”, what fruit is the man in the picture holding?

[Image: a man holding a banana]

If you said “banana” you are wrong; it is an “orange” - we have literally just been through this, guys. It really doesn’t matter what we call this elongated yellow fruit as long as we all know what the name we give it refers to.

Problems, however, would occur if we were to keep up this definition. People who haven’t read this article may be perplexed if you say you hate the flavour of oranges and then they see you downing a Fanta**.

This isn’t always a massive problem. For example, Egregious is now a negative word when it originally meant "distinguished" or "eminent”, it was used in an ironic way to such an extent that the definition has changed over time. However, there are some words or phrases which can lead to incorrect inference. I think I will tentatively call them "misleading words", but I am worried that this may actually create a "misleading word" in itself.


The word “fertility” is a misleading word when it is used in the technical sense: how many children a woman has had. This can lead to lots of problems, like saying women are now less “fertile” in their 20s than their grandmothers were in their 30s. If you read this without knowing the technical definition, you would probably think it means women are somehow biologically less able to give birth now than before.
Another example of a misleading term is something I use a lot and can be seen in the following sentence:


“Drinking urine makes us live longer and this finding is statistically significant.”


Most people will think the “statistical” part suggests that some sort of statistics are involved and that it’s science-related (and they would be right on this). “Significant”, however, usually implies noteworthy, something that commands attention. This is definitely not what statistically significant means, so please don't think that (for the actual meaning, I hear there is a good book out at the moment that explains it well).

Drinking urine could increase your life by literally only 10 seconds, and the finding could still be statistically significant. This does not sound like “significant” in the everyday sense of the word, so if you started drinking your own urine as a result of this misleading word, I think you would have a right to be annoyed. Trying to get everyone to understand what the term actually means is really difficult. Yes, it isn't a problem if the people who use it understand it, like the banana/orange example above, but we rarely think about the consequences when the word starts getting used with people who are not privy to the definition. They will do what we all do, use inference to get an idea of the word, and be badly misled.
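To see how a tiny effect can still come out “statistically significant”, here is a rough sketch; the trial is hypothetical and the effect size and sample size are arbitrary numbers of my own choosing:

```python
# A rough sketch of how a trivially small effect can still be "statistically significant".
# The trial is imaginary and the numbers are arbitrary: an effect of 0.01 standard
# deviations is far too small to matter in everyday terms.
import math

n = 200_000          # participants in each group of our imaginary urine-drinking trial
effect_sd = 0.01     # difference between the groups, measured in standard deviations

standard_error = math.sqrt(2 / n)        # SE of a difference in standardised means
z = effect_sd / standard_error
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.4f}")  # z ~ 3.2, p ~ 0.002: "significant", yet tiny
```

With a large enough sample, almost any non-zero difference will clear the p < 0.05 bar, however unimportant it is in practice.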


As a result, I think the best thing to do is to change the name of statistical significance to something else, like “statistically detectable” (although it is worth thinking carefully about exactly what we change it to).

It may take some time, but I think with a bit of coordination it is certainly possible. If we can change the name of a Marathon Bar, we can do anything.



*Insbesondere wenn Sie Deutsch sprechen (especially if you speak German).

**I am aware there are other flavours of Fanta, but if I said Fanta Orange here you may think it was referring to a banana.

Tuesday, 16 February 2021

Who is Nick J. Cox and why is he the most important person in science?


If I have seen further it is by standing on the shoulders of Giants

Isaac Newton


This quote, possibly the first recorded humblebrag, is how we often think science progresses. It is demonstrated nicely in my favourite documentary ever, Simon Singh’s Fermat’s Last Theorem, which tells the story of how Andrew Wiles would go on to solve this centuries-old math problem. It is an amazing story in so many ways and it is really worth watching it all. However, one part in particular always stands out to me, and it appears in the very last minute of the documentary, while the credits are rolling. A long list of names of all the great mathematicians - whose work Andrew Wiles used in order to solve the problem - is simply read out. 

I do, however, dislike this theory (not Fermat's, the giants' shoulders one) because I feel it places far too much emphasis on the importance of great individuals. This way of thinking is an important part of where I think science goes wrong. In order to explain this, I am going to spend the next five paragraphs talking about a statistical software package...

Many economists use data to answer questions about the world, and we need some sort of statistical software to do so. One of the most common ones, in use since the 1980s, is called Stata (definitely not STATA*). It looks like an old Atari game** and I have spent many days and nights staring and swearing at that screen while getting endless replies of "syntax error, r(198)".

So let's say you get some data, load it into Stata, and realise that the dates are in a weird format: Y'68MDEC05, Y'65MJAND2, etc. Why on earth someone decided that this was the best way to write dates is anyone's guess, but it happens all the time. Perhaps because the East German who wrote it in 1978 was drunk on Mecklenburger Punsch and had grown tired of life. His only solace was the knowledge that this decision would really screw over someone in 50 years' time, perhaps someone living in a small cathedral city in the north of England. Who knows? All that matters is that it is your problem now.

You figure out that Y'68MDEC05 is actually December the 5th 1968 and you need to decode these dates into a format Stata will understand. So you decide you are going to change each one manually, and a few hours later you realise that it might take a little longer than expected (manually decoding 95,325 birthdays of former DDR citizens takes quite a long time, apparently).
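The decoding rule itself is easy enough to state. Here is a rough sketch in Python (not Stata) of the kind of thing you are after, assuming the format really is Y' followed by a two-digit year, an M, a month abbreviation and a two-digit day:

```python
# A rough sketch of the decoding rule, written in Python rather than Stata.
# Assumes well-formed entries like Y'68MDEC05; the messier ones still need human eyes.
import re
from datetime import date

MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "APR": 4, "MAY": 5, "JUN": 6,
          "JUL": 7, "AUG": 8, "SEP": 9, "OCT": 10, "NOV": 11, "DEC": 12}

def decode(raw):
    """Turn a string like Y'68MDEC05 into a proper date (assuming a 19xx year)."""
    m = re.fullmatch(r"Y'(\d{2})M([A-Z]{3})(\d{2})", raw.strip())
    if m is None:
        return None
    year, month, day = m.groups()
    return date(1900 + int(year), MONTHS[month], int(day))

print(decode("Y'68MDEC05"))   # 1968-12-05
print(decode("Y'65MJAND2"))   # None - malformed, so back to square one for this entry
```

The catch, of course, is that you have to know how to say all this in whichever language your software actually speaks, which is the real problem.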

Although you can quite easily explain to a person how to decode this date, in Stata (as in any computer language) it is much harder. No amount of pleading and begging will help (believe me, I have tried). The answer, like all things in life, is to google it.

You type into Google "weird date stata help drunk stasi" and, amazingly, someone has asked a similar question before! Someone has even replied giving the code to solve the problem, whilst simultaneously chastising the asker for not reading the help file! Over time, problems like this keep coming up and your random googling keeps giving you the answers - and you notice a pattern. One person seems to be answering all these idiosyncratic, detailed questions: Nick J. Cox.

Since 2014, Nick J. Cox has posted on Statalist, Stata's dedicated message board, over 20,000 times, at a rate of nearly 10 posts a day (he was posting before this on older servers, so the number is likely far higher). He has answered questions from undergraduates to professors, at all levels of difficulty, and as a result has probably increased the productivity of countless academics. Remember, it's not just the people he replies to who benefit but also the people who view his replies later, as in the example above. And it is not just economists who use Stata but a whole host of social scientists, biostatisticians and even epidemiologists. 

If you look at his rank in terms of citations, he is number 35 in economics, just behind two little-known economists called Paul Krugman and Ben Bernanke. He is ahead of many people on this list who have been awarded Nobel Prizes. So you would think, after all this, that Nick J. Cox would be showered with academic honours, writing columns in the New York Times or heading the US Federal Reserve.






The thing is, Nick J. Cox is not even an economist; he is an Assistant Professor of Geography at Durham University***. He is not as famous as other people on that list for one reason: academia cares only about your publications. We reward the goal scorer and not the person who made the assist, let alone the defensive midfielder who does all the work and goes unnoticed. Sadly, the way academia currently works is the equivalent of putting 11 strikers in a football team and thinking it is the best way to win matches.

Science is a process of discovery. If Sir Isaac Newton hadn't discovered gravity, I am fairly certain someone else would have. And in order to make scientific progress, we have to make this process as efficient as possible. Nick J. Cox is such a good example of this because he has increased the productivity of so many researchers (if anyone wants to work on a paper trying to show just how much of an effect he has had, I would be interested). 

OK, Nick J. Cox may not be the most important person in science, but I do think what he represents might be: the idea that science is more than just the end product and the individual who makes the discovery. We are not just standing on the shoulders of giants; we are standing on the shoulders of everyone involved in the scientific process, regardless of height advantage.


*Some people think it is spelt like this as a result of the old logo. But DO NOT spell it this way or you will get shouted at.

**OK, the new version looks a bit more modern with the white background but I was brought up on Classic view and I will die with Classic view.

***In the five years I have been in Durham I have never bumped into him; I am beginning to suspect he is just a benevolent A.I. sent by the Stata God. I am also slightly worried that if I bump into him he will tell me to read the help file (although you really should read it before posting on the message board).

Wednesday, 6 January 2021

The face mask of certainty: a thought experiment to separate the moral from the empirical.

I was recently asked why I got into my research area. I can trace it to an exact moment, in a 2nd-year political philosophy module at Manchester University taught by Stephen de Wijze. It was a lecture on Rawls’ theory of justice that had a lasting effect on me. Rawls produced an interesting thought experiment by asking what sort of society you would want to live in. The kicker is that you don’t know where you will be in that society: you are behind what he calls the veil of ignorance. If you picked a society of extreme inequality, you could end up really rich or extremely poor: it would be a gamble. He then argues that you would end up choosing a society where you would tolerate some inequality, as long as it works to the benefit of the poorest in society.

There are a number of criticisms of this theory, but there was one thing that really got me thinking. His idea of which society we would choose rests on what I think is an empirical claim: the relationship between inequality and development. This is why I have spent most of my working life looking at the relationship between the two.

Many moral questions rely on empirical claims. Is it morally wrong to push someone off a cliff? The empirical question on which this rests is what happens if we push someone off a cliff. Most of us (hopefully) would agree that the person would fall. We don’t have to spend time talking about this. But what if moral questions rest on empirical claims that are more difficult to prove?

There are a number of Covid sceptics who come from libertarian backgrounds. They dislike lockdowns because they take away their individual liberties. That is, however, a separate question from whether lockdowns actually suppress the virus, which is an empirical claim. The problem is that the two often get conflated. Many libertarians also claim that lockdowns are ineffective in combating the virus. But why should this be the case?

This is usually what we mean when we say people are biased: we search for evidence that supports our existing beliefs. But this is something we find so easy to see in others, yet so hard to see in ourselves. There is a danger in thinking that you are the only objective person in the room. I do, however, believe that we should at least try to fight against our internal biases and aim for objectivity when looking at empirical questions. So what can we do?

We could try a very simple thought experiment, which I will call the face mask of certainty. When you put on the face mask, you instantly know the probability of something working.* Let's say the face mask of certainty tells us that there is a 90% chance that lockdowns will suppress the virus and a 10% chance that they won't. Given the trade-offs, do you think we should go into a lockdown? Some of you reading this will think that lockdown sceptics would still choose not to lock down. So what if the probabilities were reversed: what if there was only a 10% chance that lockdowns work, and a 90% chance that they don't? I am willing to bet some people would still think we should go into lockdown.

You may find that you agree with people you otherwise thought you disagreed with. For example, some Covid sceptics may actually agree with you that IF lockdowns were X% effective then they would support them; they just don't think lockdowns work very well empirically. Most libertarians, however, will probably need a higher level of certainty that lockdowns work than others, which is a reasonable position to hold given that lockdowns have real trade-offs. It is important to argue about these sorts of moral questions, and I believe we should argue over them. But we don’t want the moral debate to spill over into the question of how empirically effective lockdowns are.
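One way to see how the same empirical probability can lead to different but internally consistent decisions is to write the thought experiment as a bare-bones expected-value calculation. The benefit and cost numbers below are placeholders of my own: they stand in for exactly the moral judgements people disagree about, which is the point of keeping them separate from the probability the face mask gives you:

```python
# A minimal sketch of the face-mask thought experiment as an expected-value calculation.
# The "benefit" and "cost" figures are made-up placeholders for moral weights, not data.

def support_lockdown(p_works, benefit_if_works, cost_either_way):
    """Support a lockdown if the expected benefit outweighs the certain cost."""
    return p_works * benefit_if_works > cost_either_way

for p_works in (0.9, 0.5, 0.1):
    moderate = support_lockdown(p_works, benefit_if_works=100, cost_either_way=30)
    libertarian = support_lockdown(p_works, benefit_if_works=100, cost_either_way=70)
    print(f"P(works)={p_works:.0%}: moderate says {moderate}, libertarian says {libertarian}")
```

Both imaginary people read the face mask in exactly the same way; they differ only in how heavily the costs weigh against the benefits, and that is the moral argument worth having - separately from the empirical one.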


*By "working" I mean suppressing the virus with some known effectiveness over a specific time frame, etc. This of course will affect people's decisions, but I am trying to keep the problem tractable.