## Covid-19 tests: probabilities

Bayes’ Theorem can be applied to medical tests to calculate the probability of being infected with a virus, given a positive or negative test result. What drives the uncertainty is false negative and false positive results. In this article, I give a practical outline of how one can interpret one’s test result, after calculating the relevant probability using Bayes’ Theorem.

To start off with, we need two estimates. For a negative covid-19 test, we need the rate of false negative results and the current actual prevalence of the disease in the community. For a positive covid-19 test, on the other hand, we need the rate of false positives and, again, the current prevalence of the disease. False outcomes vary according to the laboratory doing the test, and probably also the skill with which each individual test is carried out, but, for the sake of a rational understanding of the usefulness of these tests, we can use common statistics to calculate feasible probabilities.…
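As a minimal sketch of the calculation described above: the function below applies Bayes’ Theorem to a test result, given a sensitivity (one minus the false negative rate), a specificity (one minus the false positive rate), and a community prevalence. The particular numbers used here (70% sensitivity, 95% specificity, 1% prevalence) are purely illustrative assumptions, not measured values from any laboratory.

```python
def p_infected_given_positive(sensitivity, specificity, prevalence):
    """Bayes' Theorem: P(D|+) = P(+|D)P(D) / [P(+|D)P(D) + P(+|~D)P(~D)]."""
    true_pos = sensitivity * prevalence            # P(+|D) P(D)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(+|~D) P(~D)
    return true_pos / (true_pos + false_pos)

def p_healthy_given_negative(sensitivity, specificity, prevalence):
    """Bayes' Theorem: P(~D|-) = P(-|~D)P(~D) / [P(-|~D)P(~D) + P(-|D)P(D)]."""
    true_neg = specificity * (1 - prevalence)      # P(-|~D) P(~D)
    false_neg = (1 - sensitivity) * prevalence     # P(-|D) P(D)
    return true_neg / (true_neg + false_neg)

# Illustrative numbers only: 70% sensitivity, 95% specificity, 1% prevalence.
ppv = p_infected_given_positive(0.70, 0.95, 0.01)   # ~0.124
npv = p_healthy_given_negative(0.70, 0.95, 0.01)    # ~0.997
```

Note the counter-intuitive result: even with a fairly specific test, a positive result at 1% prevalence leaves only about a 12% chance of actually being infected, because false positives from the large healthy population swamp the true positives.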

January 1st, 2021 | News

### Introduction

In this post we will have a look at Parrondo’s paradox. In a paper* entitled “Information Entropy and Parrondo’s Discrete-Time Ratchet”**, the authors demonstrate a situation where, by switching between two losing games, we can create a winning strategy.

### Setup

The setup to this paradox is as follows:

We have two games that we can play: if we win, we gain 1 unit of wealth; if we lose, it costs us 1 unit of wealth. Game A gives us a payout with a probability of slightly less than 0.5. Clearly, if we play this game for long enough, we will end up losing.

Game B is a little more complicated, in that it is defined with reference to our existing winnings. If our current level of wealth is a multiple of M, we play a game where the probability of winning is slightly less than 0.1. If it is not a multiple of M, the probability of winning is slightly less than 0.75.…
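The effect is easy to see in simulation. The sketch below assumes M = 3 and a bias of ε = 0.005 — common choices in expositions of the paradox, not values taken from the excerpt above. Played alone, both Game A and Game B drift downwards; choosing between them at random each round drifts upwards.

```python
import random

def play(choose_game, steps=200_000, eps=0.005, M=3, seed=1):
    """Simulate repeated rounds; choose_game(capital) returns 'A' or 'B'.

    Game A wins with probability 0.5 - eps.
    Game B wins with probability 0.1 - eps when capital is a multiple of M,
    and 0.75 - eps otherwise. Each round pays +1 on a win, -1 on a loss.
    """
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        if choose_game(capital) == 'A':
            p = 0.5 - eps
        else:
            p = (0.1 - eps) if capital % M == 0 else (0.75 - eps)
        capital += 1 if rng.random() < p else -1
    return capital

switcher = random.Random(42)
final_A = play(lambda c: 'A')                     # losing on its own
final_B = play(lambda c: 'B')                     # losing on its own
final_mix = play(lambda c: switcher.choice('AB')) # random switching wins
```

Intuitively, mixing in Game A changes how often our capital sits at a multiple of M, so Game B’s favourable branch (win probability near 0.75) gets played more often than it would under Game B alone.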

## Ten Great Ideas about Chance – By Persi Diaconis & Brian Skyrms, a review

NB. I was sent this book as a review copy. From Princeton University Press

This book straddles a tricky middle ground, given that it introduces topics from scratch and goes into some very specific details of them in relatively few pages, before jumping on to the next. On starting to read it, I was skeptical of how this could possibly work, but by the end I believe that I saw the real utility of a book like this. The audience is quite specific, but for them it will be a gem.

The book covers a huge range of ideas related to chance, from the underlying mathematics of probability, to the psychology of decision making, the physics of chaos and quantum mechanics, the problems inherent in induction and inference and much more besides.

The book is taken from a long-running course at Stanford which the authors taught for a number of years, and they have tried to condense down the most important aspects of it to a relatively light book.…


## The Probability Lifesaver – by Steven J. Miller, a review

NB. I was sent this book as a review copy. In addition, I lent it to a student studying statistics, as I thought it would be more interesting to hear how much they got out of it. This is the review by Singalakha Menziwa, one of our extremely bright first-year students. From Princeton University Press

This book offers all the tools you need to understand chance and the insights of statistics, at both basic and more complex levels. Statistics is not just about substituting into the correct formulae; it requires understanding what the numbers mean. Counting rules and statistical inference were two of the topics I struggled with, especially the logic behind statistical inference, but this book provided great insight and explanations of these topics, with step-by-step procedures and enough interesting exercises. Miller’s goal when writing the book was to introduce students to the material through lots of accurately done, in-depth worked examples, along with some fascinating coding for those who want to be more practical, and to have conversations not just about why equations and theorems are true, but about why they take the form they do.…

## You’re (probably) a Bayesian – whether you like it or not!

Statisticians have long been separated into two camps as to how they philosophically interpret their trade. These schools of thought are usually called Frequentists and Bayesians.

Frequentists believe that a probability, $p\in[0,1]$, associated with a specific possible outcome of an observable occurrence or process, is simply telling you that, could you observe this occurrence (or process) infinitely many times, the fraction of such observations that would yield that specific outcome is $p$. Using the age-old coin toss example: tossing the coin is the occurrence or process, and recording Heads or Tails gives the two possible observations. The number 0.5 $\left(P(\text{Tails})=0.5=P(\text{Heads})\right)$ tells a Frequentist that, in the pursuit of infinitely many coin tosses, the ratio of Heads recorded to the number of tosses performed asymptotically approaches 0.5. And that’s all! The value should not be interpreted as the most likely outcome for the next observation or sample taken from the process (though I’ve always wondered how a Frequentist would gamble…).…
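The frequentist reading is easy to illustrate with a quick simulation: over a long run of fair tosses, the observed fraction of Heads settles near 0.5. This is just a sketch of the limiting-frequency idea; the toss count and the 0.5 coin bias are arbitrary choices, not anything from the text above.

```python
import random

rng = random.Random(0)
tosses = 1_000_000
# Count a toss as Heads when a uniform draw falls below the coin's bias of 0.5.
heads = sum(rng.random() < 0.5 for _ in range(tosses))
frequency = heads / tosses  # empirical frequency, close to 0.5 for large runs
```

Of course, no finite simulation *proves* the limit exists — the Frequentist claim is about the infinite run, which is exactly the philosophical sticking point the rest of this post turns on.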