A Walkthrough of the Rx-Bayes App

With examples

Starting where we left off

So, picking up from our explanation in the first section, in order to get a good prediction from a test, you need TWO THINGS:

  1. A REALLY GOOD TEST, with high sensitivity and specificity AND
  2. A reasonable chance of the result BEFORE YOU START THE TEST!!

The first criterion refers to the Accuracy of a test, which can be broken down into measurements of the test's sensitivity and specificity - more details about that later. Right now, just know that these are ways of talking about a test's accuracy.

The Problem of the Prior

That second criterion is called the "prior probability", meaning: "before I get any more information from the test, what's the probability that the disease is present?" (In the first section we used "lying" as the "disease", and "looking away as you answer" as the "test".)

Understanding this idea shows us that, when we "test" for a disease, what we are doing is adding information, which allows us to update our probability estimate of the disease's presence or absence.
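
(For readers who like formulas, this "updating" is just Bayes' theorem: P(disease | test result) = P(test result | disease) × P(disease) / P(test result). Everything else in this walkthrough is a rearrangement of that one line.)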

The influence of the prior probability on the final probability is the most misunderstood concept in medical diagnosis.

This is REALLY important - because it means that the way you interpret a "test" depends on what you know (or believe) about the world BEFORE you run the test. That's what's so counter-intuitive. We usually assume that "evidence" tells us things about the world, not that our beliefs about the world tell us anything about how to interpret evidence.

But, if you think about it, we do come across this in our lives. Imagine you're on a trip to Hawaii. You tell your best friend at home to check on your plants and water them, and he or she agrees. While you're in Hawaii, at the beach, you think you see your best friend's face across the crowd.

Would you immediately call up your friend and demand to know why they are in Hawaii instead of looking after your plants as they had agreed? Probably not. You'd probably assume that you mistook someone else for your friend, and you might tell your friend you saw someone "who looks just like you".

But this method of interpretation is Bayesian - you intuitively believe, before you see anyone's face in Hawaii, that there is an EXTREMELY SMALL PROBABILITY that your friend is in Hawaii. Therefore, even though you receive high-quality evidence (a direct look at their face), you realize that the evidence is more likely to be wrong than right. That is, the final probability that your friend is in Hawaii, even after you see their face, is still less than 1%.
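
To put rough numbers on it (these are invented purely for illustration): suppose the prior probability that your friend is in Hawaii is 1 in 100,000, you'd recognize their face 95% of the time if they really were there, and about 1 in 100 strangers could fool you at a distance. Bayes' rule gives (0.00001 × 0.95) / (0.00001 × 0.95 + 0.99999 × 0.01) ≈ 0.001, or about 0.1%. The sighting raised your belief almost a hundred-fold, and it still isn't close to convincing.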

That's Bayesian reasoning.

Sensitivity and Specificity

So, I mentioned above that I'd give a bit more info on how we quantify a test's accuracy. We generally use "sensitivity" and "specificity" as measures of a test's quality. The quality of a test, in turn, is the degree to which the test's result changes the "prior probability" into a different "final probability".

Same concept stated differently - a higher quality test means a bigger change from the "prior probability" (what you know before the test) to the "posterior probability" (what you know after it). You can see that in the app.

[screenshot of the app]
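
If you'd like to see the arithmetic behind that shift, here is a minimal Python sketch of the Bayes update (illustrative only - this is not the app's actual code, and the function and variable names are mine):

```python
def posterior(prior, sensitivity, specificity, test_positive):
    """Update the probability of disease after a test result (Bayes' rule)."""
    if test_positive:
        true_pos = prior * sensitivity                # diseased AND test positive
        false_pos = (1 - prior) * (1 - specificity)   # healthy AND test positive
        return true_pos / (true_pos + false_pos)
    else:
        false_neg = prior * (1 - sensitivity)         # diseased AND test negative
        true_neg = (1 - prior) * specificity          # healthy AND test negative
        return false_neg / (false_neg + true_neg)

# Same 10% prior, two tests of different quality, both positive:
print(posterior(0.10, 0.60, 0.60, test_positive=True))  # ~0.14: weak test, small shift
print(posterior(0.10, 0.95, 0.95, test_positive=True))  # ~0.68: strong test, big shift
```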

To get the sensitivity, look at how often the test is positive in people who actually have the disease.

To get the specificity, look at how often the test is negative in people who don't have the disease.

See that in the figure below, from the first section:

[figure from the first section, with the defined terms marked]

Here you can see that the sensitivity corresponds to how often the test comes back positive when the disease is really there - in the example of the first section, this means "how often does a person who is lying look away?" In other words, the test is positive (looking away), and the disease is present (lying).

Similarly, the specificity corresponds to how often the test comes back negative when the disease is really absent - "how often does a person who is not lying NOT look away?" In other words, the test is negative (they DON'T look away) and the disease is ABSENT (they are NOT lying).
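
To make those definitions concrete with counts, here is a purely hypothetical tally for the lying example (the numbers are invented):

```python
# A purely hypothetical tally of 200 people from the lying/looking-away example.
true_positives  = 80   # lying AND looked away
false_negatives = 20   # lying but did NOT look away
true_negatives  = 90   # NOT lying and did NOT look away
false_positives = 10   # NOT lying but looked away anyway

sensitivity = true_positives / (true_positives + false_negatives)  # 80/100 = 0.80
specificity = true_negatives / (true_negatives + false_positives)  # 90/100 = 0.90
print(sensitivity, specificity)
```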

Ready to Dive Deeper?

So, if you've played around with the app, you've hopefully gotten a feel for how the prior probability, the sensitivity, and the specificity all affect the final probability, whether the test is positive (the person does look away when they are lying) or negative (they don't look away). Hopefully, you've also gotten a feel for how higher quality tests change the final probabilities (a better quality test leads to a bigger difference between a positive and a negative result).
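
You can reproduce that "spread" effect with the posterior() sketch from earlier (same caveat: the numbers are invented):

```python
# Reuses posterior() from the sketch above. Same 10% prior throughout.
for sens, spec in [(0.60, 0.60), (0.95, 0.95)]:
    pos = posterior(0.10, sens, spec, test_positive=True)
    neg = posterior(0.10, sens, spec, test_positive=False)
    print(f"sens/spec={sens:.2f}: positive -> {pos:.2f}, negative -> {neg:.2f}")
# Weak test:   positive -> 0.14, negative -> 0.07  (narrow spread)
# Strong test: positive -> 0.68, negative -> 0.01  (wide spread)
```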

So, let's then consider the question of where prior probabilities come from, and how we can get accurate prior probabilities, so we can use our medical testing to improve our probability of disease.

There are 2 ways of getting a good "prior probability".

  1. Gather data about the prevalence in the population, and in subpopulations. Then, you can match your patient to the most similar subpopulation and get a "base rate" or "prevalence" for that subpopulation, which gives you a good starting point for your prior probability (see the sketch just after this list).
  2. Or, you can make a guess (use your biased opinion, based on your beliefs, observations, knowledge, and feelings). This is what doctors do most of the time (and it's what they're supposed to do). This is what medical school, and especially internship and residency, are for: giving new doctors enough knowledge and experience to make their prior probability guesses accurate enough for the medical tests to be meaningful. We'll see that in action in some of the examples below.
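
Here is a rough sketch of option 1 in code, again reusing the posterior() function from the earlier sketch (the subgroups and prevalence figures are entirely made up):

```python
# Entirely made-up prevalence numbers, just to show the mechanics.
prevalence_by_group = {
    "general adult population": 0.002,
    "adults over 65 with symptom X": 0.05,
    "adults over 65 with symptoms X and Y": 0.20,
}

# Match the patient to the closest subgroup, use its base rate as the prior,
# then update with the test result using posterior() from the earlier sketch.
prior = prevalence_by_group["adults over 65 with symptom X"]
print(posterior(prior, sensitivity=0.95, specificity=0.95, test_positive=True))  # ~0.50
```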
