This blog has talked a lot about abduction ever since the foundational posts (here, and here for example). It is a major component of clinical reasoning, particularly diagnostic reasoning generally and differential diagnosis specifically. We accept its definition to be:

Abduction: The process of moving from causal relations and some specific effect, to acceptance (or expectation) of a specific cause.

To demonstrate a causal relation we require observations with a temporal ordering (amongst other things from Hill’s criteria). We also know that, particularly for conditions and symptoms, the condition causes the symptoms; the symptoms do not cause the condition (or, at best, the symptoms do not start the cycle if there is a cycle of condition and symptoms). To be honest, I believe that many of our assumed symptom cycles have an underlying physiological process. For example, if we say Condition -> Pain -> Anxiety -> Pain, there are neurological processes underlying the path from pain to anxiety and back to pain that could be added to this causal sequence. But that complexity is not what we are here to discuss right now.

Let’s try not to make this more complicated than it needs to be. Bayes theorem (in particular the equation) allows us to invert a conditional probability: it lets us obtain the probability of a condition (disease, diagnosis) based on the observation of an effect.

P(Condition|Symptom) = (P(Symptom|Condition) * P(Condition)) / P(Symptom)

Interpretation: the probability of a condition given a symptom, P(Condition|Symptom), is simply the probability of the symptom given the condition, P(Symptom|Condition), multiplied by the probability of the condition, P(Condition), with that product divided by the probability of the symptom, P(Symptom).
Graphically, P(Symptom|Condition) is what we get from a causal structure depicted as:

[dagitty model: Condition → Symptom]

Most are classified as possible causes (see posts about possible causes here). This is the causal structure we accept as knowledge. In clinical practice we then observe “Symptom” and want to know the probability that it was caused by a “condition.”

The causal structure, P(Symptom|Condition), can be obtained observationally with a temporal study. It is essentially the incidence of the symptom in a sample with the condition. It includes both the statistical association between the symptom and the condition and methodological considerations, and therefore is best obtained through controlled observations, which lead to an inductive inference contributing to a causal model whereby the condition is the cause and the symptom is the effect. It can also be considered qualitatively in a general way, since we are not all holding specific probabilities in our heads all the time but still know that P(Pain|Paper Cut) is pretty high, P(Shortness of Breath|Heart Failure) is also pretty high, and P(Shortness of Breath|Paper Cut) is pretty low.
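These qualitative intuitions can be made concrete with a few lines of code. A minimal sketch in Python, with made-up illustrative numbers (not real clinical estimates) for the paper-cut example:

```python
def posterior(p_effect_given_cause, p_cause, p_effect):
    """Bayes theorem: P(Cause|Effect) = P(Effect|Cause) * P(Cause) / P(Effect)."""
    return p_effect_given_cause * p_cause / p_effect

# Illustrative numbers only: pain is near-certain given a paper cut,
# paper cuts are fairly common, and pain from any cause is common.
p = posterior(p_effect_given_cause=0.99, p_cause=0.30, p_effect=0.40)
print(round(p, 2))  # 0.74
```

The same function applies unchanged whether the observed effect is a symptom or a sign.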

Also please keep in mind that you can replace “Symptom” with a “Sign” as an effect.

How do we intuitively reason with this equation all the time?

P(Condition|Symptom) = (P(Symptom|Condition) * P(Condition)) / P(Symptom)

When considering whether a condition is present based on the presence of a symptom, we consider the probability that the condition produces the symptom; but we also consider the probability of the condition in the first place (the baseline, or prior, probability of the condition), and the probability of the symptom (the baseline, or prior, probability of the symptom).

For example - with no other information:

P(Meningitis|Neck stiffness) = (P(Neck stiffness|Meningitis) * P(Meningitis)) / P(Neck stiffness)

P(Neck stiffness|Meningitis) is pretty high (let’s say .9), which means that meningitis tends to cause neck stiffness 90% of the time.

P(Meningitis) (bacterial or viral) is pretty low, about 14,000 cases for every 100,000 people, or .14, based on this website.

P(Neck stiffness), considered as lifetime prevalence for adults 18-84, is about .485 (based on this review). It has a wide range though, so we are not very sure on this one; .485 is the mean from a set of trials that ranged from .142 to .71.

P(Meningitis|Neck stiffness) = (.9*.14)/.485 ≈ .26 (or 26% probable that neck stiffness indicates meningitis - given NO OTHER INFORMATION)

How often are we making these considerations with no other information? Never in a clinical situation. Perhaps on a class-based examination, if you are simply being tested on the basics of Bayes. In real life we would minimally consider the baseline probability of meningitis IN THIS PATIENT based on where they have been. For example, let’s say it is a college student living in a dorm who has been in contact with people with confirmed meningitis. Well, we naturally interpret their symptom of neck stiffness differently! This is an example of Bayesian reasoning underlying our right decision to consider their symptom differently. The information about the exposure to people with meningitis has increased the prior probability P(Meningitis (bacterial or viral)) from .14 to, let’s say, .5. Now Bayes says:

P(Meningitis|Neck stiffness) = (.9*.5)/.485 ≈ .93 (or 93% probable that neck stiffness indicates meningitis - this is based on the adjusted baseline, or prior, probability of meningitis)
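The sensitivity of the posterior to the prior can be sketched directly. A small Python sketch using the likelihood and symptom prevalence discussed above, with the two priors being the stated population baseline (.14) and an assumed exposure-adjusted value (.5):

```python
def posterior(likelihood, prior, p_evidence):
    # Bayes theorem: P(Condition|Symptom)
    return likelihood * prior / p_evidence

p_stiff_given_men = 0.9   # P(Neck stiffness|Meningitis)
p_stiff = 0.485           # P(Neck stiffness), lifetime prevalence estimate

for prior in (0.14, 0.5):  # population baseline vs. exposure-adjusted prior
    print(prior, round(posterior(p_stiff_given_men, prior, p_stiff), 2))
```

Strictly speaking, P(Symptom) should also be updated when the prior changes; holding it fixed is a simplification that breaks down as the prior grows (push the prior above roughly .54 here and this "posterior" would exceed 1).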

Of course this is only considering meningitis as a possible cause. Bayes is also involved in the process of comparison - simply by weighing the evidence for the baseline probability of competing causes and comparing the posterior probabilities. Let’s take a simple example:

P(Ischemia|Chest Pain) = (P(Chest Pain|Ischemia) * P(Ischemia)) / P(Chest Pain)

VS.

P(Intercostal Strain|Chest Pain) = (P(Chest Pain|Intercostal Strain) * P(Intercostal Strain)) / P(Chest Pain)

These probabilities vary greatly based on what you know about the patient. With a recent history of coughing we increase the prior probability P(Intercostal Strain), and that shifts us toward a higher P(Intercostal Strain|Chest Pain) than P(Ischemia|Chest Pain). What I find is that many of our clinical questions are aimed at adjusting the baseline probability P(Condition).
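The comparison of competing causes can be sketched the same way. All of the numbers below are illustrative assumptions, not clinical estimates:

```python
def posterior(likelihood, prior, p_evidence):
    # Bayes theorem: P(Condition|Symptom)
    return likelihood * prior / p_evidence

p_chest_pain = 0.2  # assumed baseline probability of chest pain

# Without a history of coughing (assumed priors):
ischemia = posterior(0.8, 0.05, p_chest_pain)      # 0.20
strain = posterior(0.9, 0.02, p_chest_pain)        # 0.09

# A recent history of coughing raises the prior for intercostal strain:
strain_cough = posterior(0.9, 0.15, p_chest_pain)  # 0.675

print(strain < ischemia, strain_cough > ischemia)  # True True
```

Note that P(Chest Pain) appears as the same denominator in both posteriors, so it cancels when comparing competing causes; only the likelihood-times-prior products need to be compared.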

Another way we reason with Bayes is by recognizing that rare signs or symptoms associated with a condition greatly shift our probability. Basically, the lower the baseline probability of the observed effect (sign or symptom), the greater the posterior probability of its cause.

A set of general rules can follow:

If P(Symptom) = P(Condition), then P(Condition|Symptom) = P(Symptom|Condition)
If P(Symptom) > P(Condition), then P(Condition|Symptom) < P(Symptom|Condition)
If P(Symptom) < P(Condition), then P(Condition|Symptom) > P(Symptom|Condition)
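These three rules follow directly from the equation, since the posterior is just the likelihood scaled by the ratio P(Condition)/P(Symptom). A quick numerical check with arbitrary (but internally consistent) values:

```python
def posterior(likelihood, prior, p_evidence):
    # Bayes theorem: P(Condition|Symptom)
    return likelihood * prior / p_evidence

lik = 0.8  # P(Symptom|Condition), held fixed across the three cases

# P(Symptom) == P(Condition): the posterior equals the likelihood
assert abs(posterior(lik, 0.10, 0.10) - lik) < 1e-9
# P(Symptom) >  P(Condition): the posterior is smaller than the likelihood
assert posterior(lik, 0.10, 0.40) < lik
# P(Symptom) <  P(Condition): the posterior is larger than the likelihood
assert posterior(lik, 0.40, 0.35) > lik
```

In the third case the values must still be coherent (P(Symptom) can never be less than P(Symptom|Condition) * P(Condition)), which is part of why that case is rare in practice.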

This last one says that if the probability of a symptom (or sign) is less than the probability of a condition, then the probability of the condition when you know the patient has the symptom is greater than the probability of the symptom when you know they have the condition. This is not likely to happen very often quantitatively since, if the condition is associated with the symptom, the symptom is typically at least as probable as the condition. But if there are conditions in which only some people display a symptom, and that symptom is extremely rare, then this last general rule may apply.

All well and good. This post has introduced (or reintroduced, or simply reviewed) Bayes theorem. We use it intuitively when we reason through diagnostic processes and during differential diagnosis. The next step in this series of posts will be to demonstrate how we can use Bayes theorem in reasoning through causal structures, graphical causal models, DAGs, which can also be called “Bayes Nets.”