This post will start to develop a classification system for abductive adjustment, which I hope will assist with reasoning about differential diagnosis. I will propose classifications, the rules associated with each, and the implications of each particular classification. This is preliminary - but I would hope that at some point these classifications will assist with the interpretation of sensitivity, specificity and likelihood ratios for a knowledge based approach to diagnostic accuracy and differential diagnosis. Recall that when we conduct a diagnostic test we are essentially testing our beliefs, and based on the results of the testing we adjust those beliefs. We aim to have justified true belief. When we first propose a set of possible beliefs (based on some set of information) we may suspect some elements of that set more than others, and we may have rather straightforward approaches to rule out some members of the set. For a review of the belief - truth reality of diagnostic testing see this post.

What is the set of possible beliefs for abductive adjustment and differential diagnosis? It is the set of possible causes of the observed effect (sign, symptom). It is completely predicted by our knowledge of the world we live in, built up from observations and rationally believed. What is an abductive adjustment set? It is the set of information needed to make a rational choice between more than one possible cause of an observed effect (assuming the cause itself was not also observed).

Classification 1 (if and only if (iff)):

[Figure: Classification 1 DAG - a single Cause with an arrow pointing to the Effect]

In some respects this graphic says more in what it does not display than in what it does display. It does not display any causes of the Effect besides the one listed (Cause). If this is true in reality, then the probability of the Cause, given (conditional on having observed) the Effect, is 1.0 (certain). Notationally this is:

P(Cause | Effect) = 1
This can occur with an effect that occurs if, and only if, one cause occurs (Effect iff Cause). You might expect that sensitivity = 1, specificity = 1, +LR approaches the limit of infinity, and -LR approaches the limit of 0; but this is unlikely to be the case even with a real classification 1. There are two reasons - one is logical and the other has to do with observational errors. First, logically, an “iff” relationship where P(Cause | Effect) = 1 does not mean that P(Not Cause | Not Effect) = 1. So with classification 1, observing the effect rules in the cause; but not observing the effect does not rule out the cause. This may seem strange to say now - why would we be looking for the cause of an effect we have not observed? But keep this in mind below when we get to classification 2 and we are trying to rule out competing possible causes.
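To make that logical point concrete, here is a minimal sketch in Python with made-up joint probabilities (the numbers are purely illustrative, not real diagnostic data). The joint distribution is constructed so that the Effect never occurs without the Cause, yet the Cause can be present without the Effect having been observed:

```python
# Minimal sketch with invented joint probabilities (not real data):
# the Effect never occurs without the Cause, but the Cause can be
# present without the Effect having (yet) been observed.
joint = {
    ("cause", "effect"): 0.20,       # cause present and effect observed
    ("cause", "no effect"): 0.10,    # cause present, effect not observed
    ("no cause", "effect"): 0.00,    # never: the effect requires the cause
    ("no cause", "no effect"): 0.70, # neither cause nor effect
}

p_effect = sum(p for (c, e), p in joint.items() if e == "effect")
p_cause_given_effect = joint[("cause", "effect")] / p_effect

p_no_effect = 1 - p_effect
p_no_cause_given_no_effect = joint[("no cause", "no effect")] / p_no_effect

print(p_cause_given_effect)        # 1.0
print(p_no_cause_given_no_effect)  # 0.875, not 1.0
```

Observing the Effect rules in the Cause (probability 1.0), but not observing the Effect still leaves a real chance that the Cause is present.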

Second, observational errors: this has to do with the reliability of observing the Effect. There is bound to be some error in this observation, and that error will show up in samples and thus impact sample statistics, ever so slightly reducing the sensitivity and specificity, which in turn reduces the +LR and increases the -LR (recall, an increasing -LR means you are less confident in your belief).
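As a rough illustration (the 2% error figure below is an assumption I chose for the sketch, not an empirical estimate), here is how even a small loss of reliability in observing the Effect propagates into the likelihood ratios:

```python
# Sketch with hypothetical numbers: a small observation error on the Effect
# pulls sensitivity and specificity below 1, lowering +LR and raising -LR.
def likelihood_ratios(sensitivity, specificity):
    """+LR = sens / (1 - spec); -LR = (1 - sens) / spec."""
    plus_lr = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    minus_lr = (1 - sensitivity) / specificity
    return plus_lr, minus_lr

# A perfect classification 1 relationship, observed without error
print(likelihood_ratios(1.0, 1.0))    # (inf, 0.0)

# The same relationship with a hypothetical 2% error in observing the Effect
print(likelihood_ratios(0.98, 0.98))  # (49.0, ~0.02)
```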

The adjustment set for Classification 1 is empty - {} - there is no need to adjust.

Classification 2 (inverted fork)

[Figure: Classification 2 DAG (DAGitty model) - multiple possible causes, each with its own arrow pointing to the Effect (an inverted fork)]

Some quick background here. Accepting the law of causation as a fundamental axiom of a knowledge based practice, based on a realist epistemology (see here), we always believe that there is a cause of an observed effect. Classification 1 is only applicable when there is only one possible cause (that is rare). Classification 2 is when there is more than one possible cause (1, 2, 3, …, n possible causes). But if the entire set of possible causes is known (call it set N, with individual elements n), then:

P(an element of N | Effect) = 1

Which says: the probability of an element of the causal set N, given observation of the Effect, is certain.
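Here is a hedged sketch of what that statement means in practice, using invented priors P(n) and likelihoods P(Effect | n), and assuming for simplicity that each presentation of the Effect is produced by exactly one element of N: whatever numbers you plug in, the posterior probability mass over a complete causal set is 1.

```python
# Sketch with invented numbers: when the causal set N is complete,
# the posterior probability over N given the Effect sums to 1 -
# some element of N must be the cause, we just do not yet know which.
priors = {"Cause 1": 0.50, "Cause 2": 0.30, "Cause 3": 0.20}       # P(n), hypothetical
likelihoods = {"Cause 1": 0.90, "Cause 2": 0.40, "Cause 3": 0.10}  # P(Effect | n), hypothetical

# P(Effect), marginalized over the complete set N
p_effect = sum(priors[n] * likelihoods[n] for n in priors)

# P(n | Effect) for each element of N (Bayes' rule)
posteriors = {n: priors[n] * likelihoods[n] / p_effect for n in priors}

print(posteriors)
print(sum(posteriors.values()))  # 1.0 - P(an element of N | Effect) = 1
```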

In this regard Classification 2 is simply an extension of Classification 1, from a single possible Cause to a set of possible causes (N) with elements (n).

There is a more important aspect of Classification 2 to consider, which is an assumption embedded in the DAG (causal structure): the Causes (n) in the set of possible causes (N) are independent of one another. Meaning none of the causes is caused by any of the other causes.

Notationally we have the set: Causal Set N = {Cause 1, Cause 2, Cause 3, …, Cause n}

This alone (the set) does not address whether the elements of the set are conditionally dependent on one another (cause - effect relationships). The DAG model does provide that information.

And notationally, the DAG model implies the following independences: Cause i ⊥ Cause j for every pair of distinct causes i and j - that is, each possible cause is marginally independent of every other possible cause.
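A small simulation sketch of those implied independences (the prevalences are invented, and I use a simple inclusive-or to stand in for the Effect): causes generated independently show essentially no marginal association with one another, which is exactly what the DAG asserts.

```python
# Simulation sketch (hypothetical prevalences) of the Classification 2 DAG:
# independent possible causes as parents, the Effect as their common child.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 100_000

# Three independent possible causes, with invented prevalences
cause1 = rng.random(n_patients) < 0.05
cause2 = rng.random(n_patients) < 0.10
cause3 = rng.random(n_patients) < 0.02

# The Effect as the common child of the causes (inclusive or)
effect = cause1 | cause2 | cause3
print(effect.mean())  # prevalence of the Effect in the simulated sample

# Marginal association between causes is ~0, as the implied independences state
print(np.corrcoef(cause1, cause2)[0, 1])  # approximately 0
print(np.corrcoef(cause1, cause3)[0, 1])  # approximately 0
print(np.corrcoef(cause2, cause3)[0, 1])  # approximately 0
```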

What does independence mean for differential diagnosis? It means we can construct an abductive adjustment set that simply lists the set of possible causes {cause 1, cause 2, cause 3, …, cause n}, and each can be ruled out individually (or in groups if they share a common path to the effect).

Logically, it means that “cause 1 OR cause 2 OR cause 3 OR cause n” caused the effect, but this DOES NOT mean that the possible causes are mutually exclusive! Here we are using the “inclusive or” operator, which means more than one can be true; it just means that for the statement to be “true” AT LEAST one of those possible causes must be true.
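To put numbers on the “inclusive or” point, here is a short sketch with invented prevalences and an assumption of independence between the causes: at least one cause must hold for the disjunction to be true, but more than one can hold at the same time.

```python
# Sketch with hypothetical, independent prevalences for three possible causes.
p1, p2, p3 = 0.05, 0.10, 0.02

# Inclusive or: probability that at least one cause is present
p_none = (1 - p1) * (1 - p2) * (1 - p3)
p_at_least_one = 1 - p_none

# Exactly one cause present
p_exactly_one = (p1 * (1 - p2) * (1 - p3)
                 + (1 - p1) * p2 * (1 - p3)
                 + (1 - p1) * (1 - p2) * p3)

# Two or more causes present at once - greater than zero, so not mutually exclusive
p_two_or_more = 1 - p_none - p_exactly_one

print(round(p_at_least_one, 4))  # ~0.162: at least one cause is present
print(round(p_two_or_more, 4))   # ~0.008: causes can co-occur
```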

In summary, and most importantly, it means that the adjustment set is a simple list of the possible causes. There is no further potential for bias or confounding based on the causal structure of the possible causes themselves. Next time I will cover classification 3.