After yesterday’s post - in preparation for today’s post - I realized that I should explain what I mean by “classifications,” since I am not attempting to classify every possible causal structure with an unlimited set of variables (causes and effects). As a causal structure grows, the number of possible classifications grows with it if the attempt is to classify the structure based on all of the variables and connections making up the structure.

With these classifications (1 and 2 in the first post, and 3, 4, and 5 today) I am sticking with the smallest elements of the structures, with the hope that as I (and others) start building up larger structures we can break them down into smaller elemental components that can be classified and provide an understanding of the larger structure. If, at that time, it becomes apparent that new classifications based on combinations of smaller structural classifications are common and beneficial, then we can simply add them.

Another thing to point out is that, right now, the classifications are based almost entirely on the existing work on adjustment sets used in inductive inference, adding considerations as necessary given that we are doing abductive inference in differential diagnosis. The next posts, based on the plan as outlined previously, will be about Bayes’ theorem, which allows us to connect inductive inferences and abductive inferences.

Classification 3 (fork)

[dagitty-model]

Here the cause we are interested in has multiple effects (which clinically is very common). I wanted to include this structure because it will be helpful for considering sets of effects that increase the likelihood of a particular cause. For example, if one particular cause is the only possible cause of three effects, then the presence of those effects increases our belief in that common cause. It does not reach the level of belief seen in Classification 1 (iff) if there are other possible causes of the effects in question; but if there are not, then it can reach the level of iff; that is,

If {Effect_1, Effect_2, …, Effect_n} iff Cause, then the presence of this set of effects entails the cause. But if there are other explanations of the effects -

[dagitty-model]

As in this example, where a, b, and n are other causes of Effect_1, Effect_2, and Effect_n: the presence of all three effects increases the probability of “Cause” as opposed to a, b, and n, but it does not reach the level of certainty (in other words, a, b, and n are still possible explanations). The shift in probability that occurs will be easy to consider once we cover Bayes’ theorem; it has to do with the drop in probability associated with the likely lower prior probability of having {a AND b AND n} as compared with “Cause”. But if the conjunction (and) of {a AND b AND n} has a higher prior probability than “Cause” (let’s say Cause is a very rare thing), then the set of effects does not necessarily warrant a shift in our belief that Cause is the cause of Effect_1, Effect_2, and Effect_n.
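To make that concrete, here is a minimal sketch in Python with made-up prior probabilities. It treats “Cause” and the conjunction {a AND b AND n} as the only two (mutually exclusive) candidate explanations for the full set of effects, assumes each explanation produces its effects with certainty, and then simply compares their priors - which, under those simplifying assumptions, is the posterior odds.

```python
# Minimal sketch: comparing a single common cause against a conjunction of
# alternative causes as explanations for observing Effect_1, Effect_2, Effect_n.
# All numbers are made up; the two explanations are treated as mutually
# exclusive, and each is assumed to produce its effects with certainty.

def posterior_odds(p_cause, p_a, p_b, p_n):
    """Posterior odds of 'Cause' versus the conjunction (a AND b AND n),
    assuming independent priors for a, b, n and likelihood 1 for both explanations."""
    p_conjunction = p_a * p_b * p_n
    return p_cause / p_conjunction

# Scenario 1: 'Cause' is not especially rare -> the single explanation is favored.
print(posterior_odds(p_cause=0.05, p_a=0.1, p_b=0.1, p_n=0.1))    # 50.0, favors Cause

# Scenario 2: 'Cause' is very rare -> the conjunction of alternatives is favored.
print(posterior_odds(p_cause=0.0001, p_a=0.2, p_b=0.2, p_n=0.2))  # 0.0125, favors a AND b AND n
```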

Classification 4 (chain)

Chains are going to be a common component of the causal structures physical therapists use in differential diagnosis. It is the nature of what we do when attempting to go further back to identify cause. Recall from a previous post the causal chain of pain due to a trigger point:

[TrPtPain-Sample1]

There is a chain embedded in this structure, along with several other classifications (for example, “Sympathetic Activation” creates a fork, and “Low Job Control” creates a fork; and as we saw in that post, those forks influence our reasoning along the chain). But there are parts that are simply chains. A chain is:

[dagitty-model]

And we would want to propagate backwards through the chain to get to the root (or ultimate) cause if that ultimate cause has implications for us - i.e. intervention, prevention, contraindications. A chain reminds us that every effect along the chain can also be a cause. Rarely are chains so certain, uninfluenced by other factors. More often there are elements of the chain that branch off, creating a fork, or perhaps jump past one another, or are caused by causes other than those in the chain (creating an inverted fork).
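As a rough illustration of propagating backwards, here is a small Python sketch (networkx and the node names A through E are my own choices, not part of the post) that walks from an observed effect back through each link of a simple chain to its root cause.

```python
# Sketch: walking backwards along a simple causal chain A -> B -> C -> D -> E
# to find the root (ultimate) cause of an observed effect.
import networkx as nx

chain = nx.DiGraph()
chain.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")])

def trace_back(graph, effect):
    """Return the path of causes from the observed effect back to the root cause."""
    path = [effect]
    parents = list(graph.predecessors(effect))
    while parents:                      # in a simple chain there is at most one parent
        path.append(parents[0])
        parents = list(graph.predecessors(parents[0]))
    return path

print(trace_back(chain, "E"))  # ['E', 'D', 'C', 'B', 'A'] -- A is the root cause
```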

The adjustment set for a simple chain is just the cause immediately prior to the effect of interest as you propagate backwards. So for E it is {D}, for D it is {C}, and so on.
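Here is a small simulation, with made-up probabilities, of a binary chain A -> B -> C -> D -> E that illustrates why the adjustment set for E is just its direct cause {D}: once D is known, earlier links such as B carry no additional information about E.

```python
# Sketch: simulated binary chain A -> B -> C -> D -> E with made-up
# transition probabilities. Once D is known, conditioning on B changes
# nothing about E, which is why {D} is the adjustment set for E.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def step(parent, p_if_1=0.8, p_if_0=0.2):
    """Each node is 1 with probability 0.8 if its direct cause is 1, else 0.2 (made-up numbers)."""
    return rng.random(n) < np.where(parent, p_if_1, p_if_0)

A = rng.random(n) < 0.5
B = step(A)
C = step(B)
D = step(C)
E = step(D)

# P(E = 1 | D = 1), with and without also conditioning on B:
print(E[D].mean())       # ~0.80
print(E[D & B].mean())   # ~0.80  B adds nothing once D is known
print(E[D & ~B].mean())  # ~0.80
```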

Classification 5 (causal dependence - or a lack of causal independence)

[dagitty-model-2]

For now, Classification 5 is the only classification I will do at this high a level - and by a high level I mean that it is really a combination of the previous classes.

With Classification 2 (inverted fork) we noted the significance of independence between the causes. Here we start with an inverted fork and then note that the structure includes forks as well; in fact, each cause in this structure includes a fork.

[dagitty-model-4]

When there is a lack of independence between causes (that is, when the causes of the observed effect are themselves effects of other causes), they can create biasing pathways if not considered, and therefore they must be considered.

Here Cause 2 is not observed or considered, which clearly leads to bias.

This is the case even for abduction if Cause 2 (above) does not itself cause the observed effect, but instead causes the other causes of the effect.
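To show the kind of bias this can produce, here is a simulated sketch with made-up linear coefficients: an unobserved Cause 2 drives both Cause 1 and another cause (Cause 3), and only Cause 1 and Cause 3 directly produce the Effect. Estimating Cause 1’s contribution without considering Cause 2 overstates it; adjusting for Cause 2 recovers the true value.

```python
# Sketch: an unobserved Cause2 drives both Cause1 and Cause3; only Cause1
# and Cause3 directly produce the Effect. All coefficients are made up.
# The naive estimate of Cause1's effect is biased upward by the open path
# Cause1 <- Cause2 -> Cause3 -> Effect; adjusting for Cause2 removes the bias.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

cause2 = rng.normal(size=n)                                # unobserved common driver
cause1 = 0.8 * cause2 + rng.normal(size=n)
cause3 = 0.8 * cause2 + rng.normal(size=n)
effect = 1.0 * cause1 + 1.0 * cause3 + rng.normal(size=n)  # true effect of cause1 is 1.0

def ols(y, X):
    """Ordinary least squares coefficients (intercept dropped from the output)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print(ols(effect, [cause1]))          # ~[1.39]       biased: the path through cause2 is open
print(ols(effect, [cause1, cause2]))  # ~[1.00, 0.80] adjusting for cause2 recovers 1.0
```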

I think that is it for now on classifications. Whether anything becomes of these is another question; they certainly need more work, more formalization, and more consideration. Best to move ahead, though, and consider how they relate to Bayes’ theorem and diagnostic accuracy.