When I posted on Abduction I referred to a critique by Eddy and Clanton of using Bayesian inference for diagnostic reasoning: most of the time abduction deals with so many variables that any quantifiable approach becomes intractable, as nicely discussed in their 1983 NEJM paper, The Art of Diagnosis: Solving the Clinicopathological Exercise. Back in 1983, Bayesian networks via graphical models (and eventually DAGs) were just getting started (if they had started at all) in the work of computer scientists and psychologists interested in artificial intelligence. So when Eddy and Clanton reasoned as they did, they were correct: Bayesian inference was intractable due to the number of computations. But now we have computers - in fact, most of the time you carry in your pocket, as your smartphone, a computer far more powerful than any Eddy and Clanton had access to. And computers have made the previously intractable calculations tractable in many cases - particularly with Bayesian networks and DAGs, where artificial intelligence researchers have developed algorithms that identify independences and conditional independences (both of which reduce the computational demand) and then carry out the computations. The fact of the matter is that these approaches now underlie most attempts at prediction and diagnosis (if not Bayesian networks, then neural networks). Prediction and diagnosis both use knowns to infer unknowns - either unknown future states (prediction) or unknown past states (causes - diagnosis). Regardless of what people think about Knowledge Based Practice, Evidence Based Practice will soon realize the need to embrace the graphical causal models (DAGs) associated with studies on diagnosis and prognosis (prediction) - if not because of their clear value, then due to their overwhelming use (which follows from their clear value).
They are methods enabled by the tools we now have access to - people wonder what they can do with computers. Showing your patient a color picture or video of an exercise is great, really it is, but computers can also expand our clinical reasoning through the training and use of models, among other things. I am planning a post soon on why DPT programs need to consider adding basic core content in discrete math, logic, and probability as related to graph theory and algorithms; these are, in my opinion, necessary tools for the future of knowledge development and evidence interpretation in the profession.
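To make the computational point concrete, here is a toy sketch in Python. The network is hypothetical (three nodes, not the clinical model in this post) and every probability is invented for illustration. The point is the mechanics: the graph's conditional independences let us store one small table per node instead of one giant joint table, and then do inference by simple enumeration.

```python
# Hypothetical 3-node network: Disease -> SignA, Disease -> SignB
# (SignA and SignB are conditionally independent given Disease).
# All probabilities below are made up for illustration.
p_disease = {True: 0.1, False: 0.9}            # P(Disease)
p_signA_given_d = {True: 0.8, False: 0.2}      # P(SignA=+ | Disease)
p_signB_given_d = {True: 0.7, False: 0.1}      # P(SignB=+ | Disease)

def joint(d, a, b):
    """P(Disease=d, SignA=a, SignB=b) via the graph's factorization."""
    pa = p_signA_given_d[d] if a else 1 - p_signA_given_d[d]
    pb = p_signB_given_d[d] if b else 1 - p_signB_given_d[d]
    return p_disease[d] * pa * pb

# Inference by enumeration: P(Disease=+ | SignA=+, SignB=+)
num = joint(True, True, True)
den = sum(joint(d, True, True) for d in (True, False))
print(round(num / den, 3))
```

With more variables the savings grow quickly: a full joint over n binary variables needs 2^n - 1 numbers, while the factorized network needs only one small conditional table per node.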

This post will be based on an example - using a Bayesian network. This Bayes net is a model of reality - not perfect, but perhaps good enough to be useful.

[Figure: SOB-HF-COPD Bayesian network]

It is a model I have started to put together for reasoning about the differential diagnosis of the source of DOE (dyspnea on exertion) in patients who have both HF and COPD. For diagnosing either condition, DOE has dismally low specificity (lots of false positives) but pretty good sensitivity (not many false negatives). But for diagnosing someone off the street - with a low prior probability of HF or COPD - DOE would be a poor screening indicator: imagine doing an echocardiogram and PFTs on everyone in the country today who has DOE while walking up stairs. If attempting to identify any one of the three possible root causes {HF, COPD, Deconditioning}, DOE is pretty good, particularly if you set a rather low workload as the criterion workload for a +DOE. For example, if walking up 1 flight of stairs is the criterion for a +DOE finding, then chances are higher that those with +DOE have one of the conditions in the initial set, though we could also consider additional conditions that +DOE could indicate.
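The screening point can be made with Bayes' rule directly. Here is a minimal sketch - the sensitivity, specificity, and priors are invented round numbers, not published estimates for DOE:

```python
# Why a sensitive but nonspecific sign is a poor screen at low
# prevalence. All numbers are hypothetical, chosen for illustration.
def ppv(sens, spec, prior):
    """P(condition | +finding), i.e. positive predictive value."""
    tp = sens * prior                    # true positives
    fp = (1 - spec) * (1 - prior)        # false positives
    return tp / (tp + fp)

# Someone "off the street": low prior for HF or COPD
print(round(ppv(sens=0.90, spec=0.40, prior=0.02), 3))
# A patient in whom HF/COPD are already serious considerations
print(round(ppv(sens=0.90, spec=0.40, prior=0.40), 3))
```

The same finding, with the same sensitivity and specificity, yields a very different post-test probability depending on the prior - which is exactly why +DOE earns its keep in the clinic but not as a population screen.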

The typical diagnostic metrics for DOE would give us numbers that are not very helpful - numbers that would likely lead us to think we should not even consider DOE as a symptom worth attending to! However, the DAG above provides us with adjustment sets. It tells us that with DOE, to discern among the causal factors {HF, COPD, Deconditioning} we should consider the variables in the minimal sufficient adjustment sets for estimating the direct effect of COPD, HF, and Deconditioning on DOE, which include: {ElevVEO2, Hypercarbia, Hypoxia, LowMIP, WeakMuscles} or {LowMIP, VQmismatch, WeakMuscles}. Basically, P(DOE | LowMIP, VQmismatch, WeakMuscles) or P(DOE | ElevVEO2, Hypercarbia, Hypoxia, LowMIP, WeakMuscles) is a much better way - clinically - to think about the diagnostic usefulness of DOE. Once we know DOE, we should then investigate the adjustment set variables. You might notice that the variables in the adjustment set are themselves clinical signs. That is how it goes with a causal structure - that is part of the value. Algorithms can be built on the underlying causal structure and the resultant adjustment sets.
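As a toy illustration of why conditioning on an adjustment-set variable changes the picture, here is a sketch using an invented 2x2x2 table of patient counts. Only HF, LowMIP, and DOE are included to keep it small; the variable names echo the model above, but every count is fabricated for illustration:

```python
# counts[(hf, lowmip, doe)] = hypothetical number of patients
counts = {
    (True,  True,  True): 40, (True,  True,  False):  5,
    (True,  False, True): 10, (True,  False, False): 20,
    (False, True,  True): 15, (False, True,  False): 10,
    (False, False, True): 20, (False, False, False): 80,
}

def p(hf=None, lowmip=None, doe=None):
    """Probability of the event fixed by the non-None arguments."""
    total = sum(counts.values())
    keep = lambda k: all(v is None or k[i] == v
                         for i, v in enumerate((hf, lowmip, doe)))
    return sum(n for k, n in counts.items() if keep(k)) / total

# Crude post-test probability: P(HF | +DOE)
crude = p(hf=True, doe=True) / p(doe=True)
# Conditioned on an adjustment-set variable: P(HF | +DOE, LowMIP)
adjusted = p(hf=True, lowmip=True, doe=True) / p(lowmip=True, doe=True)
print(round(crude, 2), round(adjusted, 2))
```

In this made-up table the probability of HF given +DOE shifts once we also know LowMIP - the mechanics of why the adjustment-set variables, themselves clinical signs, are the next things to investigate.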


A model of DOE, HF, COPD and Deconditioning that considers only the diseases/conditions themselves results in an empty adjustment set - there is no way to discern one from the other without already knowing what you set out to learn, namely which of them is the cause (ruling out two entails the third, if there really are no other causes).

The model above is available on DAGitty (here) if you are interested in adjusting it yourself.

You can change the model to ask only whether HF is the cause:

[Figure: dagitty-model-4]

The minimal sufficient adjustment sets for estimating the direct effect of HF on DOE are: {Deconditioning, ElevVEO2, Hypercarbia, Hypoxia, LowMIP}, {Deconditioning, LowMIP, VQmismatch}, {ElevVEO2, Hypercarbia, Hypoxia, LowMIP, WeakMuscles}, or {LowMIP, VQmismatch, WeakMuscles}.

Or when you focus on Deconditioning, you find biasing pathways due to COPD and HF (which is why these have to be ruled out before concluding that DOE is simply Deconditioning).

[Figure: dagitty-model-5]

The minimal sufficient adjustment sets for estimating the direct effect of Deconditioning on DOE are shown in the model above.

In summary, even when we have numbers for sensitivity, specificity, and likelihood ratios, we are not relieved of having to reason through the underlying causal structure that gives rise to those numbers. And when we interpret them, it behooves us to consider the source of each number - how subjects were recruited, how data were collected - all of which influence the numbers in ways that are qualitatively predictable from an understanding of the underlying causal structure. When we are making claims, or challenging others' claims, we should strive to articulate them from a well reasoned underlying causal structural model, from which we can explicitly expose the assumptions we hold in reasoning toward our conclusions.

This is the last of the posts attempting to explain, in general terms, why graphical causal models (DAGs) are useful for teaching and for potentially improving our reasoning about differential diagnosis. I will now return to posting on related topics of interest, with the aim of demonstrating examples and answering questions that come up.