# Belief and truth - thinking about diagnostic accuracy

Today is the start of the second week of the online course I referred to earlier: Introduction to Mathematical Philosophy. There is only one exam (a final), so it’s not too late to sign up. It requires about 2-3 hours per week, about half of which is watching videos and the other half reading and thinking. I like to watch the videos before taking a walk and then think about them while walking… yes, I am the guy in an orange jacket mumbling philosophical rants while walking around the neighborhood…

The introduction to week 2 (which is on truth) just struck me with something I knew before but had never stated quite so clearly (not sure why) - I am paraphrasing:

Belief in a proposition is logically independent from the truth of that proposition.

It immediately reminded me of what we are trying to do when we reason about the statistics of diagnostic accuracy (sensitivity, specificity and likelihood ratios).

We do a diagnostic test because we have a belief about a proposition. The results of that test then influence our belief about the proposition by making the truth of the proposition more or less likely than before we had the results of the test.

For example: “I believe this person has an ACL tear” is a belief in the proposition “this person has an ACL tear.” These are logically independent, and by logically independent we simply mean they can take different truth values:

| Belief in ACL tear | Has an ACL tear |
|--------------------|-----------------|
| TRUE               | TRUE            |
| TRUE               | FALSE           |
| FALSE              | TRUE            |
| FALSE              | FALSE           |

This most likely reminds you of a 2 x 2 contingency table. Logical independence is different from causal dependence. Meaning, while belief and proposition can take different truth values, there are causal dependencies between the truth of the proposition (has an ACL tear) and the belief in an ACL tear. In fact, I am willing to propose that the stronger those causal dependencies, and the more readily they are observed, the more likely belief and proposition align into “TRUE - TRUE” or “FALSE - FALSE,” as opposed to the errors of “TRUE - FALSE” or “FALSE - TRUE.”

We calculate sensitivity, specificity, and likelihood ratios entirely from the belief - truth contingency table, where belief is the test result and truth is determined from some gold standard. We rely on the fact that stronger causal dependencies will result in a stronger association between belief and truth. What a strong + likelihood ratio tells us is that if the test is positive, our belief that the proposition is true is most likely correct; from a strong (-) likelihood ratio we learn that if the test is negative, the belief that the proposition is false is more likely correct.
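To make the arithmetic concrete, here is a minimal sketch of those calculations from the contingency table. The counts are hypothetical, made up purely for illustration (not real ACL data):

```python
# Hypothetical 2 x 2 counts (belief = test result, truth = gold standard):
#                       has tear    no tear
# test positive         tp = 90     fp = 10
# test negative         fn = 15     tn = 85
tp, fp, fn, tn = 90, 10, 15, 85

sensitivity = tp / (tp + fn)               # P(test positive | has tear)
specificity = tn / (tn + fp)               # P(test negative | no tear)

lr_pos = sensitivity / (1 - specificity)   # + likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # (-) likelihood ratio

print(f"sensitivity = {sensitivity:.2f}")  # 0.86
print(f"specificity = {specificity:.2f}")  # 0.89
print(f"LR+ = {lr_pos:.2f}")               # 8.14
print(f"LR- = {lr_neg:.2f}")               # 0.16
```

With these made-up numbers, a positive test multiplies the odds of a tear by about 8, and a negative test multiplies them by about 0.16.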

It’s fun making these connections and considering the test result as influencing a belief that is logically independent from the truth of the proposition we are proposing to believe. Students sometimes struggle with the terminology used - a strong + likelihood ratio shifts your confidence that a positive test means the patient has the diagnosis. It shifts confidence because it changes your belief, which still remains logically independent of the proposition (and therefore could still be wrong…)
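That “shift in confidence” can be sketched as Bayes’ rule in odds form: posttest odds = pretest odds × likelihood ratio. The pretest probability and the likelihood ratio below are assumed values, chosen only to show the mechanics:

```python
pretest_p = 0.30        # assumed pretest probability of a tear
lr_positive = 8.0       # assumed strong + likelihood ratio

# Bayes' rule in odds form: multiply pretest odds by the likelihood ratio
pretest_odds = pretest_p / (1 - pretest_p)
posttest_odds = pretest_odds * lr_positive
posttest_p = posttest_odds / (1 + posttest_odds)

print(f"belief shifts from {pretest_p:.0%} to {posttest_p:.0%}")
# belief shifts from 30% to 77%
```

Note that the belief shifts from 30% to about 77% - stronger, but still short of certainty, which is the logical independence showing up in the numbers.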

I highly recommend the course (it’s free and a nice way to spend 2-3 hours per week).