Response to: “On Parachute’s and Evidence”

Once again Kenny Veneer has provided a clear, lucid and insightful post on Physiological: Where Physiotherapy meets Logic - the post is here, including my comments (below the post), which are reproduced here with some additional thoughts on how my response relates to KBP generally and to causal models specifically. I have also added links to references that are not in the original comment.

What follows only makes sense if you first read Kenny’s post on Physiological: On Parachute’s and Evidence.

As presented in the post, I agree with Kenny. Smith and Pell have not provided an effective argument against Evidence Based Practice or against the superiority of RCTs for isolating a particular causal mechanism from amongst alternative explanations.

There are aspects of the satire that do warrant pause and consideration. From the Smith and Pell article: “Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data.”

Based on this sentence it does not seem that Smith and Pell are attempting to argue against EBP in total, merely against an extreme form of EBP that does not seem to be held by you or Howick. This extreme form may be a straw man set up by Smith and Pell in order to knock down something associated with EBP (in other words, perhaps no one actually holds the position of criticizing “the adoption of interventions evaluated by using only observational data”). There are systematic reviews that include only RCTs. I have always understood this as a way of comparing apples to apples in the examination of evidence, and if there are enough RCTs to support a rigorous systematic review then it is appropriate to use this highly controlled set of evidence. But observational studies should be (and are) included in systematic reviews when RCTs are few: because the underlying system being studied is not amenable to an experimental set-up as a closed system, though it is observable in a structured way as an open system; because researchers have not yet had time to study the question; or because the mechanisms are obviously observable and the effects remarkable.

But if there are people who believe that interventions supported only by observational trials should not be considered (should be excluded from practice), then the Smith and Pell paper demonstrates that there are instances in which we can obtain knowledge without RCTs.

Essentially, Smith and Pell have provided a counterexample to the claim that the only source of knowledge is a randomized controlled trial. If anyone was ever trying to make that claim, then Smith and Pell’s counterexample is effective, as a single counterexample refutes a universal claim.

If no one was making that claim, if that claim is a straw man, then at best Smith and Pell provide an opportunity to consider what factors do lead to knowledge about the world from observational studies (which you and Howick have done very effectively). And yes, in most instances we do tend to need at least some parts of an open system to have been isolated with strict experimental control in order to understand it. But we must always remember that eventually we need to plug that knowledge back into the open system and test what we know about it, and in those instances a return to an observational trial may be the most effective means to knowledge. More on this below.

Underlying my response I am making use of two fundamental aspects of Roy Bhaskar’s Transcendental Realism, more commonly referred to as the Critical Realist philosophy of science (most widely accepted and adopted in the social sciences, but applicable to all sciences). One is the belief that ontology determines epistemology: the way things are determines the way things can be known. The other is the stratification of reality, for scientific purposes, into closed and open systems. Open refers to the real world in its fully interacting state (which an observational trial attempts to apply some control to, not by manipulating it but through structured observations); closed refers to isolating the part of the real world you want to know about and manipulating it, poking it, prodding it to see what happens. RCTs isolate and create a closed system, and that is a very useful tool for identifying causal relations by isolating a system and ruling out alternative explanations. The ontology of many things warrants this epistemological approach. But the Smith and Pell satire raises awareness that the nature of some things allows another approach to knowledge.

I have two more thoughts to share (should they be of any use). First, you and Howick have proposed a set of considerations about the ontology of systems that might be justifiably known (epistemology) through observational trials, similar to the case of parachutes. Let’s call them ontological features: “…interventions that are remarkably plausible, fill an acute and usually life threatening need and produce dramatic and routinely observable effects. The results of these interventions are so large that it effectively rules out any potential confounding variables.” These ontological features are not binary. We cannot answer them as black and white, cut and dried, yes or no; they lie on a spectrum. Therefore, it is on a spectrum that we must evaluate the claims being proposed as knowledge from observational trials. In some extreme examples, as you have identified, there is little debate. In less extreme cases, the debate gives academics something to discuss :)

The second thought is that there are two ways to consider the usefulness of observational studies. Thus far we have considered them as if their benefit comes from generating the kind of knowledge that RCTs are clearly better at generating (isolated cause and effect in a closed system). In that regard they are less effective; that much is clear. But what about the ontology of the open system? Here RCTs are inferior to observational studies, in that by their nature they isolate and close the system. I would ask whether we need to consider the usefulness of well designed, structured observational studies for providing insights and knowledge about the causal structure of systems that are not, or cannot be, so isolated: systems that are open and exist in the full messiness of reality. We sacrifice some control, but we may gain in applicability.

My dissertation was on job stress and its impact on ECG-measured variables of cardiac control (through heart rate variability). The way this system is (ontology) determined how we were to know it (epistemology), and only an open approach with an observational design was appropriate. There is no way to simulate the long term, chronic, real life stress associated with a job in an experimental study. For an explanation of the epistemological challenges associated with such open, complex, multi-causal systems, see our paper: Description of a large scale study to assess stress disease associations for cardiovascular disease.

So there is a tension - a necessary back and forth - that we must work through as we struggle with what information (data) we need, how we collect it, and how we analyze it to generate knowledge for the profession and for practice.

I believe causal structures, represented as causal models, provide us with a framework for working through this tension between closed and open systems, for classifying the ontology of the system we are attempting to know, and for using both RCTs and observational trials appropriately in the service of knowledge. With a causal model we can see, based on what we claim to know, how complex the system is. The more complex the system, the more its parts may need to be isolated in order to be studied. But then the more likely it is that we lose something when we take the isolated information back to the real open system, and therefore the greater the need to confirm our expectations with open system (observational) studies.
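The closed/open distinction can be made concrete with a toy causal model. The sketch below (standard library only, with hypothetical variable names chosen purely for illustration) shows the sense in which an RCT “closes” a system: randomization deletes the arrows pointing *into* the treatment, while an observational study leaves the full open-system graph intact.

```python
# Toy causal model as a dict: parent -> list of children.
# Variable names are hypothetical, for illustration only.
open_system = {
    "Comorbidity": ["Treatment", "Outcome"],   # a confounder
    "Severity":    ["Treatment", "Outcome"],   # another confounder
    "Treatment":   ["Outcome"],
    "Outcome":     [],
}

def randomize(model, treatment):
    """Return a copy of the model with all edges *into* `treatment`
    removed, mimicking what random allocation does in an RCT."""
    return {
        parent: [c for c in children if c != treatment]
        for parent, children in model.items()
    }

def parents(model, node):
    """All direct causes of `node` in the model."""
    return sorted(p for p, children in model.items() if node in children)

closed_system = randomize(open_system, "Treatment")

print(parents(open_system, "Treatment"))    # confounders reach Treatment
print(parents(closed_system, "Treatment"))  # randomization cut them off
print(parents(closed_system, "Outcome"))    # but they still reach Outcome
```

Note what the last line shows: randomization closes the system only with respect to the treatment; the rest of the open-system structure is still there, which is why the isolated finding must eventually be plugged back in and checked observationally.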

What Kenny has pointed out about physical therapy interventions is true: their effects tend to be smaller. There are probably many reasons for this, but one is that they are part of a very large system, which means that even when we isolate them and find them, they are not only harder to see in observational trials, but it is harder to know whether they even exist in the open system or whether those effects are lost to the complexity of other interacting factors. This is where observational trials may be a great follow-up to a series of RCTs.

A great example of this is the attempt to isolate physical therapy intervention in the intensive care unit into RCTs. This recent systematic review of only RCTs (which went from 5733 records to 7 included studies) concludes that: “Early rehabilitation during ICU stay was not associated with improvements in functional status, muscle strength, quality of life or healthcare utilisation outcomes, although it seems to improve walking ability compared to usual care. Results from ongoing studies may provide more data on the potential benefits of early rehabilitation in critically ill patients.” The complexity of care in such a setting, and the number of interacting variables, may be such that even an RCT is unable to isolate the effect. Perhaps in such situations we must look toward new designs that combine elements of the experimental and observational research traditions. When it comes to patient care, the risk of Type 2 errors in such studies is problematic for policy making and decisions about staffing. If anyone is interested in joining me in a more thorough critique of the approach taken in this systematic review, please let me know.
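The Type 2 error worry can be made concrete with a back-of-the-envelope power calculation. The numbers below are assumptions for illustration, not figures from the cited review: the standard normal-approximation formula for a two-arm comparison of means shows how a parachute-sized effect needs almost no patients, while a small effect of the size typical in complex-care settings needs hundreds per arm, so an underpowered trial will routinely miss it.

```python
# Back-of-the-envelope sample-size sketch (stdlib only).
# Effect sizes here are assumed for illustration, NOT from the review.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample
    comparison of means with standardized effect size d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A dramatic, "parachute-sized" effect vs. a small effect buried
# in a complex system of interacting variables:
print(n_per_group(1.2))  # -> 11 per arm
print(n_per_group(0.2))  # -> 393 per arm
```

With only 7 small trials surviving the screen, a pooled null result is exactly what this arithmetic would predict for a real but small effect, which is why treating it as evidence of no effect is risky for policy.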

When it comes to RCTs and observational studies for generating knowledge for clinical practice, perhaps rather than “either / or” we consider a “both / and” approach, based on the ontology of the causal structure we are interested in and the ultimate goal of the inquiry.

Perhaps the flow is something like: have a hunch about some causal associations; build a causal model; test it observationally; isolate what can be isolated; figure out what cannot be, and subject parts of the total model to RCTs; put the RCT knowledge back into the more complete causal model and test it with observational studies; modify our understanding and build a new causal model; now we are back at the beginning… so we repeat (it gives us something to do :)


The point here is that observational studies can serve two purposes, and neither of them needs to conflict with what RCTs offer in the overall process of generating knowledge for practice. The other point is that regardless of the studies used to generate the causal models, practice ultimately consists of inferences drawn from the causal model of the full open system, to the best of our ability.
