Trials and errors: Why science is failing us
This assumption — that understanding a system’s constituent parts means we also understand the causes within the system — is not limited to the pharmaceutical industry or even to biology. It defines modern science. In general, we believe that the so-called problem of causation can be cured by more information. Scientists refer to this process as reductionism. By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients.
There’s quite a bit that bothers me about this article. There are elements of truth in some of the critique, but the extrapolation doesn’t work. There’s no denying that reductionism is present and prevalent in science, but science is also pragmatic: you use the approach that works, up to the point where it stops working. Scientists are aware of nonlinear phenomena, of chaos, of complexity in systems with multiple variables, and of the fact that correlation and causation aren’t the same thing; you may still have to look for an underlying cause.
The example of the red and blue ball film seems to me to be a case of people applying a basic model: we expect things to be causal. We notice deviations from natural motion in animations, and it bothers us a bit. We interpret data in the context of the science we know. The discovery here is that the animations do not reflect reality; you recognize that (or not) and proceed. So it seems to me that this was more of a psychological/cognitive test than a critique of science.
Another issue is using medical research as a proxy for all of science. A lot of medical advice seems to be based on conventional wisdom: a physician finds something that works, and that becomes a treatment. While there’s plenty of science in medicine, it is not the best example of science in action.
The study concluded that, in most cases, “the discovery of a bulge or protrusion on an MRI scan in a patient with low back pain may frequently be coincidental”.
This is not the way things are supposed to work. We assume that more information will make it easier to find the cause, that seeing the soft tissue of the back will reveal the source of the pain, or at least some useful correlations.
My strong objection here is that this is exactly the way science is supposed to work: you have some data, you formulate a hypothesis, and you check it. You do have more information, but the new information is telling you that you were wrong. This doesn’t invalidate the method; it vindicates it!
The real story here is that complex science is hard to do. Research is full of false leads and blind alleys (and metaphors for such things) and subtle interactions. Looking for correlations has its limitations, but piecing together what we are able to observe is all we have for uncovering the underlying rules of nature. That’s science. As we learn more, pushing the boundaries gets harder. But if science is failing us, what’s the alternative?
1. Reductionism is perfectly valid. Breaking a problem into its fundamental parts is one way of gaining understanding, but it is far from the whole story. Once the parts are understood, it still remains to understand how those parts interact with one another to make up the whole. Scientists know this perfectly well. The author seems not to fall into that category.
2. Medicine is not science. The goal of science is a fundamental understanding of how nature works. The goal of engineering is the production of useful products under cost and schedule constraints, often using knowledge gained from science; but engineering is not science. The goal of medicine is the treatment of human beings who are sick or injured, under constraints of urgency and cost, often using knowledge gained from science; but medicine is not science. Medicine and engineering are both presented with systems that are far too complex to be completely understood with our current knowledge. Nevertheless, products must be produced and patients must be treated. Physicians and engineers are sometimes forced to make decisions and commit resources even when the problem is not fully understood, and therefore some decisions will be shown in hindsight to be incorrect despite the fact that the best available understanding was employed.