Cambridge University Science Magazine
We have heard the mantra “follow the science”, or its reincarnation in the form of “data not dates”, since the start of the pandemic. But this is not the first time that decision makers have sought to portray their approaches as objective in order to distance themselves from their judgement calls. Whenever you hear the term “evidence-based”, it is worth remembering that evidence in isolation never calls the shots. Most of us remember having to make a difficult decision, weighing up the pros and cons of several options. We may have tried to perform an objective analysis, but our subjective judgement inevitably contributes. Even if we attempt entirely rational decision making, our values, how we weigh them against each other, and our attitude to risk all need to feature in the consideration. Worse still, these inputs are affected by how a given decision question is framed.

Say you are about to purchase a laptop and are weighing up whether to also buy insurance for it. You know you will most likely never have to make a claim, but think about the consequences of the laptop breaking at a crucial point in time, when you would need a quick replacement. The price tag per month seems relatively low for additional peace of mind, but if you end up not needing it, you could have saved that money. How would your choice be affected if the insurance product were already selected by default at check-out, and you had to opt out rather than opt in? It is interesting that we often consider purchasing insurance a responsible action and purchasing a lottery ticket an irresponsible one, when both have a negative expected value. For gadget insurance, if you are not prone to losing or breaking items and have the means to replace them if necessary (a condition unfortunately likely to reinforce existing inequalities), you may be better off self-insuring. But when more is at stake relative to what an individual can readily afford, loss aversion seems a sensible bias, and purchasing insurance can save us from life-changing debt or worse, even if most people without insurance end up lucky most of the time. Once we know how events unfold, it can be hard to feel good about perfectly reasonable decisions which carefully considered the probabilities of future events and their consequences, but turned out to be poor choices in the scenario that materialised. It is equally easy to retrospectively rationalise decisions that were careless but lucky.
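
To make the comparison between insurance and a lottery ticket concrete, here is a minimal sketch of the expected-value arithmetic, assuming a hypothetical £600 laptop, an £8 monthly premium on a two-year policy and a 10% chance of making a successful claim (all figures are invented for illustration, not taken from the article):

# Illustrative only: the prices and probabilities below are assumptions.
laptop_price = 600.0      # cost of replacing the laptop (GBP)
monthly_premium = 8.0     # gadget insurance premium (GBP)
months = 24               # length of the policy
p_claim = 0.10            # assumed chance of a successful claim over the policy

total_premiums = monthly_premium * months    # £192 paid in total
expected_payout = p_claim * laptop_price     # £60 expected back

print(f"Expected value of insuring: £{expected_payout - total_premiums:.0f}")
# About -£132 with these numbers: on average you lose, just as with a lottery
# ticket. The case for insuring anyway is that an unaffordable £600 hit is far
# worse than a predictable £192 you can absorb.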

Quantity, quality and inequalities



Moral dilemmas are commonly framed as trolley problems. In one classic example, five people are tied to some train tracks, about to be hit and killed by an approaching trolley. You are given the option of pulling a lever and diverting the trolley onto a side track containing only one person. We can ponder hypotheticals, but we are in fact constantly faced with real-life trolley problems. They are, however, usually complicated further by containing many different side tracks and by the outcomes being uncertain (as opposed to the certain death of one or five people respectively). This is where scientific evidence can often come in: narrowing down the uncertainties and giving us, if certainty is impossible, at least quantitative probability estimates for what lies on the different tracks. Unfortunately, estimates generated by different research teams’ methodologies sometimes do not agree. Confidence intervals may not overlap, as sources of uncertainty are not always comprehensively accounted for, and the true error bars are often wider than reported. Even where consensus exists, science itself still cannot make the decision of which levers, if any, to pull: on its own, “following the science” is meaningless.
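
As a toy illustration of how probability estimates feed into, but do not settle, such a choice, consider a version of the dilemma in which the occupancy of the main track is uncertain (the numbers are invented for this sketch):

# Toy numbers, invented for illustration: evidence can narrow the probabilities,
# but it does not say which lever to pull.
p_main_occupied = 0.5    # assumed chance that five people are on the main track
p_side_occupied = 1.0    # one person is known to be on the side track

expected_deaths_if_nothing = p_main_occupied * 5   # 2.5 expected deaths
expected_deaths_if_divert = p_side_occupied * 1    # 1 certain death

print(expected_deaths_if_nothing, expected_deaths_if_divert)
# Whether one certain death is preferable to a larger number of merely probable
# ones is a value judgement the numbers alone cannot make.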

The roots of evidence-based practice can be found in evidence-based medicine. It might seem relatively simple to set objective values here: the National Institute for Health and Care Excellence (NICE), which assesses medical interventions in the UK, uses Quality-Adjusted Life Years (QALYs) when comparing the cost-effectiveness of different treatments. However, it is not easy to weigh the net benefits to some patients against the side effects experienced by others. There is also the opportunity cost of treatments: the resources could instead be spent on other interventions, helping people with other conditions, on preventative care, or on beneficial projects in other parts of society. Some have argued that medicalisation, by turning social issues such as poverty into medical problems suffered by individuals, has hindered efforts to tackle the root causes of poor health.
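
As a rough sketch of the kind of cost-per-QALY comparison involved, the standard quantity is the incremental cost-effectiveness ratio; the treatment costs and QALY gains below are hypothetical, and the £20,000–£30,000 per QALY figure is the range commonly cited as NICE’s threshold:

# Hypothetical figures for two treatments for the same condition.
cost_new, cost_standard = 14_000.0, 4_000.0   # cost per patient (GBP)
qaly_new, qaly_standard = 2.4, 2.0            # quality-adjusted life years gained

# Incremental cost-effectiveness ratio: extra cost per extra QALY.
icer = (cost_new - cost_standard) / (qaly_new - qaly_standard)
print(f"£{icer:,.0f} per QALY gained")   # £25,000 per QALY with these numbers
# Landing inside a threshold range does not settle who bears the side effects,
# or whether the money would do more good elsewhere.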

No evidence is bias-free



Personalised approaches would ideally avoid administering a treatment to people for whom the harm of side effects outweighs the benefits, while still allowing those helped overall by a treatment to benefit from it. Currently, this is tricky: the standard way of assessing the efficacy of treatments is the randomised controlled trial (RCT), which compares the outcomes of groups, not individuals. RCTs will not generally show us whether some people got worse because of the treatment if a larger number got better. Stratification may be possible if sample sizes are large enough to see significant differences between subgroups, as happened with the age-dependent recommendations for the AstraZeneca vaccine. However, that updated advice only arose during real-life use, after the vaccine had been administered to millions of people. RCTs are also expensive to run at the scales required, which means that potentially successful, innovative treatments without sufficient funding behind them are automatically ruled out. This highlights one of the challenges to the objectivity of an evidence-base requirement: funding flows towards historically successful approaches and towards those backed by industry because of their potential to make a profit, leaving other approaches to slip through the net, unable to build the necessary evidence base. Another problem is that the standards of evidence required can be gamed: p-hacking to achieve “significant” outcomes is a known issue. Furthermore, most meaningful outcomes are not assessable on short timescales, so proxies are commonly substituted without much scrutiny, for example reporting blood pressure rather than the incidence of cardiovascular events.
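
To see why “significant” findings can be gamed, here is a minimal sketch of one mechanism behind p-hacking, multiple comparisons, using invented numbers and no real trial data:

# If a treatment truly does nothing, each subgroup analysis still has an
# "alpha" chance of crossing the significance threshold by luck alone.
import random

alpha = 0.05       # conventional significance threshold
subgroups = 20     # number of subgroup analyses attempted

analytic = 1 - (1 - alpha) ** subgroups
print(f"Chance of at least one spurious 'significant' subgroup: {analytic:.0%}")

# The same idea by simulation: count how often at least one of the 20 null
# "tests" comes up significant.
random.seed(0)
trials = 10_000
hits = sum(any(random.random() < alpha for _ in range(subgroups)) for _ in range(trials))
print(f"Simulated estimate: {hits / trials:.0%}")
# Both come out around 64%: try enough comparisons and report only those that
# cross the threshold, and "significance" is almost guaranteed.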

Reducing people’s conditions to a small number of measurements (or self-assessments) hides a lot of complexity in all but the simplest of cases. It presents a particular challenge in mental health, where the life events and experiences that shaped each individual’s mental state are different, and each client-therapist relationship is unique. It should be no surprise, then, that treatments with the same name can look very different in practice and produce different outcomes. Ideally, everyone would be given the chance to explore and find a treatment that works well for them, but the funding model often favours approaches that are most cost-effective when applied to the average person rather than individualised care.

When it comes to social and political interventions, personalising incentives and penalties might also be optimal for outcomes, but it currently lies outside the Overton window. Any measures taken or not taken adversely affect some sections of society more than others, and it is up to politicians to weigh up the consequences. An additional complication is the timescale, as conflicts arise when considering short-, medium- and long-term outcomes in turn. This is readily apparent when we look at environmental problems such as climate change, where short-term priorities have been slowing down progress even though the consequences for future generations are severe. It is also common to see sticking-plaster solutions to social problems rather than attempts to address the root causes: prioritising long-term outcomes carries few rewards for politicians looking to get re-elected within a few years.

Returning to the trolley problem, what assumptions did you intuitively make about how likely it is for different people to end up on the various tracks? Always diverting to kill just one person may be the (short-term!) utilitarian answer, but given existing inequalities, it may not be an egalitarian one.

Andrea Chlebikova is a final year PhD student affiliated with St Catherine's College.