Evidence-biased psychiatry

The randomised controlled trial (RCT) came into psychiatry in the 1950s (10, 11). The evidence from placebo-controlled trials was meant to act as a brake on over-enthusiastic claims of what a drug could do. It was meant to stop therapeutic bandwagons. Used in this way, RCTs have recently brought a halt to debriefing after trauma by showing that the benefits claimed for debriefing just cannot be demonstrated when compared with non-intervention. Clinical trials show what treatments do not work; they cannot show what works. A positive result for a trial means only that it is not possible to say that this treatment has no effect (12).


However, instead of providing a brake on therapeutic enthusiasm, a belief has developed that RCTs show that treatments work. The results are increasingly used to persuade mental health workers that they have a duty not just to detect conditions but to persuade patients to go on treatment. All new antidepressants and antipsychotics now undergo a series of clinical trials of this sort but, far from pushing science forward, clinical trials have become a marketing tool. Results are pored over in detail, with the details of side effects being of most interest, as marketing strategists decide on which aspects of the compound to emphasise, given the profiles of their competitors.

If a fraction of the amount of money and effort that is put into clinical trials were to be put into attempts to specify the range of effects that a compound may have (other than the target clinical indication), the causes of science and therapeutics might be much better served. Instead, the costs of such trials significantly increase the overall costs of new drugs and lead to the need for companies to engage in aggressive marketing practices of the sort that get the pharmaceutical industry a reputation for being unethical.

All of this is well known, but there are in fact much bigger problems with the evidence from RCTs than is usually realised. RCTs originated within epidemiology. They are a legitimate shortcut that enables companies to recruit hundreds rather than thousands of subjects to trials – but at a cost. The cost is that there is no guarantee that the trial sample is representative of the rest of the population. Many epidemiologists have considerable misgivings about the capacity of randomisation to overcome the problems of external validity that result from the sampling methods adopted by this approach. Basically, many participants in trials are professional patients recruited by advertising who do not represent the kind of patient who is later most likely to be put on the drug. The problems inherent in RCTs are compounded in company-sponsored RCTs, which explicitly recruit samples of convenience: they want young and fit subjects who are not particularly ill. Within psychiatry a further problem lies in the measures used to assess whether a drug works – we use rating scales rather than assessing how many people get back to work or how many lives are lost. For all these reasons, company trials may show that the treatment has an effect, but these trials offer no guarantee that this effect will translate into clinical practice.

Why would companies settle for trials like this? The answer lies in the fact that such trials will suffice to get a drug through regulators in the USA and Europe. The public and health professionals tend to see the regulators as a watchdog guaranteeing the efficacy and safety of medicines. This is not their role. The role of the regulator is to decide whether a product labelled as butter or as an antidepressant actually is butter or an antidepressant, as claimed. To do this in the case of an antidepressant, the regulators simply have to see some result from some trial that would make it impossible to say that this drug has no effect in depression. However, licensing a drug on this basis says nothing about how good an antidepressant it is or how safe it is in the longer run. The decision to use the treatment clinically can at present be based only on clinical judgement; it should not follow from the fact that regulators have permitted a drug on to the market. Companies, however, portray regulatory approval as meaning that the drug can all but be put in the drinking water.

There is a further problem in psychiatry, where trials of treatments never look at whether patients get back to work or are less likely to commit suicide, etc. Psychiatric trials all look at changes on rating scale scores. There are four potential domains of measurement:

1) observer-based disease-specific rating scales, such as the Hamilton Rating Scale for Depression (HAM-D), where the clinician rates symptoms of particular interest;

2) patient-based disease-specific rating scales, such as the Beck Depression Inventory;

3) observer-based non-disease-specific scales of global functioning;

4) patient-based non-disease-specific scales of global functioning (Quality of Life/QoL), where patients rate areas of general functioning that matter to them.


These treatments, therefore, are not like taking penicillin for pneumonia – they do not 'work' in that sense. Even if convincing scores on rating scales across the full range of measurement domains were available, there would still remain the problem of factoring in the evidence on discontinuation syndromes before anyone could begin to say whether it was a good idea to take a treatment or not.

Jun 10, 2016 | Posted in PSYCHIATRY