Yesterday I went to a very interesting debate at the Institute of Psychiatry, with the motion:
This house believes that CBT for psychosis has been oversold.
I’m glad to say that it was a well-mannered and reasonable debate, with both sides presenting interesting cases. Although the actual question is perhaps not that interesting, the myriad underlying issues are. Things like:
- Does CBT for psychosis actually work?
- If so, for what does it work best?
- Which version of CBT for psychosis is most effective?
- Which outcomes should we be measuring?
- How do we match clients to therapies?
- Does CBT for psychosis have to change the topography of positive or negative symptoms of psychosis to be useful? Or might it be enough to change a person’s relationship with their experience?
- Are there other interventions that we would be better focussing on instead?
In the end, the motion was defeated resoundingly, with a large shift from the first vote at the beginning of the debate. Those for the motion put this down to a triumph of anecdote over statistics. Of course, as psychologists and philosophers may say, it’s not events that matter, but what we believe about them and how we respond. An alternative belief is that perhaps the audience doesn’t actually think CBT for psychosis has been sold very strongly at all, regardless of its effectiveness. Or perhaps people thought that the issues of CBT for psychosis are too complex to be encapsulated in the particular meta-analyses that were the primary focus of the speakers for the motion. There are many reasons why the vote could have gone this way, and without doing a survey, I could not tell you!
Response to Keith Laws.
One reason I’m writing this is that I rashly described (over Twitter) one of Keith Laws’s assertions as intellectually dishonest, when perhaps I should have said he was loose with his wording. He understandably challenged me to defend this claim, so I will do so here on my blog (as I’m not very familiar with Twitter, and don’t think 140 characters is useful for discussion). Before I go any further, I should declare a conflict of interest: I’m a clinical psychologist and much of my workload involves CBT for psychosis.
Unfortunately I don’t have a recording of the debate yet, so I don’t have his exact words; thus I’m going to address what I thought his point was! I remember Laws saying that the evidence shows that CBT for psychosis only helps 5% of people treated. For the moment, you can find a reference to this in Alex Langford’s live take on what was said here at Storify, and the tweet of the claim in question here.
Laws, I believe, bases this claim on a meta-analysis on which he is the last author. That paper concludes:
Cognitive-behavioural therapy has a therapeutic effect on schizophrenic symptoms in the ‘small’ range. This reduces further when sources of bias, particularly masking, are controlled for.
And finds, for example, that the effect size on overall symptoms falls from −0.62 to −0.15 (95% CI −0.27 to −0.03) when studies with insufficient and sufficient masking are compared. (Always note the confidence interval: even at this significance level, the true effect size might be as small as −0.03 or as large as −0.27.)
My claim is that even if we take no issue with the way the meta-analysis was carried out (and of course we might), and even if we temporarily accept the figure of 5% (I’ll confess I’m not sure exactly where this came from; some NNT calculation?), Laws’s conclusion that CBT only helps 5% of people seems flawed.
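As an aside on where a figure like 5% might come from: the derivation wasn’t stated in the debate, so what follows is only an illustrative sketch, not Laws’s actual calculation. One common way to turn a standardized effect size (Cohen’s d) into an NNT-style figure is the Kraemer–Kupfer AUC conversion, which assumes normally distributed, equal-variance outcomes:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def nnt_from_d(d: float) -> float:
    """Kraemer & Kupfer-style conversion: the success-rate difference
    between two groups separated by effect size d is 2*Phi(d/sqrt(2)) - 1,
    and the NNT is its reciprocal. Assumes normal, equal-variance outcomes."""
    srd = 2.0 * normal_cdf(abs(d) / sqrt(2.0)) - 1.0
    return 1.0 / srd

# The two overall-symptom effect sizes reported in the meta-analysis:
for d in (0.62, 0.15):
    srd = 2.0 * normal_cdf(d / sqrt(2.0)) - 1.0
    print(f"d = {d:.2f}: extra people helped ~ {srd:.1%}, NNT ~ {nnt_from_d(d):.0f}")
```

On these assumptions, the masked effect size of −0.15 corresponds to roughly 8% more people helped than in the comparison condition (an NNT of about 12), so a 5% figure is in the same ballpark but not identical. Either way, note that such a conversion only ever yields a *difference from the comparator*, which matters for the argument below.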
One key reason for this is that the meta-analysis includes both treatment as usual and control interventions as comparators. Thus a more valid conclusion would be that CBTp helps only 5% more people than a mixture of treatment as usual (TAU) and control interventions such as befriending. To me, this is quite a different thing. For instance, it is possible that the control interventions were also very effective, leaving CBT a hard job to get significantly better results. As an example, suppose that in a study befriending helped 40% of people and CBT helped 45%. This would not mean that CBT helped only 5% of people; it helped 45%, just 5 percentage points more than the comparator.
A second reason not to accept this interpretation is that our clients’ wellbeing can be quite independent of the number and frequency of their positive symptoms. A person can, for instance, continue having auditory hallucinations but completely change their relationship to them, and thus reduce depression and anxiety and increase quality of life. CBT may therefore help clients in the absence of a change in positive symptoms (the meta-analysis that Laws co-authored did not consider other outcomes such as depression and anxiety, which are key issues for our clients). Equally, if a client asks us first to help them with their panic attacks, that is generally what we do, yet progress here will not necessarily show in a measure of psychosis symptoms.
I’m in agreement with Laws in some senses. The literature can certainly be improved upon. Clinically, we often seem to see remarkable change, yet the literature at large does not necessarily reflect this. This may be because CBT is not adding much to treatment as usual or other interventions, and we wrongly interpret change as related to CBT. Or it may be because there are many different types of CBT, some better than others. Or it may be because we are measuring the wrong things. Or it may be that we are looking at the evidence too simplistically.
Incidentally, it was argued in the debate that it would take a huge number of extra significant trials to improve the effect size of CBTp in meta-analyses. To me, this shows a misunderstanding of CBTp. CBTp is not quetiapine, which is always the same. CBT is evolving over time and comes in many forms (from individual to group, from classic CBT to taste-the-difference Mindfulness-Based CBT, from CBT for general psychosis to CBT for command hallucinations). Lumping all the studies together as if they tested one and the same intervention is thus not necessarily a good idea.
Whatever the explanation, it behoves us to rethink the way we have run our trials to date, in order to capture those outcomes that are most useful to service users. (I suspect Laws may not appreciate how difficult it is to get funding for sufficiently powered studies, which may also explain why so many studies sit on the right side of his forest plot yet remain non-significant.) We also clearly need to continue to refine our treatment protocols. We are beginning to do this, with targeted interventions such as the COMMAND trial (among others). It’s a hard slog, but I, for one, think the future is bright.