Yesterday I went to a very interesting debate at the Institute of Psychiatry, with the motion:
This house believes that CBT for psychosis has been oversold.
I’m glad to say that it was a well-mannered and reasonable debate, with those on both sides presenting interesting cases. Although the motion itself is perhaps not that interesting, the myriad underlying issues are. Things like:
- Does CBT for psychosis actually work?
- If so, for what does it work best?
- Which version of CBT for psychosis is most effective?
- Which outcomes should we be measuring?
- How do we match clients to therapies?
- Does CBT for psychosis have to change the topography of positive or negative symptoms of psychosis to be useful? Or might it be enough to change a person’s relationship with their experience?
- Are there other interventions that we would be better focussing on instead?
In the end, the motion was defeated resoundingly, with a large shift from the first vote at the beginning of the debate. Those for the motion put this down to a triumph of anecdote over statistics. Of course, as psychologists and philosophers may say, it’s not events that matter, but what we believe about them and how we respond. An alternative belief is that perhaps the audience don’t actually think CBT for psychosis has been sold very strongly at all, regardless of its effectiveness. Or perhaps people thought that the issues around CBT for psychosis are too complex to be encapsulated in the particular meta-analyses that were the primary focus of the speakers for the motion. There are many reasons why the vote could have gone this way, and without doing a survey, I could not tell you!
Response to Keith Laws.
One reason I’m writing this is that I rashly described (over Twitter) one of Keith Laws’s assertions as intellectually dishonest, when perhaps I should have said he was loose with his wording. He understandably challenged me to defend this claim, so I will do so here on my blog (as I’m not very familiar with Twitter, and don’t think 140 characters is useful for discussion). Before I go any further, I should declare a conflict of interest: I’m a clinical psychologist and much of my workload involves CBT for psychosis.
Unfortunately I don’t yet have a recording of the debate, so I don’t have his exact words; I’m therefore going to address what I thought his point was. I remember Laws saying that the evidence shows that CBT for psychosis only helps 5% of people treated. For the moment, you can find a reference to this in Alex Langford’s live take on what was said here on Storify, and the tweet of the claim in question here.
Laws, I believe, bases this claim on a meta-analysis on which he is last author. That paper concludes:
Cognitive-behavioural therapy has a therapeutic effect on schizophrenic symptoms in the ‘small’ range. This reduces further when sources of bias, particularly masking, are controlled for.
It finds, for example, that the effect size on overall symptoms falls from −0.62 to −0.15 (95% CI −0.27 to −0.03) when studies with insufficient and sufficient masking are compared. (Always note the confidence interval: even here, at this significance level, the true effect size might be as small as −0.03 or as large as −0.27.)
My claim is that even if we take no issue with the way in which the meta-analysis was carried out (and of course we might), and even if we temporarily accept the figure of 5% (I’ll confess I’m not sure exactly where this came from; some NNT calculation, perhaps?), Law’s conclusion that CBT only helps 5% of people seems flawed.
One key reason for this, is that the meta analysis includes both treatment as usual and control interventions as comparators. Thus a more valid conclusion would be that CBTp helps only 5% more people than a mixture of treatment as usual (TAU) and control interventions such as befriending. To me, this is a quite different thing. For instance, it is possible that the control interventions were also very effective and thus CBT had a hard job getting significantly better results. As an example, let’s say in a study, befriending had a 40% impact on symptoms and CBT had a 45% impact. This would not mean that CBT helped only 5%, but 5% more, although the difference between interventions was only 5%.
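To make that distinction explicit in numbers (these figures are invented purely for illustration, not taken from any trial):

```python
# Invented numbers, purely to illustrate "helps 5% of people" vs "helps 5% MORE people".
befriending_response = 0.40  # proportion improving with the control intervention (befriending)
cbt_response = 0.45          # proportion improving with CBT

extra_helped_by_cbt = cbt_response - befriending_response
print(f"Helped by CBT:                        {cbt_response:.0%}")
print(f"Helped by CBT beyond the comparator:  {extra_helped_by_cbt:.0%}")
# Concluding that "CBT only helps 5% of people" conflates the second number with the first.
```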
A second reason not to accept this interpretation is that our clients’ wellbeing can be quite independent of the number and frequency of their positive symptoms. A person can for instance, continue having auditory hallucinations, but completely change their relationship to them, and thus reduce depression and anxiety, and increase quality of life. Thus CBT may help clients in the absence of a change in positive symptoms (the meta analysis that Laws was an author on, did not consider other outcomes such as depression and anxiety – key issues for our clients). Equally, if a client asks us first to help them with their panic attacks, that is generally what we do, yet progress here will not necessarily show in a measure of psychosis symptoms.
I’m in agreement with Laws in some senses. The literature can certainly be improved upon. Clinically, we often seem to see remarkable change, yet the literature at large does not necessarily reflect this. This may be because CBT is not adding much to treatment as usual or other interventions, and we wrongly attribute change to CBT. Or it may be because there are many different types of CBT, some better than others. Or it may be because we are measuring the wrong things. Or it may be that we are looking at the evidence too simplistically.
Incidentally, it was argued in the debate that it would take a huge number of extra significant trials to improve the effect size of CBTp in meta-analyses. This, to me, shows a misunderstanding of CBTp. CBTp is not quetiapine, which is always the same. CBT is evolving over time and comes in many forms (from individual to group, from classic CBT to ‘taste the difference’ Mindfulness-Based CBT, from CBT for general psychosis to CBT for command hallucinations). Lumping all the studies together as if they were the same is thus not necessarily a good idea.
Whatever the explanation, it behoves us to rethink the way we have run our trials to date, in order to capture those outcomes that are most useful to service users. (I suspect Laws may not appreciate how difficult it is to get funding to run sufficiently powered studies, which may also explain how many studies are at the right side of his forest plot, yet non-significant). We also clearly need to continue to refine our treatment protocols. We are beginning to do this, with targeted interventions such as the COMMAND trial (among others). It’s a hard slog, but I, for one, think the future is bright.
Keith Laws replies:
I will respond to your claim that I am ‘intellectually dishonest’ or ‘loose with my wording’.
I can assure you I choose my words much more carefully than you appear to do!
1) You say “Law’s (sic) conclusion that CBT only helps 5% of people seems flawed”
You argue “One key reason for this, is that the meta analysis includes both treatment as usual and control interventions as comparators. Thus a more valid conclusion would be that CBTp helps only 5% more people than a mixture of treatment as usual (TAU) and control interventions such as befriending. To me, this is a quite different thing. For instance, it is possible that the control interventions were also very effective and thus CBT had a hard job getting significantly better results. As an example, let’s say in a study, befriending had a 40% impact on symptoms and CBT had a 45% impact. This would not mean that CBT helped only 5%, but 5% more, although the difference between interventions was only 5%”
What is your point about the 5% exactly? Are you implying my 5% doesn’t apply when compared to a so-called ‘active control’, but does apply when compared to TAU or something altogether different – e.g. all control conditions make it hard to get a CBT for psychosis effect? I don’t follow your argument.
2) you say “A second reason not to accept this interpretation is that our clients’ wellbeing can be quite independent of the number and frequency of their positive symptoms. A person can for instance, continue having auditory hallucinations, but completely change their relationship to them, and thus reduce depression and anxiety, and increase quality of life. Thus CBT may help clients in the absence of a change in positive symptoms (the meta analysis that Laws was an author on, did not consider other outcomes such as depression and anxiety – key issues for our clients).”
As meta-analysts, we don’t decide the outcome measures in trials – ask CBT trialists why they don’t often use these measures!
And where they have looked at mood or functioning – neither is significant (see the Wykes et al. meta-analysis). CBT trialists like Kingdon and Kinderman have taken the quasi-neuroleptic approach to CBT… not me.
3) Finally, you say “I suspect Laws may not appreciate how difficult it is to get funding to run sufficiently powered studies, which may also explain how many studies are at the right side of his forest plot, yet non-significant”
Are you trying to argue that we haven’t had enough sufficiently powered studies? If so, present a case – work out the power required and then look at how many studies meet your criterion.
In any case, meta-analysis increases the power to detect even the smallest effects (if present) – every single study could be underpowered and every one could be non-significant, and yet the overall effect could still be significant – so your power argument carries no weight as far as I can see.
My reply:
Thank you for taking the time to respond. I’ll address the points in order. I appreciate the discussion and I’m genuinely curious. I shall also try to be careful with my wording!
1. 5%. This was my main initial objection, and thus perhaps the one to focus on.
A. Firstly, I don’t fully understand where you get the figure of only 5% of people respond to CBT. Could you clarify? I’m sorry that I missed where this came from, as it makes it difficult to interpret.
B. Taking the 5% for granted, and assuming it came from the meta-analysis; as far as I can see, all we can say is that CBT helped 5% MORE people than the interventions it was compared with. That is not the same as CBT only helps 5% of people. (5% more, is not the same as 5%). Do you dispute this interpretation?
As I’m not sure exactly where the 5% figure came from, I find it difficult to argue much further than this. I can offer the following conjecture:
I would expect CBT, if it is effective, to separate more from TAU than from active control interventions (which are normally TAU + control). In your meta-analysis there is no significant difference between control and TAU comparators, although “the pooled effect size was smaller in five studies using a control intervention than in eight studies that did not”.
If the 5% figure came from a mixture of control and TAU comparators, one could argue that the effect of CBT would be underestimated, due to not accounting for the effect of the control interventions. If, on the other hand, the 5% came solely from TAU comparators, it might be a stronger argument (CBT then helps 5% more than TAU). As an analogy: let’s say I compared quetiapine (Q) to a mixture of risperidone and placebo (R+P), and found Q helped 5% more people than R+P. Would it be fair to say Q only helped 5% of people? Does that make sense?
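For what it’s worth, here is a rough sketch of one standard way a standardised effect size can be converted into a ‘number needed to treat’, and hence a percentage who benefit relative to the comparator. This is purely my guess at the sort of calculation that might lie behind a figure like 5% – the conversion (Kraemer and Kupfer’s AUC-based formula) and the output are illustrative, and I am not claiming this reproduces Laws’s number.

```python
# Illustrative only: converting a standardised mean difference d into a number
# needed to treat (NNT) via the AUC conversion NNT = 1 / (2 * Phi(d / sqrt(2)) - 1).
# This is NOT necessarily how the 5% figure was derived.
from scipy.stats import norm

def nnt_from_d(d: float) -> float:
    auc = norm.cdf(abs(d) / 2 ** 0.5)  # P(a randomly chosen treated person does better than a control)
    return 1.0 / (2 * auc - 1)

for d in (0.62, 0.15):  # the two overall-symptom effect sizes quoted earlier
    nnt = nnt_from_d(d)
    print(f"d = {d:.2f}: NNT ≈ {nnt:.1f}, i.e. roughly {100 / nnt:.0f}% more people helped than with the comparator")
```

Note that, whichever effect size you plug in, what comes out is a proportion helped over and above the comparator – which is exactly the distinction at issue.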
2. OUTCOMES. I acknowledge that our trials should move away from the quasi-neuroleptic approach. However, the fact that the trials have not measured these other outcomes does not mean that CBT had no effect on them. It simply means we have not measured them, and so can neither demonstrate nor rule out an effect. We have some evidence to support an effect on specific outcomes (e.g. the COMMAND trial), but we must remain agnostic until the evidence is stronger. The onus is indeed on us to do so, and as I mention, this is the direction in which trials are moving.
3. STUDY POWER. My argument is not with regard to the meta-analysis; it is with regard to the individual studies. You have made a repeated point of saying how many of these studies were non-significant.
My argument is that when looking at all those non-significant studies, which sit on the expected side of 0 (not how I would expect a forest plot for a true placebo to look), many of those were probably not well powered to detect small-medium effect sizes. Thus a meta-analysis might, as you say, increase power and allow us to test overall significance (assuming the studies are homogenous enough to be taken together).
You indeed found significant effects in the meta-analysis, despite so many of the individual studies being non-significant. As you say in your blog: “This reveals several things – that even when 75% of studies are nonsignificant, meta-analysis can produce an overall significant effect.” I had always assumed that this was one of the main points of a meta-analysis!
And yes, perhaps I should go through and see what the power of each study was. It’s a good idea.
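In the meantime, here is a toy simulation of the point about pooling underpowered studies. The numbers are made up purely for illustration (a small true effect, deliberately small trials, and a crude fixed-effect pooling); it is not a reconstruction of the actual CBTp trials.

```python
# Toy illustration (made-up numbers): underpowered trials of a small true effect
# can each be non-significant while the pooled (meta-analytic) estimate is significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n_per_arm, n_studies = 0.2, 40, 15  # small effect, small trials

effects, variances, n_significant = [], [], 0
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    res = stats.ttest_ind(treat, control)
    n_significant += res.pvalue < 0.05
    d = treat.mean() - control.mean()                         # SDs are 1, so this approximates Cohen's d
    effects.append(d)
    variances.append(2 / n_per_arm + d**2 / (4 * n_per_arm))  # approximate variance of d

# Crude fixed-effect meta-analysis: inverse-variance weighted average
w = 1 / np.array(variances)
pooled = np.sum(w * np.array(effects)) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
z = pooled / se
print(f"{n_significant} of {n_studies} individual trials significant at p < 0.05")
print(f"pooled d = {pooled:.2f}, z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.4f}")
```

With settings like these, only a handful of the individual trials come out significant, yet the pooled estimate typically does – which is why individual non-significance tells us little on its own, provided the studies really are comparable enough to pool.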
Thanks again.
Keith Laws replies:
1. You say “I don’t fully understand where you get the figure of only 5% of people respond to CBT. Could you clarify? I’m sorry that I missed where this came from, as it makes it difficult to interpret.”
So why not ask me rather than accuse me of being intellectually dishonest and being loose with my words?
2. You say “Taking the 5% for granted, and assuming it came from the meta-analysis; as far as I can see, all we can say is that CBT helped 5% MORE people than the interventions it was compared with. That is not the same as CBT only helps 5% of people. (5% more, is not the same as 5%). Do you dispute this interpretation?”
It is the same …unless you say the control is not a ‘control’ …so why would you argue that? And if you do, and use it as an argument for no difference, then why prioritise CBT over the control in clinical practice?
3. You say “the fact that the trials have not measured these other outcomes does not mean that CBT had no effect on them.” … as you say later “The onus is indeed on us to do so, and as I mention, this is the direction in which trials are moving.”
I don’t think trials are moving in that direction (certainly not of their own volition) – Birchwood’s BJP quasi-neuroleptic paper was published 8 years ago and nothing has happened (yet), but yes… the onus is on you (not me).
4. You say “My argument is that when looking at all those non-significant studies, which sit on the expected side of 0 (not how I would expect a forest plot for a true placebo to look), many of those were probably not well powered to detect small-medium effect sizes. Thus a meta-analysis might, as you say, increase power and allow us to test overall significance (assuming the studies are homogenous enough to be taken together).”
I don’t think you can make that placebo argument, because of the varied quality and sample sizes of the studies. Later studies are often of higher quality – look at overall symptoms in our Fig. 2 forest plot across the last 10 studies – looks like a placebo?
My reply:
In order of the original points:
1. I apologised for the “intellectually dishonest” right from the start. I also said right from the start of this blog that I did not understand where it [the 5% claim] came from, although I made an assumption that it came from your meta-analysis. Could you humour me and explain?
An active control is an active control. An active control such as befriending is expected to have an effect (this is based on the observation that people with psychosis have, on average, very impoverished social networks). Thus, I do not consider an active control equivalent either to placebo or TAU. And this is why I do consider the language loose.
If an active control turns out to be cheaper and as effective as CBT, then yes, it should be prioritised.
2. Indeed, when I say the onus is on us, I am referring to CBT researchers.
3. Of the last 10 studies, all bar one seem to be (even if barely) on the ‘favouring CBT’ side of the line. If it were a placebo, would we not expect more of them to be on the other side?
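As a back-of-envelope way of quantifying that intuition (my own rough check, not anything from the paper): if CBT really behaved like a placebo, each study’s point estimate should fall on either side of zero with roughly equal probability, so a simple sign test gives a feel for how surprising 9 of 10 on one side would be.

```python
# Back-of-envelope sign test: if each study were equally likely to land on either
# side of zero, how surprising is 9 of the last 10 favouring CBT?
from scipy.stats import binomtest

result = binomtest(k=9, n=10, p=0.5, alternative="greater")
print(f"P(9 or more of 10 on one side by chance) = {result.pvalue:.3f}")  # about 0.011
```

This of course ignores study size, quality and any dependence between trials – exactly the sort of objection Professor Laws raises above – so it is only a rough gauge, not a substitute for the meta-analysis.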
So, for those who have not followed on Twitter: Professor Laws’s last contribution to this discussion was as follows:
“@ferguskane You didnt apologise …so I have finished talking with you”
This struck me as rather out of the blue, and also not true (assuming he was talking about me calling a claim he made “intellectually dishonest”, and not just about my having the gall to continue discussing). I had previously said:
@Keith_Laws. Yes sorry. I should really say loose with your wording and interpretations. Difficult to fit into a tweet.
OK, not a full-blown grovelling apology, but then what can you fit into a tweet? And you can judge from the tone of the overall debate whether I was respectful or not.
I’d hate to think that Professor Laws simply stopped the discussion because he could not actually justify his claim (or even provide an explanation of how the 5% figure was calculated).
So I thank Professor Laws for his engagement with the issue, and leave the reader to draw their own conclusions as to whether his claim stands up to scrutiny. Professor Laws is welcome to rejoin the discussion at any time.
My personal conclusion: Professor Laws made a claim that was at best loose in its wording, and which did not stand up to analysis.
Well, although Prof Laws has decided to cease and desist on this blog, he continues on Twitter. I’m guessing his latest tweet is addressing point 3 here… but perhaps I’m being egocentric.
He has refrained from discussing this further, offering an interpretation, or citing the study. I’ve genuinely no idea what one can conclude from visual inspection of forest plots.
I’m glad to say however, that he’s started to look into different psychological controls, and has posted a potentially interesting paper: http://bit.ly/1kH3Kix
This is a typical strategy of Professor Laws when faced with valid counterarguments. He simply cannot stand being wrong – I have never seen him admit to it. Instead he blocks, ridicules, kicks the ball into the long grass (e.g. “why don’t you send a letter to the journal and I’ll respond there?”), or pretends to be offended so he doesn’t have to reply. A skilled sophist!