Avoiding Advice

Something I’ve been struggling with in my Psychological Wellbeing Practitioner training and clinical work so far is that we are told that the ‘therapy is in the materials’ rather than in ourselves as clinicians. Our role is to guide our clients through self-help material that is appropriate to their psychological distress, and help them to problem-solve any difficulties that they might have along the way. Rightfully, due to our lack of training in delivering any kind of therapy proper, we are told to concentrate on the CBT-based tools and techniques that we are supposed to be imparting to our clients. It is (also rightfully) emphasised to us that the focus should be on ‘collaboration’ rather than any kind of didacticism in our delivery.

However, it feels as though, by focusing so much on the content and tools that we are providing our clients with, I too easily slip into ‘offering advice’, which I think is usually so antithetical to any kind of meaningful therapeutic intervention! I really do try not to do it, but find myself on occasion saying, ‘What about trying this…’, when discussing how to change a sleep routine, for example. (I feel no temptation to offer any more significant life advice, thank god). One way around this that our supervisors have recommended is to ask questions based on the materials/information you’ve given your client, for example, ‘Why do you think I asked you to read that booklet?’ or ‘Can you explain to me the rationale behind Behavioural Activation?’. But I think those questions are better suited to checking or consolidating learning than to genuinely encouraging the client to arrive at their own conclusions and answers…

I think this is related to my major qualm with CBT-based approaches in general: although they profess to be less hierarchical than psychoanalytic or psychodynamic approaches are seen to be (that old-fashioned idea of the psychoanalyst having all the answers but remaining silent), they can end up with the more unequal power dynamic. In the psychoanalytic approach, regardless of whether the analyst thinks she has all the answers, she at least gives the patient space to think things through in their own way, following their own patterns of thought, rather than shoving tips and tricks down their throat in a limited number of sessions. The CBT clinician can end up asking patronising questions (like those above: ‘Can you confirm that you’ve understood all the information I have imparted to you today?’), rather than genuinely engaging with the client’s way of understanding the world and taking it on its own terms. The CBT clinician is the Wise Teacher, who benevolently takes on board the patient’s particular life circumstances to adapt the techniques to them, but nevertheless remains the one with all the information the client needs to live a better life. I guess these power dynamics risk becoming problematic in all kinds of therapies, because essentially the client is coming to a trained ‘expert’ for help. But I think it’s important that we remind ourselves of the pitfalls of this kind of imbalance as often as possible, and do everything we can to stop offering advice. I’m mainly speaking to myself here.

Are We Fudging IAPT Data?

In my PWP training today we were taught how we are supposed to record our targets and recovery-rate data, and I think I’ve just realised one way that IAPT services might be overestimating their success rates…

We were told that if, by the end of the 6 Low Intensity CBT sessions we offer (outcome measures for depression and anxiety are taken at each session), the client’s scores on the two main measures have reached ‘recovery’ (meaning below caseness, so scoring 9 or below on the PHQ-9 and 7 or below on the GAD-7), then we mark the final session as a ‘treatment session’ and the system will count that client as ‘recovered’ – which makes sense, and that’s all fine and well.
But if we arrange a ‘follow up session’ with them in a few weeks’ time, and find that their scores have risen back above caseness, then we are told to mark that session as a ‘follow up session’, and it will not count towards our recovery rates. So we would have learnt that the person has not in fact really benefitted from the sessions we gave them, or at least not in any lasting way, but on the system that rise in scores will essentially be ‘invisible’, and our recovery targets, and our company’s, will be unaffected. It will look like IAPT did its job and was successful in ‘curing’ the individual, even though the benefits of our treatment haven’t actually lasted.

We were also taught another way that might overestimate IAPT’s success rate. If, at the end of the 6 sessions, the client’s scores have not fallen enough for them to count as ‘recovered’, but at the follow-up session a few weeks later we find that their scores have dropped below caseness, we are told to mark that extra session as a ‘treatment session’ (not a ‘follow up session’, as in the situation I’ve described above), so that on the system the recovery counts as thanks to our treatment, and so counts towards our recovery rate. If we’re feeling generous to IAPT, we might say that our sessions and support simply had a delayed effect – maybe the client was a bit slow to apply all the ‘tools’ we gave them, so we do deserve to pat ourselves on the back. But you could just as well argue that their life improved slightly for reasons that had nothing to do with us, or that it was purely natural recovery (even persistent low mood tends to improve over time with no treatment at all). So basically we’re allowing natural recovery to count as IAPT-caused, when there is no true measure of whether that was actually the case.
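To make the asymmetry concrete, here is a toy sketch of the recovery-rate arithmetic under those two labelling rules. To be clear, this is just my own illustration, not the real IAPT recording system: the caseness cut-offs (PHQ-9 of 10 or more, GAD-7 of 8 or more) are the published ones, but the function names and the simplified two-client ‘caseload’ are entirely made up.

```python
# Toy illustration of the labelling asymmetry described above -- not the
# real IAPT data system. Caseness thresholds assumed: PHQ-9 >= 10, GAD-7 >= 8.

def below_caseness(phq9: int, gad7: int) -> bool:
    """Below caseness on both measures, i.e. counted as 'recovered'."""
    return phq9 <= 9 and gad7 <= 7

def scores_that_count(session6, follow_up):
    """Return the scores that end up feeding the recovery rate.

    session6 and follow_up are (phq9, gad7) tuples. Per the rules described
    above: a follow-up where scores have risen is logged as a 'follow up
    session' (invisible), while a follow-up where scores have dropped below
    caseness is logged as a 'treatment session' (it counts).
    """
    if below_caseness(*session6):
        return session6      # any relapse at follow-up never gets recorded
    if below_caseness(*follow_up):
        return follow_up     # late (possibly natural) recovery does
    return session6

clients = [
    ((8, 6), (14, 9)),   # 'recovered' at session 6, relapsed by follow-up
    ((12, 9), (7, 5)),   # not recovered at session 6, below caseness later
]
counted = [scores_that_count(s6, fu) for s6, fu in clients]
recovery_rate = sum(below_caseness(*s) for s in counted) / len(counted)
print(f"recovery rate: {recovery_rate:.0%}")  # 100%, despite the relapse
```

In this toy caseload both clients end up counting as ‘recovered’: the first because their relapse is logged out of sight, the second because a possibly natural recovery is logged as a treatment outcome.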

Neither of these situations involves explicitly fiddling with the data – we are still trusting and taking at face value someone’s scores (and this is to say nothing of the problems that may inhere in the outcome measures IAPT services use; for more on this, see Levis et al. 2020) – but it’s easy to see how they might lead to a slight bias in favour of IAPT Guided Self-Help treatments that may not reflect their actual efficacy…

Would be very interested to hear people’s thoughts on this!