Are We Fudging IAPT Data?

In my PWP training today we were taught how we are supposed to record our targets and recovery rate data, and I think I’ve just realised one way that IAPT services might be overestimating their success rates…

We were told that if, by the end of the 6 Low Intensity CBT sessions we offer (outcome measures for depression and anxiety are taken at each session), the client’s scores on the two main measures have reached ‘recovery’ (meaning below caseness, so scoring 9 or below on the PHQ-9 and 7 or below on the GAD-7), then we mark the final session as a ‘treatment session’ and the system will count that client as ‘recovered’ – which makes sense, and that’s all fine and well.
But if we arrange a ‘follow-up session’ with them a few weeks later and find that their scores have risen back above caseness, we are told to mark that session as a ‘follow-up session’, and it will not count towards our recovery rates. So we would have learnt that the person has not in fact really benefitted from the sessions we gave them, or at least not in any lasting way, but on the system that rise in scores will essentially be ‘invisible’, so our own, and our service’s, recovery targets will be unaffected. It will look like IAPT did its job and was successful in ‘curing’ the individual, even though the benefits of our treatment did not actually last, and so weren’t so good after all.

We were also taught another practice that might overestimate IAPT’s success rate. If, at the end of the 6 sessions, the client’s scores have not dropped enough for them to count as ‘recovered’, but at the follow-up session a few weeks later we find that their scores have fallen below caseness, we are told to mark that extra session as a ‘treatment session’ (not a ‘follow-up session’ as in the situation described above), so that on the system the recovery will be credited to our treatment and count towards our recovery rate. If we’re feeling generous to IAPT, we might say that our sessions and support simply had a delayed effect – maybe the client was a bit slow to apply all the ‘tools’ we gave them, and so we do deserve to pat ourselves on the back. But you could just as well argue that their life simply improved (nothing to do with us), or that it was purely natural recovery (consistently low mood does tend to improve over time even with no treatment). So we are allowing natural recovery to count as IAPT-caused, when there is no real way of knowing whether that was actually the case.
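To make the asymmetry concrete, here is a minimal sketch of the labelling rule as I understand it. All names, the two example clients, and the scores are hypothetical; the thresholds assume the usual below-caseness cut-offs (9 or below on the PHQ-9, 7 or below on the GAD-7). The point is just that, under this rule, relapse at follow-up never lowers the rate, while late recovery always raises it:

```python
# Hypothetical sketch of the asymmetric recording rule described above.

PHQ9_RECOVERY_MAX = 9  # below caseness on the PHQ-9 (caseness is 10+)
GAD7_RECOVERY_MAX = 7  # below caseness on the GAD-7 (caseness is 8+)

def recovered(phq9, gad7):
    """'Recovery' = both measures below caseness."""
    return phq9 <= PHQ9_RECOVERY_MAX and gad7 <= GAD7_RECOVERY_MAX

def label_extra_session(end_scores, follow_up_scores):
    """Label the extra session the way we were taught: it only counts
    as 'treatment' when doing so helps the recovery figures."""
    if not recovered(*end_scores) and recovered(*follow_up_scores):
        return "treatment"   # late drop claimed as treatment effect
    return "follow-up"       # any relapse stays invisible

def recovery_rate(clients):
    """Rate computed from whichever session was labelled 'treatment' last."""
    n_recovered = 0
    for end_scores, follow_up_scores in clients:
        if label_extra_session(end_scores, follow_up_scores) == "treatment":
            n_recovered += recovered(*follow_up_scores)
        else:
            n_recovered += recovered(*end_scores)  # follow-up ignored
    return n_recovered / len(clients)

# Two hypothetical clients, as (PHQ-9, GAD-7) pairs at end vs follow-up:
clients = [((8, 6), (14, 11)),   # recovered at session 6, relapsed later
           ((14, 11), (8, 6))]   # not recovered at session 6, recovered later
print(recovery_rate(clients))    # → 1.0: both count as 'recovered'
```

If we instead judged every client by their follow-up scores, only one of these two would count as recovered, giving 0.5 rather than 1.0.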

Neither of these situations is explicit fiddling with the data – we are still trusting and taking at face value someone’s scores (this is to say nothing of the problems that may inhere in the outcome measures IAPT services use; for more on this see Levis et al. 2020) – but it’s easy to see how they might bias the figures towards favouring IAPT Guided Self-Help treatments in a way that may not reflect their actual efficacy…

Would be very interested to hear people’s thoughts on this!