Edited by Jill Wright.
A British site called The Mental Elf is a handy resource for psychologists and members of the public who want to keep up with the latest research on mental health.
The latest post reinforces the findings of many other researchers by indicating, yet again, that no one "brand" of psychotherapy is better than another for the treatment of depression.
What I can't help but wonder, however, is whether sites like this might better serve their readers by offering them just a small measure of salt to digest, along with the research news.
They might be made aware, perhaps, of the phenomenon psychologists call the "research allegiance effect", which has been demonstrated in hundreds of studies.
Luborsky et al., for instance, found that more than two-thirds of the variance in study outcomes was accounted for by the particular field favoured by the researchers. In 2004, a study led by Drew Westen calculated that in more than nine out of 10 instances, the results of a comparative trial could be predicted by the authors' allegiance alone.
As the British psychoanalyst Professor Andrew Samuels puts it, "There is a kind of scientific myth around that if you use randomised controlled trials, you can find out which kinds of therapy actually work."
Anyone who has followed the brilliant work of the late Dr Enrico Jones and his colleagues, Ablon, Parke and Pulos, among other researchers, knows just how questionable this research can be.
It's almost 30 years since Jones produced, through phenomenal effort, a precise, objective measure of the elements of the psychotherapeutic process, called the Psychotherapy Process Q-set (PQS).
He and his colleagues then rigorously examined what actually happened during psychotherapy sessions, as opposed to what the practitioners imagined was happening.
Their research showed that even in clinically controlled trials, you simply could not assume that any improvement made by a patient during the course of treatment was the result of the technique described in the treatment manual.
They found that the therapeutic techniques employed in apparently controlled trials were rarely theoretically pure and frequently "borrowed" from processes typically associated with other theoretical orientations. They also found significant shortcomings in studies of both the effectiveness and the efficacy of treatments.
What they flagged was a fundamental problem: if you can't control how treatments are actually conducted, the validity of randomised controlled clinical trials of psychotherapy is highly problematic.
Their findings supported the overwhelming weight of evidence showing that the strongest indicators of success in therapy are not the techniques of any brand of psychotherapy - whether it is CBT or ACT or Interpersonal Therapy or whatever you might fancy - but other factors principally grouped under the heading of "the psychotherapeutic relationship".
The most vital are these: the patient feels trusting, secure and understood by the therapist; understands what the therapist is saying; accepts the therapist's observations and has clearly positive feelings towards the therapist.
In other words, the personality of the therapist and the aptitude of the client for the process are what really count. As I've suggested previously, these facts make a lot of the claims for superior psychotherapeutic performance by members of a particular college somewhat hollow.
I shall allow Saunders a final comment.
"Every discipline that does psychotherapy has its stars and its dullards," Saunders writes. "But taken overall, each discipline, irrespective of type of training, does well. We all deserve prizes.
"And I'm going with the evidence that being a 'super-shrink" has more to do with how you practise than the discipline you practise. I am, after all, a Clinical Psychologist, so the evidence really does matter. So can my evidence-based, clinical psychology colleagues please stop claiming that they are 'better than all the rest'? Please."