
LondonGal

222 karma · Joined

Bio

Made you look!

I'm less irritating in person than I am online - my DMs are open if anyone wishes to entertain my over-opinionated thoughts on mental health any further. I'm always keen to learn. UK doctor / EA outsider / well-intentioned visitor to your community (I come in peace).

I was told to mention that Gregory Lewis is responsible for my philosophy training, as well as Kristen Bell - please direct all criticism appropriately.

Comments (6)

Hi MichaelPlant, [Edit: Jk - I don't quite get the comment about my username/real name; I saw a mix being used on the forum, but I might have missed some etiquette - would you like my real name? Just 'hello' is fine if you'd prefer - no offence taken.]

Thanks so much for taking the time to read and respond! I was hoping to get more insight from people within EA who might be able to fill me in on some of the more philosophical/economic aspects, as I'm aware these aren't my areas of expertise (it was very much a 'paper' EA-hat I was trying on!). I felt that furthering my online searches wasn't as helpful as getting more insight into the underlying concepts from experts, and I hoped my post would at least show I was interested in hearing more. Thanks for the links as well - I did come across a few of them while working on this, but I'll take your advice that they're worth looking at again if you think I've not appraised them properly; you definitely know best in this regard!

Also, apologies - you might be right that I didn't structure a paragraph very well if it has left anyone with the impression that I was suggesting subjective wellbeing research has only been in existence since COVID. My own graph disproves this, for starters! I think it's this paragraph from the first section that I've not phrased well (italics added).

From my quick literature review, the interest in wellbeing as a research area is a relatively recent phenomenon. There has been rapid growth in papers being published about subjective wellbeing from around 2020 onwards. I’d guess this is due to (1) the COVID-19 pandemic and lockdown restrictions making this a hugely important topic for public health officials and politicians, and (2) the recent interest in using wellbeing as an outcome when evaluating the effect of a broad range of policy decisions, which has subsequently driven interest in quantifying ‘wellbeing’ for use in cost-impact analyses.

I was trying to emphasise the relatively steep growth in interest over the last few years due to questions about cost-effectiveness (e.g. WELLBY), which as you mention is 'barely older than COVID'. I don't actually think we disagree here so I'll need to think how to rephrase it to avoid conflating this with SWB research as a whole - to be clear, I don't think your reading of this was unfair and I can phrase it better.

I'm not sure I was ever arguing that I was doing an exhaustive literature review - I felt I stated a few times that this was non-scientific, should carry no weight, etc. My goal was just to get a quick overview as a sense-check, but I didn't want to limit my reading to the first x number of pages in case that introduced bias - I chose very limited terms and stated them so it would be clear how I did the search, which allowed you to double-check I wasn't pulling anything dishonest (impressive that 7 papers have been added since I ran the search last week - clearly there is a lot of interest!). If I had set out to do a completely exhaustive review, you'd be right to suggest the terms you did (I would add "SWB" as well), but I'm not sure that would be a reasonable expectation of an "abstract-only", non-academic review from a visitor with a full-time job when those terms return thousands of results...

I'm sorry if it appeared that selecting limited search terms was an attempt to 'downplay' SWB research as a field - I mentioned that my time constraints were the problem, but that's easy to miss in a long post. I tried to explain throughout how interesting/useful I found this piece of work, spoke about how it changed my mind in the later sections, and identified blind spots I wasn't aware I had. I don't think I was critical of 'subjective wellbeing' research as a whole - I tried to lay out my very specific concerns very clearly (e.g. "life satisfaction" being used as an isolated measure of subjective wellbeing), but my overall conclusion was in support of finding a way to incorporate more of the diverse value these researchers are adding to how we think about wellbeing estimates, e.g. other measures, combining measures, qualitative research, etc. I approached the problem with an open mind and left the exercise positively, so I'm sorry to see it might have appeared ill-intentioned.

I hope this doesn't seem like nit-picking, and it's not intended as criticism of you personally as you similarly were upfront about not being familiar with PubMed; it can be tricky to get to grips with, but it might be helpful to share a quick point.

For the search you linked that returned 150k results - if you go to "Advanced" under the search bar and click through, you can see it searched "All Fields" and expanded the terms you used because they weren't in quotation marks (a quirk of this database that perhaps isn't true for others!). This was the actual search run from your link - for all I know this was your intent, but in case it wasn't (I'm just judging this as a possibility based on the hyperlink):

With quotation marks to limit results to those specific terms (I'm not sure if that was your intent) and searching Title/Abstract, you get more like 20-30k results (PubMed also doesn't really like hyphens, as you can see above, so I used "subjective well" to try and work around this). You can use MeSH terms (keywords instead), other variations, etc. to push the count up or down (I tried a few variations to handle 'well-being', hence this being #8). And as you rightly say, you can always add more terms to include more papers.
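To make the quoting/field-tag distinction concrete, here's a tiny sketch (my own illustration, not the exact searches from this thread - the phrase list is just an example) of how a field-tagged PubMed query string is assembled:

```python
# Illustrative only: building a PubMed query where each phrase is quoted
# (so it is matched literally, not auto-expanded as in an All Fields search)
# and tagged with [Title/Abstract] to restrict where it is searched.
phrases = ["subjective wellbeing", "subjective well", "life satisfaction"]

# Quote each phrase, tag it, and OR the terms together.
query = " OR ".join(f'"{p}"[Title/Abstract]' for p in phrases)
print(query)
# -> "subjective wellbeing"[Title/Abstract] OR "subjective well"[Title/Abstract] OR "life satisfaction"[Title/Abstract]
```

Pasting a string like this into the PubMed search bar (or the Advanced builder) runs the restricted search, rather than the expanded All Fields version.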


Again, my aim isn't to downplay SWB research at all with this point (I like the field!) - it's just in case it's helpful, as I use PubMed all the time and it's one of many databases containing mental health literature. Whether 2k or 150k+ results (even a few well-written papers wouldn't necessarily be an argument against the field - it just suggests to me it's new), I still stand by my OP in being broadly positive about the area as a whole, hence coming up with a framework of my own in inspiration. I just haven't shifted much in my specific critiques of certain applications of SWB, but I'll have a read of your links and see if they change my mind! If you have any further thoughts, I'd appreciate hearing them.

Ah, okay, thanks for this - I think I see why I've been confused. I skipped through my arguments for psychosis/schizophrenia research in my OP because I felt the post was already ridiculously long - I see now that it could be read as suggesting that a high HCP investment is needed for schizophrenia treatment in an LMIC. You make the fair point (which I entirely agree with) that anything requiring significant additional HCP resources is a major problem. Sorry, I didn't understand where you were coming from, so I'm sure my response to your comment makes absolutely no sense - I thought you were making a completely separate point about HCPs in LMICs, and so I was a bit perplexed and needlessly theoretical in my response.

The things I skipped over in the schizophrenia/psychosis research section were ways to reduce the demands on health services through treatment of psychosis/schizophrenia that would not require increasing numbers of HCPs. Again, none of this is an EA cause-area suggestion - I do not have any authority to talk about LMICs. I'm explaining why I mentioned these things in my OP, as I covered this poorly. They were intended as highly speculative areas for a potential deeper look if my groundwork seemed okay.

Given prevalence is relatively similar across the world and static over time, LMICs are managing schizophrenia at present - it's not an illness influenced by societal problems or 'increased awareness', etc. It's why I assume that community approaches and spiritual understandings might be very similar to things like CBT-p and family intervention. I wouldn't assume these should be 'rolled out' to LMICs (rather the opposite - we could have learnt a lot from other cultures much sooner to help our patients; we tended to use institutions/asylums instead), and there are non-HCP/non-medical approaches that can and do help people with schizophrenia. We might give the same ideas medical terms and run RCTs, but we likely didn't 'invent' these approaches, nor would it be considered groundbreaking that they help elsewhere in the world under different cultural understandings of psychosis. It's an argument against assuming that all LMICs need therapy/therapists for psychosis.

It helps to treat schizophrenia early (reducing the duration of untreated psychosis gives the best chance of complete recovery, faster recovery, and shorter subsequent hospital admissions), and my personal assumption would be that in LMICs the issue is access to antipsychotics - particularly newer drugs, which have fewer side effects and are likely under patent - not HCPs/psychiatrists. I could be wrong; it just doesn't necessarily require advanced training to recognise psychosis when someone is so unwell that they need treatment (and if it did, expertise-sharing/teleconsults with specialists isn't necessarily a bad approach if you want to support one doctor working in a remote setting who is expected to manage a whole range of problems independently - access to specialties without training more HCPs isn't an exclusively psych issue). Using older drugs is a problem as they have disabling side effects (e.g. Parkinson's-type) amongst many others, which means patients stop taking them and are therefore prone to relapse (needing more acute/hospital-based care) - I assume these are more commonly used in LMICs as they are much cheaper.

Some of the other (slightly newer) treatments cause obesity, diabetes, high blood pressure, heart disease etc, so will increase the need for other types of healthcare in the long-term. I guess, it could be one option to look into in-patent drug access agreements for LMICs to access the newest drugs for lowest side effect burdens, but as I wasn't talking about LMICs, research for new treatments which work better/faster or are better-tolerated is another good option. If people can stay well on a medication, they will not need the kind of frequent HCP/social support they otherwise absolutely do need with chronic forms of schizophrenia.

I also assume clozapine is not easily accessed in resource-poor settings. This is the only medication with any evidence in treatment-resistant schizophrenia (which would similarly be highly consumptive of HCP resources if not treated). Part of the reason is that it requires intensive blood monitoring - weekly blood tests for the first 18 weeks, then every 2 weeks for the first year, then monthly thereafter. Heart health could be checked with (at least) an ECG machine, which isn't so rare, but the blood monitoring is clearly an issue if you don't have ready access to a lab to process blood samples (and the infrastructure/cold chain to support transport/reliable results). It's risky to use clozapine without this monitoring. This is not just about checking levels of clozapine in the blood: there is a known issue that taking clozapine can cause levels of white blood cells to crash and leave people unable to fight off infection - a particular concern in any setting with a high prevalence of infectious disease (though it can be fatal anywhere). The finger-prick monitoring I referenced briefly could have a huge impact in remote/LMIC settings, as it wouldn't require a lab or anyone able to draw bloods, and might therefore support access to this effective treatment.

And it's true that our long history of relative neglect of people with psychosis has set us really far back in terms of research. As an example, NMDA-r encephalitis was only discovered in 2007 ("Brain on Fire" is based on this illness) - a treatable form of psychosis, once it was discovered that these patients were actually suffering from an autoimmune encephalitis and so needed immunotherapy, not antipsychotics. It generated a flurry of interest in understanding mechanisms for schizophrenia, and was a significant breakthrough in identifying autoimmune encephalitis and directing treatment, which has undoubtedly saved lives. It seems plausible to me that there is a lot of potential gain in psychosis research when we are starting so far behind other areas of medicine.

Schizophrenia (and bipolar) are a little unique in being considered more 'organic' than other conditions, i.e. they cannot arise from environmental factors alone; there has to be an underlying 'switch' waiting to be flipped, which is likely genetic in origin. While people can be more genetically prone to illnesses like depression/anxiety/substance misuse, these can also arise entirely from someone's environment, and - to me - a high prevalence of these illnesses in a country is a 'canary in the mine' for greater issues in public health/society. It's why I'm a bit reluctant to assume medicalised treatments are helpful, and think there is room to consider addressing poverty, inequality, hunger, preventable disease, etc. as 'societal-level' highly effective mental health treatments.

I didn't mention it as a key area for mental health-style interventions for this reason (and thought I'd come too close to talking about SM). In my example about offering CBT vs cash as a trade-off CEA: that isn't to suggest people in LMICs devalue depression treatments etc.; it's that they might know exactly why they are depressed, and it's therefore more effective to direct treatment at the cause than to assume therapy/medication can help people in the same way while they are living in objectively very bad circumstances. This wouldn't favour HCPs either.

I hope that helps explain where I'm coming from in thinking schizophrenia/psychosis is worth more thought from a wellbeing perspective, as it potentially offers scope for larger gains through some of these simpler things outside HCP training/retention. I'd be relatively certain psychosis research is EA-aligned, but I would not suggest any other intervention without more detailed work on psychosis or other mental health conditions in LMICs.

In case my repetition about cultural relativism seems a bit hokey to anyone reading, this is my favourite paper, which I often suggest as a 'gateway' to anyone who doesn't find psychosis interesting or wants to know more. It's not overly technical/medical. It's based on trying to understand how people who were born deaf (i.e. have never perceived sound), or who lost hearing later in life or have partial hearing loss, "hear voices" as a hallucinatory experience. The research was conducted by someone who uses sign language, and it explains many of the issues in how certain words are understood by a 'hearing' person vs how they are meant in BSL, and how the researchers worked around this. It might help explain some of my reluctance to make assumptions, and why I constantly talk about within-community research - it's so easy to have blind spots (we don't know how much we don't know).

Thanks so much for commenting, you make an interesting point!

It's stretching my competency a wee bit to discuss mental healthcare in LMICs (and I won't touch poor StrongMinds again!). With the prevalence of schizophrenia being relatively static over time, I suppose the whole concept of this requiring highly trained medical professionals carries a "Western" bias - I'd be open to the idea that LMICs may manage mental illnesses differently with good results, hence my interest in supporting research from LMICs and having some flexibility about the idea that RCTs are 'best', to allow us to learn from other settings. It's not as though psychiatry has always been 'right' in its approach - historically it has occasionally been very wrong, as has medicine as a whole. We can only do the best we can with the knowledge we have now, and share and learn as much as possible, but still, who's to say that in 100 years we won't be scoffed at in textbooks for being so misinformed?

CBT for psychosis (CBT-p) and some of the family interventions are based around helping people find meaning in their experiences, set their own goals for recovery (a strictly 'medical' model might suggest that eliminating 'symptoms' is the goal, while many people are not troubled by hallucinations etc. if they understand them and they are not intrusive/distressing), and promoting open discussion to destigmatise experiences which can be frightening both to experience oneself and to witness in a loved one. I can see community-based or spiritual concepts organically mirroring those ideas outside any medical framing/labelling, whereas it's perhaps a bit of a course-correct for how psychosis can be understood in countries where it is otherwise highly stigmatised. I have seen lots of UK/US people use 'psychotic' or 'mentally ill', or even 'schizophrenic', as a lay term meaning 'unhinged', 'violent', etc. - it makes sense to me why a diagnosis of schizophrenia can be incredibly challenging and isolating for patients when they must feel misunderstood amongst friends/family/colleagues.

So, I wouldn't necessarily assume that a mass scale-up of healthcare workers in LMICs would be uniformly desirable or even the first priority for managing psychosis in many settings - access to medications might be more pressing, for example. I kept my recommendation to 'research' to avoid making specific LMIC suggestions which would be ill-informed without more work on my part. Advances in treatment are helpful regardless as medications are considered first-line in many settings when someone is very unwell, and this does have a positive effect on long-term recovery and wellbeing. I'd be disappointed if current treatment is the best we can do forever and I have hope there are promising developments on the horizon from what I've seen. I generally think research in psychosis/schizophrenia is needed to understand these illnesses more completely towards that aim.

That being said, if LMICs identify a lack of health workers as their main need, I'm not sure mandated service is the answer, and I would be deeply uncomfortable with funding from wealthy countries for education having those strings attached. Simply put, training is expensive, and requiring a number of years of post-qualification service from doctors/nurses in LMICs would likely be brilliant for wealthy countries struggling to retain HCWs, as it creates a pool of relatively senior HCWs that the receiving country won't even have to train after recruitment. It creates a perverse incentive for this funding, and it might degrade working conditions in LMICs (why worry, if you know people have to work for you?) and strip health systems of mostly senior clinicians. There's also a bigger issue with people in caring roles, working with vulnerable people, who might feel burned out or hate their jobs but feel they have no choice but to stay - at best, this will just increase mistakes and worsen care; at worst, it can lead to abuse and cause sickness and harm in your workforce.

I mean, my hands are tied given the industrial action in my profession in the UK, where I've heard the same suggestions about keeping doctors in the NHS - I can't agree with something I vehemently oppose for myself and my future colleagues, so I might not be the best person to ask! It's one of the reasons I've found the relative complacency about understanding the retention problems in the UK so frustrating - the knock-on effect on health worker migration is a relatively overlooked but predictable consequence of worsening retention, and the mandatory service suggestion seems to dismiss the very real issues with morale and working conditions for doctors and nurses which are causing the problem in the first place. I personally feel countries with the means to do so have an ethical responsibility to address problems in their health workforce to avoid contributing to worsening global health inequalities, and I don't think you can take push or pull factors in isolation to fix the problem.

While I've ended up apparently arguing for a pay rise for me and my colleagues (I'm joking!), hopefully I've balanced this by mainly talking about my redundancy, to say I don't think our industrial action should be an EA priority either. This might be a roundabout way of saying you're right to point out that my work lacks practical-level steps and my suggestions are vague/speculative. I suppose I wanted to check my groundwork before risking making any concrete suggestions on a shaky foundation. I'm expecting this to be trashed imminently, but if it survives in some form and there's interest, I can try taking it a bit further and looking at a couple of my broad areas of potential gain in more detail. [I think anything even approaching a CEA might be embarrassing without some help, though!]

That's kind of you to say - it's definitely a sobering perspective working in mental health and you end up feeling very strongly (clearly...) about wishing people struggling with mental illnesses had more support.

Of course that makes me biased, and it's worth saying I'm still learning - if I presented this to my psychiatry trainee colleagues, I'm sure they would all have different takes, let alone more senior clinicians or doctors in other specialities. Clinical work means, of course, we don't think about 'evaluating' illnesses outside of the patients in front of us and it's highly individualised as a field.

I think that makes it easy to go with my initial reaction to all of this ("Mental health is too complicated, shouldn't be simplified to numbers, you all don't understand," etc). It makes me uneasy to think about 'comparing' suffering - it's much more comfortable to stay railing against the machine in my position, and it's historically why I've not felt aligned with EA or utilitarianism more broadly.

But obviously CEAs happen all the time in medicine, it's just at a level way over my head so I don't have to think about it. Reading some of the work that went into the DALY was pretty fascinating to see how people approached this problem on a global health scale (I also favoured the DALY most out of the frameworks I encountered). I think my overall takeaway is a greater sympathy for what EA is trying to do, and I definitely learned a lot in the process - it's been humbling trying to think from this perspective (even if massively long forum posts are not the usual behaviour of the humble).

[Speaking from a UK perspective with much less knowledge of non-medical psychotherapy training]

I think what's important is having a strong mental health research background, particularly in systematic review and meta-analysis. If you have an expert in this field, then clinical experience becomes less important (perhaps - it depends on HLI's intended scope).

It's fair to say psychology and psychiatry do commonly blur boundaries with psychotherapy as there are different routes of qualification - it can be with a PhD through a psychology/therapy pathway, or there is a specialism in psychotherapy that can be obtained as part of psychiatry training (a bit like how neurologists are qualified through specialism in internal medicine training). Psychotherapists tend to be qualified in specific modalities in order to practice them independently e.g. you might achieve accreditation in psychoanalytic psychotherapy, etc. There are a vast number of different professionals (me included, during my core training in psychiatry) who deliver psychotherapy under supervision of accredited practitioners so the definition of therapist is blurry.

Psychotherapy is similarly researched both from the perspective of delivering psychotherapy, which perhaps has more of a psychology focus, and as a treatment for various psychiatric illnesses (+/- in combination or comparison with medication, or novel therapies like psychedelics), which is perhaps closer to psychiatric research. Diagnosis of psychiatric illnesses like depression, and directing treatment, tends to remain the responsibility of doctors (psychiatrists or primary care physicians), and so psychiatry training requires the development of competencies in psychotherapy - even if delivering psychotherapy does not always form the bulk of day-to-day practice - as it relates to formulating treatment plans for patients with psychiatric illness.

The issues I raise relate to the clinical presentation of depression as it pertains to impairment/wellbeing, diagnosis of depression, symptom rating scales, psychotherapy as a defined treatment, etc., as well as the wide range of psychopathology captured in the dataset. My feeling is that the breadth of this would benefit from a background in psychiatry, given the assumptions I made about HLI's focus for the meta-analysis. However, if what matters is depth of understanding of IPT as an intervention, or perhaps the holistic outcomes of psychotherapy, particularly for young women/girls in LMICs, then you might want a psychotherapist (PhD or psychiatrist) accredited in the modality or working with the population of interest. If you found someone who regularly publishes systematic reviews and meta-analyses of psychotherapy efficacy, that would probably trump both regardless of clinical background. Or perhaps all three is best.

You're both right to clarify this, though - I was giving my opinion from my background in clinical/academic psychiatry and so I talk about it a lot! When I mention the field of study etc, I meant mental health research more broadly given it depends on HLI's aims/scope to know what specific area this would be.

[Edit - Sorry, I've realised my lack of digging into the background of HLI members/contributors to this research could make the above highly offensive if there are individuals from this field on staff, and could also make me appear extremely arrogant. For clarity, it's possible all of my concerns were actually fully rationalised, deliberate choices by HLI that I've not understood from my quick sense-check, or that I might disagree with but are still valid.

[However, my impression from the work, in particular the design and methodology, is that there is a lack of psychiatric and/or psychotherapy knowledge (given the questions I had from a clinical perspective); and a lack of confidence in systematic review and meta-analysis from how far this deviates from Cochrane/PRISMA that I was trying to explain in more accessible terms in my comment without being exhaustive. It's possible contributors to this work did have experience in these areas but were not represented in the write-up, or not involved at the appropriate times in the work, etc. I'm not going to seek out whether or not that is the case as I think it would make this personal given the size of the organisation, and I'm worried that if I check I might find a psychotherapy professor on staff I've now crossed (jk ;-)).

[It's interesting to me either way, as both seem like problems - HLI not identifying they lacked appropriate skills to conduct this research, or seemingly not employing those with the relevant skills appropriately to conduct or communicate it - and it has relevance outside of this particular meta-analysis in the consideration of further outputs from HLI, or evaluation of orgs by EA. In any case, peer-review offers reassurance to the wider EA community that external subject-matter expertise has been consulted in whatever field of interest (with the additional benefit of shutting people like me down very quickly), and provides an opportunity for better research if deficits identified from peer-review suggest skills need to be reallocated or additional skills sought in order to meet a good standard.]

Hi everyone,

To fully disclose my biases: I'm not part of EA, I'm Greg's younger sister, and I'm a junior doctor training in psychiatry in the UK. I've read the comments, the relevant areas of HLI's website and the Ozler study registration, and I've spent more time than needed looking at the dataset in the Google doc and clicking through random papers.

I’m not here to pile on, and my brother doesn’t need me to fight his corner. I would inevitably undermine any statistics I tried to back up due to my lack of talent in this area. However, this is personal to me not only wondering about the fate of my Christmas present (Greg donated to Strongminds on my behalf), but also as someone who is deeply sympathetic to HLI’s stance that mental health research and interventions are chronically neglected, misunderstood and under-funded. I have a feeling I’m not going to match the tone here as I’m not part of this community (and apologise in advance for any offence caused), but perhaps I can offer a different perspective as a doctor with clinical practice in psychiatry and on an academic fellowship (i.e. I have dedicated research time in the field of mental health).

The conflict seems to be that, on one hand, HLI has important goals related to a neglected area of work (mental health, particularly in LMICs). I also understand the precarious situation they are in financially, and the fear that undermining this research could have a disproportionate effect on HLI compared with critiquing an organisation that is not so concerned about its longevity. There might be additional fears that further work in this area will be scrutinised to a uniquely high degree if a precedent is set that HLI's underlying research is flawed. And perhaps this concern is compounded by the statistical scrutiny from people in this thread, which is perhaps not commonly directed at other projects in the EA-sphere and might suggest an underlying bias against this type of work.

I think it’s fair to hold these views, but I’d argue this is likely the mechanism by which HLI has escaped scrutiny before now – people agree more work and funding should be directed to mental health and wanted to support an organisation addressing this. It possibly elevated the status of HLI in people’s minds, making it appear more revolutionary in redirecting discussions within EA as a whole. Again, Greg donated to StrongMinds on my behalf and, while he might now feel a sense of embarrassment for not delving into this research beforehand, to my mind it reflects a sense of affirmation in this cause and trust in a community which prides itself on being evidence-based. I mention it because I think everyone here is united on these points, and it’s always easier to have productive discussions from the mutual understanding of shared values and goals.

However, there are serious issues in the meta-analysis which appears to underlie the CEA, and therefore in the strength of claims made by HLI. I think it is possible to uncouple this statement from arguments against HLI or any of the above points (where I don’t see disagreement). It seems critical to acknowledge the flaws in this work, given EA’s values as an objective, data-driven approach to charitable giving. Failing to do so risks the reputation of EA and suggests a lack of critical appraisal and scrutiny, perhaps driven by personal biases – note the number of reassurances in this thread that HLI is a good organisation whose members are known personally to others in the community. Good people with good intentions can produce flawed research. Similarly, from the perspective of a clinical academic in psychiatry, there is a long history in my field of poorly conducted, misinterpreted and rushed research, which has made establishing evidence-based care and attracting funding for research/interventions particularly difficult. Poor research in this area risks worsening this problem and mis-allocating very limited resources – it’s fairly shocking to see the figures quoted here in terms of funding, if it was based wholly or in part on outputs such as this meta-analysis which were accepted by EA. Again, as an outsider, it’s difficult for me to judge how critical this research was in attracting that allocation of funds.

While I think the issues with the analysis and all the statistics discussions are valid critiques of this work, it’s important to establish that this is only part of the reason this study would fall down under peer review. It’s concerning to me that peer-review is not the standard for organisations supported by EA; this is not just about scrutinising how the research was conducted and arguing about statistics, but about establishing the involvement of expertise within the field of study. As someone who works in this field, the assumptions this meta-analysis makes about psychotherapy, outcome measures in mental health, etc, are problematic but perhaps not readily identifiable to those without a clinical background, and this is a much greater problem if there is an increasing interest in addressing mental health within EA. I’m not familiar with the backgrounds of people involved in HLI, but I’d be curious about who was consulted in formulating this work, given the tone reflects more philosophical than psychiatric/psychotherapeutic language.

The way the statistical analysis has been heavily debated in this thread likely reflects the skills-mix in the EA community (clearly stats are well-covered!), but the statistics are somewhat irrelevant if your study design and the inputs into the analysis are flawed to start with. Even if the findings of this research were not so unusual (perhaps something else which could have been flagged sooner) or were based on concrete stats, the research would still be considered flawed in my field. I imagine this will prompt some reflection in EA on this topic, but peer-review as a requirement could have avoided the bad timing of these discussions and would reduce the reliance on community members to critique research. I think this thread has demonstrated that critical appraisal is time-intensive and relies on specialist skills – it’s not likely that every area of interest will have representation within the EA community, so the problem of ‘not knowing what you don’t know’, or how you weight the importance of voices in the community vs their amplification, would be greatly helped by peer-review, reducing these blind spots. If the central goal of EA is using money to do the most good, and there is no robust system to evaluate research prior to attracting funding, this is an organisational problem rather than a specific issue with HLI/Strongminds.

My unofficial peer review.

Given the inclusion/exclusion criteria aren’t stated clearly in the meta-analysis and the aim is pretty woolly, it seems the focus of the upcoming RCT and Strongminds research is evaluating:

  1. Training non-HCPs in delivering psychotherapy in LMICs

  2. Providing treatment (particularly to young women and girls) with symptoms suggestive of moderate to severe depression (PHQ-9 score of 10 and above)

  3. Measuring the efficacy of this treatment on subjective symptom rating scales, such as PHQ-9, and other secondary outcome measures which might reflect broader benefits not captured in the symptom rating scales.

  4. Finding some way to compare the cost-effectiveness of this treatment to other interventions, such as cash transfers, in broader discussions of life satisfaction and wellbeing – which is obviously complicated compared to using QALYs, but important to do as the impact of mental illness is under-valued using measures geared towards physical morbidity. Or maybe it's trying to understand the effectiveness of treating symptoms vs assumed precipitating/perpetuating factors like poverty.

Grand.

However, the meta-analysis design seems to miss the mark on developing anything which would support a CEA along these lines. Even from the perspective of favouring broad inclusion criteria, you would logically set these limits:

  1. Population

LMIC setting, people with depressive symptoms. It’s not clear if this is about effectively treating depression with psychotherapy and extrapolating that to a comment on wellbeing, or using psychotherapy as a tool to improve wellbeing, which for some reason is being measured as a reduction in various symptom scales for different mental health conditions and symptoms – this needs to be clearly stated. If it's the former, what you accept as a diagnosis of depression (ICD diagnostic codes, clinical assessment by a trained professional, symptom scale cut-offs, antidepressant treatment, etc) should be defined.

If not defining the inclusion criteria of depression as a diagnosis, it's worth considering if certain psychiatric/medical conditions or settings should be excluded, e.g. inpatients. As a hypothetical, extracting data on depression symptom scales for a non-HCP-delivered psychotherapy in bipolar patients will obviously be misleading in isolation (i.e. the study likely accounted for measuring mania symptoms in its findings, but this would be lost in the meta-analysis). One study included in this analysis (Richter et al) looked at an intervention which encouraged adherence to anti-retroviral medications via peer support for women newly diagnosed with HIV. As it happens, this study shouldn't have been included as it didn't involve delivering psychotherapy, but for the sake of argument, is that fair given the neuropsychiatric complications of HIV/AIDS? Again, it's not about preparing for every eventuality; it's about having clear inclusion/exclusion criteria so there's no argument about cherry-picking studies, because these decisions were made prior to the search and analysis.

  2. Intervention

Delivery of a specific psychotherapeutic modality (IPT, etc) by a non-HCP. While I can agree there are shared core concepts between different modalities of psychotherapy, you absolutely have to define what you mean by psychotherapy, because your dataset containing a column labelled ‘therapyness’ (high/medium/low) undermines a lot of confidence, as do some of the interventions you’ve included as meeting the bar for psychotherapy treatment. Suppose you want to include studies which are not focussed on treating depression, and might therefore involve other forms of therapy, but which still report benefit in alleviating depressive symptoms – e.g. where the presenting complaint is trauma, the intervention is EMDR (a specific therapy for PTSD), and the authors collected a number of outcome measures including symptom rating scales for anxiety and depression as secondary outcomes. It would then be logical to stratify studies in this manner as a plan for analysis: psychotherapeutic intervention with an evidence base in relieving depressive symptoms (CBT, IPT, etc), psychotherapeutic intervention not specifically targeted at depressive symptoms (EMDR, MBT, etc), with non-(psychotherapy) intervention as the control.

Several studies instead use non-psychotherapy as the intervention under study, and this confusion seems to come down to papers describing them as having a ‘psychotherapeutic approach’ or being based on principles from some area of psychotherapy. This would cover almost anything, as ‘psychotherapeutic’ as an adjective just means understanding people’s problems through their internal environment, e.g. thoughts, feelings, behaviours and experiences. In my day-to-day work, I maintain a psychotherapeutic approach in patient interactions, but I do not sit down and deliver 14-week structured IPT. You can argue that generally having a supportive environment to discuss your problems with someone who is keen to hear them is equally beneficial to formal psychotherapy, but this leads to the obvious question of how you can use data from any intervention which sounds a bit ‘psychotherapy-y’ to justify the cost of training people to specifically deliver psychotherapy in a CEA.

The fundamental lack of definition or understanding of these clinical terms leads to odd issues in some of the papers I clicked on. E.g. Rojas et al (2007) compare a multicomponent group intervention – involving lots of things, but notably not delivery of any specific psychotherapy – to normal clinical care in a postnatal clinic. The paper describes part of normal clinical care as providing ‘brief psychotherapeutic interventions’ – perhaps this reads to non-clinicians as not highly ‘therapyish’, but this term is often used to describe short-term focussed CBT, or CBT-informed interventions. Without a clear definition of the intervention, the meta-analysis ends up muddling a control group containing patients receiving evidence-based psychotherapy of a specific modality with a treatment arm receiving no specific psychotherapy.

  3. Comparison

As alluded to above, you need to be clear about what is an acceptable control, and it’s simply not enough to state you are not sure what the ‘usual care’ is in research by Strongminds that you have weighted so heavily. It can’t then be justified by an assumption that mental health is neglected in LMICs so care probably wouldn’t involve psychotherapy (with no citation) – especially as the definition of psychotherapy in this meta-analysis would deem someone visiting a pastor in church once a week as receiving psychotherapy. Without clearly defining the intervention, it's really difficult to understand what you are comparing against what.

  4. Outcome

This meta-analysis uses a range of symptom rating scales as acceptable outcome measures, favouring depression and anxiety rating scales, and scales measuring distress. This seems to be based on the idea that these clusters of symptoms are highly adverse to wellbeing. This makes the analysis and discussion really confused, in my opinion, and seems to be a sign the analysis, expected findings, extrapolation to wellbeing and CEA were mixed into the methodology.

To me, the issue arises from not clearly defining the aim and inclusion/exclusion criteria. This meta-analysis could be looking at psychotherapy as a treatment for depression/depressive symptoms. This would acknowledge that depression is a psychiatric illness with cognitive, psychological and biological symptoms (as captured by depression rating scales). As a clinical term, it is not just about 'negative affect' - low mood is not even required for a diagnosis as per ICD criteria. It absolutely does negatively affect wellbeing, as would any illness with unpleasant/distressing symptoms, but this means that generating some idea of how much patients' wellbeing improves from treatment has to be specific to depression. The subsequent CEA would then need to account for this and evaluate only psychotherapies with an evidence base in depression. In the RCT design, I'd guess this is the rationale for a high PHQ cut-off - it's a proxy for relative certainty in a clinical diagnosis of depression (or at least a high burden of symptoms which may respond to depression treatments and therefore demonstrate a treatment effect). It's not supporting the idea that some general negative symptoms impacting a concept of wellbeing, short of depression, will likely benefit from specific psychotherapy to any degree of significance, and it would be an error to take this assumption and then further assume a linear relationship between PHQ and wellbeing/impairment.

If you are looking at depressive symptom reduction, you need to only include evaluation tools for depressive symptoms (PHQ, etc). You need to define which tools you would accept prior to the search, and show these are validated for the population under study as you are using them in isolation - how mental illness is understood and presents is highly culturally-bound, and these tools were almost entirely developed outside of LMICs.

If, instead, you're looking at a range of measures you feel reflect poor mental health (including depression, anxiety and distress) in order to correlate this to a concept of wellbeing, these tools similarly have to be defined and validated. You also need to explain why some tools should be excluded, because this is unclear e.g. in Weiss et al, a study looking at survivors of torture and militant attacks in Iraq, the primary outcome measure was a trauma symptom scale (the HTQ), yet you've selected the secondary outcome measures of depression and anxiety symptom scores for inclusion. I would have assumed that reducing the highly distressing symptoms of PTSD in this group would be most relevant to a concept of wellbeing, yet that is not included in favour of the secondary measures. Including multiple outcome measures with no plan to stratify/subgroup per symptom cluster or disorder seems to accept double/triple counting participants who completed multiple outcome measures from the same intervention. Importantly, you can't then use this wide mix of various scales to make any comment on the benefits of psychotherapy for depression in improving wellbeing (as lots of the included scores are not measuring depression).
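To make the double-counting point concrete, here is a minimal sketch with entirely made-up numbers (a fixed-effect, inverse-variance pooling, assuming the two outcomes from the same participants are near-perfectly correlated) of what happens when both scales from one study are entered as if they were independent studies:

```python
import numpy as np

# Hypothetical effect sizes (Hedges' g) and standard errors for ONE study
# that reported both a depression scale and an anxiety scale on the SAME
# participants. The numbers are invented purely for illustration.
g = np.array([0.40, 0.38])      # depression outcome, anxiety outcome
se = np.array([0.15, 0.15])

# Fixed-effect inverse-variance pooling, wrongly treating the two
# outcomes as independent studies:
w = 1 / se**2
pooled = np.sum(w * g) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

# Averaging the (highly correlated) outcomes within the study first:
within = g.mean()
within_se = se[0]  # for near-perfectly correlated outcomes, the SE barely shrinks

print(f"double-counted: g = {pooled:.2f}, SE = {pooled_se:.3f}")
print(f"within-study:   g = {within:.2f}, SE = {within_se:.3f}")
```

The point estimate barely moves, but the double-counted version claims roughly 30% less uncertainty than it is entitled to - and every extra scale entered this way shrinks the apparent SE further.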

In both approaches, you do need to show it is accepted to pool these different rating scales to still answer your research question. It’s interesting to state you favour subjective symptom scores over functional scores (which are excluded), when both are well-established in evaluating psychotherapy. Other statements made by HLI suggest symptom rating scores include assessment of functioning - I've reproduced the PHQ-9 below for people to draw their own conclusions, but it's safe to say I disagree with this. It’s not clear to me if it’s understood that functional scores are also commonly subjective measures, like the WSAS - patients are asked to rate how well they feel they are managing work activities, social activities, etc. Ignoring functioning as a blanket rule seems to miss the concept of ‘insight’ in mental health, where people can struggle to identify symptoms as symptoms but are severely disabled by an illness (this perhaps should also be considered in excluding scales completed by an informant or close relative, particularly thinking about studies involving children or more severe psychopathology). Incorporating functional scoring captures the holistic nature of psychotherapy, where perhaps people may still struggle with symptoms of anxiety/depression after treatment, but have made huge strides in being able to return to work. Again, you need to be clear why functional scores are excluded, and be clear this was done when extrapolating findings to discussions of life satisfaction or wellbeing. This research has made a lot of assumptions in this regard that I don’t follow.

  5. Grouping measures and relating this to wellbeing

On that note – using a mean change in symptom scores is a reasonable evaluation of psychotherapy as a concept if you are so inclined, but I would strongly argue that this cannot be used in isolation to make any inference about how it correlates to wellbeing. As others have alluded to in this thread, symptom scores are not linear. To isolate depression as an example, it is deemed mild/moderate/severe based on the number of symptoms experienced, the presence of certain concerning symptoms (e.g. psychosis) and the degree of functional impact.

Measures like the PHQ-9 score the number of depressive symptoms present and how often they occur from 0 (not at all) to 3 (nearly every day) over the past two weeks:

  1. Little interest or pleasure in doing things?
  2. Feeling down, depressed or hopeless?
  3. Trouble falling or staying asleep, or sleeping too much?
  4. Feeling tired or having little energy?
  5. Poor appetite or overeating?
  6. Feeling bad about yourself - or that you are a failure or have let yourself or your family down?
  7. Trouble concentrating on things, such as reading the newspaper or watching television?
  8. Moving or speaking so slowly that other people have noticed? Or the opposite - being so fidgety or restless that you have been moving around a lot more than usual?
  9. Thoughts that you would be better off dead, or of hurting yourself in some way?

If you take the view that a symptom rating score has a linear relationship to 'negative affect' or suffering in depression, you would then imagine that the outcomes of PHQ-9 (no depression, mild, moderate, severe) would be evenly distributed across the 0-27 score, i.e. a score of 0-6 should be no depression, 7-13 mild depression, 14-20 moderate depression and 21-27 severe. This is not the case, as the actual PHQ-9 bands are 0-4 no depression, 5-9 mild depression, 10-14 moderate, 15-19 moderately severe, 20-27 severe. This is because the symptoms asked about in the PHQ are diagnostic for depression – it’s not an attempt at trying to gather how happy or sad someone is on a scale from 0-27 (in fact 0 just indicates ‘no depression symptoms’, not happiness or fulfilment, and it's likely people with very serious depression will not be able to complete a PHQ-9). Hopefully it's clear from the PHQ-9 why the cut-offs are low and why the severity increases so sharply; the symptoms in question are highly indicative of pathology if occurring frequently. It’s also in the understanding that a PHQ-9 would be administered when there is clinical suspicion of depression, to elicit severity or in evaluation of treatment (i.e. in some contexts, like bereavement, experiencing these symptoms would be considered normal, or if symptoms are better explained by another illness the PHQ is unhelpful), and it's not used for screening (vs the Edinburgh score for postnatal depression, which is a screening tool and features heavily in included studies). Critically, it’s why you can’t assume it's valid to lump all symptom scales together, especially across disorders/symptom clusters as in this meta-analysis.
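The uneven banding is easy to see if you just write the conventional cut-offs down directly. The bands below are the standard published PHQ-9 severity categories; the helper function itself is mine, purely for illustration:

```python
# Conventional PHQ-9 severity bands - note they are NOT four equal
# quarters of the 0-27 range: the cut-offs are bunched low because the
# items are diagnostic symptoms, not a 0-27 happiness scale.
PHQ9_BANDS = [
    (0, 4, "none/minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_severity(score: int) -> str:
    """Map a total PHQ-9 score (0-27) to its conventional severity band."""
    for lo, hi, label in PHQ9_BANDS:
        if lo <= score <= hi:
            return label
    raise ValueError("PHQ-9 total must be between 0 and 27")

# The same 5-point change means very different things at different points
# on the scale - one reason mean score changes can't be read linearly:
print(phq9_severity(4))   # none/minimal
print(phq9_severity(9))   # mild
print(phq9_severity(27))  # severe
```

A 5-point drop from 9 to 4 crosses a whole category, while a 5-point drop from 27 to 22 leaves someone in the same "severe" band – averaging raw score changes across a sample flattens exactly this distinction.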

  6. Search strategy

I feel this should go without saying, but once you’ve ironed out these issues to have a research question you could feasibly inform with meta-analysis, you then need to develop a search strategy and conduct a systematic review. It’s guaranteed that papers have been missed with the approach used here, and I’ve never read a peer-reviewed meta-analysis where a (10-hour) time constraint was used as part of this strategy. While I agree the funnel plot is funky, it’s likely reflecting the errors of not conducting this search systematically rather than publication bias – highly cited papers were probably more easily found with this approach and therefore account for the clustering of p-values. If the search had been a systematic review with objective inclusion/exclusion criteria and the funnel plot looked like that, you could make an argument for publication bias. As it stands, the outputs are only as good as the inputs, i.e. you can't out-analyse a poor study design/methodology to produce reliable results.
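For what it’s worth, the standard formal check for funnel-plot asymmetry is Egger’s regression (the intercept of the standardised effect regressed on precision). The sketch below uses simulated data, since I haven’t re-extracted HLI’s spreadsheet – and this is exactly the limitation being argued above: even a clearly non-zero intercept only flags asymmetry, it cannot tell you whether that asymmetry came from publication bias or from an unsystematic search that over-sampled highly cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated study-level data (NOT HLI's actual extraction): 30 studies
# with a true effect of 0.3 and no asymmetry built in.
se = rng.uniform(0.05, 0.4, size=30)   # study standard errors
effects = rng.normal(0.3, se)          # observed effect sizes

# Egger's test: regress the standardised effect (effect / SE) on
# precision (1 / SE); a non-zero intercept suggests funnel asymmetry.
z = effects / se
precision = 1 / se
slope, intercept = np.polyfit(precision, z, 1)
print(f"Egger intercept: {intercept:.2f}")
```

Run on a properly re-extracted, systematically searched dataset, this would at least put a number on the "funky" funnel plot; run on the current ad-hoc search, it can only tell you the inputs are skewed, not why.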

Simply put, the most critical problem here is that – without even getting into the problems with the data extraction I found, or the analysis as discussed in this thread – a study design which doesn’t seek to justify why any of these decisions were made means any analytic outputs are destined to be unreliable. How much of this was deliberate on the part of HLI can’t be determined, as there is no possible way of replicating the search strategy they used (this is the reason to have a robust strategy as part of your study design). If you want to call this a back-of-napkin scoping review to generate some speculative numbers, you could describe what you found as early signals that psychotherapy could be more cost-effective than assumed, and therefore that there’s a need to conduct a rigorous SR/MA. It perhaps may have been more useful in a shallow review to actually exclude the Strongminds study and evaluate existing research through the framework of (1) do the SM results make sense in the context of previous studies and (2) can we explain any differences in a narrative review. It seems instead this work generated figures which were treated as useful or reliable and fed into a CEA, which was further skewed by how this was discussed by HLI.

TL;DR

This is obviously very long and not going to be read in any detail on an online forum, but from the perspective of someone within this field, there seem to be a raft of problems with how this research was conducted and evaluated by HLI and EA. I’m not considered the Queen Overload of Psychiatry, I don’t have a PhD, but I suppose I'm trying to demonstrate that having a different background raises different questions, which seems particularly relevant if there is a recognition of the importance of peer-review (hopefully, I’m assuming, outside of EA literature). I’m also going to caveat this by saying I’ve not pored over HLI’s work – it’s just what immediately stood out to me – and I haven’t made any attempt to cite my own knowledge derived from my practice. To me this is a post on a forum I’m not involved with rather than an ‘official’ attempt at peer review, so I’m not holding myself to the same standard, just commenting in good faith.

I get the difficult position HLI are in with reputational salvage, but there is a similar risk to EA’s reputation if there are no checks in place, given this has been accessible information for some time and did not raise questions earlier. While this might feel like Greg’s younger sister joining in to dunk on HLI, and I see from comments in this thread that criticism said passionately can be construed as hostile online, I don’t think this is anyone’s intent. Incredibly ironically given our genetic aversion to team sports, perhaps critique is intended as a fellow teammate begging a striker to get off the field when injured, as they are hurting themselves and the team. Letting that player limp on is not being a supportive teammate. Personally, I hope this thread drives discussions in HLI and EA which provide scope for growth.

In my unsolicited and unqualified opinion, I would advise withdrawing the CEA and drastically reducing the weight HLI puts on this work so it does not appear to be foundational to HLI as an organisation. Journals are encouraging the submission of meta-analysis study protocols for peer-review and publication (BMJPsych Open is one – to be transparent, I have acted as a peer reviewer for this journal) in order to improve the quality of research. While conducting a whole SR/MA through to publication takes time, which could allow further loss of reputation, this is a quick way of acknowledging the issues here and taking concrete steps to rectify them. It’s not acceptable, to me, for the same people to offer a re-analysis or review of this work, because I suspect this would produce another flawed output; there is a real need to involve expertise from the field of study (i.e. in formal peer review) at an earlier stage to right the ship.

Again, I do think the aims of HLI are important and I do wish them the best; and I’m interested to see how these discussions evolve in EA as it seems straying into a subject I’m passionate about. I come in peace and this feedback is genuinely meant constructively, so in the spirit of EA and younger-sibling disloyalty, I’m happy to offer HLI help above what’s already provided if they would like it.

[Edit for clarity mostly under 'outcomes' and 'grouping measures', corrected my horrid formatting/typos, and included the PHQ-9 for context. Kept my waffle and bad jokes for accountability, and was using the royal 'you' vs directing any statements at OP(!)]