huw

698 karma · Joined · Working (0-5 years) · Sydney NSW, Australia
huw.cool

Bio


I live for a high disagree-to-upvote ratio

Comments

The best meta-analysis of deterioration rates (i.e. rates of negative effects) in guided self-help (k = 18, N = 2,079) found that deterioration was lower in the intervention condition, although they did find a moderating effect: participants with low education didn't see this decrease in deterioration rates (though nor did they see an increase)[1].

So, on balance, I think it's very unlikely that any of the dropped-out participants were worse off for having tried the programme, especially since the counterfactual in low-income countries is almost always no treatment. And given that your interest is top-line cost-effectiveness, only counting completed participants for effect size estimates likely underestimates cost-effectiveness if anything, since churned participants would be counted as having an effect of 0.
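To make that bound concrete, here is a minimal sketch of the arithmetic with made-up numbers (enrolment, cost, and effect figures are purely hypothetical; the only assumption carried over from above is that churned participants are scored as 0 while still counting towards programme costs):

```python
# Hypothetical illustration: scoring dropouts as zero effect can only
# lower the estimated effect per dollar, so the completer-only figure,
# spread over everyone who enrolled, acts as a conservative bound.
# All numbers below are made up for illustration.

n_enrolled = 1000          # everyone who started the programme
n_completed = 600          # participants retained to follow-up
effect_completer = 0.40    # assumed improvement (in SDs) per completer
cost_per_participant = 50  # assumed cost (USD) per enrolled participant

total_cost = n_enrolled * cost_per_participant

# Effect pool if churned participants are scored as exactly 0
effect_zero_dropouts = n_completed * effect_completer

# Effect pool if dropouts in fact got some partial benefit (say 0.10 SD)
effect_partial_dropouts = (
    n_completed * effect_completer + (n_enrolled - n_completed) * 0.10
)

print("Effect per dollar, dropouts scored at 0:   ",
      effect_zero_dropouts / total_cost)
print("Effect per dollar, dropouts scored at 0.10:",
      effect_partial_dropouts / total_cost)
# The first figure is always <= the second so long as dropouts are not
# actively harmed, which is the point being made above.
```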


  1. Ebert, D. D. et al. (2016) Does Internet-based guided-self-help for depression cause harm? An individual participant data meta-analysis on deterioration rates and its moderators in randomized controlled trials, Psychological Medicine, vol. 46, pp. 2679–2693. ↩︎

huw

On the cited RCTs specifically: the Step-by-Step intervention was designed from the outset to be adaptable across multiple countries & cultures[1][2][3][4][5]. Although the researchers initially focused on displaced Syrians, they have also expanded to locals in Lebanon across multiple studies[6][7][8] and found no statistically significant differences in effect sizes[8:1] (the latter is one of the studies cited in the OP). Given this, I would be surprised by default if the intervention, when adapted, failed to produce similar results in new contexts.


  1. Carswell, Kenneth et al. (2018) Step-by-Step: a new WHO digital mental health intervention for depression, mHealth, vol. 4, p. 34. ↩︎

  2. Sijbrandij, Marit et al. (2017) Strengthening mental health care systems for Syrian refugees in Europe and the Middle East: integrating scalable psychological interventions in eight countries, European Journal of Psychotraumatology, vol. 8, p. 1388102. ↩︎

  3. Burchert, Sebastian et al. (2019) User-Centered App Adaptation of a Low-Intensity E-Mental Health Intervention for Syrian Refugees, Frontiers in Psychiatry, vol. 9, p. 663. ↩︎

  4. Abi Ramia, J. et al. (2018) Community cognitive interviewing to inform local adaptations of an e-mental health intervention in Lebanon, Global Mental Health, vol. 5, p. e39. ↩︎

  5. Woodward, Aniek et al. (2023) Scalability of digital psychological innovations for refugees: A comparative analysis in Egypt, Germany, and Sweden, SSM - Mental Health, vol. 4, p. 100231. ↩︎

  6. Cuijpers, Pim et al. (2022) Guided digital health intervention for depression in Lebanon: randomised trial, Evidence Based Mental Health, vol. 25, pp. e34–e40. ↩︎

  7. Abi Ramia, Jinane et al. (2024) Feasibility and uptake of a digital mental health intervention for depression among Lebanese and Syrian displaced people in Lebanon: a qualitative study, Frontiers in Public Health, vol. 11, p. 1293187. ↩︎

  8. Heim, Eva et al. (2021) Step-by-step: Feasibility randomised controlled trial of a mobile-based intervention for depression among populations affected by adversity in Lebanon, Internet Interventions, vol. 24, p. 100380. ↩︎ ↩︎

huw

For those who are not deep China nerds but want a somewhat approachable lowdown, I can highly recommend Bill Bishop's newsletter Sinocism (enough free issues to be worthwhile) and his podcast Sharp China (the latter is a bit more approachable but requires a subscription to Stratechery).

I'm not a China expert so I won't make strong claims, but I generally agree that we should not treat China as an unknowable, evil adversary who has exactly the same imperial desires as 'the west' or past non-Western regimes. I think it was irresponsible of Aschenbrenner to assume this without better research & understanding, since so much of his argument relies on China behaving in a particular way.

huw

❤️ I do wanna add that every interaction I had with you, Rachel, Saul, and all staff & volunteers was overwhelmingly positive, and I'd love to hang again IRL :) Were it not for the issue at hand, I would've also rated Manifest an 8–9 on my feedback form; you put on one hell of an event! I also appreciate your openness to feedback; there's no way I would've posted publicly under my real name if I felt like I would get any grief or repercussions for it—that's rare. (I don't think I have much else persuasive to say on the main topic)

huw

I guess I am trying to elucidate that the paradox of intolerance applies to this kind of extreme openness/transparency. The more open Manifest is to offensive, incorrect, and harmful ideas, the less of any other kinds of ideas it will attract. I don’t think there is an effective way to signpost that openness without losing the rest of their audience; nobody but scientific racists would go to a conference that signposted ‘it’s acceptable to be scientifically racist here’.

Anyway. It’s obviously their prerogative to host such a conference if they want. But it is equally up to EA to decide where to draw the line out of their own best interests. If that line isn’t an outright intolerance of scientific racism and eugenics, I don’t think EA will be able to draw in enough new members to survive.

huw

I was at Manifest as a volunteer, and I also saw much of the same behaviour as you. If I had known scientific racism or eugenics were acceptable topics of conversation there, I wouldn’t have gone. I’m increasingly glad I decided not to organise a talk.

EA needs to recognise that even associating with scientific racists and eugenicists turns away many of the kinds of bright, kind, ambitious people the movement needs. I am exhausted at having to tell people I am an EA ‘but not one of those ones’. If the movement truly values diversity of views, we should value the people we’re turning away just as much.

OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors

I don't know anything about Nakasone in particular, but it should be of interest (and concern)—especially after Situational Awareness—that OpenAI is moving itself closer to the U.S. military-industrial complex. The article itself specifically mentions Nakasone's cybersecurity experience as a benefit of having him on the board, and that he will be placed on OpenAI's board's Safety and Security Committee. None of this seems good for avoiding an arms race.

huw

Is that just a kind of availability bias—in the 'marketplace of ideas' (scare quotes) they're competing against pure speculation about architecture & compute requirements, which is much harder to make estimates around & generally feels less concrete?

I was under the impression that most people in AI safety felt this way—that transformers (or diffusion models) weren't going to be the major underpinning of AGI. As has been noted a lot, they're really good at achieving human-level performance on most tasks, particularly with more data & training, but they can't generalise well and are hence unlikely to be the 'G' in AGI. Rather:

  1. Existing models will be economically devastating for large sections of the economy anyway
  2. The rate of progress across multiple domains of AI is concerning, and the increased funding to AI more generally will flow back into new development domains
  3. Even if neither of these things is true, we still want to advocate for increased controls around the development of future architectures

But please forgive me if I had the wrong impression here.

I'm a bit confused. I was just calling Aschenbrenner unimaginative, because I think trying to avoid stable totalitarianism while bringing about the conditions he identified for stable totalitarianism lacked imagination. I think the onus is on him to be imaginative if he is taking what he identifies as extremely significant risks, in order to reduce those risks. It is intellectually lazy to claim that your very risky project is inevitable (in many cases by literally extrapolating straight lines on charts and saying 'this will happen') and then work to bring it about as quickly and as urgently as possible.

Just to make this clear: by corollary, I would support an unimaginative solution that doesn't involve taking these risks, such as not building AGI. I think the burden for imagination is higher if you are taking more risks, because you could use that imagination to come up with a win-win solution.
