Written by LW user johnswentworth.

This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.

Excerpt from the post:

My goal is usually not to evaluate a single black-box claim in isolation, but rather to build a gears-level model of the system in question. I care about whether hydroxyhypotheticol reduces malignant examplitis only to the extent that it might tell me something about the internal workings of the system. I’m not here to get a quick win by noticing an underutilized dietary supplement; I’m here for the long game, and that means making the investment to understand the system.

Please feel free to,

Initially I talked about hosting a Zoom discussion for those who were interested, but I think it's a bit more than I can take on right now (not so low-commitment). If anyone wants to organize one, comment or PM me and I will be happy to coordinate for future posts.

For now I will include an excerpt from each post, but if anyone wants to volunteer to do a brief summary instead, please get in touch.

Comments
  • From speaking to PhD candidates in various fields, "zombie theories" seem like a pretty big problem. My understanding is that researchers in those fields know how to read between the lines of the research and adjust claims downward, so there is some amount of translation work happening. The problem is that in very interdisciplinary fields this gets complicated, since not every researcher has this ability for every field.
  • From a community building perspective, I wonder whether insights from posts like this could be distilled and shared with academic communities, and what kind of difference it would make. I imagine Effective Thesis could run workshops like this, or people working on metascience field-building could. It might even be a new way of getting people interested in EA (presumably people who worry about things like this would be good fits for EA).
  • I haven't read it yet, but the linked post "Gears-level models are capital investments" looks very interesting.

Suggestion: include the author's username in your post.

A more lightly held suggestion: be more generous with excerpting. I would have liked the excerpt to start at the beginning of the post, with ellipses leading to the part you wanted to highlight. Maybe.

Anyway, thanks for doing this! 

Strong +1 on adding more excerpts. I was even thinking that a complete cross-post would be the most valuable, but I'm not sure if that makes sense here.

I feel a bit weird copy/pasting the whole thing, as I am not able to contact the authors first and I don't think I can assume they would want their post completely reproduced on another forum.

Great ideas, thanks!

(Skimmed)

  • This is a really interesting post and seems reasonable if you're trying to rapidly build an inside view on something.
  • I wish literature reviews looked more like this. In theory they are meant to be holistic overviews of the literature, but in practice they tend to read more like summaries and individual assessments of existing studies.
  • In general, I agree that one should be more skeptical of 'sexy' research. The bigger the claim, the less time someone spends carefully thinking about each component that goes into it (or so I claim).