There's a new paper on jhana (in Cerebral Cortex) out of Matthew Sacchet's Harvard Center: Fu Zun Yang et al. 2023
Got it, thanks. I'm interested in the cattle analysis because cows yield ~4x more meat than pigs per slaughter, so fewer animals are killed per kilogram of beef, and the comparison could look even better than that once differences in cognition are factored in.
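A minimal back-of-envelope sketch of that comparison in Python. The per-animal yield figures and the cognition multiplier below are my own illustrative assumptions, not numbers from the thread:

```python
# Back-of-envelope: animals slaughtered per kg of meat, cows vs. pigs.
# All figures are illustrative placeholders: yields are ballpark edible
# meat per slaughter, and the cognition weight is a hypothetical
# moral-weight multiplier (pigs often argued to be more cognitively
# sophisticated than cows).

MEAT_PER_ANIMAL_KG = {"cow": 250.0, "pig": 60.0}  # assumed yield per slaughter
COGNITION_WEIGHT = {"cow": 1.0, "pig": 1.5}       # hypothetical multiplier

def slaughters_per_kg(animal: str) -> float:
    """Animals slaughtered to produce one kg of meat."""
    return 1.0 / MEAT_PER_ANIMAL_KG[animal]

def weighted_cost_per_kg(animal: str) -> float:
    """Cognition-adjusted cost per kg: slaughters scaled by moral weight."""
    return slaughters_per_kg(animal) * COGNITION_WEIGHT[animal]

for a in ("cow", "pig"):
    print(a, round(slaughters_per_kg(a), 4), round(weighted_cost_per_kg(a), 4))
```

Under these assumed numbers a cow yields roughly 4x a pig's meat per slaughter (250/60 ≈ 4.2), and the cognition adjustment widens the gap further.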
Apart from pivoting to “x-risk”, what else could we do?
Cultivate approaches to heal psychological wounds and get people above baseline on ability to coordinate and see clearly.
CFAR was pointed in the right direction goal-wise (though its approach was obviously lacking). EA needs more efforts in that direction.
I wrote a thread with some reactions to this.
(Overall I agree with Tyler's outlook and many aspects of his story resonate with my own.)
(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19
10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
See discussion in this thread
11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that
This one feels like it requires substantial unpacking; I'll probably expand on it further at some point.
Essentially, the existing power structure is composed of organizations (mostly large bureaucracies), and all of these organizations have (formal and informal) immune responses that activate when someone tries to change them. (Here's some flavor to pump intuition on this.)
To improve something is to change it. There are few Pareto improvements available on the current margin, and those that exist are often not perceived as Pareto by all who would be touched by the change. So attempts to improve institutional decision-making trigger organizational immune responses by default.
These immune responses are often opaque and informal, especially in the first volleys. And they can arise emergently: top-down coordination isn't required to generate them, only incentive gradients.
The New York Times' assault on Scott Alexander (a) is an example that builds some intuition for what this can look like: the ascendant power of Slate Star Codex began to feel threatening to the Times, and so the Times moved against SSC.
16. taking dharma seriously à la @RomeoStevens76's current research direction
I've since realized that this would be best accomplished by generalizing (and modernizing) to a broader category, which we've taken to calling valence studies.
19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning
I'm basically saying that mimesis is a thing.
It's hard to ground things objectively, so social structures tend to become more like the other social structures around them.
CSET is surrounded by and interacts with DC-style think tanks, so it is becoming more like a DC-style think tank (e.g. suiting up starts to seem like a good idea).
Open Phil interfaces with a lot of mainstream philanthropy, and it's starting to give away money in more mainstream ways.
ACE isn't fucking around.