Milan Griffes (maɪ-lɪn) is a community member who comes and goes - and so he's a reliable source of very new ideas. He used to work at GiveWell, but left to explore ultra-neglected causes (psychedelics for mental health and speculative moral enhancement) and, afaict, also because he takes cluelessness unusually seriously, which makes it hard to be a simple analyst.

He's closer to nearby thinkers like David Pearce, Ben Hoffman, Andres Gomez Emilsson, and Tyler Alterman who don't glom with EA for a bunch of reasons, chiefly weirdness or principles or both.

Unlike most critics, he has detailed first-hand experience of the EA heartlands. For years he has tried to explain his disagreements, but they didn't land, mostly (I conjecture) because of his style - but plausibly also because of an inferential distance it's important for us to bridge. 

He just put up a list of possible blindspots on Twitter, which is very clear:

I think EA takes some flavors of important feedback very well but it basically can't hear other flavors of important feedback [such as:]

  1. basically all of @algekalipso's stuff [Gavin: the ahem direct-action approach to consciousness studies]
  2. mental health gains far above baseline as an important x-risk reduction factor via improved decision-making 
  3. understanding psychological valence as an input toward aligning AI
  4. @ben_r_hoffman's point about seeking more responsibility implying seeking greater control of others / harming ability to genuinely cooperate
  5. relatedly how paths towards realizing the Long Reflection are most likely totalitarian
  6. embodied virtue ethics and neo-Taoism as credible alternatives to consequentialism that deserve seats in the moral congress 
  7. metaphysical implications of the psychedelic experience, esp N, N-DMT and 5-MeO-DMT
  8. general importance of making progress on our understanding of reality, a la Dave. (Though EA is probably reasonably sympathetic to a lot of this tbh)
  9. consequentialist cluelessness being a severe challenge to longtermism
  10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
  11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that
  12. generally too sympathetic to whatever it is we're pointing to when we talk about "neoliberalism"
  13. burnout & lack of robust community institutions actually being severe problems with big knock-on effects, @ben_r_hoffman has written some on this 
  14. declining fertility rate being concerning globally and also a concern within EA (its implications about longrun movement health)
  15. @MacaesBruno's virtualism stuff about LARPing being what America is doing now and the implications of that for effective (political) action 
  16. taking dharma seriously a la @RomeoStevens76's current research direction
  17. on the burnout & institution stuff, way more investment in the direction @utotranslucence's psych crisis stuff and also investment in institutions further up the stack
  18. bifurcation of the experience of elite EAs housed in well-funded orgs and plebeian EAs on the outside being real and concerning
  19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning
  20. psychological homogeneity of folks attracted to EA (on the Big 5, Myers-Briggs, etc) being curious and perhaps concerning re: EA's generalizability
  21. relatedly, the "walking wounded" phenomenon of those attracted to rationality being a severe adverse selection problem
  22. tendency towards image management (e.g. by @80000Hours, @open_phil) cutting against robust internal criticism of the movement; generally low support of internal critics (Future Fund grant-making could help with this but I'm skeptical)

[Gavin editorial: I disagree that most of these are not improving at all / are being wrongly ignored. But most should be thought about more, on the margin.

I think #5, #10, #13, #20 are important and neglected. I'm curious about #2, #14, #18. I think #6, #7, #12, #15 are wrong / correctly ignored. So a great hits-based list.]

Comments

Fwiw, re “… Tyler Alterman who don't glom with EA”:

Just to clarify, I glom very much with broad EA philosophy, but I don’t glom with many cultural tendencies inside the movement, which I believe make the movement an insufficient vehicle for implementing the philosophy. There seems to be an increasing number of former hardcore EA movement folks with the same stance. (Though this is what you might expect as movements grow, change, and/or ossify.)

(I used to do EA movement-building full time. Now I think of myself as an EA who collaborates with the movement from the outside, rather than from the inside.)

Planning to write up my critique and some suggested solutions soon.

Yeah that's what I meant. Looking forward to reading it!

This orientation resonates with me too fwiw. 

To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.

I get a sense of déjà vu reading this criticism, as I feel I've seen sixteen variants of it over the years: EA has this psychological problem, that deep Nietzschean struggle, and fails to value <author's pet interest>.

If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned. Doing the most good doesn't require others' approval. I would even wager that if someone wrote a convincing case for why we should be 'taking dharma seriously', then many would start taking it seriously.

Yes, there are biases and blindspots in EA that lead us to have less accurate beliefs, but by and large I think the primary reason many of these topics aren't taken seriously is that the case for doing so usually isn't all that great.

There are definitely more charitable readings than I give here, but after seeing variants of this criticism again and again, I don't think the charitable interpretations are the most accurate. The EA community has a thousand flaws, but I don't think these are among them.

It would be extremely surprising if all of them were being given the correct amount of attention. (For a start, #10 is vanilla and highly plausible, and while I've heard it before, I've never given it proper attention. #5 should worry us a lot.) Even highly liquid markets don't manage to price everything right all the time, when it comes to weird things.

What would the source of EA's perfect efficiency be? The grantmakers (who openly say that they have a sorta tenuous grasp on impact even in concrete domains)? The perfectly independent reasoning of each EA, including very new EAs? The philosophers, who sometimes throw up their hands and say "ah hold up we don't understand enough yet, let's wait and think instead"? 

For about 4 years I've spent most of my time on EA, and 7 of these ideas are new to me. Even if they weren't, lack of novelty is no objection. Repetition is only waste if you assume that our epistemics are so good that we're processing everything right the first (or fourth) time we see it.

What do you think EA's biases and blindspots are?

One estimate from 2019 is that EA has 2315 "highly-engaged" EAs and 6500 "active EAs in the community."

So a way of making your claims more precise is to estimate how many of these people should drop some or all of what they're doing now to focus on these cause areas. It would also be helpful to specify what sorts of projects you think they'd be stopping in order to do that. If you think it would cause an influx of new members, they could be included in the analysis as well. Finally, I know that some of these issues do already receive attention from within EA (Michael Plant's wellbeing research, for example), so accounting for that would be beneficial.

To be clear, I think it would be best if all arguments about causes being neglected did this. I also think arguments in favor of the status quo should do so as well.

I also think it's important to address why the issue in question is pressing enough that it needs a "boost" from EA relative to what it receives from non-EAs. For example, there's a fair amount of attention paid to nuclear risk already in the non-EA governance and research communities. Or in the case of "taking dharma seriously," which I might interpret as the idea that religious observance is in fact the central purpose of human life, why are the religious institutions of the world doing an inadequate job in this area, such that EA needs to get involved?

I realize this is just a list on Twitter, a sort of brainstorm or precursor to a deeper argument.  That's a fine place to start. Without an explicit argument on the pros and cons of any given point, though, this list is almost completely illegible on its own. And it would not surprise me at all if any given list of 22 interdependent bullet-point-length project ideas and cause areas contained zero items that really should cause EA to shift its priorities.

Maybe there are other articles out there making deeper arguments in favor of making these into EA cause areas. If so, then it seems to me that we should make efforts to center conversation on those, rather than "regressing" to Twitter claims.

Alternatively, if this is where we're at, then I'd encourage the author, or anyone whose intuition is that these are neglected, to make a convincing argument for them. These are sort of the "epistemic rules" of EA.

In fact, I think that's sort of the movement's brand. EA isn't strictly about "doing the most good." How could we ever know that for sure?

Instead, it's about centering issues for which the strongest, most legible case can be made. This may indeed cause some inefficiencies, as you say. Some weird issues that are even more important than the legible ones we support may be ignored by EA, simply because they depend on so much illegible information to make their importance clear.

Hopefully, those issues will find support outside of EA. I think the example of "dharma," or the "implications of psychedelics," are possibly subject to this dilemma. But I personally think EA is better when it confines itself to legible cause areas. There's already a lot of intuition-and-passion-based activism and charity out there.

If anyone thinks EA ought to encompass illegible cause areas, I would be quite interested to read a (legible!) argument explaining why!

Agree with almost all of this except: the bar for proposing candidates should be way way lower than the bar for getting them funded and staffed and esteemed. I feel you are applying the latter bar to the former purpose.

Legibility is great! The reason I promoted Griffes' list of terse/illegible claims is because I know they're made in good faith and because they make the disturbing claim that our legibility / plausibility sensor is broken. In fact if you look at his past Forum posts you'll see that a couple of them are expanded already. I don't know what mix of "x was investigated silently and discarded" and "movement has a blindspot for x" explains the reception, but hey nor does anyone.

Current vs claimed optimal person allocation is a good idea, but I think I know why we don't do 'em: because almost no one has a good idea of how large current efforts are, once we go any more granular than "big 20 cause area".

Very sketchy BOTEC for the ideas I liked:

#5: Currently >= 2 people working on this? Plus lots of outsiders who want to use it as a weapon against longtermism. Seems worth a dozen people thinking out loud and another dozen thinking quietly.

#10: Currently >= 3 people thinking about it, which I only know because of this post. Seems worth dozens of extra nuke people, which might come from the recent Longview push anyway.

#13: Currently around 30? people, including my own minor effort. I think this could boost the movement's effects by 10%, so 250 people would be fine.

#20: Currently I guess >30 people are thinking about it, going to India to recruit, etc. Counting student groups in non-focus places, maybe 300. But this one is more about redirecting some of the thousands in movement building I guess.

That was hard and probably off by an order of magnitude, because most people's work is quiet and unindexed if not actively private.
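To make the shape of this BOTEC explicit, here is a minimal sketch in Python. The numbers are the rough guesses above plus the ~2,315 "highly-engaged" figure quoted earlier; the 10%-boost rule is only stated for #13, and the headcounts I plug in where I only said "dozens" are placeholders, not claims.

```python
# Sketch of the person-allocation BOTEC above. All figures are rough guesses
# taken from (or hedged around) the comment, not data.

ENGAGED_BASE = 2315  # "highly-engaged" EAs, per the 2019 estimate quoted earlier


def justified_headcount(expected_boost: float, base: int = ENGAGED_BASE) -> float:
    """Headcount justified if an idea boosts movement-wide impact by `expected_boost`.

    This is the implicit rule behind "#13 could boost the movement's effects
    by 10%, so 250 people would be fine" (the comment rounds from a base of ~2,500).
    """
    return expected_boost * base


# (idea, guessed current headcount, guessed reasonable headcount)
allocations = [
    ("#5  Long Reflection / totalitarian paths", 2, 24),   # two dozen thinkers
    ("#10 nuclear safety & AI overhang", 3, 36),           # "dozens" - placeholder
    ("#13 burnout & community institutions", 30, round(justified_headcount(0.10))),
    ("#20 psychological homogeneity of EAs", 300, 300),    # redirect existing movement-builders
]

for name, current, proposed in allocations:
    print(f"{name}: current ≈ {current:>3}, proposed ≈ {proposed:>4}")
```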

One constructive project might be to outline a sort of "pipeline"-like framework for how an idea becomes an EA cause area. What is the "epistemic bar" for:

  • Thinking about an EA cause area for more than 10 minutes?
  • Broaching a topic in informal conversation?
  • Investing 10 hours researching it in depth?
  • Posting about it on the EA forum?
  • Seeking grant funding?

Right now, I think that we have a bifurcation caused by feed-forward loops. A popular EA cause area (say, AI risk or global health) becomes an attractor in a way that goes beyond the depth of the argument in favor of it. It's normal and fine for an EA to pursue global health, while there's little formal EA support for some of the ideas on this list. Cause areas that have been normed benefit from accumulating evidence and infrastructure to stay in the spotlight. Causes that haven't benefitted from EA norming languish in the dark.

This may be good or bad. The pro is that it's important to get things done, and concentrating our efforts in a few consensus areas that are imperfect but good enough may ultimately help us organize and establish a track record of success over the long run. In addition, maybe we want to consider "enough founder energy to demand attention" as part of what makes a neglected idea "tractable" to elevate into a cause area.

The con is that it seems like, in theory, we'd want to actually focus extra attention on those neglected (and important, tractable) ideas -- that seems like a self-consistent principle with the heuristics we used to elevate the original cause areas in the first place. And it's possible that conventional EA is monopolizing resources, so that it's harder for someone in 2022 to "found" a new EA cause area than it was in 2008.

So hopefully, it doesn't seem like a distraction from the object-level proposals on the list to bring up this meta-issue.

It's worth pointing out that #5 will not be news to EAs who have come across Bostrom's paper The Vulnerable World Hypothesis, which is featured on the Future of Humanity Institute's website. It also generated quite a bit of discussion here.

As for #10 it sounds like people at CSER are investigating similar issues as per the comment by MMMaas elsewhere in this thread. 

I'm not convinced any of the ideas mentioned are very important blindspots.

Rethink also have something coming on #10 apparently. But this is then some evidence for Griffes' nose.

Even if none of these were blindspots, it's worth actively looking for the ones we no doubt have. (from good faith sources.)

But this is then some evidence for Griffes' nose.

Maybe, but if multiple people have come across an idea then that may be evidence it's not very hard to come across...

Even if none of these were blindspots, it's worth actively looking for the ones we no doubt have. (from good faith sources.)

Absolutely.
 

To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.

...

If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned... I would even wager that if someone wrote a convincing case for why we should be 'taking dharma seriously', then many would start taking it seriously.

These two bits seem fairly contradictory to me.

If you think a position is "nonsense" and you're "happy that the EA community is not spending its time engaging with" it, is someone actually "very welcome" to do a write-up about it on the EA Forum?

In a world where a convincing case can be written for a weird view, should we really expect EAs to take that view seriously, if they're starting from your stated position that the view is nonsense and not worth the time to engage with? (Can you describe the process by which a hypothetical weird-but-correct view would see widespread adoption?)

And, who would take the time to try & write up such a case? Milan said he thinks EA "basically can't hear other flavors of important feedback", suggesting a sense in which he agrees with your first paragraph -- EAs tend to think these views are nonsense and not worth engaging with, therefore there is no point in defending them at length because no one is listening.

I'm reminded of this post which stated:

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism... It was these same people that then tried to prevent this paper from being published.

It doesn't feel contradictory to me, but I think I see where you're coming from. I hold the following two beliefs, which may seem contradictory:

1. Many of the aforementioned blindspots seem like nonsense, and I would be surprised if extensive research into any of them would produce much of value.
2. By and large, people should form and act on their own beliefs rather than deferring to what is accepted by some authority.

There's an endless number of things which could turn out to be important. All else equal, EAs should prioritise researching the things which seem the most likely to turn out to be important.

This is why I am happy that the EA community is not spending time engaging with many of these research directions, as I think they're unlikely to bear fruit. That doesn't mean I'm not willing to change my mind if I were presented a really good case for their importance!

If someone disagrees with my assessment then I would very much welcome research and write-ups, after which I would not be paying the cost of

"should I (or someone else) prioritise researching psychedelics over this other really important thing"

but rather

"should I prioritise reading this paper/writeup, over the many other potentially less important papers?"

If everyone refused to engage with even a short writeup on the topic, I would agree that there was a problem, and to be fair I think there are some issues with misprioritisation due to poor use of proxies such as "does the field sound too weird" or "is the author high status". But I think in the vast majority of cases, what happens is simply that the writeup wasn't sufficiently convincing to justify moving resources away from other important research fields to engage further. This will of course seem like a mistake to the people who are convinced of the topic's importance, but like the correct action to those who aren't.

Thanks for this - a good mix of ideas that are:

(a) well-taken and important IMO and indeed neglected by other EAs IMO (though I wouldn't say they're literally unhearable) -- #5, #13, #18, #20

(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19

(c) actually I think taken into account just fine by EAs -- #12, #14, #22

(d) just wrong / correctly ignored IMO -- #2, #3, #6, #7, #9

(e) nonsensical ...at least to me -- #4, #15, #21

(f) not something I know enough about to comment on but also something I don't think I have a reason to prioritize looking into further (as I can't look into everything) -- #1, #8, #17

Though I guess any good list would include a combination of all six. And of course I could be the wrong one!

I'd particularly really like to hear more about "nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang" as I think this could change my priorities if true. Is the idea just that nuclear weapons are a particularly viable attack vector for a hostile AI?

Gavin (the OP) and I agree that #5, #10, #13, #18, and #20 are important, neglected, or at least intriguing. We also agree on #6, #7, and #15 being wrong/nonsensical/correctly ignored.

Gavin thinks #12 is wrong / correctly ignored but I think it is correct + taken into account just fine by EAs. I'm happy to switch to Gavin's position on this.

So where Gavin and I disagree is on #2 and #14. Gavin finds these intriguing, but I think #2 is wrong / correctly ignored and #14 is correct but taken into account just fine by EAs / not very neglected on the margin.

Note that as of the time of writing Gavin has not opined on #1 (IMO no comment), #3 (IMO wrong), #4 (IMO nonsensical), #8 (IMO no comment), #9 (IMO wrong), #11 (IMO intriguing), #16 (IMO intriguing), #17 (IMO no comment), #19 (IMO intriguing), #21 (IMO nonsensical), or #22 (IMO taken into account just fine by EAs).

2. mental health gains far above baseline as an important x-risk reduction factor via improved decision-making

At first I thought this was incorrect but I think there might be a kernel of truth here - although I have a different framing. 

It has been suggested that boosting economic growth can lower existential risk as, if we're richer, we'll want to spend more on safety. On the other hand when you're poor, you just want to get richer.

Similarly, I don't think societal safety will be much of a priority as long as we're a society that is ravaged by mental health problems. It might be that solving mental health is a necessary precursor to the sort of safety spending/efforts we would need to achieve existential security.
 

I’ve thought about this a bit and don’t think #2 is incorrect, although I could quibble with it as an “important” factor.

I think broadly improving mental health could reduce catastrophic risk if:

A. Catastrophic technologies (i.e., Big Red Buttons) will become cheaper to access.

B. Someone unhinged is likelier to press a Big Red Button.

The connection here doesn't seem mysterious at all to me. Sane people are less likely to end the world.

However, this may be more about reducing variance in mental health than increasing its average level.

Now I wish there were numbers in the OP to make referencing easier

Edit: thanks

Looks like the OP added numbers. Thanks OP!

(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19

10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang 

See discussion in this thread 


11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that 

This one feels like it requires substantial unpacking; I'll probably expand on it further at some point. 

Essentially the existing power structure is composed of organizations (mostly large bureaucracies) and all of these organizations have (formal and informal) immunological responses that activate when someone tries to change them. (Here's some flavor to pump intuition on this.) 

To improve something is to change it. There are few Pareto improvements available on the current margin, and those that exist are often not perceived as Pareto by all who would be touched by the change. So attempts to improve institutional decision-making trigger organizational immune responses by default.  

These immune responses are often opaque and informal, especially in the first volleys. And they can arise emergently: top-down coordination isn't required to generate them, only incentive gradients. 

The New York Times' assault on Scott Alexander (a) is an example to build some intuition of what this can look like: the ascendant power of Slate Star Codex began to feel threatening to the Times and so the Times moved against SSC. 


16. taking dharma seriously a la @RomeoStevens76's current research direction 

I've since realized that this would be best accomplished by generalizing (and modernizing) to a broader category, which we've taken to referring to as valence studies.


19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning 

I'm basically saying that mimesis is a thing. 

It's hard to ground things objectively, so social structures tend to become more like the other social structures around them. 

CSET is surrounded by and intercourses with DC-style think tanks, so it is becoming more like a DC-style think tank (e.g. suiting up starts to seem like a good idea). 

Open Phil interfaces with a lot of mainstream philanthropy, and it's starting to give away money in more mainstream ways.  

Hey Peter, on your last point, I believe the clearest paths from AI to x-risk run directly through either nuclear weapons or bioweapons. Not sure if the author believes the same, but here are some thoughts I wrote up on the topic:

https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aidan-o-gara-s-shortform?commentId=rnM3FAHtBpymBsdT7

Yes, I have a similar position that early-AGI risk runs through nuclear mostly.  I wrote my thoughts on this here: When Bits Split Atoms

Thanks I'll take a look!

Long back and forth between Griffes and Wiblin

That thread branches sorta crazily, here's the current bottom of one path.

Thank you Gavin (algekalipso here).

I think that the most important EA-relevant link for #1 would be this: Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering 

For a summary, see: Review of Log Scales.

In particular, I do think aspiring EAs should take this much more seriously:

An important pragmatic takeaway from this article is that if one is trying to select an effective career path, as a heuristic it would be good to take into account how one’s efforts would cash out in the prevention of extreme suffering (see: Hell-Index), rather than just QALYs and wellness indices that ignore the long-tail. Of particular note as promising Effective Altruist careers, we would highlight working directly to develop remedies for specific, extremely painful experiences. Finding scalable treatments for migraines, kidney stones, childbirth, cluster headaches, CRPS, and fibromyalgia may be extremely high-impact (cf. Treating Cluster Headaches and Migraines Using N,N-DMT and Other Tryptamines, Using Ibogaine to Create Friendlier Opioids, and Frequency Specific Microcurrent for Kidney-Stone Pain). More research efforts into identifying and quantifying intense suffering currently unaddressed would also be extremely helpful. Finally, if the positive valence scale also has a long-tail, focusing one’s career in developing bliss technologies may pay-off in surprisingly good ways (whereby you may stumble on methods to generate high-valence healing experiences which are orders of magnitude better than you thought were possible).

Best,

Andrés :)

I generally agree with Hilary Greaves' view that cluelessness may even act in favour of longtermism (see the section "Response five: Go longtermist").

I also say a bit about her argument here.

Many of these points could have multiple meanings.

Maybe this person (or other proponents of these ideas) could expand some points into a few paragraphs. 

I agree. I would love to have even just a sentence or two explaining what each of these critiques is, possibly with links to more in-depth explanations.

I would give him a poke on Twitter.

  • nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang

What does "overhang" mean in this context?

Existing nuclear weapon infrastructure, especially ICBMs, could be manipulated by a powerful AI to further its goals (which may well be orthogonal to our goals).

Smart things are not dangerous because they have access to human-built legacy nukes.  Smart things are dangerous because they are smarter than you. 

I expect that the most efficient way to kill everyone is via the biotech->nanotech->tiny diamondoid bacteria hopping the jetstream and replicating using CHON and sunlight->everybody falling over dead 3 days after it gets smart.  I don't expect it would use nukes if they were there.

Smart AIs are not dangerous because somebody built guns for them, smart AIs are not dangerous because cars are connected to the Internet, smart AIs are not dangerous because they can steal existing legacy weapons infrastructure, smart AIs are dangerous because they are smarter than you and can think of better stuff to do.

Some back-and-forth on this between Eliezer & me in this thread.

Sure, but I'm not sure this particular argument means working on nuclear safety is as important as working on AI. We could get rid of all nuclear weapons and a powerful AI could just remake them, or make far worse weapons that we can't even conceive of now. Unless we destroy absolutely everything, I'm sure a powerful unaligned AI will be able to wreak havoc, and the best way to prevent that seems to me to be ensuring AI is aligned in the first place!

Compare the number of steps required for an agent to initiate the launch of existing missiles to the number of steps required for an agent to build & use a missile-launching infrastructure de novo.

Not sure why number of steps is important. If we're talking about very powerful unaligned AI it's going to wreak havoc in any case. From a longtermist point of view it doesn't matter if it takes it a day, a month, or a year to do so.

Ah cmon it still shifts the P(doom) distribution a bit. 

Consider us having some solid countermeasures with OODA loops of ~days. If we delay doom by y days, then some number of countermeasures can fire where otherwise they wouldn't get to fire at all.

(Though this assumes an imperfectly timed treacherous turn, before it's unstoppable.)

Maybe. I was thinking that the point at which a rogue AI is powerful enough to take control of existing nuclear weapons is the point at which we're already completely screwed, but I could be wrong. 

NC3 early warning systems are susceptible to error signals, and the chain of command hasn't always been very secure (and may not be today), so it wouldn't necessarily be that hard for a relatively unsophisticated AGI to spoof and trigger a nuclear war:* certainly easier than many other avenues that would involve cracking scientific problems.

(*which is a different thing from hacking to the level of "controlling" the arsenal and being able to retarget it at will; that would probably require a more advanced capability, at which point the risk from the nuclear avenue might be redundant compared to risks from other, more direct avenues).

Incidentally, at CSER I've been working with co-authors on a draft chapter that explores "military AI as cause or compounder of global catastrophic risk", and one of the avenues also involves discussion of what we call "weapons/arsenal overhang", so this is an interesting topic that I'd love to discuss more.

Ok thanks this makes more sense to me now

Right, that covers hard takeoff or long-con treachery - but there are scenarios where we uncover the risk before strict "prepotence". And imo we should maintain a distribution over a big set of such scenarios at the moment.

Yeah, understandable but I would also push back. Mining / buying your own uranium and building a centrifuge to enrich it and putting it into a missile is difficult for even rogue nations like Iran. An advanced AI system might just be lines of code in a computer that can use the internet and output text or speech, but with no robotics system to give it physical capacity. From that point of view, building your own nukes seems much more difficult than hacking into an existing ICBM system.

I agree that the current nuclear weapon situation makes AI catastrophe more likely on the margin, and said as much here (The paragraph "You might reply: The thing that went wrong in this scenario is not the out-of-control AGI, it’s the fact that humanity is too vulnerable! And my response is: Why can’t it be both? ...")

That said, I do think the nuclear situation is a rather small effect (on AI risk specifically), in that there are many different paths for an intelligent motivated agent to cause chaos and destruction. Even if triggering nuclear war is the lowest-hanging fruit for a hypothetical future AGI aspiring to destroy humanity (it might or might not be, I dunno), I think other fruits are hanging only slightly higher, like causing extended blackouts, arranging for the release of bio-engineered plagues, triggering non-nuclear great power war (if someday nuclear weapons are eliminated), mass spearphishing / hacking, mass targeted disinformation, etc., even leaving aside more exotic things like nanobots. Solving all these problems would be that much harder (still worth trying!), and anyway we need to solve AI alignment one way or the other, IMO. :)

Gotcha, thanks!

Could you explain what "Embodied virtue ethics and neo-Taoism as credible alternatives to consequentialism that deserve seats in the moral congress" would mean for cause prioritization in E.A.? I'm not familiar with either of those concepts.

As of last week I'd agree that cluelessness is neglected, but Jan's new sequence is a great corrective for that.

In case anyone missed it, there was a popular AMA about psychedelics research and philanthropy in May 2021.

Technical - I'd recommend posting as a linkpost instead of writing [Link] in the title, by clicking on the hyperlink icon just below the title when you edit. This makes it a bit easier to read the title and to find the link. It also has some SEO benefits which I don't understand.

An anon comments elsewhere (reposted with permission):

These are very specific points. I think they verge more towards opinions he has that he thinks others are unreasonably neglecting - i.e. they don't fit how I would think about attentional blindspots.

I find these interesting (possibly right): #4, #13, #15, #18, #19, #20, #21, #22

These look rather confused to me (and actually quite close to how some people in at least the rationality community already think): #3, #6 
I don’t think these are in a real sense ‘alternatives’. I think they are complementary, interconnected as part of human perception and expression. 
 

#9. consequentialist cluelessness being a severe challenge to longtermism
I think the way people like Amanda Askell and Will MacAskill have zoned in on abstract concepts of uncertainty here (also as in moral uncertainty) is itself somewhat confused, because it looks like it has no sound grounding in reality (it does not regard representational ambiguity or deeper principles for understanding phenomena). I think there is a case for embodied ethics though.
 

#2. mental health gains far above baseline as an important x-risk reduction factor via improved decision-making 
I can see how nurturing mental health is important, but this framing and others come across as rather reductionistic (in a counterproductive way - disconnecting from recognising shared perspectives and feeling care towards others as oneself, as EAs seem to do when they talk about ‘self-care’).

 

This is the Ben Hoffman essay I had in mind: Against responsibility

(I'm more confused about his EA is self-recommending.)

Here's Ben Hoffman on burnout & building community institutions: Humans need places
