
Some quick thoughts on AI consciousness work, I may write up something more rigorous later.

Normally when people have criticisms of the EA movement they talk about its culture or point at community health concerns.

The aspect of EA that makes me most sad is that there seem to be a few extremely important issues on an impartial welfarist view that don’t get much attention at all, despite having been identified at some point by some EAs. I do think that EA has done a decent job of pointing at the most important issues relative to basically every other social movement that I’m aware of, but I’m going to complain about one of its shortcomings anyway.

It looks to me like we could build advanced AI systems in the next few years, and in most worlds we have little idea of what’s actually going on inside them. The systems may tell us they are conscious, or say that they don’t like the tasks we tell them to do, but right now we can’t really trust their self-reports. There’ll be a clear economic incentive to ignore self-reports that would create a moral obligation to use the systems in less useful/efficient ways. I expect the number of deployed systems to be very large, and it seems plausible that we lock in the suffering of these systems in a similar way to factory farming. I think there are stronger arguments for the topic’s importance that I won’t dive into right now, but the simplest case is just that the “big if true-ness” of this area seems very high.

My impression is that our wider society and community is not orienting to this topic in a sane way. I don’t remember ever coming across a junior EA seriously considering directing their career to work in this area. 80k has a podcast with Rob Long and a very brief problem profile (that seems kind of reasonable), AI consciousness (iirc) doesn’t feature in EA virtual programs or any intro fellowship that I’m aware of, and there haven’t been many (or any?) talks about it at EAG in the last year. I do think that most organisations could turn around and ask “well, what concrete action do you actually want our audience to take?” and my answers are kind of vague and unsatisfying right now. I think we were at a similar point with alignment a few years ago, and my impression is that it had to be on the community’s mind for a while before we were able to pour substantial resources into it (though the field of alignment feels pretty sub-optimal to me and I’m interested in working out how to do a better job this time round).

I get that there aren’t shovel-ready directions to push people towards right now, but insofar as our community and its organisations brand themselves substantially as the groups identifying and prioritising the world’s most pressing problems, it sure does feel to me like more people should have this topic on their minds.

There are some people I know of dedicating some of their resources to making progress in this area, and I am pretty optimistic about the people involved - the ones that I know of seem especially smart and thoughtful.

I don’t want all of EA to jump into this right now, and I’m optimistic about having a research agenda in this space that I’m excited about, and maybe even a vague plan about what one might do about all this, by the end of this year - after which I think we’ll be better positioned to do field building. I am excited about people who feel especially well placed moving into this area - in particular people with some familiarity with both mainstream theories of consciousness and ML research (particularly designing and running empirical experiments). Feel free to reach out to me or apply for funding at the LTFF.

(quick thoughts, may be missing something obvious)

Relative to the scale of the long-term future, the number of AIs deployed in the near term is very small, so to me it seems like there's pretty limited upside to improving that. In the long term, it seems like we'll have AIs to figure out the nature of consciousness for us.

Maybe I'm missing the case that lock-in is plausible; it currently seems pretty unlikely to me because the singularity seems like it will transform the ways the AIs are running. So in my mind it mostly matters what happens after the singularity.

I'm also not sure about the tractability, but the scale is my major crux. 

I do think understanding AI consciousness might be valuable for alignment; I'm just arguing against work on near-term AI suffering.

Edit: I have a lot of sympathy for the take above, but I’ve tried to write up my response on why I think lock-ins are pretty plausible.

I’m not sure right now whether the majority of downside comes from lock-in, but I think that’s what I’m most immediately concerned about.

I assume by singularity you mean an intelligence explosion or extremely rapid economic growth. My default story for how this happens in the current paradigm involves people using AIs in existing institutions (or institutions that look pretty similar to today’s), in markets that look pretty similar to current markets, which (on my view) are unlikely to care about the moral patienthood of AIs, in ways that mirror current market failures.

On the “markets still exist and we do things roughly how we do now” view - I agree that in principle we’d be better positioned to make progress on problems generally if we had something like PASTA, but I feel like you need to tell a reasonable story for one of:

  • how governance works post-TAI so that you can easily enact improvements like eliminating AI suffering
  • why current markets allow for things like factory farming and slavery but wouldn’t allow for violation of AI preferences

I’m guessing your view is that progress will be highly discontinuous and society will look extremely different post-singularity to how it does now (kind of like going from the pre-agricultural revolution to now, whereas my view is more like the pre-industrial revolution to now).

I’m not really sure where the cruxes are on this view or how to reason about it well, but my high-level argument is that the “god-like AGI which has significant responsibility but still checks in with its operators” will still need to make some trade-offs across various factors, and unless it’s doing some CEV-type thing, outcomes will be fairly dependent on the goals that you give it. It’s not clear to me that the median world leader or CEO gives the AGI goals that concern the AI’s wellbeing (or its subsystems’ wellbeing) - even if it’s relatively cheap to evaluate. I am more optimistic about AGI controlled by a person sampled from a culture that has already set up norms around how to orient to the moral patienthood of AI systems than one that needs to figure it out on the fly. I do feel much better about worlds where some kind of reflection process is overdetermined.

My views here are pretty fuzzy and are often influenced substantially by thought experiments like “If a random tech CEO could effectively control all the world’s scientists, have them run at 10x speed, and had 100 trillion dollars, does factory farming still exist?”, which isn’t a very high epistemic bar to beat. (I also don’t think I’ve articulated my models very well and I may take another stab at this later on.)

I have some tractability concerns, but my understanding is that few people are actually trying to solve the problem right now, and when few people are trying it’s pretty hard for me to get a sense of how tractable a thing is, so my priors on similarly shaped problems are doing most of the work (which leaves me feeling quite confused).

I'm really glad you wrote this; I've been worried about the same thing. I'm particularly worried at how few people are working on it given the potential scale and urgency of the problem. It also seems like an area where the EA ecosystem has a strong comparative advantage — it deals with issues many in this field are familiar with, requires a blend of technical and philosophical skills, and is still too weird and nascent for the wider world to touch (for now). I'd be very excited to see more research and work done here, ideally quite soon.

Very strong +1 to all this. I honestly think it's the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.

The 80k job board has too much variance.

(Quickly written, will probably edit at some point in future)

Jobs in the main 80k job board can range from (in my estimation) negligible value to amongst the best opportunities I'm aware of. I have also seen a few jobs that I think are probably actively harmful (e.g. token alignment orgs who are trying to build AGI where the founders haven't thought carefully about alignment - based on my conversations with them).

I think a helpful orientation towards jobs on the job board is "at least one person with EA values who happens to work at 80k thinks this is worth signal boosting" - and NOT "EA/80k endorses all of these jobs" - so potential applicants should put in a lot more thought of their own.

Jobs are also on the board for a few different reasons, e.g. building career capital vs direct impact vs ..., and there isn't much info about why each job is there in the first place.

I think 80k does try to give more of this vibe than people pick up on. I don't mean to imply that they are falling short in an obvious way.

I also think that the jobs board is more influential than 80k thinks. Explicit endorsements of organisations from core EA orgs are pretty rare, and I think they'd be surprised how many young EAs over-update on their suggestions (though I'm only medium confidence in it being pretty influential).

My concrete improvement would be to separate jobs into a few different boards according to the degree to which they endorse the organisation.

One thing I find slightly frustrating is that the response I have heard from 80k staff to this is that the main reason they don't do it is around managing relationships with the organisations (which could be valid). Idk if it's the right call, but I think it's a little sus - I think people are too quick to jump to the nice thing that doesn't make them feel uncomfortable over the impact-maximising thing (pin to write more about this in future).

One error that I think I'm making is criticising an org for doing a thing that is probably much better than not doing the thing, even if I think it's leaving some value on the table. I think that this is kind of unhealthy and incentivises inaction. I'm not sure what to do about this other than flag that I think 80k is great, as is most of the stuff they do, and I'd rather orgs had a policy of occasionally producing things that I feel moderately about if this helps them do a bunch of cool stuff, than underperform and not get much done (pin to write more about this in future).

Agree!

My best idea for solving this is making an alternative view of 80k's job board that has some obvious reasons to prefer it, and adding features to it like "here's a link to the org's AMA post", where I hope the community can comment on things like "this org is trying to build an AGI with little concern for safety", and lots of people can upvote it. No political problems for 80k. Lots of good high-quality discussions. Hopefully.

What do you think?

Regarding some jobs being there just for building career capital - I only learned about this a few days ago and it kind of worries me. I don't have good ideas on how to solve it.

>it kind of worries me
Is that because you think the job board shouldn't list career capital roles,  because it wasn't obvious that the roles were career capital-related, or something else?

What worries me:

I think lots of people take (and took) a job from 80k's board:

  • hoping to do something impactful,
  • in fact doing something neutral or perhaps (we could discuss this point) actively harmful,
  • unaware that this is the situation.

 

What do you think? (does this seem true? does it seem worrying?)

In case it's helpful, the first thing below the title on the job board says:
>Some of these roles directly address some of the world’s most pressing problems, while others may help you build the career capital you need to have a big impact later.

I'd be interested in any ideas you had for communicating more clearly that a bunch of the roles are there for a mix of career capital and impact reasons.  Giving our guess of the extent to which each role is being listed for career capital vs impact reasons isn't feasible for various reasons unfortunately.  

TL;DR: I think this is very under-communicated.

You have that line there, but I didn't notice it in years, and I recently talked to other people who didn't notice it and were also very surprised. The only person I think I talked to who maybe knew about it is Caleb, who wrote this shortform.

Everyone (I talked to) thinks 80k is the place to find an impactful job.

Maybe the people I talk to are a very biased sample somehow, it could be, but they do include many people who are trying to have a high impact with their career right now.

I checked if people know this by opening a poll for the EA Twitter community:

Could you say more on why it's not feasible? Maybe it's something we could solve?

Just saying, filtering the jobs by org does sound good to me (in almost all situations), in case that's the bottleneck.

"This org - we think it's impactful. That org - just career building"

Oh this is a cool idea! I endorse this on the current margin and think it's cool that you are trying this out.

I think that ideally a high context person/org could do the curation and split this into a bunch of different categories based on their view (ideally this is pretty opinionated/inside viewy).

Next idea: Have a job board with open vetting, where anyone can comment or disagree with the impact analysis, including the company itself.

What do you think?

I think linking to organisations' AMAs on the EA Forum is a neat idea!  Thanks for sharing.  I've added it to our list of feature ideas we might build in the future.  

  1. Thank you!
  2. I admit I'm a bit worried when I hear "might build in the future" about a feature that seems very small to me (I could add it to my own version), and a part of me is telling me this is your way of saying you actually never want to build it. I'm not sure how to phrase my question exactly.. maybe "if someone else would do the dev work, would you be happy just putting it in, or is there another bottleneck?"
  3. Also excuse me for my difficulty understanding subtext, I am trying

FYI there is a super-linear prize for an automated jobs board. https://www.super-linear.org/prize?recordId=recSFgbnu7VzAHCqY

Yeah, I have an automation to put the tweets in an Airtable, and something to export the past tweets; I just gotta put them together.

Do note that it doesn't solve the problem of high variance

The next feature I want to get is voting, which will work on that problem.

Oh, may I please try to convince you not to create your own voting system?

 

Initial reasons, as an invitation to push back:

Commenting is more important than voting

If, for example, someone thinks a specific org is actively harmful, I think:

Good situation: Someone writes a comment with the main arguments, references, and so on.

Bad situation: Someone needs to get lots of people to downvote the position. (Or people don't notice) (or the org gets lots of people to upvote) (or other similar situations)

Upvoting comments is better than both

And the double "upvote/downvote" + "agree/disagree" is even better, where the best comments float up.

See how conversations like that in the forum/lesswrong look. This is unusually good for the internet, and definitely better than upvoting/downvoting alone.

Is this system perfect? No, but it's better than anything I've seen, definitely better than upvotes alone.

[Reducing friction for people to voice their opinion] is key

+ For platforms like this, the number of active users matters; having a critical mass is important.

So:

Adding a new platform is friction.

I vote for using an existing platform. Like the EA Forum.

  • Maybe a post without the "frontpage" tag
  • Maybe a comment on a post

These conversations already fit the EA Forum

It's discussing the impact of the org.

(I wouldn't be too surprised if there's a good reason to use something else, but I doubt it would be a good idea to create a NEW platform)

I have tried to convince the forum team of this, using the methods they asked to be convinced via. There has been some move to put jobs on the forum, but not in a searchable way. I think a new site that pushes better norms would be better.

I largely agree with the object level points you make but I don't see why you wouldn't want a new org with better processes.

https://forum.effectivealtruism.org/posts/uxfWrFNH7jSSGhkkS/unofficial-pr-faq-posting-more-jobs-to-the-forum-but-they

Any chance you'd share what you don't like?

That they posted as if they already have the job features even though they don't?

(btw I don't recommend using the forum's FILTERING/SEARCHING, I'd only use their commenting and upvoting. And login)

It's not searchable or filterable, and you can't take a feed from it.

I see.

So indeed I wouldn't use the forum for that. I'd only link from [something filterable and so on] to forum comments.

What do you think?

Very half baked

Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?

It seems like many more people are on board with the idea that transformative AI may come soon, let’s say within the next 10 years. This pretty clearly has ramifications for people working on longtermist cause areas, but I think it should probably affect some neartermist cause prioritisation as well.

If you think that AI will go pretty well by default (which I think many neartermists do) I think you should expect to see extremely rapid economic growth as more and more of industry is delegated to AI systems.

I’d guess that you should be much less excited about interventions like deworming or other programs that are aimed at improving people’s economic position over a number of decades. Even if you think the economic boosts from deworming and AI will stack, and you won’t have sharply diminishing returns on well-being with wealth, I think you should be especially uncertain about your ability to predict the impact of actions in the crazy advanced-AI world (which would generally make me more pessimistic about how useful the thing I’m working on is).

I don’t have a great sense of what the neartermists who think AI will go well should do. I’m guessing some could work on accelerating capabilities, though I think that’s pretty uncooperative. It’s plausible that saving lives now is more valuable than before if you think they might be uploaded, but I’m not sure there is that much of a case for this being super exciting from a consequentialist worldview when you can easily duplicate people. I think working on ‘normie’ AI policy is pretty plausible, or trying to help governments orient to very rapid economic growth (maybe in a similar way to how various nonprofits helped governments orient to covid).

To significantly change strategy, I think one would need to not only believe "AI will go well" but specifically believe that AI will go well for people of low-to-middle socioeconomic status in developing countries. The economic gains from recent technological explosions (e.g., industrialization, the computing economy) have not lifted all boats equally. There's no guarantee that gaining the technological ability to easily achieve certain humanitarian goals means that we will actually achieve them, and recent history makes me pretty skeptical that it will quickly happen this time.

I’m not an expert but I’d be fairly surprised if the Industrial Revolution didn’t do more to lift people in LMICs out of poverty than any known global health intervention even if you think it increased inequality. Would be open to taking bets on concrete claims here if we can operationalise one well.

I think the Industrial Revolution and other technological explosions very likely did (or will) have an overall anti-poverty impact . . . but I think that impact happened over a considerable amount of time and was not of the magnitude one might have hoped for. In a capitalist system, people who are far removed from the technological improvements often do benefit from them without anyone directing effort at that goal. However, in part because the benefits are indirect, they are often not quick. 

So the question isn't "when will transformational AI exist" but "when will transformational AI have enough of an impact on the wellbeing of economic-development-program beneficiaries that it significantly undermines the expected benefits of those programs?" Before updating too much on the next-few-decades impact of AI on these beneficiaries, I'd want to see concrete evidence of social/legal changes that gave me greater confidence that the benefits of an AI explosion would quickly and significantly reach them. And presumably the people involved in this work modeled a fairly high rate of baseline economic growth in the countries they are working in, so massive AI-caused economic improvement for those beneficiaries (say) 30+ years from now may have relatively modest impact in their models anyway.

>Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?

I think so, see here or here for a bit more discussion on this

>If you think that AI will go pretty well by default (which I think many neartermists do)

My guess/impression is that this just hasn't been discussed by neartermists very much (which I think is one sad side-effect from bucketing all AI stuff in a "longtermist" worldview)

If you want there to be more great organisations, don’t lower the bar

I sometimes hear a take along the lines of “we need more founders who can start organisations so that we can utilise EA talent better”. People then propose projects that make it easier to start organisations.

I think this is a bit confused. I think the reason that we don’t have more founders is that few people have deep models in some high-leverage area and a vision for a project. I don’t think many projects aimed at starting new organisations are really tackling this bottleneck at its core; instead they lower the bar by helping people access funding, or by making founders appear better positioned than they actually are.

I think in general people that want to do ambitious things should focus on building deep domain knowledge, often by working directly with people with deep domain knowledge. The feedback loops are just too poor within most EA cause areas to be able to learn effectively by starting your own thing. This isn’t always true, but I think it’s more often than not true for most new projects that I see.

I don’t think the normal startup advice that running a startup will teach you a lot applies well here. Most startups are trying to build products that their investors can directly evaluate. They often have nice metrics, like revenue and daily active users, that track their goals reasonably well. Most EA projects lack credible proxies for success.

Some startups, such as bio startups, also lack credible success proxies. I think bio startups are particularly difficult for investors to evaluate, and many experienced VCs avoid the sector entirely unless they have staff with bio PhDs - and even then it’s still pretty hard to evaluate the niche area the startup is working in. Anecdotally, moderately successful bio startups seem much more likely to have a BS product than the average tech startup at a similar level of funding/team size.

Of course, I do think there are founders who are above the bar, but starting a new project is often very hard and a poor learning environment, and I would probably prefer that the bar was a bit higher and there were fewer nudges towards starting new things for early-career people.

Why handing over vision is hard.

I often see projects of the form [come up with some ideas] -> [find people to execute on ideas] -> [hand over the project].

I haven't really seen this work very much in practice. I have two hypotheses for why.

  1. The skills required to come up with great projects are pretty well correlated with the skills required to execute on them. If someone wasn't able to come up with the idea in the first place, it's evidence against them having the skills to execute well on it.

  2. Executing well looks less like firing a cannon and more like deploying a heat-seeking missile. In reality most projects are a sequence of decisions that build on each other, and the executors need to have the underlying algorithm to keep the project on track. In general, when someone explains a project they communicate roughly where the target is and the initial direction to aim in, but it's much harder to hand off the algorithm that keeps the missile on track.

I'm not saying separating out ideas and execution is impossible, just that it's really hard, and good executors are rare and very valuable. Good ideas are cheap and easy to come by, but good execution is expensive.

A formula that I see work well more often is [person has idea] -> [person executes well on their own idea until they are doing something fairly repetitive or otherwise hand-over-able] -> [person hands over project to competent executor].

I agree with this and I appreciate you writing this up.  I've also been mentioning this idea to folks after Michelle Hutchinson first mentioned it to me. 

The importance of “inside view excitement”

Another model I regularly refer to when advising people on projects to pursue. Quickly written - may come back and try to clarify this later.

I think it’s generally really important for people to be inside view excited about their projects. By which I mean, they think the project is good based on their own model of how the project will interact with the world.

I think this is important for a few reasons. The first obvious one is that it’s generally much more motivating to work on things you think are good.

The second, and more interesting, reason is that if you are not inside view excited, I think (generally speaking) you don’t actually understand why your project will succeed, which makes it hard to execute well on it. When people aren’t inside view excited about their project, I get the sense that they either have the model and don’t actually believe the project is good, or they are just deferring to others on how good it is, which makes it hard to execute.

A quickly written model of epistemic health in EA groups I sometimes refer to

I think that many well-intentioned EA groups do a bad job cultivating good epistemics. By this I roughly mean that the culture of the group does not differentially advantage truth-seeking discussions or other behaviours that help us figure out what is actually true, as opposed to what is convenient or feels nice.

I think that one of the main reasons for this is poor gatekeeping of EA spaces. I do think groups do more and more gatekeeping, but they are often not selecting on epistemics as hard as I think they should be. I’m going to say why I think this is and then gesture at some things that might improve the situation. I’m not going to talk at this time about why I think it’s important - but I do think it’s really, really important.

EA group leaders often exercise a decent amount of control over who should be part of their group (which I think is great). Unfortunately, it’s much easier to evaluate what conclusion a person has come to than how good their reasoning processes were. So "what a person says they think" becomes the filter for who gets to be in the group, as opposed to how they think. Intuitively, I expect a positive feedback loop where groups become worse and worse epistemically, as people are incentivised to reach a certain conclusion to be part of the group, and future group leaders are drawn from a pool of people with bad epistemics and then reinforce this.

If my model is right, there are a few practical takeaways:

  • be really careful about who you make a group leader or get to start a group (you can easily miss a lot of upside that’s hard to undo later)
  • make it very clear that your EA group is a place for truth-seeking discussion, potentially at the expense of being welcoming or inclusive
  • make rationality/epistemics a more core part of what your group values - idk exactly how to do that, but I think a lot of it is making it clear that this is what your group is in part about

I’m hoping to have some better takes on this later. I would strongly encourage the CEA groups team to think about this, along with EA group leaders. I don’t think many people are working in this area, though I’d also be sad if people filled up the space with low-quality content - so think really hard about it and try to be careful about what you post.

There seems to be anxiety and concern about EA funds right now. One thread is here.

Your profile says you are the head of EA funds.

Can you personally make a statement to acknowledge these concerns, say this is being looked into, or anything else substantive? I think this would be helpful.

>I’m not going to talk at this time about why I think it’s important - but I do think it’s really, really important.


As someone both trying to start a group and to find someone else to run it so I can move to other places, I'm really curious about your perspective on this.
In my model, a lot of the value of a group comes from helping anyone who's vaguely interested in doing good effectively to better achieve their goals, and from introducing them to online resources, opportunities, and communities.

I would guess that even if the leader has poor epistemics, they can still do a good enough job of telling people: "EA exists, here are some resources/opportunities you might find useful, happy to answer your 101 questions".

I have heard a similar take from someone on the CEA groups team, so I would really want to understand this better.

One of my criticisms of criticisms

I often see public criticisms of EA orgs claiming poor execution on some object-level activity, or falling short on some aspect of the activity (e.g. my shortform about the 80k jobs board). I think this is often unproductive.

In general, I think we want to give feedback to change the organisation's policy (decision-making algorithm), and maybe the EA movement's policy. When you publicly criticise an org on some activity, you should be aware that you are disincentivising the org from doing stuff in general.

Imagine the case where the org was choosing between scrappily running a project to get data and some of the upside value, versus carefully planning and failing to execute fully. I think in these cases you should react differently, and from the outside it is hard to know which situation the org was in.

If we also criticised orgs for not doing enough stuff I might feel differently, but this is an extremely hard criticism to make unless you are on the inside. I'd only trust a few people who didn't have inside information to do this kind of analysis.

Maybe a good idea would be to describe the amount of resources that would have had to go into the project for you to see the outcome as reasonably successful? Idk, it seems hard to be well calibrated.

I expect some people to react negatively to this, and think that I am generally discouraging of criticism. I think that I feel moderately about most criticism, neither helpful nor particularly unhelpful. The few pieces of thoughtful criticism I see written up I think are very valuable, but thoughtful criticism in my view is hard to come by and requires substantial effort.

I adjust upwards on EAs who haven't come from excellent groups

I spend a substantial amount of my time interacting with community builders and doing things that look like community building.

It's pretty hard to get a sense of someone's values, epistemics, agency .... by looking at their CV. A lot of my impression of people that are fairly new to the community is based on a few fairly short conversations at events. I think this is true for many community builders.

I worry that there are some people who were introduced to some set of good ideas first, and then people use this as a proxy for how good their reasoning skills are. On the other hand, it's pretty easy to be in an EA group where people haven't thought hard about different cause areas/interventions/... and come away with the mean take, which isn't very good despite being relatively good reasoning-wise.

When I speak to EAs I haven't met before I try extra hard to get a sense of why they think x and how reasonable a take that is, given their environment. This sometimes means I am underwhelmed by people who come from excellent EA groups, and impressed by people who come from mediocre ones.

You end up winning more Caleb points if your previous EA environment was 'bad' in some sense, all else equal.

(I don't defend why I think a lot of the causal arrow points from the EA environment quality to the EA quality - I may write something on this, another time.)

It's all about the Caleb points man

(crosspost of a comment on imposter syndrome that I sometimes refer to)

I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredibly useful. 

So one strategy is to just try and send lots of information that might help the community work out whether I can be useful, into the world (by doing my job, taking actions in the world, writing posts, talking to people ...) and trust the EA community to be tracking some of the right things. 

I find it helpful to sometimes be in a mindset of "helping people reject me is good, because if they reject me then it was probably positive EV, and that means that the EA community is winning, and therefore I am winning (even if I am locally not winning)".

More EAs should give rationalists a chance

My first impression of meeting rationalists was at an AI safety retreat a few years ago. I had a bunch of conversations that were decidedly mixed and made me think that they weren’t taking the project of doing a large amount of good seriously, reasoning carefully (as opposed to just parroting rationalist memes), or any better at winning than the standard EA types that I felt were more ‘my crowd’.

I now think that I just met the wrong rationalists early on. The rationalists that I most admire:

  • Care deeply about their values
  • Are careful reasoners, and actually want to work out what is true
  • Are able to disentangle their views from themselves, making meaningful conversations much more accessible
  • Are willing to seriously consider weird views that run against their current views

Calling yourself a rationalist or an EA is a very cheap signal, and I made an error early on (insensitivity to small sample sizes etc.) in dismissing their community. Whilst there is still some stuff that I would change, I think that the median EA could move several steps in a ’rationalist’ direction.

Having a rationalist/scout mindset + caring a lot about impact are pretty correlated with me finding someone promising. It’s not essential to having a lot of impact, but I am starting to think that EA is doing the altruism (A) part of EA super well and the rationalists are doing the effective (E) part of EA super well.

My go to resources are probably:

  • The Scout Mindset - Julia Galef
  • The Codex - Scott Alexander
  • The Sequences Highlights - Eliezer Yudkowsky/LessWrong
  • The LessWrong highlights

‘EA is too elitist’ criticisms seem to be more valid from a neartermist perspective than a longtermist one

I sometimes see criticisms around

  • EA is too elitist
  • EA is too focussed on exceptionally smart people

I do think that you can have a very outsized impact even if you're not exceptionally smart, dedicated, driven etc. However I think that from some perspectives focussing on outliery talent seems to be the right move.

A few quick claims that push towards focusing on attracting outliers:

  • The main problems that we have are technical in nature (particularly AI safety)
  • Most progress on technical problems historically seems to be attributable to a surprisingly small set of the total people working on the problem
  • We currently don't have a large fraction of the brightest minds working on what I see as the most important problems

If you are more interested in neartermist cause areas I think it's reasonable to place less emphasis on finding exceptionally smart people. Whilst I do think that very outliery-trait people have a better shot at very outliery impact, I don't think that there is as much of an advantage for exceptionally smart people over very smart people.

(So if you can get a lot of pretty smart people for the price of one exceptionally smart person then it seems more likely to be worth it.)

This seems mostly true to me by observation, but I have some intuition that motivates this claim.

  • AIS is a more novel problem than most neartermist causes; there's a lot of work going into getting more surface area on the problem as opposed to moving down a well-defined path.
  • Being more novel also makes the problem more first-mover-y, so it seems important to start with a high density of good people to push it onto good trajectories.
  • The resources for getting up to speed on the latest stuff seem less good than in more established fields.