(June 2023) Open to roles, probably at the intersection of tech and ops.
Another reason not to run at maximum capacity whenever you have the chance is to conserve the ability to 'sprint' when you actually need or want to.
See also various related thoughts, from the latest 80k After Hours episode with Luisa Rodriguez interviewing Hannah Boettcher:
Luisa:
Concretely, I very much have this: if it’s the end of a workday, and I have any more energy and I’m not totally spent, I could obviously do a bit more work. Or just money: like, if I have any savings, that feels wrong. And then what do we do?
Hannah:
Like, every time you get a unit of mental health from the wellness factory and you’re like, “Immediately distribute to impact!” then you’re basically almost empty, or like a little bit in the red all of the time. And that’s just very risky and costly. It’s obviously painful, but it’s also going to put you at risk for burnout and for needing to take longer breaks to basically recover and care for yourself. [...] This is a hard one though. It feels so compelling. And the thing I remind myself of is that that feeling of compellingness is simply an incomplete description of what’s true. It does feel really compelling to use 100% of my capacity, and I do really feel that urge to allocate capacity whenever it shows up. But if and when I ever do this — which I do occasionally try versions of it — I end up feeling the effects, and it is not preferable. It’s almost like I have to tell myself, “You’re not well calibrated on this. You think that you want zero slack and ease, but you want an amount.”
[...] But what I recognise is that at times when I’ve packed my schedule as full as the numbers allow, I end up being, I think, a less thoughtful therapist. I think there is a risk of resenting the work, which I very much do not want to do. I actually genuinely love therapy and being a therapist.
I feel like it’s probably the case that there are a bunch of examples in other contexts where systems need slack: like businesses that have budgets where they have to build in 10% budget wiggle room so that they don’t overspend or something. I wonder if having those models or those examples closer to hand would help me be like, “This is like an established pattern in the world, where no one thinks that businesses can run at 100% capacity and never have issues. They all choose to do this thing called slack. And maybe we should just trust — including for-profits who want to maximise profit — that they are doing what’s best for the company or for the aim.”
The other big [indicator of how to know when perfectionist tendencies go from helpful to unhelpful] would be if it’s costing you in ways that are greater than the benefit of the marginal perfecting. The sorts of costs I’m thinking of are around time and opportunity costs, and also particularly losing clarity on what does and doesn’t need to be perfected. An analogy here is frugality: if you find yourself overspending on a lot of things, and you’re like, “I need to make a change,” then the corrective is not “buy nothing going forward”; it’s about conserving your resources so that you can use money to purchase more value, and not use it when it doesn’t purchase more value.
I think perfectionism or optimising or things in this neighbourhood are the same, where we want to retain the option of judiciously applying the marginal rigour and precision and all the rest — when it is actually going to buy us more value. We have to discern when that is and isn’t the case, because otherwise we’ll run out of resources.
I'm not sure how exactly this would best port across to the computer analogy!
@JP Addison, are you open to me working on a PR that offers this to authors as a toggleable option?
I have just submitted a PR for this. (And I have no association with Omega.)
Edit: It was approved and merged 😊
Thanks, Nick.
I wanted to aim high with cause diversity, as it seemed vital to convey the important norm that EA is a research question rather than a pile of 'knowledge' one is supposed to imbibe. I consider us to have failed to meet our ambitions as regards cause diversity, and would advise future organisers to move on this even earlier than you think you need to. It seems to me that an EAGx (aimed more towards less experienced people) should do more to showcase cause diversity than an EA Global.
From our internal content strategy doc:
Highest priority:
- AI risk
- Global health (can include mental health) and poverty
- Biosecurity
Second priority:
- Animals, especially alternative proteins
- Global priorities research
Aspiring to include:
- Nuclear war
- Epistemics and institutional decision-making
  - Encompassing rationality, forecasting, and IIDM
- Climate change
- Great power conflict
From the retrospective:
In the event, we had a preponderance of AI and meta-EA-related content, with 9 and 15 talks/workshops respectively; we had 4 talks or workshops on each of animals, global health & development, and biosecurity; and 6 on existential risks besides those focused on biosecurity and AI. (These numbers exclude meetups.) This was more lopsided than we had aimed for.
In the end, there are limits to what you can do to control the balance of the programme, as it depends on who responds. The most important tips are to start early and to keep actively tracking the balance. People within the EA movement and people who work on movement-building are more likely to respond.
Some data on response rates (showing basically that 'meta-EA' is the easiest to book):
| Cause area | Percent interested | Total invited |
|---|---|---|
| AI | 39.13% | 23 |
| Animals | 46.15% | 13 |
| GH&D | 42.86% | 14 |
| Meta | 65.38% | 26 |
| Other | 25.00% | 4 |
| All GCRs except AI | 47.83% | 23 |
| Biorisk | 33.33% | 15 |
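For what it's worth, the implied head-counts are easy to back out from those percentages. A minimal sketch in Python, assuming 'percent interested' simply means interested invitees divided by total invited (my reading of the table):

```python
# Back out the implied number of interested speakers per cause area,
# assuming "percent interested" = interested / total invited.
invites = {
    "AI": (39.13, 23),
    "Animals": (46.15, 13),
    "GH&D": (42.86, 14),
    "Meta": (65.38, 26),
    "Other": (25.00, 4),
    "All GCRs except AI": (47.83, 23),
    "Biorisk": (33.33, 15),
}

for cause, (pct, total) in invites.items():
    interested = round(total * pct / 100)  # all come out to whole numbers
    print(f"{cause}: {interested} of {total} invited were interested")
```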
What explains the high rate of inviting AI people? From memory, I might explain it this way: we had someone from the AI safety field working with us on content, and half-way through I asked him to specialize on AI content in particular. While my attention (as content lead and team lead) was split among causes and among non-content tasks, his was not, so AI received more attention overall. We then (over-?)compensated for a dearth of content not long before the conference by sending out a large number of invites based on the lists we'd compiled, which were AI-heavy by that point. In other words, under quite severe time and capacity constraints, we chose to compromise on cause diversity for the sake of having abundant content.
What I'll say should be taken more as representative of how I've been thinking than of how CEA or other people think about it.
These were our objectives, in order:
1: Connect the EA UK community.
2: Welcome and integrate less well-connected members of the community. Reduce the social distance within the UK EA community.
3: Inspire people to take action based on high-quality reasoning.
The main emphasis was on 1, where the theory of impact is something like:
The EA community will achieve more by working together than by working as individuals, and helping people build connections makes collaboration more likely. Some valuable kinds of connections might be: mentoring relationships, coworking, cofounding, research collaborations, and not least friendships (for keeping up one's motivation to do good).
We added other goals beyond connecting people, since a lot of changes to plans will come from one-off interactions (or even exposures to content); think, for example, of someone deciding to apply for funding after attending a workshop on how to do so.
Plausibly, though, longer-lasting, deeper connections dominate the calculation, because of the 'heavy tail' of deep collaborations, such as an intern hire I heard of that resulted from this conference.
I'll tag @OllieBase (CEA Events) in case he wants to give his own answer to this question.
Some reasons could be:
a) The purpose of the other questions is to inform the initial sift, not later stages of the application. If you have been referred by a trusted colleague, the optional questions add nothing further to that sift, so answering them would be a waste of applicants’ time.
b) Saving applicants’ time on the initial application makes you likely to receive more applications to choose from.
However, these referrals could indeed have a nepotistic effect by allowing networking to have more of an influence on the ease of getting to stage 2.
I was referred to apply to this job by someone who was close to another hiring round I was in (where I reached the final stage but didn’t get an offer).