Open Philanthropy re-grantee (formerly an FTX Future Fund re-grantee) from Kuala Lumpur, spending a year on career exploration. Previously I spent six years doing data analytics, business intelligence, and knowledge and project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA. Currently at Trajan House in Oxford.
I've been at the periphery of EA for a long time: my introduction to it in 2014 was via the "dead children as a unit of currency" essay, I started donating shortly thereafter, and I've been "soft-selling" basic EA ideas for years. But I only started actively participating in the community in 2021, when I joined EA Malaysia. Given my career background, it perhaps makes sense that my interests center on improving decision-making via better value quantification, distillation and communication, and collaboration, including but not limited to cost-effectiveness analysis, local priorities research, etc.
I'd love to get help with my career exploration:
Do reach out if you're interested in talking about, or collaborating on,
Perhaps it's less surprising given who counted as 'superforecasters' (cf. magic9mushroom's comment here)? I'm not sure how much their personal anecdote as a participant generalizes, though.
Regarding the specific problems of specific large organizations, maybe there's something here to do with bureaucratic mazes? For instance, Raemon's post Recursive Middle Manager Hell has a section titled "Implications for EA and AI" that reads:
I think it is sometimes appropriate to build large organizations, when you're trying to do a reasonably simple thing at scale.
I think most effective altruist and AI alignment organizations cannot afford to become mazes. Our key value propositions are navigating a confusing world where we don't really know what to do, and our feedback loops are incredibly poor. We're not sure what counts as alignment progress, many things that might help with alignment also help with AI capabilities and push us closer to either hard takeoff or a slow rolling unstoppable apocalypse.
Each stage of organizational growth triggers a bit less contact with reality, a bit more incentive to frame things so they look good.
I keep talking to people who think "Obviously, the thing we need to do is hire more. We're struggling to get stuff done, we need more people." And yes, you are struggling to get stuff done. But I think growing your org will diminish your ability to think, which is one of your rarest and most precious resources.
Look at the 5 example anecdotes I give, and imagine what happens, not when they are happening individually, but all at once, reinforcing each other. When managers are encouraging their researchers to think in terms of legible accomplishments. When managers are encouraging their researchers or programmers to lie. When projects acquire inertia and never stop even if they're pointless, or actively harmful – because they look good and even a dedicated rationalist feels immense pressure to make up reasons his project is worthwhile.
Imagine if my silly IT project had been a tool or research program that turned out to be AI capabilities accelerating, and then the entire company culture converged to make that difficult to stop, or talk plainly about, or even avoid actively lying about it.
What exactly do we do about this is a bigger post. But for now: If your instinct is to grow – grow your org, or grow the effective altruism or AI safety network, think seriously about the costs of scale.
If participating in multiple activities gives the community-builder a strong cross-pollination effect, I could assume
Do you have a sense of what decision-guiding proxies community builders might want to keep in mind to improve the chances that ? My intuition is that the default expectation should be , especially for small or relatively young groups, because CBs try out a number of different activities and consequently end up stretched too thin, reducing their per-hour effectiveness versus mostly focusing on, and getting better over time at, one CB activity; and I don't have a clear sense of what to do to get to with reasonable confidence.
Agree re: marginal returns on personal spending; very uncertain re: savings, especially given uncertain income (I'm currently midway through a one-year grant) and uncertain projected expenditures (traveling to conferences, moving house, supporting family, etc.). I've thought seriously about the "set a threshold and donate everything else" strategy for a long time, and envied folks with the financial security and other sources of privilege needed to feel comfortable implementing it, and I think there are many more people like me (especially from LMICs). So for now I default to giving 10%.
Having informally pitched EA in many different ways to many different people, I've noticed that the strongest counter-reaction I tend to get is re: maximization (except when I talk to engineers). So nowadays I replace "maximize" with "more", and proactively talk about scenarios where maximization is perilous, where you can be misled by modeling when you're maximization-oriented, etc. (Actually I tend to personalize my pitches, but that's not scalable of course.)
You say "I expect discussions based on this elevator pitch to add more value faster to the average passersby" which I interpret as meaning you haven't yet tried this pitch elsewhere, so I'd be curious to hear an update on how that goes. If it works, I might incorporate some elements :)
Hi @MMMaas, will you be continuing this sequence? I found it helpful and was looking forward to the next few posts, but it seems like you stopped after the second one.
...we have, in my opinion, some pretty compelling reasons to think that it is not solvable even in principle, (1) given the diversity, complexity, and ideological nature of many human values... There is no reason to expect that any AI systems could be 'aligned' with the totality of other sentient life on Earth.
One way to decompose the alignment question is into two parts: (1) can we align an AI with any given set of values or goals at all, and (2) which values should it be aligned with, given how diverse and conflicting human values are?
Folks at e.g. MIRI think (1) is the hard problem and (2) isn't as hard; folks like you think the opposite. Then you all talk past each other. ("You" isn't aimed at literally you in particular; I'm summarizing what I've seen.) I don't have a clear stance on which is harder; I just wish folks would engage with the best arguments from each side.
I think I buy that interventions which reduce either catastrophic or extinction risk by 1% for < $1 trillion exist. I'm less sure whether many of these interventions clear the 1,000x bar, though, which (naively replacing the US VSL of ~$7 million with AMF's ~$5k per life saved) seems to imply a 1% reduction for < $1 billion. (I recall Linch's comment being bullish and comfortable on interventions reducing x-risk by ~0.01% at ~$100 million, which could be interpreted either as ~100x, i.e. in the ballpark of GiveDirectly's cash transfers, or as aggregating over a longer timescale than "by 2050"; the latter is probably the case. The other comments on that post offer a pretty wide range of values.)
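To spell out the naive conversion I have in mind, here's a rough back-of-the-envelope sketch in Python. All the inputs are the illustrative figures above (US VSL, AMF's cost per life, the "1% for < $1 trillion" claim, Linch's numbers), not figures from any actual grant:

```python
# Naive rescaling sketch; every number here is an illustrative figure from the
# discussion above, not from any actual grant.

US_VSL = 7e6              # ~$7M, US value of a statistical life
AMF_COST_PER_LIFE = 5e3   # ~$5k per life saved (AMF-style estimate)

# Claim under discussion: a 1% absolute risk reduction is worth funding
# at up to ~$1 trillion when lives are valued at the US VSL.
wtp_at_vsl = 1e12

# Swap the VSL for AMF's cost per life: the willingness-to-pay threshold
# shrinks by the same ~1,400x factor.
wtp_at_1000x_bar = wtp_at_vsl * AMF_COST_PER_LIFE / US_VSL   # ~$0.7 billion
print(f"1% reduction needs to cost < ~${wtp_at_1000x_bar / 1e9:.1f}B to clear the 1,000x bar")

# Linch's figure: ~0.01% risk reduction for ~$100M, i.e. ~$10B per 1%.
linch_cost_per_1pct = 100e6 * (0.01 / 0.0001)                # = $10B
# Relative to the ~$1B-per-1% threshold (pegged at ~1,000x), that's roughly:
implied_multiplier = 1000 * wtp_at_1000x_bar / linch_cost_per_1pct
print(f"Implied cost-effectiveness: ~{implied_multiplier:.0f}x")
# ~70x with the $0.7B figure; rounding the threshold up to ~$1B gives the
# ~100x "ballpark of GiveDirectly" number mentioned above.
```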
That said, I've never actually seen a BOTEC justifying an actual x-risk grant (as opposed to, e.g., Open Phil's sample BOTECs for various grants with the confidential details redacted), so my remarks above seem mostly immaterial to how x-risk cost-effectiveness estimates inform grant allocations in practice. I'd love to see some real examples.