Manuel Allgaier

co-director @ European Network for AI Safety
908 karma · Working (0-5 years) · 10365 Berlin, Germany

Bio


Co-Director @ENAIS, connecting researchers and policy-makers for safe AI 
Formerly director of EA Germany, EA Berlin and EAGxBerlin 2022 

Happy to connect with people with shared interests. Message me with ideas, proposals, feedback, connections or just random thoughts!

https://www.linkedin.com/in/manuelallgaier/

How others can help me

Collaborators and funding to accelerate AI safety and AI governance careers (subsidized tickets, travel grants; message me for details), and feedback on our work at ENAIS

How I can help others

Contacts in the European AI safety & AI governance ecosystem, feedback on your strategy, projects, or career plans, and possibly collaborations

Comments (123)

@Jeff Kaufman Would you like to respond to this? Do you feel like this addresses your concerns sufficiently? Any updates in either direction?

I just skimmed it due to time constraints, but from what I read and from the reactions this looks like a very thoughtful response, and at least a short reply seems appropriate. 

If anyone else here would like more context on this, I found @Garrison's reporting from 16 August quite insightful:

The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations

Thank you for the comprehensive research! California state policy as a lever for AI regulation hasn't been on my radar much yet, and as a European concerned about AI risk, I found this very insightful. Curious if you (or anyone here) have thoughts on the following:

1) Is there anything we can and should do right now? Any thoughts on Holly's "tell your reps to vote yes on SB 1047" post from last week? Anything else we can do?

2) How do you see the potential for California state regulation in the next few years? Should we invest more resources in this, relative to US AI policy?
 

I understand your concern, thanks for flagging this!

To add a perspective: as a former EA movement builder who has thought about this a fair amount, the reputational risks of accepting donations via a platform run by an organization that also organizes events some people found too "edgy" seem very low to me. I'd encourage EA community organizers to apply if the money would help them do more good; if they're concerned about risks, they can ask CEA or other senior EA community organizers for advice.

Generally, I feel many EAs (me included) lean more towards being too risk-averse and hesitant to take action than towards being too risk-seeking (see omission bias). I'm also a bit worried about increasing pressure on community organizers to avoid risks and to worry about reputation more than they need to. This is just a general impression and I might be wrong, and I still think there are many organizers who might not be sufficiently aware of the risks, so thanks for pointing this out!

Cool that you're doing this! I could share two failures, one in my career plans and one in job applications. I could do that in 1-4min, depending on how many other people want to share and how much time we have. Looking forward! :)

Nice map! Do you want to upload this to a website so people can share and find it more easily (similar to aisafety.world)? It could be worth investing a tiny bit of money to buy such a domain.

One additional reason:

If you get your (initial) training at a neutral-ish impact organisation, like some management consulting or tech companies, and then move on to a high-impact job, you can add value right away, with lower 'training costs' for the high-impact org and therefore more impact.

All else equal, an EA org whose staff have 1-3 years of (non-EA) job experience can achieve more impact, more quickly, than one with partly inexperienced staff.

That said, some things such as good epistemics or high moral integrity may be easier to learn at EA orgs (though they can definitely also be learned elsewhere).

I've supported more than 100 people in their career plans, and this seems like pretty solid but underappreciated advice. Thanks for writing it up!

I think I made that mistake too. I went for EA jobs early in my career (running EA Berlin and then EA Germany 2019-22, funded by CEA grants). There were some good reasons: this work seemed particularly neglected in 2019-21, it seemed a good fit, and the three senior people I had in-depth career 1-1s with all recommended it. I learned a lot, met many inspiring people, and I think I did have some significant positive impact as well, both on the community overall (it grew and professionalized) and on some individual members' careers.

However, I made a lot of mistakes too and had slow feedback loops (no manager, little mentorship), and I'm pretty sure I would have learned many (soft) skills faster and built better career capital overall (both inside and outside of EA) if I had first spent 1-2 years in management consulting or at a fast-growing (non-EA) tech company with good management, and then moved on to direct EA work.

I agree that it would be good to have citations. In case neither Ozzie nor anyone else here finds it a good use of their time to add them: I've been following OpenAI's and Sam Altman's messaging specifically for a while, and Ozzie's summary of their (conflicting) messaging seems roughly accurate to me. It's easy to notice the inconsistencies in Sam Altman's messaging, especially when it comes to safety.

Another commenter (whose name I forgot, I think he was from CLTR) put it nicely: it feels like Altman does not have one consistent set of beliefs (as an ethics/safety researcher would) but tends to say whatever is useful for achieving his goals (as many CEOs do), and he seems to do this more than other AI lab executives at Anthropic or DeepMind.

This could be a community effort. If you're reading this and have a spare minute, can you recall any sources for any of Ozzie's claims and share links to them here? (Or go the extra mile: copy his post into a Google Doc and add sources there.)
