Agustín Covarrubias 🔸

Co-Founder @ AI Safety Group Support (name TBA)
1530 karma · Joined · Pursuing an undergraduate degree · Working (0-5 years) · Santiago, Santiago Metropolitan Region, Chile
agucova.dev

Bio

I’m a generalist and open sourcerer who does a bit of everything, but perhaps nothing particularly well. Until recently, I was the AI Safety Group Support Lead at CEA, and I’m now working on starting a new organization focused on supporting AI Safety groups.

I was previously a Software Engineer on the Worldview Investigations Team at Rethink Priorities.

Comments

There's a lot in this post that I strongly relate to. I also recently left CEA, though after a much shorter stint: only six months. To give some perspective on how much I agree with Lizka, I'll quote from the farewell letter I wrote to the team:

While I will admit that it took some getting used to, I’m still surprised at how fast I started feeling part of the CEA team and, moreover, how much I came to admire its culture. If you had told me back then that this is what CEA was like, I don’t think I would have bought it. I mean, sure, you can put a lot of nice-sounding principles into your website, but that doesn’t mean you actually embody them. It turns out that it is possible to embody them, and it was then my turn.

I even remember Jessica trying to convince me during my work trial that CEA was friendly and even silly sometimes. To me, CEA was just the scary place where all the important people worked. I now know what she meant. (...) It’s now gone from a scary place to my favorite team of people. It’s become much more special to me than I ever suspected.

So I want to second Lizka's thoughts: I feel very honored to have worked with them.

SB 1047 is a critical piece of legislation for AI safety, but there haven't been great ways of getting up to speed, especially since the bill has been amended several times. Now that the bill is finalized, better resources exist for catching up. Here are a few:

If you are working in AI safety or AI policy, I think understanding this bill is pretty important. Hopefully this helps.

I'm so excited about the risk-based portfolio builder!

This is one of the most exciting forecasting proposals I've read in some time. I'm thoroughly impressed, and I'm really eager to see more work building on top of this.

If someone ends up running an algorithmic forecasting tournament like the ones proposed in this article, count me in ;)

I agree that it's important to consider both needs and interests. Ultimately, a branding strategy should be embedded in a larger theory of change and strategy for your group, and that should determine which audiences you reach out to.

Regarding the latter, I agree that an interest in, say, hacker culture doesn't adequately describe everyone interested in CS. It might actually leave out a bunch of people you'd want to have join your group. At the same time, branding is all about tradeoffs, and you have to pick which things you cater to: spread yourself too thin, and you risk making the content unappealing to everyone.

It's hard to give empirical data on this because I don't think we have a good track record of actually collecting it. I would be curious about groups trying things like A/B tests to refine their strategies.
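
To make the A/B-testing idea concrete, here's a minimal sketch of the kind of comparison I have in mind: a two-proportion z-test on sign-up rates between two branding variants. The variant labels, counts, and sample sizes below are all hypothetical, purely for illustration.

```python
# Minimal sketch: compare sign-up rates of two branding variants
# with a two-proportion z-test. All numbers are hypothetical.
from math import sqrt, erf

def two_proportion_ztest(successes_a: int, n_a: int,
                         successes_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two sign-up rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: variant A got 40 sign-ups from 500 recipients,
# variant B got 62 from 500.
z, p = two_proportion_ztest(40, 500, 62, 500)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests a real difference
```

Even something this simple would tell a group more than we currently know about which branding actually works.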

So yeah, most of this is backed by a mix of anecdotes from organizers, marketing know-how, and some of what I learned running my old AI Safety Initiative. That's why I want to emphasize that it should be taken as provisional rather than as the final word on the matter.

I think this is fine: Epoch's work appeals to a broad audience, and Nat Friedman is a well-respected technologist.
