Gaurav Yadav

AI Governance Fellow @ Pivotal Research

Comments
I've been meaning to write about what it's like trying to figure out your career direction between 18 and 21 while being part of the EA community—a time that feels, at least for me, like the most uncertain and exploratory part of life. You're not just asking yourself what you want to do, but also grappling with questions about impact and doing the most good, which adds another hard layer to an already complex period.

For anyone at university who's getting introduced to EA and feeling overwhelmed about career decisions, I want to share some thoughts. I've been there—feeling unsure about whether I was making the "right" choices or whether I was doing enough to have an impact. I've seen others around that age wrestle with the same questions, and I still wrestle with questions about my own impact. You have my empathy.

If that's where you are right now, maybe a few scattered pieces of advice from someone a little further down the road could help. I won't turn this into a long essay, but if any of this resonates—or if you want more specific guidance—I'd be happy to expand on these thoughts:

1. 80,000 Hours is not gospel: Of course, they don't claim to be gospel—they explicitly want you to explore different career options, and they provide tools to help you do that thinking. But it's very easy to just default to their listed career options and cause areas. 80,000 Hours won't help you make the right choices unless you accept that it's just one piece of your career puzzle. Unfortunately, most people don't just end up working on what they want by default.

2. Think beyond conventional impact paths: On that note, if something doesn't fit neatly into how a typical EA career pans out, that might feel uncomfortable. But that's okay—outside this community, people do all sorts of things in the world. Going to university doesn't automatically prepare you for AI policy work or give you operations skills. You'll probably need to get experience outside of EA career paths, and that can be hard if all you've consumed at university is EA philosophy and its traditional career paths. This is why not pigeon-holing yourself is a good idea (see point 4).

3. Get ready for ego hits: Yes, there's plenty written about how EA jobs are hard to get, but lots of jobs that seem shiny and potentially useful for career capital will reject you—because you're not the only one who wants that shiny job. You might get lucky and end up right where you want to be, but you're probably just like everybody else: inexperienced and trying to make it in the world. Each job application can take weeks of effort. You can make it to interview rounds, all excited about the possibility of doing something you want, only to be rejected because someone else has 10 years more experience than you. This will happen, and it will be hard. You just have to get back up and try again. I found it useful to remind myself every time I got rejected that 'they can reject me, but they can't kill my spirit'—and that helped me muster the motivation to push forward.

4. Don't let EA become your whole life: I switched my degree to be very AI Governance focused (which may be paying off), made EA friends, went to EA retreats. It's so enticing because the university EA community tends to be interesting, thoughtful, and ambitious—that's pleasant to be around, and your life can easily get wrapped up in it. Getting invited to EA Global conferences in the Bay Area when you're a twenty-year-old at university hits that status-seeking part of your brain hard. People think it's really cool, and it feels good when they do. I wish I could say I was above caring what others think, but my brain (like most people's) is wired to chase social validation at times. While there's plenty of advice out there about letting go of status-seeking—and you should definitely work on that—I think it's important to acknowledge how these dynamics can pull you deeper into making EA your whole identity. I strongly suggest building yourself in other communities and finding interests outside the movement. This advice might seem obvious to any adult, but when you're at university finding your people, and those people happen to offer both intellectual stimulation and status boosts, it's really easy to stick to the comfortable option.

5. Don't dismiss grades—they're part of the bigger picture: I absorbed some wrong advice from the rationality community about grades not mattering. But they do matter: not just for master's applications, but as a signal to employers about your ability to work hard and follow through. Even if EA jobs don't always list grade requirements, having good grades demonstrates competence and work ethic. More importantly, engaging deeply with your subject teaches you how to tackle difficult problems and work systematically—skills that matter regardless of where you end up. And actually trying with your degree and doing well can make university a much more pleasant experience.

Hope that's useful to somebody. 

PauseAI seems funding-constrained; it probably needs more runway before the returns on its work can be seen.


Nice! I'm down in Sheffield at points, and would love to visit when I'm around!

The Bill has passed the Appropriations Committee and will now move on to the Assembly floor. Some changes were made to the Bill. From the press release:

Removing perjury – Replace criminal penalties for perjury with civil penalties. There are now no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.

Eliminating the FMD – Remove the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement was always done through the AG’s office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.

Adjusting legal standards – The legal standard under which developers must attest they have fulfilled their commitments under the bill has changed from a “reasonable assurance” standard to a standard of “reasonable care,” which is defined under centuries of common law as the care a reasonable person would have taken. We lay out a few elements of reasonable care in AI development, including whether they consulted NIST standards in establishing their safety plans, and how their safety plan compares to other companies in the industry.

New threshold to protect startups’ ability to fine-tune open sourced models – Established a threshold to determine which fine-tuned models are covered under SB 1047. Only models that were fine-tuned at a cost of at least $10 million are now covered. If a model is fine-tuned at a cost of less than $10 million, the model is not covered and the developer doing the fine-tuning has no obligations under the bill. The overwhelming majority of developers fine-tuning open sourced models will not be covered and therefore will have no obligations under the bill.

Narrowing, but not eliminating, pre-harm enforcement – Cutting the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.

I'd like to get opinions on something. I'm planning to experiment with making YouTube videos on AI Governance over the next month or two. Ideally, I want people to see these videos so I can get feedback, or be told when I've said something incorrect—that's helpful for correcting my own models.

I'd share these videos by posting on the EA Forum, but I'm unsure about the best approach:

a) Posting on the frontpage feels like seeking attention or promoting for views, especially since I'm new to video-making and don't expect high quality initially.
b) Posting as personal blog posts seems less intrusive, as only those who opt to see personal posts will see them. This feels like I have "permission" to make noise and is less intimidating.
c) Putting them in my quick takes section, which is currently my default, would be even more out of the way.

Given my account's karma, my posts typically start with 4 or 5 karma and stay on the frontpage for a few hours by default. I think the forum has improved a lot recently—there's a lower volume of posts and more interesting discussion. I don't want to create noise each time I make a video.

However, each video is relevant to the EA community. If people don't like a video, it'll naturally move off the front page fairly soon. I'm more likely to get views if I don't make it a personal blog post or put it in my quick takes. These views are important to me because they mean more interesting feedback and a higher likelihood that I'll improve at making videos. (Also, given I am only human, more views and engagement mean more motivation to keep making things.)

I'd appreciate others' opinions on this. I recognise that part of my hesitation probably stems from a lack of confidence and fear of others' opinions, but I don't think these are necessarily good justifications for my decision.

I am now trying to make YouTube videos explaining AI Governance. Here is a video on RSPs. The video has a few problems, and the editing is sometimes choppy, but this could be a fun hobby and a way to build skills that seem useful to have, the first being the confidence to talk to a camera. If you have feedback, here is a form.

I run frequently, and it would be nice to eventually see more GiveWell-recommended charities represented at marathon events in the UK. For example, I didn't get a place through the ballot for the London Marathon, but I could still obtain a charity place. However, I don't find any of the available charities particularly appealing to fundraise for, and I wish orgs like the Against Malaria Foundation were offered instead.

I knew Marissa briefly while they were running EA Anywhere; it was one of my first points of contact with the EA community, given I was living somewhere without much of an EA presence at the time. This is painful news to hear. May they rest in peace.

(I am mostly articulating feelings here. I am unsure what I think should change.)

I am somewhat disappointed with how Manifund has turned out. This isn't a critique of the Manifund team, or a claim that regranting is a bad idea, but after a few months of excitement and momentum, things have somewhat decelerated. While the occasional cool project appears, most of the projects on the website don't seem particularly impressive to me. Some of the regrantors also seem slow to move money, though that may be a consequence of the previous problem.
