This is a special post for quick takes by Yadav. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I am now trying to make YouTube videos explaining AI Governance. Here is a video on RSPs. The video has a few problems, and the editing is sometimes choppy. This could be a fun hobby, and a way to build skills that seem useful to have, the first being the confidence to talk to a camera. If you have feedback, here is a form.

For a first video, I thought it was surprisingly good! :) I appreciate that you speak clearly, the script is pretty short and to the point, and honestly I thought the editing was way better than most of YouTube (you cut enough to keep it moving, but not so much as to be annoying or distracting). There were a couple of times I felt you could have edited it down more. I liked the infographic cut-ins, and you could probably add slightly more visual aids before it gets to be too many.

I'm glad you enjoy making them, and I encourage you to keep doing it!

I run frequently, and it would be nice to eventually see more GiveWell-recommended charities represented at marathon events in the UK. For example, I didn't get a place through the ballot for the London Marathon, but I could still obtain a charity place. However, I don't find any of the available charities particularly appealing to fundraise for, and I wish orgs like the Against Malaria Foundation were offered instead.

The value to the charity consists of both the funds counterfactually raised through the race and the value of the fundraising leads generated through the runner's activity. I'm curious about what BOTEC someone who knows more about the marathon-fundraising model than I do might come up with. My off-the-cuff guess is that, to make the effort cost-effective enough, the charity would need a critical mass of runners who (a) were sufficiently invested in the charity to appear credible to their networks (vs. using it more as a way to gain entry) and (b) could tap wealthy-enough fundraising networks to generate significant post-race expected value.
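To make the structure of that BOTEC concrete, here is a minimal sketch in Python. Every number in it (cost per place, admin cost, number of runners, average amount raised, counterfactual fraction, lead value) is a made-up placeholder rather than an estimate from any charity's actual data; the point is only to show how the pieces above would combine.

```python
# Illustrative BOTEC for whether offering charity marathon places is worthwhile.
# All inputs are hypothetical placeholders, not real figures.

cost_per_place = 400          # what the charity pays the race organiser per place (assumed)
admin_cost_per_runner = 100   # staff time supporting each runner (assumed)

runners = 20                  # number of charity places filled (assumed)
avg_raised = 2000             # average gross fundraising per runner (assumed)
counterfactual_fraction = 0.5 # share of donations that wouldn't have happened otherwise (assumed)
lead_value_per_runner = 300   # expected future value of new donor leads per runner (assumed)

total_cost = runners * (cost_per_place + admin_cost_per_runner)
counterfactual_funds = runners * avg_raised * counterfactual_fraction
lead_value = runners * lead_value_per_runner

net_value = counterfactual_funds + lead_value - total_cost
roi = (counterfactual_funds + lead_value) / total_cost

print(f"Total cost: £{total_cost:,}")
print(f"Counterfactual funds raised: £{counterfactual_funds:,.0f}")
print(f"Value of fundraising leads: £{lead_value:,}")
print(f"Net value: £{net_value:,.0f} (ROI ≈ {roi:.1f}x)")
```

Conditions (a) and (b) above map onto the `counterfactual_fraction` and `lead_value_per_runner` assumptions: runners who mainly want race entry would drag both parameters down, which is where I'd expect the case for offering places to break.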

(I am mostly articulating feelings here; I am unsure what I think should change.)

I am somewhat disappointed with how Manifund has turned out. This isn't a critique of the Manifund team, or a claim that regranting is a bad idea, but after a few months of excitement and momentum, things have somewhat decelerated. While the occasional cool project comes along, most of the projects on the website don't seem particularly impressive to me. Some of the regrantors also seem slow to move money, though that may partly be a consequence of the previous problem.

I'd like to get opinions on something. I'm planning to experiment with making YouTube videos on AI Governance over the next month or two. Ideally, I want people to see these videos so I can get feedback or get told that I've said something incorrect, which is helpful for correcting my own model around things.

I'd share these videos by posting on the EA Forum, but I'm unsure about the best approach:

a) Posting on the frontpage feels like seeking attention or promoting for views, especially since I'm new to video-making and don't expect high quality initially.
b) Posting as personal blog posts seems less intrusive, as only those who opt to see personal posts will see them. This feels like I have "permission" to make noise and is less intimidating.
c) Putting them in my quick takes section, which is currently my default, would be even more out of the way.

Given my account's karma, my posts typically start with 4 or 5 karma and stay on the frontpage for a few hours by default. I think the forum has improved a lot recently: there's a lower volume of posts and more interesting discussion. I don't want to create noise each time I make a video.

However, each video is relevant to the EA community. If people don't like a video, it'll naturally move off the frontpage fairly soon. I'm more likely to get views if I post on the frontpage rather than as a personal blog post or a quick take. These views matter to me because they mean more interesting feedback and a higher likelihood that I'll improve at making videos. (Also, since I am only human, more views and engagement mean more motivation to keep making things.)

I'd appreciate others' opinions on this. I recognise that part of my hesitation probably stems from a lack of confidence and fear of others' opinions, but I don't think these are necessarily good justifications for my decision.

If it were me, I would default to posting them as quick takes. I think that would get them more visibility than a personal blog post (not sure), and quick takes are a good fit for asking for feedback on more early stage things.

But I am somewhat biased because I'm pretty scared to publish frontpage posts, and I don't want to discourage you from posting it there, especially if you are willing to put in some additional effort to make it valuable for frontpage readers (such as by including a written version of the contents, or by asking for specific feedback in the post, or framing your post as a discussion about the video topic that people can continue in the comments). As you say, in the worst case, if it doesn't get many upvotes, it will fall off pretty quickly.

On another note, I think the Forum isn't currently that well-suited for sharing video content, so if you have suggestions for how we can do better there, let me know! :)

Suggestion: Enlarge the font size for pronouns on EA Global/EA retreat name cards

There was a period when I used they/them pronouns and was frequently misgendered at EA events. This likely occurred because I present as male, but regardless, it was a frustrating experience. I often find it difficult to correct people and explicitly mention my preferred pronouns, especially in socially taxing environments like EAGs or retreats. Increasing the size of the pronouns on name cards could be helpful.

I wonder if anyone has examined the pros and cons of protesting against AI labs? I have seen a lot of people who are uncertain about this. It might be useful for someone to write a post on it, even one based on fewer than 10 hours of thinking.

I'm doing some thinking on the prospects for international cooperation on AI safety, particularly potential agreements to slow down risky AI progress like CHARTS. Does anyone know of a good website or resource that summarizes different countries' current views and policies regarding deliberately slowing AI progress? For example, something laying out which governments seem open to restrictive policies or agreements to constrain the development of advanced AI (like the EU?) versus which ones want to charge full steam ahead, no matter the risks. Or which countries seem undecided or could be persuaded. Basically, I'm looking for something that synthesizes various countries' attitudes and stated priorities when it comes to potentially regulating the pace of AI advancement, especially policies that could slow the race to AGI. Let me know if you have any suggestions!

Not exactly what you're looking for (because it focuses on the US and China rather than giving an overview of lots of countries), but you might find "Prospects for AI safety agreements between countries" useful if you haven't already read it, particularly the section on CHARTS.
