Haven't seen anyone mention RAND as a possible best charity for AI stuff, so I'd like to throw its hat in the ring, or at least invite people to tell me why I'm wrong. My core claims are approximately:
Influencing the US (federal) government is probably one of the most scalable, cost-effective routes for AI safety.
Think tanks are one of the most cost-effective ways to influence the US government.
The prestige of the think tank matters for getting into the room and influencing change.
RAND is among the most prestigious think tanks doing AI safety work.
It's also probably the most value-aligned, given that Jason Matheny is in charge.
You can earmark donations to the catastrophic risks/emerging risks departments.
I'll add I have no idea if they need/have asked for marginal funding.
What have they done or are planning to do that seems worth supporting?
I am seeing here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI. So this does not seem like a neglected avenue.
Yeah, I have no idea if they actually need money, but if they still want to hire more people for the AI team, wouldn't it be better to give the money to RAND to hire those policy people rather than to, say, Americans for Responsible Innovation, which Open Phil currently recommends but which is much less prestigious, and I'm not sure they work side by side with legislators. The fact that Open Phil gave grants but doesn't currently recommend RAND to individual donors makes me think you are right that they don't need money at the moment, but it would be nice to be sure.
I'll start by saying I absolutely think it's a terrible idea to try to destroy humanity. I am 100% not saying we should do that. Ok, now that we have that out of the way: if you decide to commit your life to x-risk reduction because there are "trillions of units of potential value in the future", you are in a bit of a sticky situation if someone credibly argues that the expected value of the future is lower if humans become grabby than if they don't. And that's ok! It's still probably one of the highest-EV things you can do.
And I'll say it again years later, https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk
This ^ post is not great. The entire thing basically presupposes that human society is net positive, that aliens do not exist, and that animals will not re-evolve if we go extinct. I wouldn't bring this up if not for it being one of the most upvoted posts on the forum ever (top 5 if you don't include posts about EA drama).
EA tends to be anti-revolution, for a variety of reasons. The recent Trump appointments have had me wondering whether people here have a "line" in their head. By "line" I mean something like: the point at which I need to drop everything and start protesting, or otherwise act fast.
I don't think appointing RFK Jr. as health secretary is that line for me, but I also realize I don't have a clear "line" in my head. If Trump appointed a Nazi who credibly claimed they were going to commit mass-scale war crimes as Secretary of Defense, would that be enough for people here to drop their current work?
I'm definitely on the side that engaging in reactionary politics is generally worthless, and I don't feel like the US is about to fall apart or completely go off the rails. But it would be really interesting to teleport some EAs back in time to just before the rise of Hitler, or before the Chinese revolution, etc. (while wiping their memories of what was to come) and see whether they would say things like "politics is the mind-killer and I need to focus on xyz".
Are there estimates of the per-animal suffering caused by animal consumption in different countries (holding the animal species constant)?
I stopped being vegetarian almost two years ago. One of the biggest reasons is that I stay up late pretty much every day and don't always feel like cooking or eating snacks, so I go to whatever is open near me. During university, nothing really stayed open after 10 anyway, because Evanston is a lame place. So I would often eat at or before 10, and if I was eating out there were still vegetarian options at that hour (stir fry with tofu, Chipotle, etc.).
Now I live in a predominantly Eastern European and Mexican area of Chicago. There isn't much vegetarian food in this neighborhood in general, although there is some. However, the vegetarian restaurants here seem to serve a wealthier demographic than the non-vegetarian ones: they close earlier, cost more, etc. The cheap and late-night options are fast food and taquerias, which essentially have no quality vegetarian items. But since that stuff is open, it actually makes me lazier, and I'll often eat at 11:00 PM because I can. Getting into this routine means I eat more meat.
I'm pretty sure that if there were a decent, cheap vegetarian restaurant that stayed open until 2:00 AM, I would eat at least one fewer meat meal a week, probably two to three.
Why aren't there any vegetarian late-night options near me? Probably the normal reasons: no one around here wants to or can open one, or there isn't enough demand.
In either case, it got me wondering: if there is enough demand to recoup, say, ~95% of the cost of a late-night falafel stand, would it be a cost-effective intervention (compared to whatever else ACE recommends) to fund that last 5%? I might think more about this, unless it's super obvious to someone that this is orders of magnitude worse than other options.
A five percent subsidy is roughly fifty cents a meal in Chicago. However, some subsidized diners would have eaten a vegetarian meal with or without the subsidy, so the true cost per meat meal averted would likely be higher -- maybe a dollar or so? From that you could predict the cost per farmed animal averted, keeping in mind that the demand elasticities aren't 1:1.
It doesn't sound terribly promising on my three-minute BOTEC (rough numbers sketched below). Notably, much of the displaced meat would be beef, leading to a high cost per one-cow reduction in demand.
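For anyone who wants to poke at the numbers, here is a minimal sketch of that BOTEC in Python. Only the $0.50/meal subsidy and the ~$1 per meat meal averted come from the comment above; the counterfactual-vegetarian rate, meals-per-cow figure, and elasticity are placeholder assumptions, not anyone's actual estimates.

```python
# Rough BOTEC for the late-night vegetarian subsidy idea.
# The $0.50/meal subsidy and ~$1 per meat meal averted are from the comment
# above; every other number is an illustrative assumption.

subsidy_per_meal = 0.50        # ~5% of a ~$10 Chicago meal
counterfactual_veg_rate = 0.5  # assumed: half of subsidized diners would have
                               # eaten vegetarian anyway

cost_per_meat_meal_averted = subsidy_per_meal / (1 - counterfactual_veg_rate)
print(f"Cost per meat meal averted: ${cost_per_meat_meal_averted:.2f}")  # ~$1.00

# Turning meals into animals is where beef looks bad: one cow yields on the
# order of a thousand-plus meals, so the cost per cow-equivalent averted lands
# in the hundreds to thousands of dollars.
meals_per_cow = 1500   # assumed, order of magnitude only
elasticity = 0.7       # assumed: production falls less than 1:1 with demand

cost_per_cow_averted = cost_per_meat_meal_averted * meals_per_cow / elasticity
print(f"Rough cost per cow-equivalent averted: ${cost_per_cow_averted:,.0f}")
```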
Grant-making as we currently do it seems pretty analogous to a command economy.
Not sure that's entirely true (I think it's very interesting). I feel like the grantmaking process is not top→down, but bottom→up→top→down (someone has an idea, they start working on it in their free time (this part is optional), they apply, that thing gets evaluated, and they get rejected or receive some money). I think in historical command economies the first and second parts were kind of rare.
See also Coase's Theory of the Firm and Evolution as a Backstop to Reinforcement Learning.
Ok, this is a solid point, and it made me slightly reconsider.
However, I question both how much grantmaking actually is bottom-up and, even if it is, how different this is from historical command economies.
Depending on the grantmakers, I feel like there is a range between "has already decided more or less which projects they want to fund" and "funding anything that seems promising under x goals", where the former isn't really "taking suggestions".
More importantly, while I agree the bottom-up thing was rare, I don't think the issue with command economies is that they don't gather any information from the crowd; it's that the information they have available is funneled through human bias and is nontransparent. Would a system of town halls in which command-economy leaders listened to the complaints of citizens fix the command economy? It just seems like a worse version of markets.
By the same logic, I think impact markets, while they may have a long learning curve, will clearly be superior to what we are currently doing. The main reason I see us not doing this is that you would have to actually specify the currency, which would mean specifying a moral worldview, which would shatter the community's nebulous ethical agnosticism.
Re Coase and Gwern's post, I think it gives my whole point even more firepower. The point, as I read it, is that you only need one reliable feedback mechanism to create evolution; the rest of the system can be nearly random or very nonlogical. But here there is no feedback mechanism. We will never be able to tease out the impact of 99% of the things we are currently doing. If grantmakers are totalitarian in the way firms are, where is the analogous feedback loop to money (short of RCTs up the wazoo)?
What do people think about posting urban planning / YIMBY / local government policy thoughts on the forum?
I find that stuff really interesting. Admittedly, I don't believe it is the most important stuff to work on, but it is "effective altruism" if you have an extremely local moral circle.
Makes sense to me. Thinking about being more effective locally through an EA lens sounds like a good plan.
Also, I think Open Phil has funded some YIMBY groups, so it's hardly outside the scope of potentially cost-effective interventions.
For my part, I think the fact that Open Phil looks at YIMBY stuff is weird, and I wish it were explained better (but I don't think it's important enough to actively pursue).
I think if you subscribe to a Housing Theory of Everything or a Lars Doucet Georgist perspective[1], then YIMBY stuff might be seen as an unblocker to good political-economic outcomes in everything else.
Funny story version here
A theoretical idea that could be implemented in Metaculus.
tl;dr: add an option to submit models of how to forecast a question, and also to vote on those models.
To be more concrete: when someone submits a question, in addition to forecasting the question, you can submit a Squiggle model -- or just a plain mathematical model -- of your best current guess at how to approach the problem. You define each subcomponent that matters for the final forecast and how these subcomponents combine into the final forecast. Each subcomponent automatically becomes another forecasting question on the site, which people can treat the same way (if it is not already one).
Then, in addition to a normal forecast as we do right now, people can also forecast the subcomponents of the models, as well as vote on the models. If a model already includes previously forecasted questions, those forecasts automatically populate in the model.
The voting system on models could either just draw attention to the best models and encourage forecasting of the subcomponents, or even weight the models' estimates into the overall forecast of the question (see the sketch below). No idea if this would improve forecasting, but it might make it more transparent and scalable.
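To make the mechanics concrete, here is a minimal sketch in Python (rather than Squiggle) of what a decomposed question with vote-weighted models might look like. All class names, fields, and example numbers are invented for illustration and have nothing to do with Metaculus's actual internals or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Question:
    """A forecasting question; subcomponents are themselves questions."""
    text: str
    community_forecast: float  # e.g. a probability or point estimate

@dataclass
class Model:
    """A user-submitted model: named subcomponents plus a rule for combining them."""
    author: str
    subcomponents: Dict[str, Question]
    combine: Callable[[Dict[str, float]], float]
    votes: int = 0

    def estimate(self) -> float:
        # Pull the current community forecast for each subcomponent, then combine.
        forecasts = {name: q.community_forecast for name, q in self.subcomponents.items()}
        return self.combine(forecasts)

def vote_weighted_forecast(models: List[Model]) -> float:
    """Optionally fold the models' estimates into one number, weighted by votes."""
    total_votes = sum(m.votes for m in models) or 1
    return sum(m.estimate() * m.votes for m in models) / total_votes

# Example: "Will X happen by 2030?" decomposed into two subquestions, each of
# which would also appear as its own forecastable question on the site.
prereq = Question("Will the enabling tech exist by 2028?", 0.6)
adoption = Question("Given the tech, will it be deployed by 2030?", 0.5)

model = Model(
    author="someone",
    subcomponents={"prereq": prereq, "adoption": adoption},
    combine=lambda f: f["prereq"] * f["adoption"],
    votes=12,
)

print(model.estimate())                 # 0.3
print(vote_weighted_forecast([model]))  # 0.3 -- with one model, votes don't matter yet
```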
I wrote a bit more in this Google Doc if you're interested.
Edit: I think this might just be Guesstimate with memoization.
Has anyone thought through whether EA should try to start a charter school, or done some sort of impact estimate? Specifically with the purpose of making a really good school, not for EA outreach.
I searched Google and the forum for posts on this and couldn't find anything.
WSJ Op-Ed: ‘Effective Altruism’ Is Neither - I didn't see any discussion of this low effort but scathing review of EA. I wonder if people feel we should write a response?
The article itself is bad faith, bordering on the stupidest critique of EA I have read thus far, but I feel like a WSJ op-ed is pretty important, and if SBF signed off on a response they would most likely publish it.
Couldn't read the article, so I can't say. EA does have some red flags it needs to deal with (just like literally any movement in existence), so it's easy to pick on. How red flags are handled is what's important, and based on the number of posts I've seen saying that the movement struggles to address legitimate criticisms internally, it needs a shift to sincerely move forward. And I say that because some of the first things mentioned were things I personally had concerns about.
I will say, if it's low effort then the best response might be no response. It's an op-ed; it's someone's big fat opinion. If EA were somehow perfect beyond the limits of reality, someone would still write a low-effort op-ed.
Re the "EAs should not should" debate about whether we can use the word "should" which pops up occasionally, most recently on the "university groups need fixing".
My take is that you can use "should/ought" as long as your target audience has sufficiently grappled with meta-ethics and both parties are clear about what ethical system you are using.
"Should" (to an anti-realist) is shorthand for (the best action under X moral framework). I don't mind it being used in this context (though I agree with ozzies previous shortform on this that it seems unnecessarily binary), but it's problematic using this word around people you don't know or non-philosophy heads. It's completely absurd to tell an 18-year-old or anyone else who doesn't know what utilitarianism and virtue ethics are that they "should" do anything, and if they believe you, then you tricked them into that view (unless you are a moral realist, which I think is also absurd).
If your target audience does not know what the is-ought problem is, it's better to stick to output-based cost-benefit analysis and not enter into this "cause agnostic" tier-list-type thing, since inter-output rankings rely on arbitrary metaethical functions that aren't well known by most people or standardized for quick and reliable reference.
However, among my friends we use "should" all the time, because we know what we generally mean (our relatively shared utilitarian-ish meta-ethical worldview), and we feel comfortable clarifying this if it seems to be the crux of the debate. But at that point "should" loses all of its emotional oomph, and maybe it's just not worth the hassle to shorthand a seven-word sentence.
Would be interesting to compare my likes on the EA Forum with other people's. I feel like what I up/downvote is way more honest than what I comment. If I could compare with someone the posts and comments where we had opposite reactions (i.e. they upvoted and I downvoted), I feel like it could start some honest and interesting discussions.
Assume there are two societies that passed the great filter and are now grabby: society EA and society NOEA.
Society EA, you could say, is quite similar to our own. The majority of the dominant species is not concerned with passing the great filter, and most individuals are inadvertently increasing the chance of the species' extinction. However, a small contingent became utilitarian rationalists and specced heavily into reducing x-risk. Since the group passed the great filter, you can assume this was in large part due to this contingent of EAs/guardian angels.
Society NOEA is a species that also passed the filter, but they didn't have EA rationalists. The only way they were able to pass the filter was that, as a species, they are overall quite careful and thoughtful. The whole species, rather than a divergent few, has enough of a security mindset that no special group was needed to "save" them.
Which species would we prefer to get more control of resources?
The punchline is that the very fact that we "need" EA on Earth might be evidence that our values are worse than those of a species that didn't need EA to pass the filter.
I feel like " x-risk" is basically tautologically important and thus ceases to be a useful word in many cases. It's like the longtermist equivalent of a neartermist saying "it would be good to solve everything really bad about the current world".