This is a special post for quick takes by Eevee🔹. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
I've heard from women I know in this community that they are often shunted into low-level or community-building roles rather than object-level leadership roles. Does anyone else have knowledge about and/or experience with this?
Could you expand a bit on what this would look like? How are they being "shunted", and what kinds of roles are low-level roles? (E.g. your claim could be that the average male EA CS student is much less likely to hear "You should change from AI safety to community-building" than female EA CS students.)
4
Chris Leong
Ironically, I think one of the best ways to address this is more movement building. Lots of groups provide professional training to their movement builders, and more of this (in terms of AI/AI safety knowledge) would reduce the chance that someone who could do technical work, and wants to, gets stuck in a community-building role.
Not sure who to alert to this, but: when filling out the EA Organization Survey, I noticed that one of the fields asks for a date in DD/MM/YYYY format. As an American this tripped me up and I accidentally tried to enter a date in MM/DD/YYYY format because I am more used to seeing it.
I suggest using the ISO 8601 (YYYY-MM-DD) format on forms that are used internationally to prevent confusion, or spelling out the month (e.g. "1 December 2023" or "December 1, 2023").
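For anyone building such a form, here is a minimal sketch of the two suggested formats using Python's standard library (the specific date is only an example):

```python
from datetime import date

d = date(2023, 12, 1)

# ISO 8601 (YYYY-MM-DD) is unambiguous regardless of locale
print(d.isoformat())                 # 2023-12-01

# Spelling out the month also avoids DD/MM vs. MM/DD confusion
print(f"{d.day} {d:%B %Y}")          # 1 December 2023
print(f"{d:%B} {d.day}, {d.year}")   # December 1, 2023
```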
I'm concerned about the new terms of service for Giving What We Can, which will go into effect after August 31, 2024:
6.3 Feedback. If you provide us with any feedback or suggestions about the GWWC Sites or GWWC’s business (the “Feedback”), GWWC may use the Feedback without obligation to you, and you irrevocably assign to GWWC all right, title, and interest in and to the Feedback. (emphasis added)
This is a significant departure from Effective Ventures' TOS (GWWC is spinning out of EV), which has users grant EV an unlimited but non-exclusive license to use the feedback or suggestions they send, while they retain the right to do anything with it themselves. I've previously talked to GWWC staff about my ideas to help people give effectively, like a donation decision worksheet that I made. If this provision goes into effect, it would deter me from sharing my suggestions with GWWC in the future, because I would risk losing the right to disseminate or continue developing those ideas or materials myself.
After your email last week, we agreed to edit that section and copy EV's terms on Feedback. I've just changed the text on the website.
We only removed the part about "all Feedback we request from you will be collected on an anonymous basis", as we might want to collect non-anonymous feedback in the future.
If anyone else has any feedback, make sure to also send us an email (like Eevee did) as we might miss things on the EA Forum.
Disclaimer: I'm a former PayPal employee. The following statements are my opinion alone and do not reflect PayPal's views. Also, this information is accurate as of 2024-10-14 and may become outdated in the future.
More donors should consider using PayPal Giving Fund to donate to charities. To do so, go to this page, search for the charity you want, and donate through the charity's page with your PayPal account. (For example, this is GiveDirectly's page.)
PayPal covers all processing fees on charitable donations made through their giving website, so you don't have to worry about the charity losing money to credit card fees. If you use a credit card that gives you 1.5 or 2% cash back (or 1.5-2x points) on all purchases, your net donation will be multiplied by ~102%. I don't know of any credit cards that offer elevated rewards for charitable donations as a category (like many do for restaurants, groceries, etc.), so you most likely can't do better than a 2% card for donations (unless you donate stocks).
For political donations, platforms like ActBlue and Anedot charge the same processing fees to organizations regardless of what payment metho... (read more)
Thanks for the reminder! I used to do this before EA Giving Tuesday and should probably start doing it again.
2
Eevee🔹
Fwiw, there are ways to get more than 2% cash back:
* Citi Double Cash and Citi Rewards+: you get 10% of your points back when redeeming points with the Rewards+ card, so if you "pool" the reward accounts together, you can get an effective ~2.22% (2.2 repeating) back on donations made with the Double Cash (see the worked calculation below).
* A number of credit cards give unlimited 3-4% cash back on all purchases, but there's usually a catch.
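To spell out the arithmetic behind that ~2.22% figure, here's a rough sketch assuming the Double Cash earns 2 points per dollar and the Rewards+ rebate applies to every redemption, so the rebate itself compounds as a geometric series; the numbers are illustrative, not a statement of either card's current terms:

```python
# Effective cash back from pooling Citi Double Cash (2 points per dollar) with
# Citi Rewards+ (10% of redeemed points returned), assuming the returned
# points can themselves be redeemed and rebated again (a geometric series).
base_rate = 0.02   # Double Cash: 2 points per dollar, worth 1 cent each
rebate = 0.10      # Rewards+: 10% of any redemption comes back as points

effective_rate = base_rate / (1 - rebate)
print(f"{effective_rate:.4%}")  # 2.2222%

# Rough net effect of donating $100 via PayPal Giving Fund with this card
# (PayPal covers processing fees, so the charity receives the full amount):
donation = 100.00
print(f"charity receives: ${donation:.2f}")
print(f"effective out-of-pocket cost: ${donation * (1 - effective_rate):.2f}")  # ~$97.78
```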
Bret Taylor (chair): Co-created Google Maps, ex-Meta CTO, ex-Twitter Chairperson, current co-founder of Sierra (AI company)
Larry Summers: Ex U.S. Treasury Secretary, Ex Harvard president
Adam D'Angelo: Co-founder, CEO Quora
Dr. Sue Desmond-Hellmann: Ex-director of P&G, Meta, and the Bill & Melinda Gates Foundation; Ex-chancellor of UCSF; Pfizer board member
Nicole Seligman: Ex-Sony exec, Paramount board member
Fidji Simo: CEO & Chair Instacart, Ex-Meta VP
Sam Altman
Also, Microsoft are allowed to observe board meetings
The only people here who even have rumours of being safety-conscious (AFAIK) are Adam D'Angelo, who allegedly played a role in kickstarting last year's board incident, and Sam, who has contradicted a great deal of his rhetoric with his actions. God knows why Larry Summers is there (to give it an air of professionalism?); the rest seem to me like your typical professional board members (i.e. unlikely to understand OpenAI's unique charter & structure). In my opinion, any hope of restraint from this board or OpenAI's current leadership is misplaced.
Okay, so one thing I don't get about "common sense ethics" discourse in EA is, which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense.
For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - is immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States, it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently, you're supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I've grown up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.) You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sou... (read more)
Has there been research on what interventions are effective at facilitating dialogue between social groups in conflict?
I remember an article about how during the last Israel-Gaza flare-up, Israelis and Palestinians were using the audio chatroom app Clubhouse to share their experiences and perspectives. This was portrayed as a phenomenon that increased dialogue and empathy between the two groups. But how effective was it? Could it generalize to other ethnic/religious conflicts around the world?
Copenhagen Consensus has some older work on what might be cost-effective for preventing armed conflicts, like this paper.
4
EdoArad
Joshua Greene recently came to Israel to explore extending his team's work on bridging the Republican-Democrat divide in the US to the Israel-Palestine conflict. A 2020 video is here.
2
Jamie_Harris
There's psychological research finding that both "extended contact" interventions and interventions that "encourage participants to rethink group boundaries or to prioritize common identities shared with specific outgroups" can reduce prejudice, so I can imagine the Clubhouse stuff working (and being cheap + scalable).
https://forum.effectivealtruism.org/posts/re6FsKPgbFgZ5QeJj/effective-strategies-for-changing-public-opinion-a#Prejudice_reduction_strategies
Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then the grantmaker can run a simple statistical analysis to find orgs that are mentioned a lot but haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside its network.
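As a rough sketch of what that analysis might look like (the charity names and the mention threshold are purely hypothetical):

```python
from collections import Counter

# Each applicant lists 3-5 other charities they think should be funded
nominations = {
    "Charity A": ["Charity D", "Charity E", "Charity F"],
    "Charity B": ["Charity D", "Charity F", "Charity G"],
    "Charity C": ["Charity D", "Charity H", "Charity E"],
}

applicants = set(nominations)
mention_counts = Counter(n for noms in nominations.values() for n in noms)

# Frequently mentioned orgs that have never applied are outreach candidates
outreach = [(org, count) for org, count in mention_counts.most_common()
            if org not in applicants and count >= 2]
print(outreach)  # [('Charity D', 3), ('Charity E', 2), ('Charity F', 2)]
```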
Maybe EA philanthropists should invest more conservatively, actually
The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:
Funding stability in a cause area makes it easier for employees to advance their careers because they can count on stable employment. It also makes it easier for nonp
These are good arguments for providing stable levels of funding per year, but there are often ways to further that goal without materially dialing back the riskiness of one's investments (probable exception: crypto, because the swings can be so wild and because other EA donors may be disproportionately in crypto). One classic approach is to set a budget based on a rolling average of the value of one's investments -- for universities, that is often a rolling three-year average, but it apparently goes back much further than that at Yale using a weighted-average approach. And EA philanthropists probably have more flexibility on this point than universities, whose use of endowments is often constrained by applicable law related to endowment spending.
I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.
Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the spaces of interventions that are... (read more)
We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.
For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 0.5 × 2 = 1 QACY.
Another example: suppose climate change will reduce the quality of civilization by 80% for 200 years, and then things will return to normal. Then the total QACY burden of climate change over the long term will be 0.8 × 200 = 160 QACYs.
In the limit, an existential catastrophe would have a near-infinite QACY burden.
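The arithmetic is just fractional quality loss times duration; a minimal sketch using the two illustrative figures above:

```python
def qacy_burden(quality_reduction: float, years: float) -> float:
    """Quality-adjusted civilization years lost: fractional quality loss times duration."""
    return quality_reduction * years

print(qacy_burden(0.5, 2))     # COVID-19 example: 1.0 QACY
print(qacy_burden(0.8, 200))   # climate change example: 160.0 QACYs
```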
I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)
Also see a recent paper finding no evidence for the automation hypothesis:
http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html
EA discussions often assume that the utility of money is logarithmic, but while this is a convenient simplification, it's not always the case. Logarithmic utility is a special case of isoelastic utility, a.k.a. power utility, where the elasticity of marginal utility is η=1. But η can be higher or lower. The most general form of isoelastic utility is the following:
u(c) = (c^(1−η) − 1) / (1 − η) for η ≥ 0, η ≠ 1, and u(c) = ln(c) for η = 1.
Some special cases:
When η=0, we get linear utility, or u(c)=c.
When η=0.5, we get the square root utility function, u(c)=2(√c−1).
When η=1, we get the familiar logarithmic utility function, u(c)=ln(c).
For any η>1, the utility function asymptotically approaches a constant as c approaches infinity. When η=2, we get the utility function u(c)=1−1/c.
η tells us how sharply marginal utility drops off with increasing consumption: if a person already has k times as much money as the baseline, then giving them an extra dollar is worth (1/k)^η times as much. Empirical studies have found that η for most people is between 1 and 2. So if the average GiveDirect... (read more)
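For readers who prefer code to equations, here is a small sketch of the isoelastic family and the (1/k)^η marginal-utility scaling described above (the η values are the special cases listed earlier):

```python
import math

def isoelastic_utility(c: float, eta: float) -> float:
    """Isoelastic (power) utility; reduces to ln(c) when eta == 1."""
    if eta == 1:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

def marginal_utility_ratio(k: float, eta: float) -> float:
    """Value of an extra dollar for someone with k times baseline income,
    relative to someone at baseline: (1/k)**eta."""
    return (1 / k) ** eta

for eta in [0, 0.5, 1, 2]:
    print(eta, round(isoelastic_utility(4.0, eta), 3))
# eta=0: 3.0 (linear), eta=0.5: 2.0 (= 2(sqrt(4)-1)), eta=1: 1.386 (= ln 4), eta=2: 0.75 (= 1 - 1/4)

print(marginal_utility_ratio(10, 1))  # 0.1  -- log utility
print(marginal_utility_ratio(10, 2))  # 0.01 -- marginal utility drops off more sharply
```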
The ratio of (jargon+equations):complexity in this shortform seems very high. Wouldn't it be substantially easier to write and read to just use terms and examples like "a project might have a stair-step or high-threshold function: unless the project gets enough money, it provides no return on investment"?
Or am I missing something in all the equations (which I must admit I don't understand)?
8
Eevee🔹
I'm basically saying that the logarithmic utility function, which is where we get the idea that doubling one's income from any starting point raises their happiness by the same amount, is a special case of a broader class of utility functions, in which marginal utility can decline faster or slower than in the logarithmic utility function.
4
Larks
All of the maths here assumes smooth utility returns to money; there are no step functions or threshold effects. Rather, it discusses different possible curvatures.
1
Marcel D
I wasn't trying to imply that was the only possibility; I was just highlighting step/threshold functions as an example of how the utility of money is not always logarithmic. In short, I just think that if the goal of the post is to dispute that simplification, it doesn't need to be so jargon/equation heavy, and if one of the goals of the post is to also discuss different possible curvatures, it would probably help to draw a rough diagram that can be more easily understood.
7
Charles He
My fan fiction about what is going on in this thread:
A good guess is that "log utility" is being used by EAs for historical reasons (e.g. GiveWell's work) and is influenced by economics, where log is used a lot because it is extremely convenient. Economists don't literally believe people have log utility in income, it just makes equations work to show certain ideas.
It's possible that log utility actually is a really good approximation of welfare and income.
But sometimes ideas or notions get codified/canonized inappropriately and accidentally, and math can cause this.
With the context above, my read is that evelynciara is trying to show that income might be even more important to poor people than believed.
She's doing this in a sophisticated and agreeable way, by slightly extending the math.
So her equations aren't a distraction or unnecessarily mathematical; it's exactly the opposite: she's protecting against math's undue influence.
1
Marcel D
I was hoping for a more dramatic and artistic interpretation of this thread, but I’ll accept what’s been given. In the end, I think there are three main audiences to this short form:
1. People like me who read the first sentence, think “I agree,” and then are baffled by the rest of the post.
2. People who read the first sentence, are confused (or think they disagree), then are baffled by the rest of the post.
3. People who read the first sentence, think “I agree,” are not baffled by the rest of the post and say “Yep, that’s a valid way of framing it.”
In contrast, I don't think there is a large group of people in category 4: people who read the first sentence, think "I disagree," and then understand the rest of the post. But do correct me if I'm wrong!
2
Charles He
Well, I don't agree with this perspective and its premise. I guess my view is that it doesn't seem compatible with what I perceive as the informal, personal character of shortform (like, "live and let live"), which is specifically designed to have different norms than posts.
I won't continue this thread because it feels like I'm supplanting or speaking for the OP.
UK prime minister Rishi Sunak got some blowback for meeting with Elon Musk to talk about existential AI safety on Sky News, and that clip made it into this BritMonkey video criticizing the state of British politics. Starting at 1:10:57:
...the Prime Minister of the United Kingdom interviewing the richest man in the world, talking about AI in the context of the James Cameron Terminator films. I can barely believe I'm saying all of this.
Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation.
Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance.
Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less ... (read more)
YIMBY groups in the United States (like YIMBY Action) systematically advocate for housing developments as well as rezonings and other policies to create more housing in cities. YIMBYism is an explicit counter-strategy to the NIMBY groups that oppose housing development; however, NIMBYism affects energy developments as well - everything from solar farms to nuclear power plants to power lines - and is thus an obstacle to the clean energy transition.
There should be groups that systematically advocate for energy projects (which are mostly in rural areas), borrowing the tactics of the YIMBY movement. Currently, when developers propose an energy project, they do an advertising campaign to persuade local residents of the benefits of the development, but there is often opposition as well.
I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.
Yeah, I think that might be easier too. But YIMBY groups focus on housing in cities whereas most utility-scale energy developments are probably in suburbs or rural areas.
3
Daniel_Eth
Hmm, culturally YIMBYism seems much harder to do in suburbs/rural areas. I wouldn't be too surprised if the easiest ToC here is to pass YIMBY-energy policies on the state level, with most of the support coming from urbanites.
But sure, still probably worth trying.
2
Eevee🔹
Yeah, good point. Advocating for individual projects or rezonings is so time-consuming, even in the urban housing context.
I think an EA career fair would be a good idea. It could have EA orgs as well as non-EA orgs that are relevant to EAs (for gaining career capital or earning to give)
One thing the EA community should try doing is multinational op-ed writing contests. The focus would be op-eds advocating for actions or policies that are important, neglected, and tractable (although the op-eds themselves don't have to mention EA); and by design, op-eds could be submitted from anywhere in the world. To make judging easier, op-eds could be required to be in a single language, but op-ed contests in multiple languages could be run in parallel (such as English, Spanish, French, and Arabic, each of which is an official language in at least 20 countries).
This would have two benefits for the EA community:
It would be a cheap way to spread EA-aligned ideas in multiple countries. Also, the people writing the op-eds would know more about the political climates of the countries for which they are publishing them than the organizers of the contest would, and we can encourage them to tailor their messaging accordingly.
It would also be a way to measure countries' receptiveness to EA ideas. For example, if there were multiple submissions about immigration policy, we could use them to compare the receptiveness of different countries to immigration reforms that would increase global well-being.
I think this is a great idea. A related idea I had is a competition for "intro to EA" pitches because I don't currently feel like I can send my friends a link to a pitch that I'm satisfied with.
A simple version could literally just be an EA forum post where everyone comments an "intro to EA" pitch under a certain word limit, and other people upvote / downvote.
A fancier version could have a cash prize, narrowing down entries through EA forum voting, and then testing the top 5 through online surveys.
I think in a more general sense, we should create markets to incentivise and select persuasive writing on EA issues aimed at the public.
2
muskaan
That's a great idea! I've been trying to find a good intro to EA talk for a while, and I recently came across the EA for Christians "intro to EA" YouTube video. Though it leans a bit towards the religious angle, it seemed like a pretty good intro for a novice. Would love to hear your thoughts about that. Here's the link: https://youtu.be/Unt9iHFH5-E
Discuss actions society can take to minimize its existential risk (Chapter 7)
What this leaves out:
Chapter 2 - mostly a discussion of the moral arguments for x-risk's importance. Can assume that the audience will already care about x-risk at a less sophisticated level, and focus on making the case that x-risk is high and we sort of know what to do about it.
The discussion of joint probabilities of x-risks in Chapter 6 - too technical for a general audience
Another way to do it would be to do an episode on each type of risk and what can be done about it, for ... (read more)
If S, M, and L denote a small, medium, and large catastrophe respectively, and X is human extinction, then the probability of human extinction is
Pr(X) = Pr(S) Pr(M|S) Pr(L|S,M) Pr(X|S,M,L).
So halving the probability of all small disasters, the probability of any small disaster becoming a medium-sized disaster, etc. would halve the probability of human extinction.
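A quick sketch of that multiplicative structure; the probabilities below are invented purely for illustration:

```python
def extinction_probability(p_small, p_small_to_medium, p_medium_to_large, p_large_to_extinction):
    """Pr(X) as a product of conditional 'escalation' probabilities."""
    return p_small * p_small_to_medium * p_medium_to_large * p_large_to_extinction

baseline = extinction_probability(0.5, 0.2, 0.1, 0.05)
halved_small = extinction_probability(0.25, 0.2, 0.1, 0.05)

print(baseline)      # ~0.0005
print(halved_small)  # ~0.00025 -- halving any one factor halves Pr(X)
```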
When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.
A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.
For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.
On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become com
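One way to make expected stickiness concrete is simple compounding; a rough sketch (the growth rates are invented for illustration):

```python
def projected_scale(current_scale: float, annual_growth_rate: float, years: int) -> float:
    """Project a problem's scale forward assuming a constant annual growth rate
    ('stickiness'); negative rates mean the problem shrinks on its own."""
    return current_scale * (1 + annual_growth_rate) ** years

# Illustrative only: a shrinking problem vs. a growing one over 50 years
print(projected_scale(100, -0.02, 50))  # ~36.4  (negative stickiness)
print(projected_scale(100, +0.02, 50))  # ~269.2 (positive stickiness)
```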
On the difference between x-risks and x-risk factors
I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:
We can treat them the same in terms of probability theory. For example, if X is an "x-risk" and Y is a "risk factor" for X, then Pr(X|Y) > Pr(X). But we can also say that Pr(Y|X) > Pr(Y), because both statements are equivalent to Pr(X,Y) > Pr(X)Pr(Y) (a short derivation is given after these points). We can similarly speak of the total probability of an x-risk factor via the law of total probability (e.g. Pr(Y) = Pr(Y|X1)Pr(X1) + Pr(Y|X2)Pr(X2) + …), just as we can with an x-risk.
Concretely, something can be both an x-risk and a risk factor. Climate change is often cited as an example: it could cause an existential catastrophe directly by making all of Earth unable to support complex societies, or indirectly by increasing humanity's vulnerability to other risks. Pandemics might also be an example, as a pandemic could either directly cause the collapse of civilization or expose humanity to other risks.
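For completeness, a short derivation of the equivalence in the first point (assuming Pr(X) > 0 and Pr(Y) > 0):

```latex
\Pr(X \mid Y) > \Pr(X)
  \iff \frac{\Pr(X, Y)}{\Pr(Y)} > \Pr(X)
  \iff \Pr(X, Y) > \Pr(X)\,\Pr(Y)
  \iff \frac{\Pr(X, Y)}{\Pr(X)} > \Pr(Y)
  \iff \Pr(Y \mid X) > \Pr(Y)
```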
I think the difference is that x-risks are events that directly cause an existential catastrophe, such as exti... (read more)
I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.
It seems natural to talk about a force causing a mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can't capture causality, basically because the equality sign in F = ma lacks direction. Similarly, it's hard to capture causality in probability spaces.
Following Pearl, I have come to think that causality arises from the manipulator/manipulated distinction.
So I think it's fair to speak about factors only with relation to some framing:
If you are focusing on bio policy, you are likely to take great-power conflict as an external factor.
Similarly, if you are focusing on preventing nuclear war between India and Pakistan, you are likely to take bioterrorism as an external factor.
Usually, there are multiple external factors in your x-risk modeling. The most salient and undesirable ones are important enough to care about (and give a name to).
Calling bio-risks an x-factor makes sense formally; but doesn't make sense pragmatically because bio-risks are very salient (in our community) on their own because they are a canonica... (read more)
Status: Fresh argument I just came up with. I welcome any feedback!
Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.
Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (als... (read more)
It might be worthwhile reading about historical attempts to semi-privatize social security, which would have essentially created an opt-in version of your proposal, since individual people could then choose whether to have their share of the pot in bonds or stocks.
Decibels are a relative quantity: they express the intensity of a signal relative to another. A 10x difference is 10 dB, a 100x difference is 20 dB, and so on. The "just noticeable difference" in amplitude of sound is ~1 dB, or a ~25% increase. But decibels can ... (read more)
Episodes 5 and 6 of Netflix's 3 Body Problem seem to have longtermist and utilitarian themes (content warning: spoiler alert)
In episode 5 ("Judgment Day"), Thomas Wade leads a secret mission to retrieve a hard drive on a ship in order to learn more about the San-Ti who are going to arrive on Earth in 400 years. The plan involves using an array of nanofibers to tear the ship to shreds as it passes through the Panama Canal, killing everyone on board. Dr. Auggie Salazar (who invented the nanofibers) is uncomfortable with this plan, but Wade justifies it in th
I'm excited about Open Phil's new cause area, global aid advocacy. Development aid from rich countries could be used to serve several goals that many EAs care about:
Economic development and poverty reduction
Public health and biosecurity, including drug liberalization
Promoting liberal democracy
Climate change mitigation and adaptation
Also, development aid can fund a combination of randomista-style and systemic interventions (such as building infrastructure to promote growth).
The United States has two agencies that provide development aid: USAID, which provid... (read more)
When estimating the amount of good that can be done by working on a given cause, a good first approximation might be the asymptotic behavior of the amount of good done at each point in time (the trajectory change).
Other important factors are the magnitude of the trajectory change (how much good is done at each point in time) and its duration (how long the trajectory change lasts).
For example, changing the rate of economic growth (population growth * GDP/capita growth) has an O(t^2) trajectory change i... (read more)
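A rough sketch of how the heuristic could be applied, comparing the cumulative good of trajectory changes with different asymptotic orders (the value functions and the time horizon are purely illustrative assumptions):

```python
# Compare the cumulative good of trajectory changes with different asymptotic
# orders, summed over an (arbitrary) 100-year horizon.
def cumulative_good(value_at_t, horizon_years: int) -> float:
    return sum(value_at_t(t) for t in range(1, horizon_years + 1))

constant  = lambda t: 1.0            # O(1) good at each point in time
linear    = lambda t: 0.1 * t        # O(t)
quadratic = lambda t: 0.01 * t ** 2  # O(t^2), e.g. the economic-growth example

for name, f in [("O(1)", constant), ("O(t)", linear), ("O(t^2)", quadratic)]:
    print(name, round(cumulative_good(f, 100), 1))
# O(1) 100.0, O(t) 505.0, O(t^2) 3383.5 -- higher-order changes dominate over long horizons
```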
I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.
We need to explicitly distinguish between "AI existential safety" and "AI safety" writ large. Saying "AI safety" without qualification is confusing for both people who focus on near-term AI safety problems and those who focus on AI existential safety problems; it creates a bait-and-switch for both groups.
Although existential risk can refer to any event that permanently and drastically reduces humanity's potential for future development (paraphrasing Bostrom 2013), ARCHES only deals with the risk of human extinction because it's easier to reason about and because it's not clear what other non-extinction outcomes are existential events.
ARCHES frames AI alignment in terms of delegation from m ≥ 1 human stakeholders (such as individuals or organizations) to n ≥ 1 AI systems. Most alignment literature to date focuses on the single-single setting (one principal, one agent), b
Making specialty meats like foie gras using cellular agriculture could be especially promising. Foie gras traditionally involves fattening ducks or geese by force-feeding them, which is especially ethically problematic (although alternative production methods exist). It could probably be produced by growing liver and fat cells in a medium without much of a scaffold, which would make it easier to develop.
This sounds plausible to me, and there's already at least one company working on this, but I'm actually pretty confused about what goes into foie gras. Like do we really think just having liver and fat cells will be enough, or are there weird consistency/texture criteria that foie gras eaters really care about?
Would be excited to hear more people chime in with some expertise, eg if they have experience working in cellular agriculture or are French.
I've been tying myself up in knots about what causes to prioritize. I originally came back to effective altruism because I realized I had gotten interested in 23 different causes and needed to prioritize them. But looking at the 80K problem profile page (I am fairly aligned with their worldview), I see at least 17 relatively unexplored causes that they say could be as pressing as the top causes they've created profiles for. I've taken a stab at one of them: making surveillance compatible with privacy, civil libert
I've heard from women I know in this community that they are often shunted into low-level or community-building roles rather than object-level leadership roles. Does anyone else have knowledge about and/or experience with this?
Not sure who to alert to this, but: when filling out the EA Organization Survey, I noticed that one of the fields asks for a date in DD/MM/YYYY format. As an American this tripped me up and I accidentally tried to enter a date in MM/DD/YYYY format because I am more used to seeing it.
I suggest using the ISO 8601 (YYYY-MM-DD) format on forms that are used internationally to prevent confusion, or spelling out the month (e.g. "1 December 2023" or "December 1, 2023").
I'm concerned about the new terms of service for Giving What We Can, which will go into effect after August 31, 2024:
This is a significant departure from the Effective Ventures' TOS (GWWC is spinning out of EV), which has users grant EV an unlimited but non-exclusive license to use feedback or suggestions they send, while retaining the right to do anything with it themselves. I've previously talked to GWWC staff about my ideas to help people give effectively, like a donation decision worksheet that I made. If this provision goes into effect, it would deter me from sharing my suggestions with GWWC in the future because I would risk losing the right to disseminate or continue developing those ideas or materials myself.
Thank you for raising this!
After your email last week, we agreed to edit that section and copy EV's terms on Feedback. I've just changed the text on the website.
We only removed the part about "all Feedback we request from you will be collected on an anonymous basis", as we might want to collect non-anonymous feedback in the future.
If anyone else has any feedback, make sure to also send us an email (like Eevee did) as we might miss things on the EA Forum.
A hack to multiply your donations by up to 102%
Disclaimer: I'm a former PayPal employee. The following statements are my opinion alone and do not reflect PayPal's views. Also, this information is accurate as of 2024-10-14 and may become outdated in the future.
More donors should consider using PayPal Giving Fund to donate to charities. To do so, go to this page, search for the charity you want, and donate through the charity's page with your PayPal account. (For example, this is GiveDirectly's page.)
PayPal covers all processing fees on charitable donations made through their giving website, so you don't have to worry about the charity losing money to credit card fees. If you use a credit card that gives you 1.5 or 2% cash back (or 1.5-2x points) on all purchases, your net donation will be multiplied by ~102%. I don't know of any credit cards that offer elevated rewards for charitable donations as a category (like many do for restaurants, groceries, etc.), so you most likely can't do better than a 2% card for donations (unless you donate stocks).
For political donations, platforms like ActBlue and Anedot charge the same processing fees to organizations regardless of what payment metho... (read more)
Asking for a friend - there's no dress code for EAG, right?
Are there currently any safety-conscious people on the OpenAI Board?
The current board is:
The only people here who even have rumours of being safety-conscious (AFAIK) are Adam D'Angelo, who allegedly played a role in kickstarting last year's board incident, and Sam, who has contradicted a great deal of his rhetoric with his actions. God knows why Larry Summers is there (to give it an air of professionalism?); the rest seem to me like your typical professional board members (i.e. unlikely to understand OpenAI's unique charter & structure). In my opinion, any hope of restraint from this board or OpenAI's current leadership is misplaced.
Okay, so one thing I don't get about "common sense ethics" discourse in EA is, which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense.
For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - is immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States, it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently, you're supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I've grown up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.) You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sou... (read more)
Crazy idea: A vegan hot dog eating contest
Content warning: Israel/Palestine
Has there been research on what interventions are effective at facilitating dialogue between social groups in conflict?
I remember an article about how during the last Israel-Gaza flare-up, Israelis and Palestinians were using the audio chatroom app Clubhouse to share their experiences and perspectives. This was portrayed as a phenomenon that increased dialogue and empathy between the two groups. But how effective was it? Could it generalize to other ethnic/religious conflicts around the world?
Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:
Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then, the grantmaker can run a statistical analysis to find orgs that are mentioned a lot and haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside their network.
testing - I renamed my shortform page
Maybe EA philanthropists should invest more conservatively, actually
The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:
- Funding stability in a cause area makes it easier for employees to advance their careers because they can count on stable employment. It also makes it easier for nonp
... (read more)
April Fools' Day is in 11 days! Get yer jokes ready 🎶
I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.
Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the spaces of interventions that are... (read more)
"Quality-adjusted civilization years"
We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.
For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 0.5 × 2 = 1 QACY.
Another example: suppose climate change will reduce the quality of civilization by 80% for 200 years, and then things will return to normal. Then the total QACY burden of climate change over the long term will be 0.8 × 200 = 160 QACYs.
In the limit, an existential catastrophe would have a near-infinite QACY burden.
I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)
Utility of money is not always logarithmic
EA discussions often assume that the utility of money is logarithmic, but while this is a convenient simplification, it's not always the case. Logarithmic utility is a special case of isoelastic utility, a.k.a. power utility, where the elasticity of marginal utility is η=1. But η can be higher or lower. The most general form of isoelastic utility is the following:
u(c) = (c^(1−η) − 1) / (1 − η) for η ≥ 0, η ≠ 1, and u(c) = ln(c) for η = 1.
Some special cases:
η tells us how sharply marginal utility drops off with increasing consumption: if a person already has k times as much money as the baseline, then giving them an extra dollar is worth (1/k)^η times as much. Empirical studies have found that η for most people is between 1 and 2. So if the average GiveDirect... (read more)
UK prime minister Rishi Sunak got some blowback for meeting with Elon Musk to talk about existential AI safety on Sky News, and that clip made it into this BritMonkey video criticizing the state of British politics. Starting at 1:10:57:
Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation.
Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance.
Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less ... (read more)
Nonprofit idea: YIMBY for energy
YIMBY groups in the United States (like YIMBY Action) systematically advocate for housing developments as well as rezonings and other policies to create more housing in cities. YIMBYism is an explicit counter-strategy to the NIMBY groups that oppose housing development; however, NIMBYism affects energy developments as well - everything from solar farms to nuclear power plants to power lines - and is thus an obstacle to the clean energy transition.
There should be groups that systematically advocate for energy projects (which are mostly in rural areas), borrowing the tactics of the YIMBY movement. Currently, when developers propose an energy project, they do an advertising campaign to persuade local residents of the benefits of the development, but there is often opposition as well.
I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.
I think an EA career fair would be a good idea. It could have EA orgs as well as non-EA orgs that are relevant to EAs (for gaining career capital or earning to give)
One thing the EA community should try doing is multinational op-ed writing contests. The focus would be op-eds advocating for actions or policies that are important, neglected, and tractable (although the op-eds themselves don't have to mention EA); and by design, op-eds could be submitted from anywhere in the world. To make judging easier, op-eds could be required to be in a single language, but op-ed contests in multiple languages could be run in parallel (such as English, Spanish, French, and Arabic, each of which is an official language in at least 20 countries).
This would have two benefits for the EA community:
Possible outline for a 2-3 part documentary adaptation of The Precipice:
Part 1: Introduction & Natural Risks
Part 2: Human-Made Risks
Part 3: What We Can Do
What this leaves out:
Another way to do it would be to do an episode on each type of risk and what can be done about it, for ... (read more)
An idea I liked from Owen Cotton-Barratt's new interview on the 80K podcast: Defense in depth
If S, M, and L denote a small, medium, and large catastrophe respectively, and X is human extinction, then the probability of human extinction is
Pr(X) = Pr(S) Pr(M|S) Pr(L|S,M) Pr(X|S,M,L).
So halving the probability of all small disasters, the probability of any small disaster becoming a medium-sized disaster, etc. would halve the probability of human extinction.
Tentative thoughts on "problem stickiness"
When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.
A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.
For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.
On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become com
... (read more)
On the difference between x-risks and x-risk factors
I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:
I think the difference is that x-risks are events that directly cause an existential catastrophe, such as exti... (read more)
I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.
It seems natural to talk about a force causing a mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can't capture causality, basically because the equality sign in F = ma lacks direction. Similarly, it's hard to capture causality in probability spaces.
Following Pearl, I have come to think that causality arises from the manipulator/manipulated distinction.
So I think it's fair to speak about factors only with relation to some framing:
Usually, there are multiple external factors in your x-risk modeling. The most salient and undesirable ones are important enough to care about (and give a name to).
Calling bio-risks an x-factor makes sense formally; but doesn't make sense pragmatically because bio-risks are very salient (in our community) on their own because they are a canonica... (read more)
Status: Fresh argument I just came up with. I welcome any feedback!
Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.
Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (als... (read more)
I lowkey miss the name "shortform" 🙁
It seems like decibels (dB) are a natural unit for perceived pleasure and pain, since they account for the fact that humans and other beings mostly perceive sensations in proportion to the logarithm of their actual strength. (This is discussed at length in "Logarithmic Scales of Pleasure and Pain".)
Decibels are a relative quantity: they express the intensity of a signal relative to another. A 10x difference is 10 dB, a 100x difference is 20 dB, and so on. The "just noticeable difference" in amplitude of sound is ~1 dB, or a ~25% increase. But decibels can ... (read more)
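A tiny sketch of the conversion being described, using the standard 10·log10 rule for intensity ratios:

```python
import math

def to_decibels(intensity_ratio: float) -> float:
    """Express an intensity ratio in decibels: dB = 10 * log10(ratio)."""
    return 10 * math.log10(intensity_ratio)

print(to_decibels(10))    # 10.0 dB  (a 10x difference)
print(to_decibels(100))   # 20.0 dB  (a 100x difference)
print(to_decibels(1.25))  # ~0.97 dB (roughly the 'just noticeable difference')
```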
I think partnering with local science museums to run events on EA topics could be a great way to get EA-related ideas out to the public.
Episodes 5 and 6 of Netflix's 3 Body Problem seem to have longtermist and utilitarian themes (content warning: spoiler alert)
- In episode 5 ("Judgment Day"), Thomas Wade leads a secret mission to retrieve a hard drive on a ship in order to learn more about the San-Ti who are going to arrive on Earth in 400 years. The plan involves using an array of nanofibers to tear the ship to shreds as it passes through the Panama Canal, killing everyone on board. Dr. Auggie Salazar (who invented the nanofibers) is uncomfortable with this plan, but Wade justifies it in th
... (read more)
I'm excited about Open Phil's new cause area, global aid advocacy. Development aid from rich countries could be used to serve several goals that many EAs care about:
Also, development aid can fund a combination of randomista-style and systemic interventions (such as building infrastructure to promote growth).
The United States has two agencies that provide development aid: USAID, which provid... (read more)
Big O as a cause prioritization heuristic
When estimating the amount of good that can be done by working on a given cause, a good first approximation might be the asymptotic behavior of the amount of good done at each point in time (the trajectory change).
Other important factors are the magnitude of the trajectory change (how much good is done at each point in time) and its duration (how long the trajectory change lasts).
For example, changing the rate of economic growth (population growth * GDP/capita growth) has an O(t^2) trajectory change i... (read more)
I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.
Making specialty meats like foie gras using cellular agriculture could be especially promising. Foie gras traditionally involves fattening ducks or geese by force-feeding them, which is especially ethically problematic (although alternative production methods exist). It could probably be produced by growing liver and fat cells in a medium without much of a scaffold, which would make it easier to develop.
Some rough thoughts on cause prioritization
- I've been tying myself up in knots about what causes to prioritize. I originally came back to effective altruism because I realized I had gotten interested in 23 different causes and needed to prioritize them. But looking at the 80K problem profile page (I am fairly aligned with their worldview), I see at least 17 relatively unexplored causes that they say could be as pressing as the top causes they've created profiles for. I've taken a stab at one of them: making surveillance compatible with privacy, civil libert
... (read more)