All posts


Today, 12 November 2024

Quick takes

Has anybody changed their behaviour after the animal welfare vs global health debate week? A month or so on, I'm curious if anybody is planning to donate differently, considering a career pivot, etc. If anybody doesn't want to share publicly but would share privately, please feel free to message me. Linking @Angelina Li's post asking how people would change their behaviour, and tagging @Toby Tremlett🔹 who might have thought about tracking this.
Someone really needs to make Asterisk meetup groups a thing.
People in EA end up optimizing for EA credentials so they can virtue-signal to grantmakers, but grantmakers would probably prefer people to scope out non-EA opportunities, because that allows us to introduce people outside the movement to the concerns we have.

Saturday, 9 November 2024

Quick takes

During the animal welfare vs global health debate week, I was very reluctant to make a post or argument in favor of global health, the cause I work in and that animates me. Here are some reflections on why, that may or may not apply to other people:

1. Moral weights are tiresome to debate. If you (like me) do not have a good grasp of philosophy, it's an uphill struggle to grasp what RP's moral weights project means exactly, and where I would or would not buy into its assumptions.
2. I don't choose my donations/actions based on impartial cause prioritization. I think impartially within GHD (e.g. I don't prioritize interventions in India just because I'm from there, I treat health vs income moral weights much more analytically than species moral weights) but not for cross-cause comparison. I am okay with this. But it doesn't make for a persuasive case to other people.
3. It doesn't feel good to post something that you know will provoke a large volume of (friendly!) disagreement. I think of myself as a pretty disagreeable person, but I am still very averse to posting things that go against what almost everyone around me is saying, at least when I don't feel 100% confident in my thesis. I have found previous arguments about global health vs animal welfare to be especially exhausting and they did not lead to any convergence, so I don't see the upside that justifies the downside.
4. I don't fundamentally disagree with the narrow thesis that marginal money can do more good in animal welfare. I just feel disillusioned with the larger implications that global health is overfunded and not really worth the money we spend on it.

I'm deliberately focusing on emotional/psychological inhibitions as opposed to analytical doubts I have about animal welfare. I do have some analytical doubts, but I think of them as secondary to the personal relationship I have with GHD.
A quick OpenAI o1-preview BOTEC for additional emissions from a sort of Leopold scenario ~2030, assuming energy is mostly provided by natural gas, since I was kinda curious. Not much time spent on this, and I took the results at face value. I (of course?) buy that emissions don't matter in the short term, in a world where R&D is increasingly automated and scaled.

Phib: Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas. Also, that AI inference is used at an increasing rate, say 100 million H100s equivalent, increasing from today's level to 100M over the next 6 years, and it's more or less the same as today's energy breakdown, although with more natural gas over time. What do the emissions from such a build-out of natural gas look like? [Reasoning available here: https://chatgpt.com/share/672ed183-7c3c-800d-ab0e-d08020d072c7]

Answer: The emissions from this build-out of natural gas over the next six years would be significant. Specifically:
* Total additional emissions over six years: approximately 1.136 billion metric tons of CO₂.
* Average annual emissions increase: about 189 million metric tons of CO₂ per year.
* Percentage of current U.S. emissions: this represents an approximate 2.9% increase over current annual U.S. greenhouse gas emissions.

These emissions result from the increased natural gas usage required to meet the additional electricity demand, both from the general 20% increase in electricity consumption and the substantial growth in AI inference workloads powered by the equivalent of 100 million NVIDIA H100 GPUs.

Phib: Imagine there is a similar buildout in China, if not larger. How do the emissions from all of the above (within some confidence interval) place us environmentally within 2 years after 2030 of such a buildout and increase in emissions? Within 10 years? Considering a more or less constant rate of emissions thereafter for each.

Conclusion: The combi
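Since the take above is just an order-of-magnitude estimate, here is a minimal sketch of the same arithmetic in Python. All inputs (US generation, the gas emission factor, current US emissions, and the linear ramp) are my own assumed round numbers, not figures from the post or from o1, and the sketch only models the 20% demand bump rather than the H100 fleet separately; with these assumptions it lands in the same ballpark as the numbers quoted above.

```python
# Rough reproduction of the BOTEC above. All numbers are assumptions for
# illustration, not figures from the original post or from OpenAI-o1.

US_GENERATION_TWH = 4200      # assumed current US annual electricity generation, TWh
ADDED_SHARE = 0.20            # extra demand (e.g. for AI) phased in over the period
GAS_EMISSION_FACTOR = 0.40    # assumed tonnes CO2 per MWh for combined-cycle gas
US_ANNUAL_GHG_MT = 6300       # assumed current US GHG emissions, Mt CO2e per year
YEARS = 6

added_final_twh = US_GENERATION_TWH * ADDED_SHARE   # extra demand in the final year

total_mt = 0.0
for year in range(1, YEARS + 1):
    # Linear ramp: a fraction year/YEARS of the final extra demand is online.
    added_twh = added_final_twh * year / YEARS
    emissions_mt = added_twh * 1e6 * GAS_EMISSION_FACTOR / 1e6  # TWh -> MWh -> t -> Mt
    total_mt += emissions_mt

avg_mt_per_year = total_mt / YEARS
print(f"Total extra emissions over {YEARS} years: {total_mt:,.0f} Mt CO2")
print(f"Average per year: {avg_mt_per_year:,.0f} Mt CO2 "
      f"(~{100 * avg_mt_per_year / US_ANNUAL_GHG_MT:.1f}% of current US emissions)")
```

With these assumptions the sketch gives roughly 1.2 Gt CO₂ over six years and ~3% of current US emissions per year, close to the 1.136 Gt and 2.9% quoted above.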

Friday, 8 November 2024

Quick takes

A thing that seems valuable but is not talked about much is organizations that bring talent into the EA/impact-focused charity world, vs. re-using people already in the movement, vs. turning people off the movement. The difference in these effects seems both significant and pretty consistent within an organization. I think Founders Pledge is a good example of an organization that net brings talent into the effective charities world. I often see their hires, post-leaving FP, go on to pretty impactful other roles that it's not clear they would have done absent their experience working for FP. I wish more organizations did this vs. re-using/turning people off.
Why should you donate to the Forum's Donation Election Fund?
* It could change the way you donate, for the better: We all have limited information to decide how we should donate. Giving via the Donation Election Fund lets you benefit from the collectively held information of the Forum's users, as well as up-to-date facts from organisations' marginal funding posts. If enough users take part in the voting, you won't have to read all of the marginal funding posts to benefit from the information they contain.
* It could boost engagement in the election, which leads to:
  * More funding for charities: Last year, the donation election and surrounding events moved a lot of money (our Squiggle model gives a distribution for this). The headline "$30k raised through the election" does not represent all of the money raised because of the Forum's events. But giving money to the election fund will likely increase the attention on the Forum and the amount of effort that organisations and individuals put into posts, in a way which will increase the amount raised overall.
  * Influencing others' donations for the better: In the EA survey (post coming soon) we saw that the donation election had influenced people's donation choices. We also saw in the comments on last year's votes that specific posts had influenced donations, especially shifting people towards animal welfare organisations and increasing donations to Rethink Priorities.

Also, maybe you just want to get these sweet, sweet rewards.
I'd love to dig a bit more into some real data and implications for this (hence, just a quick take for now), but I suspect that (EA) donors may not take the current funding allocation within and across cause areas into account when making donation decisions, and that taking it sufficiently into account may mean that small donors shouldn't diversify. For example, the recent Animal Welfare vs. Global Health Debate Week posed the statement "It would be better to spend an extra $100m on animal welfare than on global health." Now, one way to think through this question is "What would the ideal funding split between animal welfare and global health look like?" and then test whether an additional $100m on animal welfare would bring us closer to that ideal split (in this case, spending the $100m on animal welfare appears to increase the share of AW from 0.41% to 0.55%, meaning that if your ideal funding split would allocate more than 0.55% to AW, you should be in favor of directing the $100m there). I am not sure if this perspective is the right or even the best one to take, but I think it is often missing. I think it's important to think through, because it considers "how much money should be spent on X vs. Y" as opposed to "how much money I should spend on X vs. Y" (or maybe even "how much money should EA spend on X vs. Y"?), which I think is closer to what we should care about. I think this is interesting, because:
* If you primarily, but not strictly and solely, favor a comparably well-funded area (say, GHD or climate change), you may want to donate all your money towards a cause area that you don't even value particularly highly.
* Ironically, this type of thinking only applies if you value diversification in your donations in the first place.
So, if you are wondering how much % of your money should go to X vs. Y, I suspect that looking at the current global funding allocation will likely (for most people, necessarily?) lead to pouring all your money into
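A minimal sketch of the share calculation described in the take above. The totals below are assumptions chosen only to roughly reproduce the quoted 0.41% to 0.55% shift; they are not official funding figures.

```python
# Illustrative sketch of the funding-share calculation in the take above.
# The totals are assumed round numbers, picked to roughly reproduce the
# quoted 0.41% -> 0.55% shift; they are not official figures.

total_funding_m = 71_000                       # assumed total funding across both areas, $m
animal_welfare_m = 0.0041 * total_funding_m    # ~0.41% currently goes to animal welfare

extra_m = 100                                  # the hypothetical extra $100m from debate week

share_before = animal_welfare_m / total_funding_m
share_after = (animal_welfare_m + extra_m) / (total_funding_m + extra_m)

print(f"AW share before: {share_before:.2%}")  # ~0.41%
print(f"AW share after:  {share_after:.2%}")   # ~0.55%
# If your ideal split gives AW more than ~0.55%, the marginal $100m should go to AW.
```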
What is malevolence? On the nature, measurement, and distribution of dark traits was posted two weeks ago (and I recommend it). There was a questionnaire discussed in that post which tries to measure the levels of 'dark traits' in the respondent. I'm curious about the results[1] of EAs[2] on that questionnaire, if anyone wants to volunteer theirs. There are short and long versions (16 and 70 questions). 1. ^ (or responses to the questions themselves) 2. ^ I also posted the same quick take to LessWrong, asking about rationalists
I'd be grateful if some people could fill in this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you're interested in why you might want to do it: you'll be helping me to estimate plausible weights for different categories of pain used by the Welfare Footprint Project. This will help me summarise their conclusions into easily digestible statements like "switching from battery cages to cage-free reduces the suffering of hens by at least 60%" and with some cost-effectiveness estimates. Thanks :)

Thursday, 7 November 2024

Quick takes

As an earn-to-giver, I found contributing to funding diversification challenging. (Jeff Kaufman posted a different version of the same argument earlier than me.) Some have argued that earning to give can contribute to funding diversification: having a few dozen mid-sized donors, rather than one or two very large donors, would make the financial position of an organization more secure. It allows them to plan for the future and not worry about fundraising all the time. As an earn-to-giver, I can be one of those mid-sized donors. I have tried. However, it is challenging.

First of all, I don't have expertise, and I don't have much time to build that expertise. I spend most of my time on my day job, which has nothing to do with any cause I care about. Any research must be done in my free time. This is fine, but it has some cost: this is time I could have spent on career development, talking to others about effective giving, or living more frugally.

Motivation is not the issue, at least for me. I've found the research extremely rewarding and intellectually stimulating to do. Yet fun doesn't necessarily translate to effectiveness. I've seen peer earn-to-givers just defer to GiveWell or other charity evaluators without putting much thought into it. This is great, but isn't there more? Others said that they talked to an individual organization, thought "sounds reasonable", and transferred the money. I fell for that trap too!

There is a lot at stake. It's about hard-earned money that has the potential to help large numbers of people and animals in dire need. Unfortunately, I don't trust my own non-expert judgment to do this. So I find myself donating to funds, and then the funding is centralized again. If others do the same, charities will have to rely on one grantmaker again, rather than a diverse pool of donors.

Ideas: What would help to address this issue? Here are a few ideas, some of them already happening.
* Funding circles. Note that most funding circles I know r
I was thinking of ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
* A Trump voter or a liberal voter
* A woman who chose to have an abortion or an anti-abortion activist
* A transgender person or someone opposed to transgender rights
* A person from another race, religion, or a country your country might be at odds with

Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.

The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and conversations that start in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who hold these actual views, ensuring authenticity. Or maybe chatbots could even be based on concrete actual people who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.

Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful. I was told that creating chatbots is very easy. It's definitely easy to add them to Talkie; there are so many of them there. Still, to make this impactful and good, this needs a lot more than that. I don't intend to build this app. I just thought the idea is worth sh
Flaming hot take: I wonder if some EAs suffer from Scope Oversensitivity - essentially the inverse of the identifiable victim effect. Take the animal welfare vs global health debate: are we sometimes biased by the sheer magnitude of animal suffering numbers, rather than other relevant factors? Just as the identifiable victim effect leads people to overweight individual stories, maybe we're overweighting astronomical numbers. EAs pride themselves on scope sensitivity to combat emotional biases, but taken to an extreme, could this create its own bias? Are we sometimes too seduced by bigger numbers = bigger problem? The meta-principle might be that any framework, even one designed to correct cognitive biases, needs wisdom and balance to avoid becoming its own kind of distortion.
I think eventually, working on changing the EA introductory program is important. I think it is an extremely good thing to do well, and I think it could be improved. I'm running a 6 week version right now, and I'll see if I feel the same way at the end.

Wednesday, 6 November 2024

Quick takes

The value of re-directing non-EA funding to EA orgs might still be under-appreciated. While we (rightly) obsess over where EA funding should be going, shifting money from one EA cause to another "better" one might often only make an incremental difference, while moving money from a non-EA pool to fund cost-effective interventions might make an order-of-magnitude difference. There's nothing new to see here: high-impact foundations are being cultivated to shift donor funding to effective causes, the Center for Effective Aid Policy was set up (then shut down) to shift government money to more effective causes, and many great EAs work in public service jobs partly to redirect money. The Lead Exposure Action Fund spearheaded by OpenPhil is hopefully re-directing millions to a fantastic cause as we speak.

I would love to see an analysis (I might have missed it) which estimates the "cost-effectiveness" of redirecting a dollar into a 10x or 100x more cost-effective intervention. How much money/time would it be worth spending to redirect money this way? I'd also like to get my head around how much the working "cost-effectiveness" of an org might improve if its budget shifted from 10% non-EA funding to 90% non-EA funding.

There are obviously costs to roping in non-EA funding. From my own experience it often takes huge time and energy. One thing I've appreciated about my 2 attempts applying for EA-adjacent funding is just how straightforward it has been, probably an order of magnitude less work than other applications.

Here are a few practical ideas for how we could further redirect funds:
1. EA orgs could put more effort into helping each other access non-EA money. This is already happening through the AIM cluster, but I feel the scope could be widened to other orgs, and co-ordination could be improved a lot without too much effort. I'm sure pools of money are getting missed all the time. For example, I sure hope we're doing whatever we can through our networks to hel
Current takeaways from the 2024 US election <> forecasting community. First section in the Forecasting newsletter: US elections; posting here because it has some overlap with EA.

1. Polymarket beat legacy institutions at processing information, in real time and in general. It was just much faster at calling states, and more confident earlier on the correct outcome.
2. The OG prediction markets community, the community which has been betting on politics and increasing their bankroll since PredictIt, was on the wrong side of 50%: 1, 2, 3, 4, 5. What moved Polymarket to the right side of 50/50 was its democratic, open-to-all nature: the Frenchman who was convinced that mainstream polls were pretty tortured and bet ~$45M.
3. Polls seem like a garbage-in, garbage-out kind of situation these days. How do you get a representative sample? The answer is maybe that you don't.
4. Polymarket will live. They were useful to the Trump campaign, which has a much warmer perspective on crypto. The federal government isn't going to prosecute them, nor bettors. Regulatory agencies, like the CFTC and the SEC, which have taken such a prominent role in recent editions of this newsletter, don't really matter now, as they will be aligned with financial innovation rather than opposed to it.
5. NYT/Siena really fucked up with their last poll and the coverage of it. So did Ann Selzer. Some prediction market bettors might have thought that you could do the bounded-distrust thing, but in hindsight it turns out that you can't. Looking back, to the extent you trust these institutions, they can ratchet their deceptiveness (misleading headlines, incomplete stories, incomplete quotes out of context, not reporting on important stories, etc.) for clicks and hopium, to shape the information landscape for a managerial class that... will no longer be in power in America.
6. Elon Musk and Peter Thiel look like geniuses. In contrast Dustin Moskovitz couldn't get SB 1047 passed despite being the s
Celebrating your users: this just popped into my inbox, celebrating my double-digit meetings using the Calendly tool. It highlights a great practice of understanding your users' journey and celebrating the key moments that matter. Onboarding and offboarding are key moments, but so are points that can transition them to a power user, from forum stalker to contributor. This allows me to reflect on how good an experience I've had that I keep using this tool (make sure it is good), and as a next step it suggests tips on how I can use the tool more pervasively and get more embedded in the ecosystem. So think about how you can celebrate your users when community building.
I've been thinking that there is a "fallacious, yet reasonable as a default/fallback" way to choose moral circles based on the Anthropic principle, which is closely related to my article "The Putin Fallacy―Let's Try It Out". It's based on the idea that consciousness is "real" (part of the territory, not the map), in the same sense that quarks are real but cars are not. In this view, we say: P-zombies may be possible, but if consciousness is real (part of the territory), then by the Anthropic principle we are not P-zombies, since P-zombies by definition do not have real experiences. (To look at it another way, P-zombies are intelligences that do not concentrate qualia or valence, so in a solar system with P-zombies, something that experiences qualia is as likely to be found alongside one proton as any other, and there are about 10^20 times more protons in the sun than there are in the minds of everyone on Zombie Earth combined.) I also think that real qualia/valence is the fundamental object of moral value (also reasonable IMO, for why should an object with no qualia and no valence have intrinsic worth?).

By the Anthropic principle, it is reasonable to assume that whatever we happen to be is somewhat typical among beings that have qualia/valence, and thus, among beings that have moral worth. By this reasoning, it is unlikely that the sum total |W| of all qualia/valence in the world is dramatically larger than the sum total |H| of all qualia/valence among humans, because if |W| >> |H|, you and I are unlikely to find ourselves in set H. I caution people that while reasonable, this view is necessarily uncertain and thus fallacious and morally hazardous if it is treated as a certainty. Yet if we are to allocate our resources in the absence of any scientific clarity about which animals have qualia/valence, I think we should take this idea into consideration.

P.S. Given the election results, I hope more people are now doing the soul-searching we should've done in 2016. I pr

Tuesday, 5 November 2024

Quick takes

I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe. That version is the targeting of really young people (for this quicktake, I will say anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I think I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we would not have to do, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and there is lots of time for them to lose interest or get lost along the way. Interestingly, this stands in contrast to my personal experience—I found EA when I was in my early 20s and would have benefited significantly from hearing about it in my teenage years.
I'm pretty confident that Marketing is in the top 1-3 skill bases for aspiring Community / Movement Builders. When I say Marketing, I mean it in the broad sense it used to mean: in recent years "Marketing" = "Advertising", but I use the classic Four P's of Marketing to describe it. The best places to get such a skill base are FMCG / mass-marketing organisations such as the ones below; second best would be consulting firms (e.g. McKinsey & Company):
* Procter & Gamble (P&G)
* Unilever
* Coca-Cola
* Amazon

1. Product - What you're selling (goods or services)
   - Features and benefits
   - Quality, design, packaging
   - Brand name and reputation
   - Customer service and support
2. Price
   - Retail/wholesale pricing
   - Discounts and promotions
   - Payment terms
   - Pricing strategy (premium, economy, etc.)
   - Price comparison with competitors
3. Place (Distribution)
   - Sales channels
   - Physical/online locations
   - Market coverage
   - Inventory management
   - Transportation and logistics
   - Accessibility to customers
4. Promotion
   - Advertising
   - Public relations
   - Sales promotions
   - Direct marketing
   - Digital marketing
   - Personal selling
In the spirit of Funding Strategy Week, I'm resharing this post from @Austin last week.

Monday, 4 November 2024

Quick takes

I just learned that Lawrence Lessig, the lawyer who is/was representing Daniel Kokotajlo and other OpenAI employees, supported and encouraged electors to be faithless and vote against Trump in 2016. He wrote an opinion piece in the Washington Post (archived) and offered free legal support. The faithless elector story was covered by Politico, and was also supported by Mark Ruffalo (the actor who recently supported SB-1047). I think this was clearly an attempt to steal an election and would discourage anyone from working with him. I expect someone to eventually sue AGI companies for endangering humanity, and I hope that Lessig won't be involved.
I've heard from women I know in this community that they are often shunted into low-level or community-building roles rather than object-level leadership roles. Does anyone else have knowledge about and/or experience with this?
A little while ago I posted this quick take: I didn't have a good response to @DanielFilan, and I'm pretty inclined to defer to orgs like CEA to make decisions about how to use their own scarce resources. At least for EA Global Boston 2024 (which ended yesterday), there was the option to pay a "cost covering" ticket fee (of what I'm told is $1000).[1]

All this is to say that I am now more confident (although still <80%) that marginal rejected applicants who are willing to pay their cost-covering fee would be good to admit.[2] In part this stems from an only semi-legible background stance that, on the whole, less impressive-seeming people have more ~potential~ and more to offer than I think "elite EA" (which would include those running EAG admissions) tends to think. And this, in turn, has a lot to do with the endogeneity/path dependence of what I'd hastily summarize as "EA involvement." That is, many (most?) people need a break-in point to move from something like "basically convinced that EA is good, interested in the ideas and consuming content, maybe donating 10%" to anything more ambitious. For some, that comes in the form of going to an elite college with a vibrant EA group/community. Attending EAG is another, or at least could be. But if admission is dependent on doing the kinds of things and/or having the kinds of connections that a person might only pursue after getting on such an on-ramp, you have a vicious cycle of endogenous rejection.

The impetus for writing this is seeing a person who was rejected with some characteristics that seem plausibly pretty representative of a typical marginal EAG rejectee:
* College educated, but not via an elite university
* Donates 10%, mostly to global health
* Normal-looking middle or upper-middle class career
* Interested in EA ideas but without a huge amount to show for it
* Never attended EAG

Of course n=1, this isn't a tremendous amount of evidence, I don't have strictly more information than the admissions folks, the optim
"Freedom has come to mean choice.  It has less to do with the human spirit than with different brands of deodorant...The "Market" is a deterritorialized space where faceless corporations do business, including buying and selling "futures."  -Arundhati Roy
Flaming hot take: if you think Digital Sentience should be taken seriously but not Human Awakening / Enlightenment, then EA culture might have its hooks in a bit deep.

Sunday, 3 November 2024

Quick takes

What are the norms on the EA Forum about ChatGPT-generated content? If I see a forum post that looks like it was generated by an LLM, is it rude to write a comment asking "Was this post written by generative AI?" I'm not sure what the community's expectations are, and I want to be cognizant of not assuming my own norms/preferences are the appropriate ones.
Having a nondual Awakening was the second most important thing to happen to me (after my daughter's birth). It has led to incredibly low levels of suffering and incredibly high levels of wellbeing. I write this because I think it is still incredibly under-appreciated and attainable for most people (maybe literally anyone). There are traditions (Dzogchen, Zen, modern nonduality) where this shift in consciousness can be experienced simply by hearing the right combination of words and insights. As our understanding and tools for communicating these insights evolve, including through advances in AI, I believe this transformative experience could become accessible to many more people.
