Nov 4 - 10
Funding Diversification Week
This week, we are encouraging content around a range of important funding considerations. Read more.
Nov 12 - 18
Marginal Funding Week
A week for organizations to post about what they would do with extra funding on the margin, and for donors to read and discuss those posts. Read more.
Nov 18 - Dec 3
Donation Election
A crowd-sourced pot of funds will be distributed amongst three charities based on your votes. Find out more.
Dec 16 - 22
Pledge Highlight
A week to post about your experience with pledging, and to discuss the value of pledging. Read more.
Dec 23 - 31
Donation Celebration
When the Donation Celebration starts, you'll be able to add a heart to the banner to show that you've made your annual donations.
Donation Election Fund
Donate to the fund to boost the value of the Election. Learn more.

Quick takes

I am organizing a fundraising competition between philosophy departments for AMF. You can find it here: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191

Previous editions have netted (ba-dum-tss) roughly $40,000: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9189

Any contributions are very welcome, as is sharing the fundraiser. A more official-looking announcement is on Daily Nous, a central blog of academic philosophy; people found it ideal for sharing via e.g. department listservs: https://dailynous.com/2024/12/02/philosophers-against-malaria-a-fundraising-competition/

These are relatively low-effort to set up: I spend maybe 10-20 hours a year on them. If you are interested in setting up something similar for your discipline or social circles, feel very welcome to reach out for help.
Around EA priorities: Personally, I feel fairly strongly convinced to favor interventions that could help the future more than 20 years from now (a much lighter version of "longtermism").

If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on the margin.

I feel incredibly unsatisfied with the public dialogue around AI safety strategy right now. From what I can tell, there's some intelligent conversation happening among a handful of people at the Constellation coworking space, but little of it is publicly visible. I think many people outside of Constellation are working with simplified models, like "AI is generally dangerous, so we should slow it all down," as opposed to something like, "Really, there are three scary narrow scenarios we need to worry about."

I recently spent a week in DC and found it interesting. But my impression is that a lot of people there are focused on fairly low-level details without a great sense of the big-picture strategy. For example, there's a lot of work going into shovel-ready government legislation, but little thinking about what the TAI transition should really look like.

This sort of myopic mindset is also common in the technical space, where I meet many people focused on narrow aspects of LLMs without much understanding of how exactly their work fits into the big picture of AI alignment. As an example, a lot of work seems like it would help with misuse risk, even though the big-picture EAs seem much more focused on accident risk.

Some (very) positive news is that we have far more talent in this area than we did 5 years ago, and there's correspondingly more discussion. But it still feels very chaotic.

A bit more evidence: it seems like OP has provided very mixed messages around AI safety. They've provided surprisingly little
a moral intuition i have: to avoid culturally/conformity-motivated cognition, it's useful to ask: if we were starting over, new to the world but with all the technology we have now, would we recreate this practice?

example: we start out, and there's us and these innocent fluffy creatures that can't talk to us, but they can be our friends. we're just learning about them for the first time. would we, at some point, spontaneously choose to kill them and eat their bodies, despite us having plant-based foods, supplements, vegan-assuming nutrition guides, etc? to me, the answer seems obviously not. the idea would not even cross our minds.

(i encourage picking other topics and seeing how this applies)
I think I broadly like the idea of Donation Week.

One potential weakness: I suspect the voting system favors the more well-known charities, and I'd assume name recognition is somewhat inversely correlated with neglectedness.

Relatedly, I wonder if future versions could feature specific subprojects/teams within charities. "Rethink Priorities" is a rather large project compared to "PauseAI US"; it could be interesting if its different parts were listed here instead.

(That said, in terms of the donation itself, I'd hope we could donate to RP as a whole and trust RP to allocate it accordingly, instead of formally restricting the money, which can be quite a hassle in terms of accounting.)
How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that is:

* Easy to set up: ideally a single monthly donation equivalent to the animal product consumption of the average American, which I can scale up a bit to make sure I'm net positive.
* Based on well-founded impact estimates.
* Spread across a wide variety of animals reflecting my actual diet: at a minimum, my donation would be split among separate nonprofits improving the welfare of mammals, birds, fish, and invertebrates, and ideally it would closely track the suffering created by each animal product within each category.
* Inclusive of all animal products, not just meat.

I know I could potentially have higher impact just betting on saving 10 million shrimp or whatever, but I have enough moral uncertainty that I would highly value this kind of offset package. My guess is there are lots of people for whom going vegan is not possible or desirable who would be in the same boat.
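To make the request concrete, here is a minimal sketch of the allocation logic I have in mind, in Python. Every number and organization name below is a placeholder assumption, not a real impact estimate; a real basket would need vetted charities and well-founded per-category suffering weights.

```python
# Illustrative sketch only: all weights and charity names below are
# placeholder assumptions, not real impact estimates.

# Hypothetical share of diet-related suffering attributed to each
# animal category for an average American diet (sums to 1.0).
SUFFERING_SHARE = {
    "mammals": 0.10,
    "birds": 0.35,
    "fish": 0.25,
    "invertebrates": 0.30,
}

# Placeholder recipient per category; swap in real, vetted nonprofits.
CHARITY = {
    "mammals": "Example Mammal Welfare Org",
    "birds": "Example Chicken Welfare Org",
    "fish": "Example Fish Welfare Org",
    "invertebrates": "Example Shrimp Welfare Org",
}

def offset_basket(monthly_donation: float, safety_factor: float = 1.5) -> dict:
    """Split a monthly donation across categories in proportion to the
    assumed suffering shares, scaled up by a safety factor to hedge
    against the offset estimates being too optimistic."""
    total = monthly_donation * safety_factor
    return {CHARITY[cat]: round(total * share, 2)
            for cat, share in SUFFERING_SHARE.items()}

if __name__ == "__main__":
    for org, amount in offset_basket(30.00).items():
        print(f"${amount:6.2f} -> {org}")
```

The safety factor is doing the "scale up a bit to make sure I'm net positive" work; the hard part, of course, is the weights themselves.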