Ten months ago I met Australia's Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don't want to do it / fund it because EAs are drawn to spreadsheets and Google Docs (advocacy isn't their comparative advantage). To a hammer, everything looks like a nail, etc.
I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happen on the Forum. The risks-from-AI cause has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs. Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or mitigating existential risks related to...
tldr: Do high schools still neglect ethics / moral philosophy in their curriculums? Mine did (year 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched / tried before?
The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: moral philosophy. 90 min/week of "Christian Religion" was t...
In England, secular ethics isn't really taught until Year 9 (age 13-14) or Year 10, as part of Religious Studies classes. Even then, it might depend on the local council, the type of school, or even the exam boards/modules selected by the school. And by Year 10, students in some schools can opt out of taking Religious Studies for their GCSEs.
Anecdotally, I got into EA (at least earlier than I would have) because my high school religious studies teacher (c. 2014) could see that I had utilitarian intuitions (e.g. in discussions about animal experimentation and assisted dying) and gave me a copy of Practical Ethics to read. I then read The Life You Can Save.
(Haven't really thought about this, might be very wrong, but I have this thought and it seems good to put it out there.) I feel like putting 🔸 at the end of social media names might be bad. I'm curious what the strategy was.
The willingness to do this might be anti-correlated with status: it might be a less important part of the identity of more prominent people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)
I'd guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge).
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
What’s a realistic, positive vision of the future worth fighting for?
I feel lost lately when it comes to how to do altruism. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems, and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately, as if humanity is continuously failing to live up to my expectations. Trump's victory, the war in Ukraine, incre...
I think that, eventually, working on changing the EA introductory program will be important. It is an extremely good thing to do well, and I think it could be improved. I'm running a 6-week version right now, and I'll see if I feel the same way at the end.
I've had a couple of organisations ask me to clarify the Donation Election's vote-brigading rules. Understandably, they want to promote the Donation Election among their supporters, but they aren't sure to what extent this counts as vote-brigading. The answer is: it depends.
We want to avoid the Donation Election becoming a popularity contest / favouring the candidates with bigger networks. Neither popularity nor size of network is perfectly correlated with impact.
If you'd like to reach out to your audience, feel free, but please don't tell them to vote...
🎧 We've created a Spotify playlist with this year's marginal funding posts.
Posts with <30 karma don't get narrated, so they aren't included in the playlist.
Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk's lawsuit mentions this explicitly (page 91).
Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:
Is there a maximum effective membership size for EA?
@Joey 🔸 spoke at EAGx last night, and one of my biggest takeaways was the (maybe controversial) take that more projects should decline money.
This resonates with my experience; constraint is a powerful driver of creativity, and less constraint does not necessarily produce more creativity (or positive output).
Does the EA movement, in terms of number of people, have a similar dynamic within society? What growth rate is optimal for a group of members, and at what point does expansion become sub-optimal? Zillions of factors to consider, of course, but... something that might be fun to ponder.
Compassion fatigue should be focused on less.
I had it hammered into me during training as a crisis supporter and I still burnt out.
Now I train others, have seen it hammered into them, and still watch many of them burn out.
I think we need to switch at least 60% of compassion fatigue focus to compassion satisfaction.
Compassion satisfaction is the warm feeling you get when you give something meaningful to someone. If you're 'doing good work', I think that feeling (and its absence) ought to be talked about much more.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it's flawed because it lacks the love or duty or "ethics of care" or concern for justice that leads people to alternatives like mutual...
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively...
Well-known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.
I have some hesitations about supporting Richard Hanania given what I understand of his views and history. But in the same way I would say I support *example economic policy* of *example politician I don't like* if I believed it was genuinely good policy, I think I should also say that I found this article of Richard's quite heartwarming.
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer and lower-variance candidates into the test-task stage. Orgs can continue to pay top candidates to complete the test task if they believe it measurably decreases the attrition rate, but they should give all candidates who pass an anonymised screening bar the chance to complete a test task.
I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all.
My claim is that the org values your time at a rate significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary, and because the employer needs to value your work above its cost for them to want to hire you. I don't see how this is unfalsifiable. Mostly you could falsify it by asking orgs how they think about the cost o...
For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it's possible this doesn't get noticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realize how much they disagree and how crucial that disagreement is.
EA tends to be anti-revolution, for a variety of reasons. The recent Trump appointments have had me wondering whether people here have a "line" in their heads. By "line" I mean something like: the point at which I need to drop everything and start protesting, or do something fast.
Like, I don't think appointing RFK Jr. as Health Secretary is that line for me, but I also realize I don't have a clear "line" in my head. If Trump appointed a Nazi who credibly claimed they were going to commit mass-scale war crimes as Secretary of Defense, would that be enough for the people here to drop t...
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn't the first time I've seen this; most of this type of thing I've seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. This isn't necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japanese...
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why the vast majority of people doing mentorship/coaching/career advising are younger than that. Maybe the older people involved in EA are disproportionately not employed by EA organizations, and are thus less focused on funneling people into impactful careers?
I checked and people who currently work in an EA org are only slightly older on average (median 29 vs median 28).
EA in a World Where People Actually Listen to Us
I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us".
Leopold's Situational Awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders aren't going to listen to some random internet charity nerds and change policy as a result.
Well, they are and t...
In my post, I suggested that one possible future is that we stay at the "forefront of weirdness." Calculating moral weights, to use your example.
I could imagine though that the fact that our opinions might be read by someone with access to the nuclear codes changes how we do things.
I wish there was more debate about which of these futures is more desirable.
(This is what I was trying to get at with my original post. I'm not trying to make any strong claims about whether any individual person counts as "EA".)