All posts

New & upvoted

Today, 21 November 2024

Quick takes

Ten months ago I met Australia's Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is that EAs don't want to do it or fund it because EAs are drawn to spreadsheets and Google Docs (it isn't their comparative advantage). Hammers like nails, etc.

Wednesday, 20 November 2024

Frontpage Posts

Quick takes

I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or on mitigating existential risks related to AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully expose some useful crucial considerations.

Tuesday, 19 November 2024

Frontpage Posts

Quick takes

EA in a World Where People Actually Listen to Us
I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us". Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders aren't going to be listening to some random internet charity nerds and changing policy as a result. Well, they are, and they are. Let's hope it's for the better.
How tractable is improving (moral) philosophy education in high schools?

tl;dr: Do high schools still neglect ethics / moral philosophy in their curricula? Mine did (year 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched / tried before?

The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: moral philosophy. 90 minutes/week of "Christian Religion" was the default for everyone, in which we spent most of the time interpreting stories from the Bible, most of which felt pretty irrelevant to the present to me. This was in 2012 in Germany, a country with more atheists than Christians as of 2023, and even in 2012 my best guess is that <20% of my classmates were practicing a religion. Only in grade 10 did we get the option to switch to secular ethics classes instead, which fewer than 10% of students did (religion was considered less work).

Ethics class quickly became one of my favorite classes. For the first time in my life I had a regular group of people equally interested in discussing vegetarianism and other such questions (almost everyone in my school ate meat, and vegetarians were sometimes made fun of). Still, the curriculum wasn't great: we spent too much time on ancient Greek philosophers and very little time discussing moral philosophy topics relevant to the present.

How have your experiences been in high school? I'm especially curious about more recent experiences. Are there tractable ways to improve the situation? Has anyone researched this?

1) Could we get ethics classes into the mandatory/default curriculum in more schools? Which countries or states seem best for that? In Germany, education is state-regulated; which German state might be most open to this? Hamburg? Berlin?
2) Is there a shortage of ethics teachers (compared to religion teachers)? Can we
(Haven't thought about this really, might be very wrong, but I have this thought and it seems good to put out there.) I feel like putting 🔸 at the end of social media names might be bad. I'm curious what the strategy was.
* The willingness to do this might be anti-correlated with status. It might be a less important part of the identity of more important people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)
* I'd guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).
* Anti-correlation with status might mean that people will identify the pledge with average though altruistic Twitter users, not with cool people they want to be more like.
* You won't see a lot of e/accs putting the 🔸 in their names. There might be downsides to a group of people being perceived as clearly delineated and having this as an almost political identity; it seems bad to have directionally political markers that might do mind-killing things both to people with the 🔸 and to people who might argue with them.
I missed a meeting

Monday, 18 November 2024

Frontpage Posts

Quick takes

I've had a couple of organisations ask me to clarify the Donation Election's vote-brigading rules. Understandably, they want to promote the Donation Election amongst their supporters, but they aren't sure to what extent this counts as vote-brigading. The answer is: it depends. We want to avoid the Donation Election becoming a popularity contest, or favouring the candidates with bigger networks. Neither popularity nor size of network is perfectly correlated with impact. If you'd like to reach out to your audience, feel free, but please don't tell them to vote for you. You can explain the event and mention that you are a candidate, but we want the votes to reflect the Forum audience's opinions of the marginal impact of money donated to these charities, not the strength of their networks. I'm aware this exhortation won't do all the work: we will also be looking into voting patterns, and new accounts (made after October 22, when the election was announced) won't be eligible to vote.
🎧 We've created a Spotify playlist with this year's marginal funding posts. Posts with <30 karma don't get narrated, so they aren't included in the playlist.

Sunday, 17 November 2024

Quick takes

Is there a maximum effective membership size for EA?
@Joey 🔸 spoke at EAGx last night, and one of my biggest takeaways was the (maybe controversial) take that more projects should decline money. This resonates with my experience; constraint is a powerful driver of creativity, and less constraint doesn't necessarily produce more creativity (or more positive output). Does EA membership, in terms of number of people, have a similar dynamic within society? Up to what growth rate is it optimal for a group to expand, and when does growth become sub-optimal? Zillions of factors to consider, of course, but... something maybe fun to ponder.
Compassion fatigue should be focused on less. I had it hammered into me during training as a crisis supporter, and I still burnt out. Now I train others, have seen it hammered into them, and still watch countless of them burn out. I think we need to switch at least 60% of the focus on compassion fatigue to compassion satisfaction. Compassion satisfaction is the warm feeling you receive when you give something meaningful to someone. If you're 'doing good work', I think that feeling (and its absence) ought to be spoken about much more.

Saturday, 16 November 2024

Quick takes

Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk's lawsuit mentions this explicitly (page 91).

Friday, 15 November 2024

Frontpage Posts

97 · RobM · 1m read

Quick takes

30 · Buck · 6d · 1
Well-known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.

Thursday, 14 November 2024

Frontpage Posts

Quick takes

161 · lukeprog · 7d · 16
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
* Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don't play to our comparative advantages.
* Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I've seen or heard about why we didn't fund something are wrong. (Similarly, us choosing to fund someone doesn't mean we endorse everything about them or their work/plans.)
* Very often, when we decline to do or fund something, it's not because we don't think it's good or important, but because we aren't the right team or organization to do or fund it, or we're prioritizing other things that quarter.
* As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages, whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
* While Good Ventures is Open Philanthropy's largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropist
For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome. If this really is cruxy for some people, it's possible this doesn't get noticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realize how much they disagree and how crucial that disagreement is.
EA tends to be anti-revolution, for a variety of reasons. The recent Trump appointments have had me wondering if people here have a "line" in their head. By "line" I mean something like: the point at which I need to drop everything and start protesting or do something fast. I don't think appointing RFK Jr. as Health Secretary is that line for me, but I also realize I don't have a clear line in my head. If Trump appointed as Secretary of Defense a Nazi who credibly claimed they were going to commit mass-scale war crimes, would that be enough for the people here to drop their current work? I'm definitely generally on the side of engaging in reactionary politics being worthless, and further I don't feel like the US is about to fall apart or completely go off the rails. But it would be really interesting to teleport some EAs back in time to right before the rise of Hitler or the Chinese Revolution (while wiping their brains of the knowledge of what would come) and see if they would say things like "politics is the mind-killer and I need to focus on xyz".
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer and lower-variance candidates into the test-task stage. Orgs can continue to pay top candidates to complete the test task if they believe it measurably decreases the attrition rate, but they should give all candidates who pass an anonymised screening bar the chance to complete a test task.
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn't the first time I've seen this. Most of this type of thing I've seen in and around EA involves the mentors/advisors/coaches being only a few years into their career.

This isn't necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japanese without having ever been to Japan. A person being two or three or four years into their career doesn't mean that it is impossible for them to have good ideas and good advice.[1] But it does seem a little... odd. The skepticism I feel is similar to having a physically frail person as a fitness trainer: I am assessing the individual on a proxy (fitness) rather than on the true criterion (ability to advise me regarding fitness). Maybe that thinking is a bit too sloppy on my part.

This doesn't mean that if you are 24 and you volunteer as a mentor you should stop; you aren't doing anything wrong. And I wouldn't want some kind of silly and arbitrary rule, such as "only people age 40+ are allowed to be career coaches." And there are some people doing this kind of work who have a decade or more of professional experience; I don't want to make it sound like all of the people doing coaching and advising are fresh grads.

I wonder if there are any specific advantages or disadvantages to this 'junior skew.' Is there a meaningful correlation between length of career and ability to help other people with their careers? EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why the vast majority of people doing mentorship/coaching/career advising are younger than that. Maybe the older people involved in EA are disproportionately not employ

Wednesday, 13 November 2024

Frontpage Posts

Quick takes

There's an asymmetry between people/orgs that are more willing to publicly write impressions and things they've heard, and people/orgs that don't do much of that. You could call the continuum "transparent and communicative, vs locked down and secretive" or "recklessly repeating rumors and speculation, vs professional" depending on your views!

When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.

A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.

There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.

So what's a reader to do? When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn't work at that org, remember the asymmetry. These situations will look identical to most readers:
* The org really is doing a shady thing, and doesn't want to discuss it
* The org
32 · saulius · 8d · 16
What's a realistic, positive vision of the future worth fighting for?
I feel lost lately when it comes to how to do altruism. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems, and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately, as if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high, and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Applying my global health knowledge to the animal welfare realm, I'm requesting $1,000,000 to launch this deeply net-positive (Shr)Impactful charity. I'll admit the funding opportunity is pretty marginal... Thanks @Toby Tremlett🔹 for bringing this to life. Even though she doesn't look so happy, I can assure you this intervention nets a 30x welfare-range improvement for this shrimp, so she's now basically a human.
Update: Pushing for messenger interoperability (part of the EU Digital Markets Act) might be more tractable and more helpful. Forwarding a private comment from a friend: interoperability was part of the Digital Markets Act, so EVP Ribera will be the main enforcer, and she was asked about her stance in her EU Parliament confirmation hearing yesterday. You could watch that / write her team about the underrated cybersecurity benefits of interoperability, especially given it would upgrade WhatsApp's encryption.

TLDR: Improving Signal (messenger) seems important, [edit: maybe] neglected and tractable. Thoughts? Can we help?

Signal (similar to WhatsApp) is the only truly privacy-friendly popular messenger I know. WhatsApp and Telegram also offer end-to-end encryption (Telegram only in "secret chats"), but they still collect metadata like your contacts, and many people I meet strongly prefer Signal for various reasons: some work in cybersecurity and have strong privacy preferences, others dislike Telegram (bad rep, popular among conspiracists, spam) or Meta (WhatsApp's owner). For some vulnerable people, such as activists in authoritarian regimes or whistleblowers in powerful organizations, secure messaging seems essential, and Signal seems to be the best tool we have.

While Signal is improving, I still often find it annoying to use compared to Telegram. Just some examples:
1) It's easily overwhelming: no sorting chats into folders, archiving group chats doesn't really work (they keep popping back to 'unarchived' whenever someone writes a new message), and lots of notifications I don't care about like "user xyz changed their security number" with no way to turn them off
2) No option to make chat history visible to new group members, which is really annoying for some use cases
3) No poll feature, no live location sharing
4) No "community"/supergroup feature; people need to find and manually join all the different groups in a community
5) No threads (in Telegram that's possible in announcement
Does anyone have thoughts on whether it’s still worthwhile to attend EAGxVirtual in this case? I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't: * I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference. * I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.

Topic Page Edits and Discussion

Tuesday, 12 November 2024

Frontpage Posts

Personal Blogposts

Quick takes

Has anybody changed their behaviour after the animal welfare vs global health debate week? A month or so on, I'm curious if anybody is planning to donate differently, considering a career pivot, etc. If anybody doesn't want to share publicly but would share privately, please feel free to message me. Linking @Angelina Li's post asking how people would change their behaviour, and tagging @Toby Tremlett🔹 who might have thought about tracking this.
Meant to post this in funding diversification week. A potential source of new and consistent funds: EA researchers/orgs could run research training programs, drawing some of the rents away from universities and keeping them in the system. These could be non-accredited but focused on publicly demonstrable skills, offering tailored letters of recommendation for a limited number of participants. They could train skills and mentor research particularly relevant to EA orgs and funders. Students (EA and non-EA) would pay for this. Universities and government training funds could also be unlocked. (More on this later, I think; I have a whole set of plans/notes.)
Someone really needs to make Asterisk meetup groups a thing.
People in EA end up optimizing for EA credentials so they can virtue-signal to grantmakers, but grantmakers would probably like people to scope out non-EA opportunities, because that allows us to introduce unknown people to the concerns we have.
