Quick takes

EA in a World Where People Actually Listen to Us

I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us". 

Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders aren't going to listen to some random internet charity nerds and change policy as a result.

Well, they are and t... (read more)


In my post, I suggested that one possible future is that we stay at the "forefront of weirdness." Calculating moral weights, to use your example.

I could imagine, though, that the fact that our opinions might be read by someone with access to the nuclear codes changes how we do things.

I wish there were more debate about which of these futures is more desirable.

(This is what I was trying to get out with my original post. I'm not trying to make any strong claims about whether any individual person counts as "EA".)

Ben_West🔸
Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn't exist."
MichaelDickens
On that framing, I agree that that's something that happens and that we should be able to anticipate will happen.

Ten months ago I met Australia's Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is that EAs don't want to do it / fund it because EAs are drawn to spreadsheets and Google Docs (advocacy isn't their comparative advantage). Hammers like nails, etc.

I also think many EAs are still allergic to direct political advocacy, and that this tendency is stronger in more rationalist-ish cause areas such as AI. We shouldn’t forget Yudkowsky’s “politics is the mind-killer”!

I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause has grown massively in importance in recent years, and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs. Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or mitigating existential risks related t... (read more)

I would like to see this. I have considerable uncertainty about whether to prioritize (longtermism-oriented) animal welfare or AI safety.

How tractable is improving (moral) philosophy education in high schools? 


tldr: Do high schools still neglect ethics / moral philosophy in their curriculums? Mine did (in 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched / tried before?
 

The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: Moral philosophy. 90min/week "Christian Religion" was t... (read more)

In England, secular ethics isn't really taught until Year 9 (age 13-14) or Year 10, as part of Religious Studies classes. Even then, it might depend on the local council, the type of school, or even the exam boards/modules selected by the school. And by Year 10, students in some schools can opt out of taking Religious Studies for their GCSEs.

Anecdotally, I got into EA (at least earlier than I would have) because my high school religious studies teacher (c. 2014) could see that I had utilitarian intuitions (e.g. in discussions about animal experimentation and assisted dying) and gave me a copy of Practical Ethics to read. I then read The Life You Can Save.

Joseph Lemien
I went to high school in the USA in the 2000s, so it has been roughly twenty years. I attended a public high school that was neither particularly well-funded nor impoverished. There were no ethics or philosophy courses offered, and no education on moral philosophy aside from what is gained through literature in an English class (such as reading Lord of the Flies or Fahrenheit 451 or To Kill a Mockingbird). There is a Facebook group for EA Education, but my impression is that it isn't very active. My (uninformed, naïve) guess is that this isn't very tractable, because education tends to be controlled by the government and there are a lot of vested interests. The argument would basically be "why should we teach these kids about being a good person when we could instead use that time to teach them computer programming/math/engineering/language/civics?" It is a crowded space with a lot of competing interests already.
Charlie_Guthmann
Charter schools are a real option in many places. In Chicago, if you have money and wherewithal, you can open a charter school and basically teach whatever you want. The downside is that you will not be able to get the top students in the city to go to your school, because there are already a select few incredible public and private schools.

(Haven’t thought about this really, might be very wrong, but have this thought and seems good to put out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.

  • The willingness to do this might be anti-correlated with status. It might be a less important part of identity of more important people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)

  • I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the p

... (read more)
lukeprog

Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:

  • Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, o
... (read more)

[1] Several of our grantees regularly criticize leading AI companies in their official communications.
[2] Organizations we've directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose.

Could you give examples of these?

David Mathers🔸
Can you say what the "some kinds" are? 
Habryka
Sure, my guess is OP gets around 50%[1] of the credit for that and GV is about 20% of the funding in the pool, making the remaining portion a ~$10M/yr grant ($20M/yr for 4 years of non-GV funding[2]). GV gives out ~$600M[3] in grants per year recommended by OP, so to get to >5% you would need the equivalent of 3 projects of this size per year, which I haven't seen (and don't currently think exist). Even at 100% credit, which seems like a big stretch, my guess is you don't get over 5%.

To substantially change the implications of my sentence I think you need to get closer to 10%, which seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and IMO it's bad form, given that, to just respond with a "this was never true" when it's clearly and obviously been true in some past years, and it's at the very least very close to true this year).

1. ^ Mostly chosen for Schelling-ness. I can imagine it being higher or lower. It seems like lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders, seeming somewhat more constrained than other grants, so I think a lower number seems more reasonable.
2. ^ I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount this funding by an expected 20-30% for a 4-year project like this, since funders frequently drop out and leadership changes, but we can ignore that for now.
3. ^ https://www.goodventures.org/our-portfolio/grantmaking-approach/
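(A rough sketch of the arithmetic above, assuming the disputed percentage is the non-GV share of OP-recommended funding; this is a reconstruction, not Habryka's own working.)

\[ 0.5 \times \$20\text{M/yr} \approx \$10\text{M/yr}, \qquad \frac{\$10\text{M/yr}}{\$600\text{M/yr}} \approx 1.7\% \]
\[ \text{Even at 100\% credit: } \frac{\$20\text{M/yr}}{\$600\text{M/yr}} \approx 3.3\% < 5\%; \quad \text{exceeding 5\% would take } \frac{0.05 \times \$600\text{M/yr}}{\$10\text{M/yr}} = 3 \text{ such projects per year.} \]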
saulius

What’s a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, incre... (read more)

saulius
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this. 
David_Moss
One possible way of thinking about this, which might tie your work in smaller battles into a 'big picture', is if you believe that your work on the smaller battles is indirectly helping the wider project. E.g. by working to solve one altruistic cause you are sparing other altruistic individuals and altruistic resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially increasing the altruistic resources available in the future.[1]

Note that I'm only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on the inter-connectedness of different cause areas in different ways and their varying flowthrough effects).

1. ^ As in this passage from one of Yudkowsky's short stories:

I think that working on changing the EA introductory program will eventually be important. It is an extremely good thing to do well, and I think it could be improved. I'm running a 6-week version right now, and I'll see if I feel the same way at the end.

anormative
Why do you think changing it is important? In the version that you're running right now, did you just shorten it, or did you change anything else?

I mostly shortened it. The main reasons I have are specific to the university level: I feel like there is a not-insignificant number of people who would commit to a 6-week fellowship but not an 8-week one, and there is not enough focus on the wider EA community, which I feel should be emphasized more.

I've had a couple of organisations ask me to clarify the Donation Election's vote-brigading rules. Understandably, they want to promote the Donation Election amongst their supporters, but they aren't sure to what extent this counts as vote-brigading. The answer is: it depends.

We want to avoid the Donation Election becoming a popularity contest / favouring the candidates with bigger networks. Neither popularity nor size of network is perfectly correlated with impact.

If you'd like to reach out to your audience, feel free, but please don't tell them to vot... (read more)

🎧 We've created a Spotify playlist with this year's marginal funding posts.

Posts with <30 karma don't get narrated, so they aren't included in the playlist.

Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk's lawsuit mentions this explicitly (page 91).

Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:

  1. One of the overarching questions to consider when reading any lawsuit is that of remedy. For instance, the classic remedy for breach of contract is money damages . . . and the potential money damages here don't look that extensive relative to OpenAI's money burn.
  2. Broader "equitable" remedies are sometimes available, but they are more discretionary and there
... (read more)

Is there a maximum effective membership size for EA?

@Joey 🔸 spoke at EAGx last night, and one of my biggest takeaways was the (maybe controversial) take that more projects should decline money.

This resonates with my experience; constraint is a powerful driver of creativity, and with less constraint you do not necessarily get more creativity (or positive output).

Does the EA movement, in terms of number of people, have a similar dynamic within society? Up to what growth rate is it optimal for a group to expand, and at what point does further growth become sub-optimal? Zillions of factors to consider, of course, but... something maybe fun to ponder.

Compassion fatigue should be focused on less. 

I had it hammered into me during training as a crisis supporter and I still burnt out. 

Now I train others, have seen it hammered into them, and still watch many of them burn out.

I think we need to switch at least 60% of compassion fatigue focus to compassion satisfaction. 

Compassion satisfaction is the warm feeling you get when you give something meaningful to someone. If you're 'doing good work', I think that feeling (and its absence) ought to be spoken about much more.

This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual... (read more)

Thomas Kwa
I want to slightly push back against this post in two ways:

* I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy -- I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than >99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
* Longtermists make tradeoffs between other common values and helping vast future populations that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of "doing a lot more good matters a lot more" is really important, but it is still trading off against other values:
  * Helping people closer to you / in your community: many people think this has inherent value.
  * Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly not think it is better to make more of an overall difference by e.g. subsidizing eyeglasses in Bangladesh.
  * Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they

Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.

I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively... (read more)

Tyler Johnston
Yeah, I meant to convey this in my post but framing it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way is just moving the hypothetical condition elsewhere to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that's confusing though.

BTW, my personal views lean towards a suffering-focused ethics that isn't seeking to create happy people for their own sake. But I still think that, in coming to that view, I'm concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That's my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn't consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines.

Agreed there are other issues with longtermism — just wanted to respond to the "it's not about care or empathy" critique.
Buck

Well-known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.

I have some hesitations about supporting Richard Hanania given what I understand of his views and history. But in the same way I would say I support *example economic policy* of *example politician I don't like* if I believed it was genuinely good policy, I think I should also say that I found this article of Richard's quite heartwarming.

Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test-task stage. Orgs can continue to pay top candidates to complete the test task if they believe it measurably decreases the attrition rate, but should give all candidates who pass an anonymised screening bar the chance to complete a test task.

Ben Millwood🔸
Strictly speaking your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I've seen estimates of the other costs at 50-100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise... why would they acquire it at that rate) so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn't that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
ElliotJDavies
I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all. I recognise it's easy to stumble into these dynamics, but we must acknowledge that this is epistemically destructive. I don't think we should dismiss empirical data so quickly when it's brought to the table - that sets a bad precedent. I can also provide empirical data on this if that is the crux here?

Notice that we are discussing a concrete empirical data point, which represents a 600% difference, while you've given a theoretical upper bound of 100%. That leaves a 500% delta. Would you be able to provide any concrete figures here? I view pointing to opportunity cost in the abstract as essentially an appeal to ignorance.

Not to say that opportunity costs do not exist, but you've failed to concretise them, and that makes it hard to find the truth here. I could make similar appeals to ignorance in support of my argument, like the idea that the benefit of getting a better candidate is very high because candidate performance is fat-tailed, etc. - but I believe this would be similarly epistemically destructive. If I were to make such a claim, I would at least attempt to concretise it.

I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all.

My claim is that the org values your time at a rate that is significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary and because the employer needs to value your work above its cost for them to want to hire you. I don't see how this is unfalsifiable. Mostly you could falsify them by asking orgs how they think about the cost o... (read more)

For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.

If this really is cruxy for some people, it's possible this doesn't get noticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realize how much they disagree and how crucial that disagreement is.

EA tends to be anti-revolution, for a variety of reasons. The recent Trump appointments have had me wondering if people here have a "line" in their head. By line I mean something like: I need to drop everything and start protesting, or do something fast.

Like, I don't think appointing RFK Jr. as health secretary is that line for me, but I also realize I don't have a clear "line" in my head. If Trump appointed a Nazi who credibly claimed they were going to commit mass-scale war crimes as Secretary of Defense, is that enough for the people here to drop t... (read more)

Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn't the first time I've seen this. Most of this type of thing I've seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. This isn't necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japane... (read more)

EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why the vast majority of people doing mentorship/coaching/career advising are younger than that. Maybe the older people involved in EA are disproportionately not employed by EA organizations and are thus less focused on funneling people into impactful careers?

 

I checked and people who currently work in an EA org are only slightly older on average (median 29 vs median 28).
