Linda Linsefors

@ AI Safety Camp
1999 karma · Joined · London, UK

Bio

Hi, I am a physicist, effective altruist, and AI safety student/researcher/organiser.
Resume - Linda Linsefors - Google Docs

Comments (200)

Topic contributions (1)

I don't think it's too 'woo'/new age-y. Lots of EAs are meditators. There are literally meditation sessions happening at EAG London this week.

Also, Qualia Research Institute (qri.org) is EA or at least EA adjacent. 
(What org is or isn't EA is pretty vague)

Also, isn't enlightenment notoriously hard to reach? I.e., it takes years of intensive meditation. Most humans probably don't have both the luxury and the discipline to spend that much time. Even if it's real (I think it is), there are probably lower-hanging fruit to pick.

My guess is that helping someone go from depressed to normal is a bigger step in suffering reduction than going from normal to enlightened. The same goes for lifting someone out of poverty.

However, I have not thought about this a lot.

I know there are also a few people thinking about current human mental health, but I don't think that group is very large. 

Isn't most of the current suffering in the world animal suffering?
I'd expect most suffering focused EAs to either focus on animals or S-risk prevention. 

I agree with this comment.

If EA and ES both existed, I expect the main focus areas would be very different (e.g. political change is not a main focus area in EA, but would be in ES), but (if harmful tribalism can be avoided) the movements don't have to be opposed to each other.

I'm not sure why ES would be against charter cities. Are charter cities bad for unions? 

Scandinavia didn’t become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures.

I expect a serious intellectual movement that aims to uplift the world to Scandinavian standards to actually learn about Scandinavian society and what makes it work.

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia which now has the world’s highest standards of living, even if not entirely socialist it’s gotta count for something!”

I'm guessing that "socialism" here means something like Marxism? Since this is the type of socialism that "has not been really tried" according to some, and also the type of socialism that usually ends up in dictatorship.

Scandinavian socialism did not come from Marxism. 
Source: How Denmark invented Social Democracy (youtube.com)

I'm not a historian, and I have not fact-checked the above video in any way. But it fits with other things I've heard, and with my own experience of Swedish vs. US attitudes.

I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises. 

I apologise and I will try to be more careful in the future. 

One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.

Below is the story from someone who was involved. They have asked to stay anonymous; please respect this.

The short version of the story is: (1) we applied to OP for funding, (2) in late 2022/early 2023 we were in active discussions with them, (3) at some point, we received 200k USD via the SFF speculator grants, (4) then OP got back confirming that they would fund us with the amount for the "lower end" budget scenario minus those 200k.

My rough sense is similar to what e.g. Oli describes in the comments. It's roughly understandable to me that they didn't want to give the full amount they would have been willing to fund without other funding coming in. At the same time, it continues to feel pretty off to me that they let the SFF speculator grant replace their funding 1:1, without even talking to SFF at all -- since this means that OP got to spend a counterfactual 200k on other things they liked, but SFF did not get to spend additional funding on things they consider high priority.

One thing I regret on my end, in retrospect, is not pushing harder on this, including clarifying to OP that the SFF funding we received was partially unrestricted, i.e. it wasn't limited to funding only the specific program that OP gave us earmarked funding for. But, importantly, I don't think I made that sufficiently clear to OP and I can't claim to know what they would have done if I had pushed for that more confidently.

I've asked for more information and will share what I find, as long as I have permission to do so.

Given the order of things, and the fact that you did not have use for more money, this indeed seems reasonable. Thanks for the clarification.

There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.

By posting this publicly I already found out that they did the same to Neel Nanda. Neel thought that in his case this was "extremely reasonable". I'm not sure why, and I've just asked some follow-up questions.

I get from your response that you think 45% is a good response rate, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong by not responding to more emails; they have other important work to do. But I also have other important work to do. I'm also not doing anything wrong by not spending extra time figuring out which of their staff to contact and sending a private email, which, according to your data, has a 55% chance of ending up ignored.

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.


Thanks for sharing. 
 

What did the other grantmaker (the one who gave you y) think of this?

Were they aware of your OpenPhil grant when they offered you funding?

Did OpenPhil roll back your grant because you did not have use for more than X, or for some other reason?
