
Linch

@ Rethink Priorities
19725 karma · Joined · Working (6-15 years)

Bio

"To see the world as it is, rather than as I wish it to be."

I'm a Senior Researcher on the General Longtermism team at Rethink Priorities. I also volunteer as a funds manager for EA Funds' Long-term Future Fund.

Comments (2293)

Were there donors who said that they benefitted from your work and/or made specific decisions based on it?

Pretty unfortunate naming, like calling a new cybersecurity policy the RELEASE VIRUS Act.

Looping back some months later: FWIW, while I disagree with most of the rest of the comment (and can see a case for a ban as a result), I quite appreciate the point about "interpretive labor", and it's been an interesting/useful conceptual handle in my toolkit since reading it.

(This is a high bar as most EA Forum comments do not update me nearly as much).

Yeah this sounds right. 

One thing is just that discouragement is culturally quite hard and there are strong disincentives against it; e.g., I think I definitely get more flak for telling people they shouldn't do X than for telling them they should (including a recent incident which was rather personally costly). And I think I'm much more capable of diplomatic language than the median person in such situations; some of my critical or discouraging comments on this forum are popular.

I also know at least 2 different people who were told (probably wrongly) many years ago that they can't be good researchers, and they still bring it up as recently as this year. Presumably people falsely told they can't be good researchers (or correctly told that they cannot) are less likely to e.g. show up at EA Global. So it's easier for people in positions of relative power or prestige to see the positive consequences of encouragement, and the negative consequences of discouragement, than the reverse.

Sometimes when people ask me about their chances, I try to give them off-the-cuff numerical probabilities. Usually the people I'm talking to appreciate it but sometimes people around them (or around me) get mad at me.

(Tbf, I have never tried scoring these fast guesses, so I have no idea how accurate they are).

I guess it makes sense that people who disagree with the norms are more likely to do underhanded things to violate them.

The Long-Term Future Fund is somewhat funding-constrained. In addition, we (I) have written a number of docs and announcements that we hope to release publicly in the next 1-3 weeks. In the meantime, I recommend that anti-x-risk donors who think they might want to donate to the LTFF hold off on donating until after our posts are out next month, to help them make informed decisions about where best to donate. The main exception, of course, is funding time-sensitive projects from other charities.

I will likely not answer questions now but will be happy to do so after the docs are released.

(I work for the Long-Term Future Fund as a fund manager, aka grantmaker. Historically this has been entirely in a volunteer capacity, but I've recently started being paid as I've ramped up my involvement.)

Seems like this question just reduces to the normal Fermi paradox right? "Power-seeking AGI" isn't adding any additional bits to that question. 

I like the Google Pixels. Well, specifically I liked the 2 and 3a, but my current one (6a) is a bit of a disappointment. My house also uses Google Nest and Chromecast regularly. TensorFlow is okay. But yeah, overall certainly nothing as big as Gmail or Google Maps, never mind their core product.
