
Chi

958 karma

Bio

Independent researcher on Evidential Cooperation in Large Worlds (ECL). Looking for research collaborators.

Feel free to DM me with thoughts and questions! That said, I check the EA Forum very infrequently, so if you want to get in touch, you can use my admonymous link and leave your email address there so I can reply to you!

You can also just send me thoughts and questions anonymously!

How others can help me

Be my research collaborator! Or connect me with people who might be.

How I can help others

Ask me about my ECL research!

Or any of my background: Before doing independent research, I worked for the Center on Long-Term Risk on s-risk reduction projects (hiring, community building, and grantmaking). Previously, I was a guest manager at the EA Infrastructure Fund (2021), did some research for 1 Day Sooner on Human Challenge Trials for Covid vaccines (2020), did the summer research fellowship at FHI writing about IDA (2019), worked a few hours a week for CEA on local groups mentoring for a few months (2018), and helped a little bit with organizing EA Oxford (2018/19). I studied PPE at Oxford (2018-2021) and psychology in Freiburg (2015-2018).

I also have things to say about mental health and advice for taking a break from work.

Comments (52)

Topic contributions (1)

Chi · 12

If not about "really bad" stuff then feedback should be consensual as a norm - As a community I think we should want to be opting into feedback rather than assuming everyone wants it.

I agree with this. It seems quite hard to implement well, unfortunately. Asking if someone wants to hear some (negative) feedback can make it really hard for the other person to say no, and already does some of the damage, so in some sense it already takes away from the feedback being truly consensual. There probably is some way to do this skillfully, but it seems hard; if there is a way that just works and is easy to apply, I don't know it. That said, I think asking someone if they want to hear some feedback and if now is a good time is usually better than nothing.

(That said, we might disagree on the details of in which cases non-consensual feedback is fine.)

People assume I want feedback a lot and frankly, I do, but some of it can be brutal. And I have pretty thick skin. I have been sad for days after EA feedback. I wouldn't want other people to be treated like this without opting into it.

Interesting. I don't think I've had this experience (much - I got this kind of feedback once in 2019). Unless I just can't think of it right now (very possible, I'm very forgetful and easily miss obvious things), I don't think people give me much feedback at all. I wonder if some of the difference in what we emphasise comes from a difference in how people treat us based on demographics etc. (I'm a small woman while Nathan is a tall man. I think my conversation style also projects less perceived confidence than his. I would expect most people to expect me to be more sensitive.)

So maybe one unintuitive takeaway could be "offer marginally more feedback to people who, based on a shallow impression, most people wouldn't think can take it; be marginally more careful with feedback to people who, based on a shallow impression, most people would think can take it."

 

edit: Some context is that I wrote this post as a reaction to being frustrated over the years with concrete instances of people not sharing important negative feedback, and finding it a bit crazy that people don't do so (not as in they are crazy, but it's crazy that the world works that way) - some of this is second-hand knowledge though.

I agree that gentle honesty usually > brutal honesty! I agree that this is important and some people would actually do better by giving (negative) feedback less often and more carefully. Thanks for clarifying!

I just wanted to ask for brutal honesty for me specifically because I wanted to lower the amount of effort necessary to give me feedback, and to push towards clarity whenever there's a trade-off between clarity and being gentle - but just for giving feedback to me in particular. I don't endorse doing that universally for everyone.

Answer by Chi · 14

My current regime:

  • Methyl B12, 1000 mcg, Jarrow (link, not amazon) (roughly 1x week)
  • Vitamin D3, 4000 IU, Howard & James (amazon only has it with K2 now) (roughly 2x week)
  • Omega 3 with 400mg DHA, 200mg EPA, 744mg other Omega 3, Astaxanthin 1mg, Igennus (link) (2x/3x a day)
  • Creatine, Optimum Nutrition, powder, 3g per scoop (link) (1x day) (Gives me digestion issues if I don't dissolve it with food)
  • Magnesium Glycinate, Inner Vitality, 280mg (link) (Possibly gives me digestion issues, ~1x day)
  • Calcium citrate malate, Pure Nutrition Naturals, 1000mg (link) with D2, K, Zinc, Magnesium Oxide (half the pill ~3x week)

I think B12 is basically a must if you're vegan, and D3 is a no-brainer too. Not so sure about the others. If you want to supplement magnesium and calcium, you should pay attention to the form, e.g. magnesium oxide does nothing apart from worsening your digestion (unless you have constipation).

Ideally, you want to take minerals separately from each other and after meals, but doing whatever makes you actually take stuff is probably best, and it's easier to take everything at once or just take a multisupplement.

 

For travel and colds only

  • Zinc Citrate (link), 10mg (take many within the first 24 hours of getting a cold. Don't chew; just let it melt.)

Dosage is very confusing here. I think the studies that found that zinc might shorten colds used around 90mg per day, but the tolerable upper intake level is about half that.
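To make that concrete for these 10mg lozenges (a rough back-of-envelope, taking the ~90mg/day study dose and an upper limit of ~45mg/day at face value; I haven't double-checked either number):

\[
\frac{90\ \text{mg/day}}{10\ \text{mg/lozenge}} = 9\ \text{lozenges/day (study dose)}, \qquad \frac{45\ \text{mg/day}}{10\ \text{mg/lozenge}} = 4.5\ \text{lozenges/day (upper limit)}
\]

So "take many" bumps into the upper limit well before reaching the doses used in the studies.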

If you take zinc daily, it's important to also supplement a bit of copper, since zinc interferes with copper absorption.

 

I also tried:

  • Acetyl L-Carnitine, Life Extension, 500mg (link) with vitamin C. (Possibly gave me heartburn)
  • B-Complex, Life Extension (link) (gave me digestion issues)

If you want to supplement these, they are possibly best taken on an empty stomach after getting up, but again, whatever people actually stick to is probably best.

 

Things I might add/swap some of the above out for:

  • Iodine
  • Vitamin A
  • Vitamin E
  • Iron
  • A multivitamin (e.g. 4, which looks pretty good but isn't super easily available. Also considering 1 but it has a weird ingredient list I can't easily interpret)
  • Vitamin K/K2
  • Choline/some other things related to vitamin B

Iron is probably pretty important for many people. I get regular iron tests and haven't needed it, and oversupplementing iron is bad. But FWIW, I expect that I'll have to start taking iron because I started menstruating again.

 

Things I test for ~twice a year via https://thriva.co/

  • Iron
  • Omega 3
  • B12
  • B9
  • D
  • Misc. other stuff on a rotation depending on what I feel like

 

Would be excited to hear thoughts/feedback if others have some :)

(Haven't been on this regime for long and am a bit loose with it, e.g. I just travelled for 3 weeks and supplemented ~nothing during that time)

Center on Long-Term Risk (my employer) focuses on reducing s-risk (risks of astronomical suffering).

(And AFAIK coined the term, long before my time though.)

Chi · 19

Many large donors (and donation advisors) do not take general applications. This includes Open Philanthropy (“In general, we expect to identify most giving opportunities via proactive searching and networking”), Longview, REG, CERR, CLR, and the new Longtermism Fund.

Grant manager at CLR here - we take general applications to the CLR Fund and would love to get more of them. Note that our grantmaking is specifically s-risk focused.*

Copy-pasting another comment of mine from another post over here:

If you or someone you know is seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects happen, in the ~tens of millions of dollars (per project). And if you have something more ambitious than that, I'm also always keen to hear it :)

 

 

*We also fund things that aren't specifically targeted towards s-risk reduction but still seem beneficial to s-risk reduction. Some of our grants this year that we haven't published yet are such grants. That said, we are often not in the best position to evaluate applications that aren't focused on s-risk even if they would have some s-risk-reducing side effects, especially when these side effects are not clearly spelled out in the application.

Automatically create a bibliography with all the links in a post.

Chi · 23

Not OP, but I'm guessing it's at least unclear for the non-safety positions listed at OpenAI, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good "by default", so the people working in these positions would have to be more careful/more proactive to make it positive. Still think it could be great.) Same for many similar positions on the sheet, but I'm pointing out OpenAI since a lot of roles there are listed. For some of the roles, I don't know enough about the org to judge.

Haha, no, it took me quite a bit longer to phrase what I wrote, but I didn't have dedicated non-writing thinking time. E.g. the claim about the expected ratio of future assets seems like something I could sanity check and get a better number for with pen and paper and a few minutes, but I was too lazy to do that :)

(And I can't let false praise of me stand)

edit to also comment on the substantial part of your comment: Yes, that takeaway seems good to me!

edit edit: Although I'd caveat that s-risk is less mature than general longtermism (more "pre-paradigmatic", for people who like that word), so there might be less (obvious) work for founders/leaders to do right now, and that can be very frustrating. We still always want to hear about such people.

last edit?: And as in general longtermism, if somebody is interested in s-risk and has really high EtG (earning to give) potential, I might sometimes prefer that, especially given what I said above about founder/leader type people. Something within an order of magnitude or two of FTX F for s-risk reduction would obviously be a huge win for the space, and I don't think it's crazy to think that people could achieve that.

Chi · 14

I didn't run this by anyone else in the s-risk funding space, so please don't hold others to these numbers/opinions.
 

Tl;dr: I think this is probably directionally right but with lots of caveats. In particular, s-risk still has a lot of money (~low hundreds of $m) relative to ideas/opportunities, at least right now, and possibly more so than general longtermism. I think this might change soon since I expect s-risk money to grow less than general longtermist money.

edit: I think s-risk is ideas-constrained when it comes to small grants, and funding- (and ideas-)constrained for large grants/investments.

I'd estimate s-risk to have something in the low hundreds of $m in expected value (not time-discounted) of current assets specifically dedicated to it. Your question is slightly hard to answer since I'm guessing OpenPhil and FTXF would fund at least some s-risk projects if there were more proposals/more demand for money in s-risk. Also, a lot of funded people and projects that don't work directly on s-risk still care about s-risk. Maybe that should be counted somehow. Naively not counting these people and OpenPhil/FTXF money at all, and comparing current total assets in general longtermism vs. s-risk:

In absolute terms: Yup, general longtermism definitely has much more money (~two orders of magnitude). My guess is that this ratio will grow bigger over time, and that it will in expectation grow bigger over time. (~70% credence for each of these claims? Again, I'm confused about how to count OpenPhil and FTX F money and how they'll decide to spend money in the future. If I stick to not counting them as s-risk money at all, then >70% credence.)
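For concreteness, a quick back-of-envelope with round, illustrative numbers (the ~$20bn for general longtermism is my own rough assumption, not a checked figure; the ~$200m for s-risk is the "low hundreds of $m" estimate above):

\[
\frac{\sim\$20\,\text{bn (general longtermism)}}{\sim\$0.2\,\text{bn (s-risk)}} = 100 = 10^2, \ \text{i.e. two orders of magnitude}
\]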

Per person working on s-risk/general longtermism: I'd still say yes, although I don't have a good way to count s-risk people and general longtermist people. Could be closer to even, and probably not (much) more than an order of magnitude difference. Again, my quick and wild guess is that the difference will in expectation grow larger over time, but I'm less confident in this than in my guess about how the ratio of absolute money will develop. (55%?)

Per quality-adjusted idea/opportunity to spend money: Unsure. I'd (much) rather have more money-eating ideas/opportunities to reduce s-risk than more money to reduce s-risk, but I'm not sure if this is more or less the case compared to general longtermism (s-risk has both fewer ideas/opportunities and less money). I also don't know how this will develop. Arguably, the ratio between money and ideas/opportunities also isn't a great metric because you might care more about absolutes here. I think some people might argue that s-risk is less funding-constrained relative to ideas-constrained than general longtermism. This isn't exactly what you've asked for but still seems relevant. OTOH, having less absolute money does mean that the s-risk space might struggle to fund even one really expensive project.

edit: I do think if we had significantly more money right now, we would be spending more money now-ish.

Per "how much people in the EA community care about this issue": Who knows :) I'm  obviously both biased and in a position that selects for my opinion.

Funding infrastructure: Funding in s-risk is even more centralized than in general longtermism, so if you think diversification is good, more s-risk funders are good :) There are also fewer structured opportunities for funding in s-risk and I think the s-risk funding sources are generally harder to find. Although again, I assume one could easily apply with an s-risk motivated proposal to general longtermist places, so it's kind of weird to compare the s-risk funding infrastructure to the general longtermist funding infrastructure.

 

I wrote this off the cuff and, in particular, might substantially revise my predictions with 15 minutes of thought.

Chi · 30

If you or someone you know is seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects happen, in the ~tens of millions of dollars (per project). And if you have something more ambitious than that, I'm also always keen to hear it :)
