
Harrison Gietz

74 karma · Baton Rouge, LA, USA

Comments (10)

I think a common pitfall of being part of groups that appear to have better epistemics than most others (e.g. EA, LW) is that membership implicitly gives you a feeling that you can let your [epistemic] guard down (e.g. to defer).

I've noticed this in myself recently; identifying (whether consciously or not) as more intelligent/rational than the average Joe is actually a surefire way for me to end up thinking less clearly than I otherwise would. (This is obvious in retrospect, but I think it's pretty important to keep in mind.)

I agree with a lot of what you said, and have had similar concerns. I appreciate you writing this and making it public!

Huge! I am so excited to see this announcement, and wish AoI the best of luck.

Not sure what the motivation behind this post is; it would be good for you to clarify. 

I think the question isn't framed very well, since a love letter doesn't make a person happy for their entire life. Clearly more QALYs or WELLBYs or whatever are preserved by running over the letters.

I like the general idea of this post, and I think the program is worth experimenting with. I would love to see how it goes if you decide to run it. That being said, I have a couple of thoughts.

Weak criticism (I expect there is probably a good rebuttal to this): We might want some selection for students who have already determined that they want to make "doing good" a large part of their life. Maybe these students are more conscientious than the average individual, and this is an early signal that they are people who self-reflect on their values/thoughts/beliefs more. This could mean that they will perform better in careers that require a lot of critical thinking and/or careful moral reasoning. That's not to say these kinds of thinking skills cannot be learned, but they may be picked up faster and performed better by students who already exhibit some level of personal reflection at a younger age.

Stronger criticism: If students have not internalized that doing good matters to them, and would therefore not want to join the Intro EA Program, I strongly suspect they will also not be interested in a 5-week program about their purpose and life planning. My main concern here is that outreach will be difficult (but it's easy to prove me wrong empirically, so feel free to go out and do it!)

A final thought on framing/overstepping: If I were a student first hearing about this program, I think I would be a little suspicious of the underlying motives. From a surface-level impression, I would think that the goal of the program is for me to "find my purpose"... then I would look into who is running the program and ask myself, "who are these EA people, and why do they care about my purpose?", after which I would quickly find out that they want me to join their organization.

The main concern I want to bring up is that this program could easily turn into a sort of bait-and-switch. Finding one's purpose and life goals is a very individual process, and I wouldn't want that process to be "hijacked" by an EA program that directs people in a very specific direction. I.e., my concern is that the program will be presented as if it is encouraging people to find their values and purpose, when in reality it's just trying to incept them into following an EA career path.

Not sure if or how this can be avoided, other than being really up-front with applicants about the motivation behind the program. I also might be misunderstanding something, so feel free to correct me.

You mention a few times that EV calculations are susceptible to motivated reasoning. But this conflicts with my understanding, which is that EV calculations are useful partly (largely) because they help to prevent motivated reasoning from guiding our decisions too heavily.

(E.g. you can imagine a situation where charity Y performs an intervention that is more cost-effective than charity X's. By following an EV calculation, one might switch their donation from charity X to charity Y, even though charity X sounds intuitively better.)
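To make the kind of comparison I have in mind concrete, here is a minimal sketch; the charities and numbers are purely hypothetical, chosen only to illustrate how the calculation overrides the intuitive pull toward one option:

```python
# Hypothetical figures: cost per intervention and the expected benefit
# (e.g. in WELLBYs) that each intervention produces.
charities = {
    "Charity X": {"cost_per_intervention": 50.0, "expected_benefit": 1.2},
    "Charity Y": {"cost_per_intervention": 30.0, "expected_benefit": 1.5},
}

# Expected benefit per dollar donated, for each charity.
for name, c in charities.items():
    ev_per_dollar = c["expected_benefit"] / c["cost_per_intervention"]
    print(f"{name}: {ev_per_dollar:.3f} expected benefit per dollar")

# Even if Charity X sounds intuitively more appealing, the numbers above
# direct the donation to whichever charity has the higher EV per dollar.
```

The point is just that once the estimate is written down, it is harder for "Charity X feels better" to quietly drive the decision.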

Maybe you could include some examples/citations of where you think this "EV motivated reasoning" has occurred. Otherwise I find it hard to believe that EV calculations are worse than the alternative, from a "susceptible-to-motivated-reasoning" perspective (here, the alternative is not using EV calculations).

+1 here; looks like this is a vestige from the previous version, and should probably be corrected.

This post makes the case that warning shots won't change the policy picture much, but I could imagine a world where some warning shot makes the leading AI labs decide to focus more on safety, or agree to slow down their deployment, without any policy change occurring. Maybe this could buy a couple of years' time for safety researchers?

This isn't a well-developed thought, just something that came to mind while reading.

Do you have any more info on these "epistemic infrastructure" projects or the people working on them? I would be super curious to look into this more.
