Bio

While I'm currently doing AI governance RA work, I would generally describe myself as a modern-day renaissance man: I've slaughtered pigs on my family farm and become a vegan, done HVAC work and academic research, and been a member of both the Republican and Democratic clubs at my university. I've sought to experience much in life, and on the horizon I wish to deepen my experience within EA, having gone from fellow to facilitator to prospective employee looking for work in the space, while simultaneously doing personal cause prioritization work. I'll include below a list of some of my deep interests, and would be happy to connect over any of these areas, specifically as they may intersect with EA.

Philosophy, Psychology, Music (deep interests in all genres, but especially electronic and indie), Politics (especially American), Drugs (mostly psychedelics), Gaming (mostly League these days), Cooking (have been a head chef), Photography, Meditation (specifically mindfulness).

How others can help me

I'm in the process right now of deciding which cause area to focus my future work on (general longtermism research, EA community building, nuclear, AI governance, or mental health), so any compelling reasons to go (or not to go) into any of these would be really helpful at this point.

How I can help others

While I can't really offer any expertise in EA-related things, I have deep knowledge of Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested.

Comments

Ah okay, cool, a skeptic who has really engaged with the material. I won't ask you your reasons because I'm sure I can find them on your Substack, but I would love to know: do you have rough percentages for the chance of catastrophic risk and x-risk from AI? You can restrict the estimate to the next century if that would help.

Fair enough. Maybe I was less skeptical than I thought at first, and having a really good explainer was enough to dispel the little skepticism I did have. You mention Human Compatible, but you don't really seem convinced by it. Is there any convincing work you've found, or have you remained unconvinced through all you've read?

Ah okay, that seems like a step towards a more solid metric to me: is what I'm doing (something that necessitates breaking the law) truly of potential extraordinary impact?

This would of course need further definition, because "extraordinary" can be relative. But combine this requirement with placing greater weight on avoiding corrosion at an organizational and community level, and it will probably work out such that you effectively never break the law, without completely shutting the door.

A further question would be whether you think the chance of getting caught should factor in. Say this person thinks there is a 0.0001% chance they get caught doing this: is that enough to override the more caution-oriented principle above? Or do you think it's still not worth it, because the corrosive effects come not just from getting caught but from undertaking the action at all?

What situations do you think could warrant it, if you think that "never" is not the appropriate reaction?

I have no idea how to model the potentially corrosive effects, beyond naming some potential consequences and imagining they could be quite bad. It would seem like thumbing the "no" button would just become perpetually pressing "no" until I learned more. Which maybe is fine? But then again, if there are some situations where it is okay, how would you resolve them with any confidence?

Not too sure about this one. I think "good" is a bit strong as phrasing, but ask me "What's the better world?" and I think that the pretend-tourist world might be, so I agreevoted. It's risky, to be sure, and the negatives certainly should be on the table and considered.

But on the other side, even the process for temporary work visas is a nightmare, and while I would much prefer to try to change legislation rather than go around it, communicating both the negatives and positives clearly and letting the person make the decision themselves seems plausibly like a good answer here.

But of course, I am considering this locally and not factoring in potential reputational negatives or any of that. I suppose, given that I think some people with short timelines aren't crazy, and that there is a lot of important work to be done now, it could be worth the tradeoff. That is to say, it probably isn't the decision I'd make, but I can understand how another might sensibly come to make it.

You could probably remove this now, since you have the finished product as part of the sequence?

You could always try this out by going through Nonlinear?

Any update on where you've landed now, on the other side of this process?
