I hope you've smiled today :)
I really want to experience and learn about as much of the world as I can, and I pride myself on working to become a sort of modern-day Renaissance man, a bridge builder between very different people, if you will. Some things not commonly seen in the same person: I've slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university.
Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but I hope it can be more than that: a community, one that really works toward making the world a little better than it was.
Below are some random interests of mine. I'm happy to connect over any of them, or over anything EA. Please feel free to book whatever time is open on my Calendly.
I've done some RA work in AI policy, and I'd be eager to continue that in a more permanent position (or at least a longer funded period). Any help bettering myself (e.g., how can I do research better?) or finding a position like that would be much appreciated. Otherwise, I'm on the lookout for good opportunities in EA community building or general longtermism research, so any help upskilling or breaking into those spaces would also be wonderful.
Of much lower importance: I'm still not sure which cause area I'd like to go into, so if you have any information on the following, especially regarding a career in it, I'd love to hear about it: general longtermism research, EA community building, nuclear, AI governance, and mental health.
I don't have domain expertise by any means, but I have thought a good bit about AI policy and next best steps, and I'd be happy to share (e.g., how bad is the risk from AI misinformation, really?). Beyond EA-related things, I have deep knowledge of philosophy, psychology, and meditation, and can potentially help with questions related to those disciplines. The best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization and help others on that journey (as well as connect with those trying to find work).
This is a solid data point, so thanks for mentioning it. It's maybe worth noting that, as critical of EA as Emile and you may be, Emile was formerly quite friendly to it, and you and I are having this conversation on the Forum.
I think you're likely both "more EA" than the average person, and definitely more EA than the average detractor I have in mind. What it means to "be EA" is amorphous and uncertain here, but many people who consider themselves EAs are also critical of it at times.
I'd be interested to see how much Timnit donates, or any of those who wrote the typical SBF articles, but I highly doubt their numbers would look like those above.
This was an absolutely beautiful read; thank you so much for taking the time to write it. I recently put out something of my own with some similar thoughts, and I found it remarkable to read this afterward and discover much of the same wisdom already contained here.
Thanks for the kind message, Jon :) I actually have the third one sitting as a draft right now; I hadn't put in the last little bit because I wasn't sure of the value, but I'll go ahead and finish it up. I'd love to continue reviewing, but I'd have to see if they ship to other countries, since I picked up the first three at an EAGx. I highly encourage you to leave your thoughts on them as a comment here; I think that could be helpful if you're interested!
As a random data point: I'm only just getting into the AI governance space, but I've found little engagement with (some) (of[1]) (the) (resources) I've shared, and I've updated toward thinking either this isn't the space for it or I'm just not yet knowledgeable enough about what would be valuable to others.
I was especially disappointed with this one, because it was a project I worked on with a team for some time, and I still think it's quite promising, yet it didn't receive the engagement I would have hoped for given the effort involved. Since I optimized parts of the project specifically for putting out this bit of research, I wouldn't do the same now; I would instead have focused on other parts of the project.