My name is Kendrea Beers. Currently, I'm an MS student and Graduate Research Assistant in the Artificial Intelligence program at Oregon State University, where I advocate for safety and ethics as the co-president of the AI Graduate Student Association. I did my HBS in Philosophy (with a focus on Buddhist philosophy) with minors in Math and Computer Science.
I've been involved in EA for most of a decade now, attending my first EA Global in 2017. I figure my career is for AI safety/ethics and my money is for animal welfare and global poverty alleviation.
I plan to look for full-time positions in AI safety/ethics starting in mid-late 2024. Ideally, I'd like to serve as a bridge between technical researchers, ethicists, and decision-makers. Ideas welcome!
Reach out to me if you're curious about any topic in my bio!
Hi! I relate so much to you. I'm seven years older than you and I'm pretty happy with how my life is going, so although I'm no wise old sage, I think I can share some good advice.
I've also been involved in EA, Buddhism, veganism, minimalism, sustainable fashion, etc. from a young age, plus I was part of an Orthodox Christian community as a teenager (as I assume you are, being in Greece).
So, here's my main advice.
The philosophies of EA, Buddhism, etc. are really really morally demanding. Working from the basic principles of these philosophies, it is difficult to find reasons to prioritize your own wellbeing; there are only pragmatic reasons such as "devote time and money to your own health so that you can work more effectively to help others". Therefore, if you predominantly engage in these communities through the philosophy, you will be exhausted.
So, instead of going down internet rabbit holes and reading serious books, engage with the people in these communities. Actual EAs goof around at parties and write stories. Actual Buddhists have silly arguments at nice restaurants and go on long treks through the mountains. While good philosophies are optimized to be hard to argue with, good communities are optimized to be healthy and sustainable.
I'm guessing you don't have strong EA and Buddhist communities near you, though. Same here. In that case, primarily engage in other communities instead. When I was your age (ha that sounds ridiculous), I was deeply involved in choir. Would highly recommend! Having fun is so important to balance out the philosophies that can consume your life if you let them.
In non-EA, non-Buddhist communities, it might feel like you're the only one who takes morality seriously, and that can be lonely. Personally, I gravitate toward devout religious friends, because they're also trying to confront selfishness. Just make sure that you don't go down depressing rabbit holes together.
Of course, there are nice virtual EA and Buddhist communities too. They can't fully replace in-person communities, though. Also, people in virtual communities are more likely to only show their morally intense side.
I hope this helps! You're very welcome to DM me about anything. I'll DM you first to get the conversation going.
P.S. You've got soooo much time to think about monasticism, so there's no reason to be concerned about the ethics of it for now, especially since the world could change so much by the time we retire! Still, just for the philosophical interest of it, I'm happy to chat about Buddhist monasticism if you like. Having lived at a monastery for several months and written my undergrad thesis on a monastic text, I've got some thoughts :)
General information about people in low-HDI countries to humanize them in the eyes of the viewer.
Similar for animals (except not “humanizing” per se!). Spreading awareness that e.g. pigs act like dogs may be a strong catalyst for caring about animal welfare. Would need to consult an animal welfare activism expert.
My premise here: it is valuable for EAs to viscerally care about others (in addition to cleverly working toward a future that sounds neat).
I'll just continue my anecdote! As it happens, the #1 concern that my friend has about EA is that EAs work sinisterly hard to convince people to accept the narrow-minded longtermist agenda. So, the frequency of ads itself increases his skepticism of the integrity of the movement. (Another manifestation of this pattern is that many AI safety researchers see AI ethics researchers as straight-up wrong about what matters in the broader field of AI, and therefore as people who need to be convinced rather than collaborated with.)
(Edit: the above paragraph is an anecdote, and I'm speaking generally in the following paragraphs)
I think it is quite fair for someone with EA tendencies, who is just hearing of EA for the first time through these ads, to form a skeptical first impression of a group that invests heavily in selling an unintuitive worldview.
I strongly agree that it's a good sign if a person investigates such things instead of writing them off immediately, indicating a willingness to take unusual ideas seriously. However, the mental habit of openness/curiosity is also unusual and is often developed through EA involvement; we can't expect everyone to come in with full-fledged EA virtues.
These are excellent answers, thanks so much!
As more and more students get interested in AI safety, and AI-safety-specific research positions fail to open up proportionally, I expect that many of them (like me) will end up as graduate students in mainstream ethical-AI research groups. Resources like these are helping me to get my bearings.
Thanks very much, that helps!
Adding more not to defend myself, but to keep the conversation going:
I think that many Enlightenment ideas are great and valid regardless of their creators' typical-for-their-time ideas.
Education increasingly includes rather radical components of critical race theory. Students are taught that if someone is racist, then all of their political and philosophical views are tainted. By extension, many people learn that the Enlightenment itself is tainted. Like Charles, I think that this "produces misguided perspectives".
What I'm trying (apparently badly) to communicate is the following: these students, who have been taught that the Enlightenment is tainted by association with racism, who (reasonably!) haven't bothered to thoroughly research this particular historical movement and come to their own conclusions, and who might otherwise make great EAs, would initially be turned off.
It's quite plausible that Enlightenment aesthetics shouldn't turn people off. But I think they do, and I'd argue that making a good first impression is likely more important than taking a stand in favor of a particular historical movement.
Hope that makes sense!
I'm having an ongoing discussion with a couple of professors and a PhD candidate in AI about "The Alignment Problem from a Deep Learning Perspective" by @richard_ngo, @Lawrence Chan, and @SoerenMind. They are skeptical of "3.2 Planning Towards Internally-Represented Goals," "3.3 Learning Misaligned Goals," and "4.2 Goals Which Motivate Power-Seeking Would Be Reinforced During Training." Here's my understanding of some of their questions: