Bio

I'm interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her

Legal notice: I hereby release under the Creative Commons Attribution 4.0 International license all contributions to the EA Forum (text, images, etc.) to which I hold copyright and related rights, including contributions published before 1 December 2022.

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

Sequences (8)

Philosophize This!: Consciousness
Mistakes in the moral mathematics of existential risk - Reflective altruism
EA Public Interest Tech - Career Reviews
Longtermist Theory
Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series

Comments (802)

Topic contributions (136)

I can speak for myself: I want AGI, if it is developed, to reflect the best possible values we have currently (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. That said, it's unclear to me how much these values would differ between an AGI developed in a free country and one developed in an unfree one, because many of the AI systems the US government uses could also be put to oppressive purposes (and arguably already are used in oppressive ways by the US).

Holden Karnofsky calls this the "competition frame" - the view that what matters most is who develops AGI. He contrasts this with the "caution frame", which focuses more on whether AGI is developed in a rushed way than on whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution frame.

Hope this helps!

  1. ^

    Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.

Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.

I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.

Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

Answer by Eevee🔹

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity, but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

Great piece! I noticed that you wrote "Goldstein" a few times.

I think this is a really thoughtful and constructive critique – thanks for sharing! I think you raise a great point about varying the depth of due diligence needed for donors based on the volume of their donations.

Larks, I’ve noticed that your comments on this post focus on criticizing the specific rules in the CC donor policy document that I shared as an example for other EA organizations to consider. You rightly point out that some of these rules might be impractical for EA orgs to follow. However, I feel frustrated because I think your responses have missed the point of my original post, which was to highlight the value of having a written donor screening policy and to offer the CC document as a resource that other orgs can legally copy and adapt to their own needs (under the CC-BY license), not to recommend that others adopt the policy verbatim.

I would appreciate it if we could keep the conversation focused on the broader idea of implementing donor screening policies or processes for EA orgs, rather than getting bogged down in the specifics of this document or tearing down this specific policy without suggesting alternatives. By focusing on the bigger picture, we can ensure that EA organizations develop systems that promote transparency and trust, especially in light of past controversies such as FTX. I think this approach would help build more confidence in the community, while still allowing for flexibility in implementation.

If you have thoughts on how these policies or processes could be better adapted to fit the needs of EA organizations, I’d be glad to discuss them here. I hope we can continue this conversation in a collaborative and constructive way.

Also, what makes PopVax "high-risk" according to CC's criteria, in your opinion?

I posted the CC policy just as an example of a donor screening policy; posting it doesn't mean I endorse its exact contents. As you helpfully pointed out, the terms of this policy could be overbroad and burdensome, and would have to be adapted to the different context in which EA orgs operate.

"a $10k donation from a Palantir employee would simply be totally prohibited"

First, many EAs donate a lot more money ($1-10k/year) than the average non-EA donor, so a $10k gift would likely be on the high end for CC but typical for an EA org. The threshold for an EA org to scrutinize a donor should therefore be much higher than CC's - $50-100k might be appropriate.

Second, by "totally prohibited" I think you are referring to the section on exclusionary criteria. The most relevant criteria for your Palantir example might be these:

the Donor’s involvement in:

  1. The manufacture or sale of arms, including any direct or indirect involvement in the manufacture or sale of illegal or controversial weapons;
  2. International crimes; this encompasses International crimes as defined in treaty and customary law, which include violations of International Humanitarian Law (IHL) and violations of International Human Rights Law (IHRL), such as crimes against humanity, genocide, torture; as well as other international crimes such as piracy, transnational organized crime, human trafficking, financing of terrorism, amongst others;

Even if taken literally, this would only apply to Palantir employees who work directly on weapons systems or any software that is being used to commit violations of international law (such as war crimes). Palantir has branches that work with the private sector, and most employees there would likely be exempt.

That said, if it applied to literally all employees who worked on a particular project at Palantir, it would probably be too broad. Taking money from rank-and-file employees of Palantir (or any large tech company) is unlikely to bring CC into disrepute, but taking money from a senior executive might. (Incidentally, there is a group trying to disparage the events startup Partiful based on its founding employees' connections to Palantir, though it's hard to say how much influence they will have.)

Decisions are taken after thorough examination of such Donation [emphasis added]

As @huw suggested in another comment, this could be outsourced to an organization or team that can do centralized donor vetting and risk scoring for the whole EA community. Impact Ops comes to mind (and in fact does this).
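
To make this concrete, here is a minimal sketch in Python of how a centralized triage step might combine a review threshold (like the $50-100k figure above) with exclusionary criteria. The thresholds and flags are hypothetical illustrations, not taken from the CC policy, Impact Ops, or any existing vetting service.

```python
# Hypothetical sketch of a centralized donor-triage step.
# Thresholds and risk flags are illustrative assumptions only.

from dataclasses import dataclass, field

REVIEW_THRESHOLD_USD = 50_000  # assumed cutoff, echoing the $50-100k suggestion above

# Illustrative exclusionary flags, loosely inspired by the quoted CC criteria
EXCLUSIONARY_FLAGS = {"weapons_manufacture", "international_crimes"}

@dataclass
class Donor:
    name: str
    flags: set = field(default_factory=set)  # risk flags assigned during prior vetting

def triage_donation(donor: Donor, amount_usd: float) -> str:
    """Return 'reject', 'manual_review', or 'accept' for a proposed donation."""
    if donor.flags & EXCLUSIONARY_FLAGS:
        return "reject"          # exclusionary criteria apply regardless of gift size
    if amount_usd >= REVIEW_THRESHOLD_USD:
        return "manual_review"   # large gifts get thorough examination
    return "accept"              # small gifts pass with minimal friction

# Example usage
print(triage_donation(Donor("Example donor"), 10_000))   # accept
print(triage_donation(Donor("Example donor"), 75_000))   # manual_review
```

In practice, most of the work is in assigning the risk flags in the first place, which is exactly the part a centralized vetting team could specialize in.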

Our impact is not just what we create ourselves, but how we influence others. In part due to our progress, there’s vibrant competition in the space, from commercial products similar to ChatGPT to open source LLMs, and vigorous innovation on safety.

Color me cynical. OpenAI cites its own, Anthropic's, and DeepMind's approaches to AI safety, but Meta has also expressed an ambition to build AGI and doesn't seem to prioritize safety in the same way.
