
BrownHairedEevee

Funemployed
5119 karma · Joined · Working (0-5 years) · New York, NY, USA · sunyshore.substack.com

Bio

Participation: 5

I'm interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

Sequences: 8

Philosophize This!: Consciousness
Mistakes in the moral mathematics of existential risk - Reflective altruism
EA Public Interest Tech - Career Reviews
Longtermist Theory
Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series

Comments: 766

Topic contributions: 131

I can speak for myself: I want AGI, if it is developed, to reflect the best possible values we have currently (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by one based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. It's unclear to me how much these values would differ if the AGI were developed in a free country versus an unfree one, because a lot of the AI systems that the US government uses could also be used for oppressive purposes (and arguably already are used in oppressive ways by the US).

Holden Karnofsky calls this the "competition frame" - in which what matters most is who develops AGI. He contrasts this with the "caution frame", which focuses more on whether AGI is developed in a rushed, unsafe way than on who might misuse it. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution frame.

Hope this helps!

  1. ^

    Fwiw, I do believe that liberal values can be improved on, especially in that they seldom extend to animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.

Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.

I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.

Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience: the EAs I was interacting with were kind and open-minded and encouraged me to get involved, whereas before I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

Cheaper, yes, but buses (even bus rapid transit, or BRT) don't scale as well to large numbers of passengers. This 2009 study compares BRT, regular buses, light rail (LRT), and mass rapid transit (MRT), and finds that MRT has the lowest cost per thousand passenger-miles:

The author concludes that, on average, "BRT can outperform LRT in providing a moderate to high level of service capacity at a moderate level of capital and operating costs in neighborhoods with moderate population and job densities." While MRT systems are the most expensive to build, they can achieve over five times the capacity of BRT or LRT, and are associated with the largest positive impact on property values in the vicinity of stations.
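To see why the most expensive mode can still be the cheapest per passenger-mile, here's a minimal TypeScript sketch with purely hypothetical numbers (illustrative only, not taken from the 2009 study): dividing each mode's total cost by the passenger-miles it actually carries favors the mode with much higher capacity.

    // Hypothetical figures for illustration only - not from the 2009 study.
    interface TransitMode {
      name: string;
      annualizedCostUSD: number;    // annualized capital + operating cost per year
      annualPassengerMiles: number; // passenger-miles carried per year
    }

    const modes: TransitMode[] = [
      { name: "BRT", annualizedCostUSD: 50_000_000, annualPassengerMiles: 200_000_000 },
      { name: "LRT", annualizedCostUSD: 80_000_000, annualPassengerMiles: 250_000_000 },
      { name: "MRT", annualizedCostUSD: 300_000_000, annualPassengerMiles: 1_500_000_000 },
    ];

    for (const m of modes) {
      // cost per thousand passenger-miles = total cost / (passenger-miles / 1,000)
      const costPerThousandPM = m.annualizedCostUSD / (m.annualPassengerMiles / 1_000);
      console.log(`${m.name}: $${costPerThousandPM.toFixed(0)} per 1,000 passenger-miles`);
    }
    // With these made-up numbers: BRT $250, LRT $320, MRT $200 per 1,000 passenger-miles.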

Also, you have to implement BRT properly to reap most of the benefits, which means dedicated lanes rather than lanes shared with other vehicles.

Somewhat relatedly, the forum terms of use addendum currently does not mention the ForumMagnum software, which is licensed under GPL v3. CEA itself might have obligations under the GPL to other contributors to the forum software (i.e. anyone who contributed to it and was not a CEA employee), such as informing forum users of the terms and conditions of the GPL and not imposing further restrictions on them. I suggest looking into this to see if the EA Forum terms of service need to be modified in order to comply with the GPL. [edit: reworded to avoid being interpreted as legal advice]

I think the EA Forum software does a poor job at communicating the license terms of forum posts. (For context, all new forum content published on or after 1 December 2022 has been licensed under Creative Commons Attribution 4.0.)

The current license statement is buried in the forum terms of use, which can be reached from the "How to use the Forum" page in the navigation sidebar, so readers who have not registered on the forum and clicked through the license agreement may be unaware of the license terms. By contrast, many sites that use CC licenses, like Wikipedia and Stack Overflow, display a link to the license in the footer:

Site design / logo © 2024 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2024.7.9.12232

I suggest adding a footer to each new post or the sidebar stating something like the following:

EA Forum user content posted on or after 1 December 2022 is licensed under Creative Commons Attribution 4.0. This license only covers content created by Forum users; third-party content may be subject to other copyright restrictions.

The second part of this statement is relevant for content such as linkposts and posts that incorporate quotes, where there has been confusion about whether the CC license would apply to third-party content incorporated into forum posts (the license text explicitly states that it does not grant rights that the licensor does not control).

Note that rel="license" should be added to the license link. If it's necessary to explain more, you could link to another page, like how Stack Overflow links to this page stating that content from different periods of time is subject to different versions of CC BY-SA.
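For concreteness, here's a minimal sketch of what such a footer could look like as a React component (ForumMagnum is a React/TypeScript codebase; the component name and wording here are hypothetical illustrations, not the actual ForumMagnum code):

    // Hypothetical sketch, not the actual ForumMagnum component: a site footer
    // that links to the CC BY 4.0 license and marks the link with rel="license".
    import React from "react";

    const LicenseFooter = () => (
      <footer>
        EA Forum user content posted on or after 1 December 2022 is licensed under{" "}
        <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">
          Creative Commons Attribution 4.0
        </a>
        . This license only covers content created by Forum users; third-party
        content may be subject to other copyright restrictions.
      </footer>
    );

    export default LicenseFooter;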

Thanks for everything you've done, Austin! I'm especially grateful to the Manifold community for having raised $1,203 for Shrimp Welfare Project (to date); it's been one of the most popular charities on the platform.

The Insect Institute (as Insect Welfare Project) got a $45k movement grant from ACE in June 2022. Shrimp Welfare Project got a $40k grant in the same round.

(I could be wrong but I think Insect Welfare Project was the working name of the Insect Institute prior to February 2023?)

Your second statement is basically right, though my personal view is they impose costs on the movement/EA brand and not just us personally.... I hope to see everything funded by a more diverse group of actors, so that their dollar and non-dollar costs are more distributed.

Do you think that these "PR" costs would be mitigated if there were more large (perhaps more obscure) donors? Also, do you think that "weird" stuff like artificial sentience should be funded at all or just not by Good Ventures?

[edit: see this other comment by Dustin]

Is this separate from Insect Institute? The title of the post made me think that Insect Institute was rebranding to Arthropoda Foundation.
