Quick takes

Would it be worth having some sort of running, contributable-to tab for open questions? We could also encourage people to flag open questions they see in posts.

Calling all Lithuanians!

I'm on the lookout for people who are interested in effective altruism / rationality and living in Lithuania.  

If you happen to know anyone like that, let me know so I can invite them to apply to the upcoming EAGxNordics conference.

For context, I am on the organising team for EAGxNordics, and one of our goals is to grow the smaller EA communities in the region - most notably Lithuania, which is the largest country in the Baltics but has the smallest EA presence. My hope is that the conference will help connect existing EA-aligned individuals living in Lithuania who might not know each other.

Update: Created a list of Lithuanians who seem interested in EA. https://docs.google.com/spreadsheets/d/1GMz-f2vvaSxXyEDyFuxFlRJ3_4lfSbk47SUwzRgjI58/edit?gid=0#gid=0

Would you consider adding your ideas for 2 minutes?  - Creating a comprehensive overview of AI x-risk reduction strategies
------

Motivation: To identify the highest impact strategies for reducing the existential risk from AI, it’s important to know what options are available in the first place.

I’ve just started creating an overview and would love for you to take a moment to contribute and build on it with the rest of us!

Here is the work page: https://workflowy.com/s/making-sense-of-ai-x/NR0a6o7H79CQpLYw

Some thoughts on how we collaborate:

  • Please don’t
... (read more)

Brazil has been dealing with massive criminally set wildfires for the last few weeks, and the air quality is record-breakingly bad. Besides other obvious issues (an ineffective government response in going after the criminals setting the fires, climate change making everything worse), hardly anyone is talking about how to deal with the immediate air quality problem. It's a bit bizarre.

People aren't widely adopting PFF2 masks and air purifiers. These remain somewhat niche topics even though pretty much everyone is suffering. To be fair, there are occasional media report... (read more)

8
core_admiral
Thank you for writing this - I'm working on a post going over how much cheaper air purifiers could be made, and it surprises me that it's not a more common topic of discussion. Some food for thought while I finish it up:

1. Indoor air quality affects so many people to at least some extent - consider air pollution, viruses, allergies etc.
2. Making air purifiers even slightly cheaper vastly increases the number of people globally who can afford one, and directly increases the cost-effectiveness of any intervention which involves paying for them.
3. Noise is a common reason for people under-utilising air purifiers, and the affordable end of consumer hardware hasn't solved for this yet. We know this because best-in-class clean air delivery rate (CADR) at a given noise level can be achieved with what is essentially a box with 2-4 air filters and some computer fans on the side (computer fans have become remarkably capable at low noise levels in recent times). These kits can be bought, but minimal competition in the space means no one is anywhere close to the reasonable price floor.
4. Competition in the air purifier market has partially been on features which are not necessary when the goal is optimizing CADR/$. Ionization, timers, remote control, app connectivity, odour removal etc. can be done away with for the purpose of achieving "one billion air filters in this decade" or anything of similar scale.

It almost seems too simple: the many things floating around in the air cause a huge amount of death, illness and general discomfort. If you push enough air through a fine enough filter, you remove the stuff in the air. If you make the filters cheap and quiet enough, people will be able to buy them and we can send people more of them for the same price. Of course, the air quality problem with respect to pollution is much more difficult to solve than simply chucking air filters everywhere, since people also have to be outside for much of thei
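To make the CADR-per-dollar point concrete, here's a minimal back-of-the-envelope sketch. All prices and CADR figures below are made-up placeholders for illustration, not measurements from the forthcoming post:

```python
# Rough CADR-per-dollar comparison; every number here is a made-up placeholder.

def cadr_per_dollar(cadr_m3h: float, price_usd: float) -> float:
    """Clean air delivery rate (m^3/h) bought per dollar of hardware."""
    return cadr_m3h / price_usd

# Hypothetical DIY unit: a box with 2-4 filters and computer fans on the side.
diy = cadr_per_dollar(cadr_m3h=600.0, price_usd=120.0)

# Hypothetical commercial purifier at a similar noise level.
commercial = cadr_per_dollar(cadr_m3h=400.0, price_usd=300.0)

print(f"DIY: {diy:.2f} m^3/h per $; commercial: {commercial:.2f} m^3/h per $")
# With these placeholders the DIY box buys ~3.75x more clean air per dollar -
# the kind of gap that minimal competition can leave open.
```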

There's a really cool startup, in India I believe, that has integrated HEPA filtration into bike helmets. So many people there ride two-wheelers and are stuck in abysmal air quality for at least an hour or two every day.

[crossposted from my blog; some reflections on developing different problem-solving tools]

When all you have is a hammer, everything sure does start to look like a nail. This is not a good thing.

I've spent a lot of my life variously
1) Falling in love with physics and physics fundamentalism (the idea that physics is the "building block" of our reality)
2) Training to "think like a physicist"
3) Getting sidetracked by how "thinking like a physicist" interacts with how real people actually do physics in practice
4) Learning a bunch of different skills to tackle i... (read more)

2
Said Bouziane
I like this suggestion. I feel like the big thing we need to figure out, in order to implement something like this really successfully, is how to increase tolerance for discomfort and disagreement. I see dialogue shut down far too quickly in intellectual spaces. I heard once that the old philosophers in Greece needed to state one another's positions well enough that the person they debated with actually agreed 'yes, you understand my position, I have nothing to add'. Only after that did debate take place. Not sure if this is true or not, but the spirit of the anecdote feels like it's really missing from most media and discussion out there in the mainstream. How could interdisciplinary cross-pollination be more cultivated?

Oh, if you read some of Plato's dialogues it seems very untrue... Plato was really into strawmanning his opponents' arguments, unfortunately :)

Anyway. To try and answer your (very thoughtful) question:

  • Get people from different disciplines together in the same physical space on a regular basis. Maybe you put the software engineers next to the literary critics and get them to have lunch together regularly, or something. People are easier to relate to up close.
  • Get people to work together on big interdisciplinary problems such as satellite imagery for conservati
... (read more)

Someone needs to be doing mass outreach about AI Safety to techies in the Bay Area.

I'm generally more of a fan of niche outreach than mass outreach, but Bay Area tech culture influences how AI is developed. If SB 1047 is defeated, I wouldn't be surprised if the lack of such outreach ended up being a decisive factor.

There are now enough prominent supporters of AI Safety, and AI is hot enough, that public lectures or debates could draw a big crowd. Even though a lot of people have been exposed to these ideas before, there's something about in-person events that makes ideas seem real.

Moderation updates

Showing 3 of 80 replies
14
richard_ngo
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it's cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal). Picturing some responses you might give to this:

1. That's not the sort of uncomfortable claim you're worried about.
   - But many possible continuations of this conversation would in fact have gotten into more controversial territory. E.g. maybe a cultural relativist would defend those other countries having lower age of consent laws. I find cultural relativism kinda crazy (for this and related reasons) but it's a pretty mainstream position.
2. I could have made the point in more sensitive ways.
   - Maybe? But the whole point of the conversation was about ways in which some cultures are better than others. This is inherently going to be a sensitive claim, and it's hard to think of examples that are compelling without being controversial.
3. This is not the sort of thing people should be discussing on the forum.
   - But EA as a movement is interested in things like:
     1. Criminal justice reform (which OpenPhil has spent many tens of millions of dollars on)
     2. Promoting women's rights (especially in the context of global health and extreme poverty reduction)
     3. What factors make what types of foreign aid more or less effective
     4. More generally, the relationship between the developed and the developing world
   So this sort of debate does seem pretty relevant.

The important point is that we didn't know in advance which kinds of discomfort were of crucial importance. The re
11
NickLaing
Do you have an example of the kind of early EA conversation - one that you think was really important and helped come up with core EA tenets - that might be frowned upon or censored on the forum now? I'm still super dubious about whether leaving out a small number of specific topics really leaves much value on the table. And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could be had without explicitly talking about killing people. The conversation was already being had on another thread, "the meat eater problem". And as a sidebar, yeah, I wouldn't have any issue with that above conversation myself, because we just have to discuss that practically with donors and internally when providing health care and getting confronted with tricky situations. Also (again sidebar) it's interesting that age of marriage/consent conversations can be where classic left-wing cultural relativism and gender safeguarding collide and don't know which way to swing. We've had to ask that question practically in our health centers, to decide who to give family planning to and when to think of referring to police etc. Super tricky.

My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of developmen... (read more)

Reflections on a decade of trying to have an impact

Next month (September 2024) is my 10th anniversary of formally engaging with EA. This date marks 10 years since I first reached out to the Foundational Research Institute about volunteering, at least as far as I can tell from my emails.

Prior to that, I probably had read a fair amount of Peter Singer, Brian Tomasik, and David Pearce, who might all have been considered connected to EA, but I hadn’t actually actively tried engaging with the community. I’d been engaged with the effective animal advocacy commun... (read more)

Showing 3 of 24 replies

After some clarifying discussions with someone offline, I want to explain my decreased confidence in the statement, "Farmed vertebrate welfare should be an EA focus".

I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, I don't think these groups are that talent constrained, and it seems like the best bets (e.g. corporate campaigns) will continue to have decreasing cost-effectiveness, new a... (read more)

6
NickLaing
I note that these risks hardly apply to GHD work ;). Can you explain how FTX harm could plausibly outweigh the good done by EA? I can't fathom a scenario where this is the case myself.
4
abrahamrowe
Yeah, I think there are probably parts of EA that will look robustly good in the long run, and part of the reason I think it's less likely EA as a whole will be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those areas negatively. Though this could cut both in favor of and against GHD work. I think just having a positive impact is quite hard, even more so when doing a bunch of uncorrelated things, some of which have major downside risks. I think it is pretty unlikely that FTX harm outweighs the good done by EA on its own. But it seems easy enough to imagine that, conditional on EA's net benefit being barely above neutral (which for other reasons mentioned above seems pretty possible to me, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up being net-negative or neutral, even if in expectation that shift is positive value), the scale of the stress / financial harm caused by EA via FTX outweighs that remaining benefit. And then there is brand damage to effective giving, etc. But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall EA portfolio of harm, or of work that doesn't matter in the long run.

Ilya Sutskever's Safe Superintelligence Inc. has raised $1B.

Showing 3 of 8 replies
4
yanni kyriacos
Don’t worry Nick, I’ll never stop.
2
NickLaing
I'll try a bit more too. 23 votes and 6 karma now - looks like the forum is split on the low-effort humor front ;).

lol someone has to write a post "How to make an upvoted joke on the forum that isn't cringe"

I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.

I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin).

I see many EAs erroneously try to go into research and stick to research despite having very ... (read more)

Showing 3 of 32 replies
2
abrahamrowe
I mean something like directly implementing an intervention vs finance/HR/legal/back office roles, so ops just in the nonprofit sense.

In that case I suspect there's no real disagreement, and you're just each using "ops" to mean somewhat different things?

1
Péter Drótos
Is there a proposed/proven way of coordinating on the prioritization? Without a good feedback loop, I can imagine the majority of people just jumping on the same path, which could then run into diminishing returns if there isn't sufficient capacity. It would be interesting to see at least the number of people at different career stages on a given path; I assume some data should be available from regular surveys. And maybe also some estimates of the capacity of different paths. And I assume the career coaching services likely have an even more detailed picture, including missing talent/skills/experience, that they can utilize for more personalized advice.

The original website for Students for High Impact Charities (SHIC) at https://shicschools.org is down (you can find it in the Wayback Machine), but the program scripts and slides they used in high schools are still available at their Google Drive link: https://drive.google.com/drive/folders/0B_2KLuBlcCg4QWtrYW43UGcwajQ

It could potentially be a valuable EA community-building resource.

I feel like being exposed early on to longer-form GovAI-type reports has made me set the bar high for writing my thoughts out in short form, which really sucks from an output standpoint.

Nonprofit organizations should make their sources of funding really obvious and clear: how much money you got from which grantmakers, and approximately when. Any time I go on some org's website and can't find information about their major funders, it's a big red flag. At a bare minimum you should have a list of funders, and I'm confused why more orgs don't do this.

Showing 3 of 10 replies
2
sawyer🔸
There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).
12
Arepo
Even when that's true, the org could specify all the other sources of funding, and separate out 'anonymous donations' into either one big slice or one-slice-per-donor.

Yep! Something like this is probably unavoidable, and it's what all of my examples above do (BERI, ACE, and MIRI).

Has anyone talked with/lobbied the Gates Foundation on factory farming? I was concerned to read this in Gates Notes.

"On the way back to Addis, we stopped at a poultry farm established by the Oromia government to help young people enter the poultry industry. They work there for two or three years, earn a salary and some start-up money, and then go off to start their own agriculture businesses. It was a noisy place—the farm has 20,000 chickens! But it was exciting to meet some aspiring farmers and businesspeople with big dreams."

It seems a disaster that the ... (read more)

I used to frequently come across a certain acronym in EA, used in contexts like "I'm working on ___" or "looking for other people who also use ___". I flagged it mentally as a curiosity to explore later, but ended up forgetting what the acronym was. I'm thinking it might have been CFAR, which seems to have referred to CFAR workshops? If so, 1) what happened to them, and 2) was it common for people to work through the material themselves, self-paced?

The copyright banner at the bottom of their site extends to 2024 and the Google form for workshop applications hasn't been deactivated.

I got a copy of the CFAR handbook in late 2022, and the intro had an explicit reference to self-study - along the lines of 'we have only used this in workshops, we don't know what the results of self-study of this material are, and it wasn't written for self-study'.

So I assume self-study wasn't common, but I may be wrong.

Please people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims to have moderated his views, he is still very racist as far as I can tell.

Hanania called for trying to get rid of all non-white immigrants in the US and for the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even after his supposed reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of governmen... (read more)

Showing 3 of 51 replies

Just to expand on the above, I've written a new blog post - It's OK to Read Anyone - that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.

10
ZachWeems
Un-endorsed for two reasons.

  • Manifold invited people based on having advocated for prediction markets, which is a much stricter criterion than being a generic public speaker who feels positively about your organization. With a smaller pool of speakers, it is not trivially cheap to apply filters, so it is not as clear-cut as I claimed. (I could have found out this detail before writing, and I feel embarrassed that I didn't.)
  • Despite having an EA in a leadership role and ample EA-adjacent folks who associate with it, Manifold doesn't consider itself EA-aligned. It sucks that potential EAs will sometimes mistake non-EAs for EAs, but it is important to respect it when a group tells the wider EA community that we aren't their real dad and can't make requests. (This does not appear to have been common knowledge, so I feel less embarrassed about this one.)
7
Thomas Kwa
Given the Guardian piece, inviting Hanania to Manifest seems like an unforced error on the part of Manifold and possibly Lightcone. This does not change because the article was a hit piece with many inaccuracies. I might have more to say later.

London folks - I'm going to be running the EA Taskmaster game again at the AIM office on the afternoon of Sunday 8th September. 

It's a fun, slightly geeky, way to spend a Sunday afternoon. Check out last year's list of tasks for a flavour of what's in store 👀

Sign up here 
(Wee bit late in properly advertising so please do spread the word!)

I’m looking for podcasts, papers, or reviews on fish sentience.

Specifically:

  • A long-form interview about the moral weight of fish.
  • Papers which estimate their moral weight.
  • Information on the long-term damage of fish hooks, being out of water, or the moral harm of fishing.

I would also like to know if there are practical methods to reduce the amount of harm done if you are fishing.

Rethink Priorities had their moral weights report, which placed salmon at 0.056. I'm not sure I completely understood what that figure meant. I think this means they have 5% of the... (read more)
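If I'm reading Rethink Priorities' report correctly (worth double-checking against the report itself), the 0.056 figure is a welfare-range estimate: a claim that a salmon's capacity for welfare is roughly 5.6% of a human's, not a probability of sentience. A minimal worked example of how such a weight gets used, with a made-up intervention size:

```python
# Hedged reading of RP's moral weight figure; the intervention size is invented.

salmon_welfare_range = 0.056   # RP's salmon estimate, relative to humans
salmon_years_averted = 1_000   # hypothetical: salmon-years of suffering averted

human_equivalent_years = salmon_welfare_range * salmon_years_averted
print(f"~{human_equivalent_years:.0f} human-year-equivalents of suffering averted")
# => ~56, assuming suffering of comparable intensity across species.
```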

Showing 3 of 6 replies
4
Toby Tremlett🔹
There actually was an EA-adjacent podcast about this in 2018, from Future Perfect. It discusses a Japanese method called ikejime, which instantly kills the fish and renders it immobile. Alternatively, just get into (legal) magnet fishing instead. No harm done.
6
NickLaing
I trout fish, and I can assure you that the fish I have caught are far too stressed to eat, so that wouldn't work for trout fishing at least.

What are your thoughts about catch and release fishing on bigger game fish? Have you seen any methods for doing it that seem safe?

91
Linch

The Economist has an article about China's top politicians' views on catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?"

 

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

[...]

China’s accelerationists want to keep th

... (read more)
Showing 3 of 7 replies
9
Ben Millwood
I think this might merit a top-level post instead of a mere shortform

(I will do this if Ben's comment has 6+ agreevotes)

5
Steven Byrnes
One thing I like is checking https://en.wikipedia.org/wiki/2024 once every few months, and following the links when you're interested.