Calling all Lithuanians!
I'm on the lookout for people who are interested in effective altruism / rationality and living in Lithuania.
If you happen to know anyone like that, let me know so I can invite them to apply to the upcoming EAGxNordics conference.
For context, I am on the organising team for EAGxNordics, and one of our goals is to grow the smaller EA communities in the region, most notably Lithuania, which is the largest country in the Baltics but has the smallest EA presence. My hope is that the conference will help connect existing EA-aligned individuals living in Lithuania who might not know each other.
Would you consider adding your ideas for 2 minutes? - Creating a comprehensive overview of AI x-risk reduction strategies
------
Motivation: To identify the highest-impact strategies for reducing the existential risk from AI, it’s important to know what options are available in the first place.
I’ve just started creating an overview and would love for you to take a moment to contribute and build on it with the rest of us!
Here is the work page: https://workflowy.com/s/making-sense-of-ai-x/NR0a6o7H79CQpLYw
Some thoughts on how we collaborate:
Brazil has been dealing with massive, criminally set wildfires for the last few weeks, and the air quality is record-breakingly bad. Besides other obvious issues (an ineffective government response in going after the criminals setting the fires, climate change making everything worse), hardly anyone is talking about how to deal with the immediate air-quality problem. It's a bit bizarre.
People aren't widely adopting PFF2 masks and air purifiers. These remain somewhat niche topics even though pretty much everyone is suffering. To be fair, there are occasional media report...
[crossposted from my blog; some reflections on developing different problem-solving tools]
When all you have is a hammer, everything sure does start to look like a nail. This is not a good thing.
I've spent a lot of my life variously
1) Falling in love with physics and physics fundamentalism (the idea that physics is the "building block" of our reality)
2) Training to "think like a physicist"
3) Getting sidetracked by how "thinking like a physicist" interacts with how real people actually do physics in practice
4) Learning a bunch of different skills to tackle i...
Oh, if you read some of Plato's dialogues it seems very untrue... Plato was really into strawmanning his opponents' arguments, unfortunately :)
Anyway, to try to answer your (very thoughtful) question:
Someone needs to be doing mass outreach about AI Safety to techies in the Bay Area.
I'm generally more of a fan of niche outreach over mass outreach, but Bay Area tech culture influences how AI is developed. If SB 1047 is defeated, I wouldn't be surprised if the lack of such outreach ended up being a decisive factor.
There are now enough prominent supporters of AI Safety, and AI is hot enough, that public lectures or debates could draw a big crowd. Even though a lot of people have been exposed to these ideas before, there's something about in-person events that makes ideas seem real.
My point is not that the current EA Forum would censor topics that were actually important in early EA conversations; EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as, if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of developmen...
Reflections on a decade of trying to have an impact
Next month (September 2024) is my 10th anniversary of formally engaging with EA. This date marks 10 years since I first reached out to the Foundational Research Institute about volunteering, at least as far as I can tell from my emails.
Prior to that, I probably had read a fair amount of Peter Singer, Brian Tomasik, and David Pearce, who might all have been considered connected to EA, but I hadn’t actually actively tried engaging with the community. I’d been engaged with the effective animal advocacy commun...
After some clarifying discussions with someone offline, I want to explain my decreased confidence in the statement "Farmed vertebrate welfare should be an EA focus".
I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, I don't think these groups are that talent-constrained, and it seems like the best bets (e.g. corporate campaigns) will continue to have decreasing cost-effectiveness, new a...
I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.
I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin).
I see many EAs erroneously try to go into research and stick to research despite having very ...
The original website for Students for High Impact Charities (SHIC) at https://shicschools.org is down (you can find it on the Wayback Machine), but the program scripts and slides they used in high schools are still available at their Google Drive link: https://drive.google.com/drive/folders/0B_2KLuBlcCg4QWtrYW43UGcwajQ
This could potentially be a valuable EA community-building resource.
Nonprofit organizations should make their sources of funding really obvious and clear: How much money you got from which grantmakers, and approximately when. Any time I go on some org's website and can't find information about their major funders, it's a big red flag. At a bare minimum you should have a list of funders, and I'm confused why more orgs don't do this.
Has anyone talked with/lobbied the Gates Foundation on factory farming? I was concerned to read this in Gates Notes.
"On the way back to Addis, we stopped at a poultry farm established by the Oromia government to help young people enter the poultry industry. They work there for two or three years, earn a salary and some start-up money, and then go off to start their own agriculture businesses. It was a noisy place—the farm has 20,000 chickens! But it was exciting to meet some aspiring farmers and businesspeople with big dreams."
It seems a disaster that the ...
I used to frequently come across a certain acronym in EA, used in contexts like "I'm working on ___" or "looking for other people who also use ___". I flagged it mentally as a curiosity to explore later, but ended up forgetting what the acronym was. I'm thinking it might be CFAR, as in the CFAR workshops? If so, 1) what happened to them, and 2) was it common for people to work through the material themselves, self-paced?
The copyright banner at the bottom of their site extends to 2024, and the Google form for workshop applications hasn't been deactivated.
I got a copy of the CFAR handbook in late 2022, and the intro had an explicit reference to self-study, along the lines of: 'we have only used this in workshops; we don't know what self-study of this material does, and it wasn't written for self-study'.
So I assume self-study wasn't common, but I may be wrong.
Please, people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims to have moderated his views, he is still very racist as far as I can tell.
Hanania called for trying to get rid of all non-white immigrants in the US and for the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even post his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of governmen...
Just to expand on the above, I've written a new blog post - It's OK to Read Anyone - that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.
London folks - I'm going to be running the EA Taskmaster game again at the AIM office on the afternoon of Sunday 8th September.
It's a fun, slightly geeky way to spend a Sunday afternoon. Check out last year's list of tasks for a flavour of what's in store 👀
Sign up here
(Wee bit late in properly advertising, so please do spread the word!)
I’m looking for podcasts, papers, or reviews on fish sentience.
Specifically:
I would also like to know if there are practical methods to reduce the amount of harm done when fishing.
Rethink Priorities had their moral weights report, which placed salmon at 0.056; I'm not sure I completely understood what that figure meant. I think this means they have 5% of the...
The Economist has an article on what China's top politicians think about catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?"
...Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.
[...]
China’s accelerationists want to keep th
Would it be worth having some sort of running, contributable-to tab for open questions? We could also encourage people to flag open questions they see in posts.