All posts

New & upvoted

Today, 14 April 2024

No posts for April 14th 2024

Quick takes

Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried's behaviour years before the FTX collapse?

Saturday, 13 April 2024

Quick takes

Why are April Fools' jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago and there are still two April Fools' posts on the front page. I think they should be clearly labelled as April Fools' jokes so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or first few paragraphs.
Could it be more important to improve human values than to make sure AI is aligned? Consider the following (which is almost definitely oversimplified):

|  | Aligned AI | Misaligned AI |
|---|---|---|
| Humanity has good values | Utopia | Extinction |
| Humanity has neutral values | Neutral world | Extinction |
| Humanity has bad values | Dystopia | Extinction |

For clarity, let’s assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let’s also assume a neutral world is equivalent to extinction. The table shows that aligning AI can be good, bad, or neutral: the value of alignment depends entirely on humanity’s values. Improving humanity’s values, however, is always good. The only clear case where aligning AI beats improving humanity’s values is if there isn’t scope to improve our values further. An ambiguous case is whenever humanity has positive values, in which case both improving values and aligning AI are good options, and it isn’t immediately clear to me which wins. The key takeaway here is that improving values is robustly good whereas aligning AI isn’t: alignment is bad if we have negative values. I would guess that we currently have pretty bad values given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment). This analysis doesn’t consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it can also lead to dystopian outcomes?). I doubt this is a novel argument, but what do y’all think?
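A minimal sketch of the payoff matrix above as a toy expected-value comparison. The numeric payoffs, the distributions over humanity's values, and the alignment probabilities are illustrative assumptions, not anything claimed in the quick take:

```python
# Toy model of the payoff matrix above. All numbers are illustrative assumptions,
# scored on an arbitrary scale where extinction = 0.
PAYOFFS = {
    ("good", "aligned"): 100,      # utopia
    ("good", "misaligned"): 0,     # extinction
    ("neutral", "aligned"): 0,     # neutral world (assumed equivalent to extinction)
    ("neutral", "misaligned"): 0,  # extinction
    ("bad", "aligned"): -100,      # dystopia (assumed worse than extinction)
    ("bad", "misaligned"): 0,      # extinction
}

def expected_value(p_values: dict, p_aligned: float) -> float:
    """Expected value given a distribution over humanity's values and P(aligned AI)."""
    return sum(
        p_v * (p_aligned * PAYOFFS[(v, "aligned")] + (1 - p_aligned) * PAYOFFS[(v, "misaligned")])
        for v, p_v in p_values.items()
    )

baseline = {"good": 0.3, "neutral": 0.3, "bad": 0.4}  # assumed current values distribution
improved = {"good": 0.5, "neutral": 0.3, "bad": 0.2}  # assumed result of "improving values"

print(expected_value(baseline, p_aligned=0.5))  # status quo: -5.0
print(expected_value(baseline, p_aligned=0.7))  # push on alignment: -7.0 (worse, given bad values)
print(expected_value(improved, p_aligned=0.5))  # push on values: 15.0
```

Under these made-up numbers, raising the alignment probability lowers expected value while shifting the values distribution raises it, which matches the post's takeaway that improving values is robustly good while alignment is not.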
The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.
I would like to estimate how effective free hugs are. Can anyone help me?

Friday, 12 April 2024

Frontpage Posts

Quick takes

Many organisations I respect are very risk-averse when hiring, and for good reasons. Making a bad hiring decision is extremely costly, as it means running another hiring round, paying for work that isn't useful, and diverting organisational time and resources towards trouble-shooting and away from other projects. This leads many organisations to scale very slowly. However, there may be an imbalance between false positives (bad hires) and false negatives (passing over great candidates). In hiring, as in many other fields, reducing false positives often means increasing false negatives. Many successful people have stories of being passed over early in their careers. The costs of a bad hire are obvious, while the costs of passing over a great hire are counterfactual and never observed. I wonder whether, in my past hiring decisions, I've properly balanced the risk of rejecting a potentially great hire against the risk of making a bad hire. One reason to think we may be too risk-averse, in addition to the salience of the costs, is that the benefits of a great hire could grow to be very large, while the costs of a bad hire are somewhat bounded, as a bad hire can eventually be let go.
Does anyone else consider the case of Verein KlimaSeniorinnen Schweiz and Others v. Switzerland (application no. 53600/20) at the European Court of Human Rights possibly useful for GCR litigation?
Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good? Which of these is the correct analogy? 1. "Biology is to science as AI safety is to x-risk," or  2. "Immunology is to biology as AI safety is to x-risk" EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI). The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) seems to think that analogy 2 is correct, and that interdisciplinary work is totally critical—immunologists alone cannot achieve a useful understanding of the entire system they're trying to study, and they need to exchange ideas with other subfields of medicine/biology in order to have an impact, i.e. AI x-risk workers are missing critical pieces of the puzzle when they neglect broader x-risk studies.
I am planning to write a post about happiness guilt. I think many people in EA experience it. Can you share resources or personal experiences?
Would love for orgs running large-scale hiring rounds (say 100+ applicants) to provide more feedback to their (rejected) applicants. Given that in most cases applicants are already being scored and ranked on their responses, maybe just tell them their scores, their overall ranking and what the next round cutoff would have been - say: prompt 1 = 15/20, prompt 2 = 17.5/20, rank = 156/900, cutoff for work test at 100. Since this is already happening in the background (if my impression here is wrong please lmk), why not make the process more transparent and release scores - with what seems to be very little extra work required (beyond some initial automation). 
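For what it's worth, a minimal sketch of the kind of "initial automation" this might require, assuming scores already live in a spreadsheet export; the file name, column names, scales, and cutoff below are hypothetical:

```python
# Minimal sketch: turn per-applicant scores into a feedback summary.
# Assumes a CSV with columns: email, prompt_1, prompt_2 (names and scales are hypothetical).
import csv

CUTOFF_RANK = 100  # hypothetical cutoff for advancing to the work test

def build_feedback(rows: list) -> list:
    # Rank applicants by total score, highest first.
    ranked = sorted(rows, key=lambda r: float(r["prompt_1"]) + float(r["prompt_2"]), reverse=True)
    messages = []
    for rank, row in enumerate(ranked, start=1):
        messages.append(
            f"{row['email']}: prompt 1 = {row['prompt_1']}/20, prompt 2 = {row['prompt_2']}/20, "
            f"rank = {rank}/{len(ranked)}, cutoff for the work test was rank {CUTOFF_RANK}."
        )
    return messages

with open("applicant_scores.csv", newline="") as f:
    for message in build_feedback(list(csv.DictReader(f))):
        print(message)  # in practice this would feed a mail merge rather than stdout
```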

Thursday, 11 April 2024

Frontpage Posts

Quick takes

The latest episode of the Philosophy Bites podcast is about Derek Parfit.[1] It's an interview with his biographer (and fellow philosopher) David Edmonds. It's quite accessible and only 20 mins long. Very nice listening if you fancy a walk and want a primer on Parfit's work. 1. ^ Parfit was a philosopher who specialised in personal identity, rationality, and ethics. His work played a seminal role in the development of longtermism. He is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries.
In July 2022, Jeff Masters wrote an article (https://yaleclimateconnections.org/2022/07/the-future-of-global-catastrophic-risk-events-from-climate-change/) summarizing findings from a United Nations report on the increasing risks of global catastrophic risk (GCR) events due to climate change. The report defines GCRs as catastrophes that kill over 10 million people or cause over $10 trillion in damage. It warned that by increasingly pushing beyond safe planetary boundaries, human activity is boosting the odds of climate-related GCRs. The article argued that societies are more vulnerable to sudden collapse when multiple environmental shocks occur, and that the combination of climate change impacts poses a serious risk of total societal collapse if we continue business as usual. Although the article and report are from mid-2022, the scientific community has since been warning that climate change effects are increasing faster than models predicted. So I'm curious: what has the EA community been doing over the past year to understand, prepare for, and mitigate these climate-related GCRs? Some questions I have:
* What new work has been done in EA on these risks since mid-2022, and what are the key open problems?
* How much intellectual priority and resources is the EA community putting towards climate GCRs compared to other GCRs? Has this changed in the past year, and is it enough given the magnitude of the risks? I see this as different from investing in interventions that address GHGs and warming.
* How can we ensure these risks are getting adequate attention?
I'm very interested to hear others' thoughts. While a lot of great climate-related work is happening in EA, I worry that climate GCRs remain relatively neglected compared to other GCRs.
Resolved unresolved issues. One of the things I find difficult about discussing problem solving with people is that they often fall back on shallow causes. For example, if politician A's corruption is the problem, you can kick him out. Easy. Problem solved! But this is the problem: the immediate issue was resolved, yet the underlying problem was not. The natural assumption is that politician B will cause a similar problem again. In the end, that's the advice people give: "Kick A out!", whatever the issue was, whether it's your weird friends, your bad grades, or your weight. Of course, these are personal problems, but couldn't this be expanded to a general problem of decision-making? Maybe it would have been better to post this on LessWrong. Still, I'd like to hear your opinions.
In conversations about x-risk, one common mistake seems to be to suggest that we have yet to invent something that kills all people, and so the historical record is not on the side of "doomers." The mistake is survivorship bias, and Ćirković, Sandberg, and Bostrom (2010) call this the Anthropic Shadow. Using base rate frequencies to estimate the probability of events that reduce the number of people (observers) will result in bias. If there are multiple possible timelines and AI p(doom) is super high (and soon), then we would expect a greater frequency of events that delay the creation of AGI (geopolitical issues, regulation, maybe internal conflicts at AI companies, other disasters, etc.). It might be interesting to see if superforecasters consistently underpredict events that would delay AGI, although figuring out how to actually interpret this information would be quite challenging unless it's blatantly obvious. I guess it's more likely that I'm born in a universe with more people and everything goes fine anyway. This is quite speculative and roughly laid out, but something I've been thinking about for a while.
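A minimal simulation sketch of the survivorship-bias point, assuming an arbitrary true catastrophe rate and that any catastrophe removes all observers in that timeline (both assumptions are mine, not the quick take's):

```python
# Minimal sketch of the anthropic-shadow / survivorship-bias point above.
# Assumptions (illustrative): a fixed true per-period catastrophe rate, and any
# catastrophe removes all observers, so survivors always see a spotless record.
import random

TRUE_RATE = 0.05    # true per-period probability of an observer-removing event
PERIODS = 50        # length of the historical record
TIMELINES = 100_000

random.seed(0)
surviving = 0
for _ in range(TIMELINES):
    events = sum(random.random() < TRUE_RATE for _ in range(PERIODS))
    if events == 0:  # observers only remain if no catastrophe occurred
        surviving += 1

print(f"true per-period rate:                 {TRUE_RATE:.3f}")
print("historical rate observed by survivors: 0.000 (zero by construction)")
print(f"fraction of timelines with survivors:  {surviving / TIMELINES:.3f}")
```

Survivors necessarily observe a catastrophe-free history, so their base-rate estimate is zero no matter how high the true rate is, which is the bias the post describes.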

Topic Page Edits and Discussion

Wednesday, 10 April 2024

Frontpage Posts

Quick takes

Mini EA Forum Update: You can now subscribe to be notified when posts are added to a sequence. You can see more details in GitHub here. We’ve also made it a bit easier to create and edit sequences, including allowing users to delete sequences they’ve made. I've been thinking a bit about how to improve sequences, so I'd be curious to hear:
1. How you use them
2. What you'd like to be able to do with them
3. Any other thoughts/feedback
Unfortunately, when you are inspired by everyone else's April Fools' posts, it is already too late to post your own. I will comfort myself by posting my unseasonal ideas as comments on this post.
Thoughts on project or research auctions: It is very cumbersome to apply for funds one by one from Open Phil or EA Funds. Wouldn't it be better for a major EA organization to auction off the opportunity to participate in a project and let others buy it? It would be similar to a tournament, but you would be able to sell a lot more projects at a lower price and reduce the amount of resources wasted on having many people compete for the same project.

Tuesday, 9 April 2024

Quick takes

Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1] it seems surprisingly rare to me that people actually do the hard work of:
1. (Systematically) exploring cause areas
2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
3. Sharing their list and reasons publicly.[2]
The lists I can think of that do this best are 80,000 Hours', Open Philanthropy's, and CEARCH's. Related things I appreciate, but aren't quite what I'm envisioning:
* Tools and models like those by Rethink Priorities and Mercy For Animals, though they're less focused on explanation of specific prioritisation decisions.
* Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, and reasoning.
* Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation's broader prioritisation process.
There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus. If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]
1. ^ Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
2. ^ I'm a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain... and not at all systematic or thorough. I think I roughly:
- Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
- Had that hypothesis worn down by various information and arguments I encountered and changed my views on the top causes,
- Didn't ever go back and do a systematic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a long-list that includes 'not-core-EA™-cause-areas' or based on criteria other than ITN).
I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.
3. ^ Rough and informal explanations welcome. I'd especially welcome any suggestions that come from a different methodology or set of worldviews & assumptions to 80k and Open Phil. I ask partly because I'd like to be able to share multiple different perspectives when I introduce people to cause prioritisation to avoid creating pressure to defer to a single list.
AI Summary of the "Quick Update on Leaving the Board of EV" Thread (including comments): Rebecca Kagan's resignation from the board of Effective Ventures (EV) due to disagreements regarding the handling of the FTX crisis has sparked an intense discussion within the Effective Altruism (EA) community. Kagan believes that the EA community needs an external, public investigation into its relationship with FTX and its founder, Sam Bankman-Fried (SBF), to address mistakes and prevent future harm. She also calls for clarity on EA leadership and their responsibilities to avoid confusion and indirect harm. The post generated extensive debate, with many community members echoing the call for a thorough, public investigation and postmortem. They argue that understanding what went wrong, who was responsible, and what structural and cultural factors enabled these mistakes is crucial for learning, rebuilding trust, and preventing future issues. Some point to the concerning perception gap between those who had early concerns about SBF and those who seemingly ignored or downplayed these warnings.
However, others raise concerns about the cost, complexity, and legal risks involved in conducting a comprehensive investigation. They worry about the potential for re-victimizing those negatively impacted by the FTX fallout and argue that the key facts may have already been uncovered through informal discussions. Alternative suggestions include having multiple individuals with relevant expertise conduct post-mortems, focusing on improving governance and organizational structures, and mitigating the costs of speaking out by waiving legal obligations or providing financial support for whistleblowers.
The thread also highlights concerns about recent leadership changes within EA organizations. Some argue that the departure of individuals known for their integrity and thoughtfulness regarding these issues raises questions about the movement's priorities and direction. Others suggest that these changes may be less relevant due to factors such as the impending disbanding of EV or reasons unrelated to the FTX situation.
Lastly, the discussion touches on the concept of "naive consequentialism" and its potential role in the FTX situation and other EA decisions. The OpenAI board situation is also mentioned as an example of the challenges facing the EA community beyond the FTX crisis, suggesting that the core issues may lie in the quality of governance rather than a specific blind spot.
Overall, the thread reveals a community grappling with significant trust and accountability issues in the aftermath of the FTX crisis. It underscores the urgent need for the EA community to address questions of transparency, accountability, and leadership to maintain its integrity and continue to positively impact the world.
What are the most surprising things that emerged from the thread? Based on the summaries, a few surprising or noteworthy things emerged from the "Quick Update on Leaving the Board of EV" thread:
1. The extent of disagreement and concern within the EA community regarding the handling of the FTX crisis, as highlighted by Rebecca Kagan's resignation from the EV board and the subsequent discussion.
2. The revelation of a significant perception gap between those who had early concerns about Sam Bankman-Fried (SBF) and those who seemingly ignored or downplayed these warnings, suggesting a lack of effective communication and information-sharing within the community.
3. The variety of perspectives on the necessity and feasibility of conducting a public investigation into the EA community's relationship with FTX and SBF, with some advocating strongly for transparency and accountability, while others raised concerns about cost, complexity, and potential legal risks.
4. The suggestion that recent leadership changes within EA organizations may have been detrimental to reform efforts, with some individuals known for their integrity and thoughtfulness stepping back from their roles, raising questions about the movement's priorities and direction.
5. The mention of the OpenAI board situation as another example of challenges facing the EA community, indicating that the issues extend beyond the FTX crisis and may be rooted in broader governance and decision-making processes.
6. The discussion of "naive consequentialism" and its potential role in the FTX situation and other EA decisions, suggesting a need for the community to re-examine its philosophical foundations and decision-making frameworks.
7. The emotional weight and urgency conveyed by many community members regarding the need for transparency, accountability, and reform, underscoring the significance of the FTX crisis and its potential long-term impact on the EA movement's credibility and effectiveness.
These surprising elements highlight the complex nature of the challenges facing the EA community and the diversity of opinions within the movement regarding the best path forward.

Monday, 8 April 2024

Frontpage Posts


Quick takes

I'm intrigued where people stand on the threshold where farmed animal lives might become net positive. I'm going to share a few scenarios I'm very unsure about, and I'd love to hear thoughts or be pointed towards research on this.
1. Animals kept in homesteads in rural Uganda, where I live. Often they stay inside with the family at night, then are let out during the day to roam free along the farm or community. The animals seem pretty darn happy most of the time for what it's worth, playing and gallivanting around. Downsides here include poor veterinary care, so sometimes parasites and sickness are pretty bad, and often pretty rough transport and slaughter methods (my intuition: net positive).
2. Grass-fed sheep in New Zealand, my birth country. They get good medical care, are well fed on grass, and usually have large roaming areas (intuition: net positive).
3. Grass-fed dairy cows in New Zealand. They roam fairly freely and will have very good vet care, but have their calves taken away at birth, have constantly uncomfortably swollen udders, and are milked at least twice daily (intuition: very unsure).
4. Free-range pigs. Similar to the above, except often the space is smaller, but they do get little houses. Pigs are far more intelligent than cows or sheep and might have more intellectual needs not getting met (intuition: uncertain).
Obviously these kinds of cases make up a small proportion of farmed animals worldwide, with the predominant situation, factory-farmed animals, likely involving net negative lives. I know that animals having net positive lives is far from justifying farming animals on its own, but it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering. Thanks for your input.
https://forum.effectivealtruism.org/events/cJnwCKtkNs6hc2MRp/panel-discussion-how-can-the-space-sector-overcome  This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the great power conflict and space governance 80K problem profiles. 

Topic Page Edits and Discussion

Sunday, 7 April 2024

Quick takes

Please advertise applications at least 4 weeks before closing! (more for fellowships!) I've seen a lot of cool job postings, fellowships, or other opportunities that post that applications are open on the Forum or on 80k ~10 days before closing. Because many EA roles or opportunities often get cross-posted to other platforms or newsletters, and there's a built-in lag time between the original post and the secondary platform, this is especially relevant to EA. For fellowships or similar training programs, where so much work has gone into planning and designing the program ahead of time, I would really encourage opening applications ~2 months before closing. Keep in mind that most Forum posts don't stay on the frontpage very long, so "posting something on the Forum" does not equal "the EA community has seen this". As someone who runs a local group and a newsletter, I find that opportunities with short application windows are almost always missed by my community, since there's not enough turnaround time between when we see the original post, the next newsletter, and time for community members to apply.

Friday, 5 April 2024

Frontpage Posts

Quick takes

Please people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he has moderated his views, he is still very racist as far as I can tell. Hanania called for trying to get rid of all non-white immigrants in the US and for the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even after his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people' https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he's been invited to Manifold's events and put on Richard Yetter Chappell's blogroll. DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which can be useful, do not duck and dive around the fact that virtually every major promoter of scientific racism ever, including allegedly mainstream figures like Jensen, worked with or published with actual literal Nazis (https://www.splcenter.org/fighting-hate/extremist-files/individual/arthur-jensen). I love most of the people I have met through EA, and I know that, despite what some people say on Twitter, we are not actually a secret crypto-fascist movement (nor is longtermism specifically, which, whether you like it or not, is mostly about what its EA proponents say it is about). But there is in my view a disturbing degree of tolerance for this stuff in the community, mostly centered around the Bay specifically. And to be clear, I am complaining about tolerance for people with far-right and fascist ("reactionary" or whatever) political views, not people with any particular personal opinion on the genetics of intelligence. A desire for authoritarian government enforcing the "natural" racial hierarchy does not become okay just because you met the person with the desire at a house party and they seemed kind of normal and chill or super-smart and nerdy. I usually take a way more measured tone on the forum than this, but here I think real information is conveyed by getting shouty.
*Anyone who thinks it is automatically far-right to think about any kind of genetic enhancement at all should go read some Culture novels and note the implied politics (or indeed, look up the author's actual die-hard libertarian socialist views). I am not claiming that far-left politics is innocent, just that it is not racist.
Here’s a puzzle I’ve thought about a few times recently: The impact of an activity (I) is due to two factors, X and Y. Those factors combine multiplicatively to produce impact. Examples include:
* The funding of an organization and the people working at the org
* A manager of a team who acts as a lever on the work of their reports
* The EA Forum acts as a lever on top of the efforts of the authors
* A product manager joins a team of engineers
Let’s assume in all of these scenarios that you are only one of the players in the situation, and you can only control your own actions. From a counterfactual analysis, if you can increase your contribution by 10%, then you increase the impact by 10%, end of story. From a Shapley Value perspective, it’s a bit more complicated, but we can start with a prior that you split your impact evenly with the other players. Both these perspectives have a lot going for them! The counterfactual analysis has important correspondences to reality: if you do 10% better at your job, the world gets 0.1I better. Shapley Values prevent the scenario where the multiplicative impact causes the involved agents to collectively contribute too much. I notice myself feeling relatively more philosophically comfortable running with the Shapley Value analysis in the scenario where I feel aligned with the other players in the game. And potentially the downsides of the Shapley Value approach go down if I actually run the math (fake edit: I ran a really hacky guess as to how I’d calculate this using this calculator and it wasn’t that helpful). But I don’t feel 100% bought in to the Shapley Value approach, and think there’s value in paying attention to the counterfactuals. My unprincipled compromise approach would be to take some weighted geometric mean and call it a day. Interested in comments.
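A minimal sketch of the two-player multiplicative case described above, assuming the characteristic function v({}) = v({X}) = v({Y}) = 0 and v({X, Y}) = X·Y (my formalisation of "both factors are needed", not the calculator the post mentions). Under that assumption each player's Shapley value is X·Y/2, i.e. the even split the post takes as a prior:

```python
from itertools import permutations

def shapley_values(players, value_fn):
    """Average marginal contribution of each player over all join orders."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value_fn(frozenset(coalition))
            coalition.add(p)
            totals[p] += value_fn(frozenset(coalition)) - before
    return {p: totals[p] / len(orderings) for p in players}

# Two-factor multiplicative impact: both factors are needed, impact = X * Y.
X, Y = 10.0, 5.0  # illustrative contribution sizes
def impact(coalition):
    return X * Y if coalition == frozenset({"X", "Y"}) else 0.0

print(shapley_values(["X", "Y"], impact))  # {'X': 25.0, 'Y': 25.0} -> even split of 50
# Counterfactual view: raising X by 10% raises total impact by 10% (55 vs 50),
# while the Shapley split still divides the new total evenly (27.5 each).
```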
