
Today, 22 October 2024

Frontpage Posts

Quick takes

Me and a working group at CEA have started scoping out improvements for effectivealtruism.org. Our main goals are:

1. Improve understanding of what EA is (clarify and simplify messaging, better address common misconceptions, showcase more tangible examples of impact, people, and projects)
2. Improve perception of EA (show more of the altruistic and other-directed parts of EA alongside the effective, pragmatic, results-driven parts; feature more testimonials and impact stories from a broader range of people; make it feel more human and up-to-date)
3. Increase high-value actions (improve navigation, increase newsletter and VP signups, make it easier to find actionable info)

For the first couple of weeks, I'll be testing how the current site performs against these goals, then move on to the redesign, which I'll user-test against the same goals. If you've visited the current site and have opinions, I'd love to hear them. Some prompts that might help:

* Do you remember what your first impression was?
* Have you ever struggled to find specific info on the site?
* Is there anything that annoys you?
* What do you think could be confusing to someone who hasn't heard about EA before?
* What's been most helpful to you? What do you like?

If you prefer to write your thoughts anonymously you can do so here, although I'd encourage you to comment on this quick take so others can agree- or disagree-vote (and I can get a sense of how much the feedback resonates).
I've previously written a little bit about recognition in relation to maintenance/prevention, and this passage from Everybody Matters: The Extraordinary Power of Caring for Your People Like Family stood out to me as a nice reminder: Overall, Everybody Matters is the kind of book that could have been an article. I wouldn't recommend spending the time to read it if you are already superficially familiar with the fact that an organization can choose to treat people well (although maybe that would be revelatory for some people). It was on my to-read list due to its mention in the TED Talk Why good leaders make you feel safe.
Anthropic has just launched "computer use": "developers can direct Claude to use computers the way people do." https://www.anthropic.com/news/3-5-models-and-computer-use

Topic Page Edits and Discussion

Monday, 21 October 2024

Frontpage Posts

Quick takes

Donation opportunities for restricting AI companies:

* Pause AI: protests and lobbying.
* Stop AI: barricading OpenAI.
* For Humanity: a podcast on risks.
* Foxglove: a legal non-profit targeting big tech scaling.
* Disruption Network Institute: whistleblowers and researchers reporting on military misuses of AI.
* Distributed AI Research Institute: AI ethics researchers giving general AI advocates hell and advocating for specialised models serving communities.
* European Guild for AI Regulation: lobbying in the EU against the data scraping that makes large models possible.
* Concept Art Association: lobbying in the EU against the data scraping that makes large models possible.

In my pipeline:

* funding a 'horror documentary' against AI by an award-winning documentary maker (got a speculation grant of $50k)
* funding lawyers in the EU for some high-profile lawsuits and targeted consultations with the EU AI Office.

If you're a donor, I can give you details on their current activities. I worked with staff in each of these organisations. DM me.
I noticed that the most successful people I meet at work, in the sense of advancing their career and publishing papers, have a certain belief in themselves. What is striking is that, no matter their age or career stage, it is as if they already take their success, and where they are going in the future, as certain. I also noticed this is something that people from non-working-class backgrounds manage to do.

Second point: they are good at finishing projects and delivering results on time. I noticed that this was somehow independent of how smart someone is.

While I am very good at single tasks, I have always struggled with long-term academic performance. I know it is true for some other people too. What kind of knowledge/mentality am I missing? Because I feel stuck.

Sunday, 20 October 2024

Quick takes

There is a world that needs to be saved. Saving the world is a team sport.  All we can do is to contribute our part of the puzzle, whatever that may be and no matter how small, and trust in our companions to handle the rest. There is honor in that, no matter how things turn out in the end.

Topic Page Edits and Discussion

Saturday, 19 October 2024

Quick takes

I think there hasn't been enough research on iota-carrageenan nasal sprays for prevention of viral infection by things more infectious than common colds. There was one study of COVID-19 prophylaxis with it in hospital workers which was really promising: "The incidence of COVID-19 differs significantly between subjects receiving the nasal spray with I-C (2 of 196 [1.0%]) and those receiving placebo (10 of 198 [5.0%]). Relative risk reduction: 79.8% (95% CI 5.3 to 95.4; p=0.03). Absolute risk reduction: 4% (95% CI 0.6 to 7.4)."

There was one clinical trial afterwards which set out to test the same thing, but I can't tell what's going on with it now; the last update was posted over a year ago. So we have one study which looks great but could be a fluke, and there's no replication in sight.

The good thing about carrageenan-based products is that they're likely to be safe, since they're extensively studied due to their use as food additives and in other applications. From Wikipedia: "Carrageenans or carrageenins [...] are a family of natural linear sulfated polysaccharides. [...] Carrageenans are widely used in the food industry, for their gelling, thickening, and stabilizing properties." See this section of the article for more.

If it really does work for COVID and is replicated with existing variants, that's already a huge public health win - there's still a large amount of disability, death and suffering coming from it. With respect to influenza, there's some evidence for efficacy in mice, and the authors of that paper say that it "should be tested for prevention and treatment of influenza A in clinical trials in humans." If it has broad-spectrum antiviral properties then it's also a potential tool for future pandemics. Finally, it's generic and not patented, so you'd expect a lack of research funding for it relative to pharmaceutical drugs.
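As a sanity check, the headline percentages follow directly from the reported case counts; a quick sketch:

```python
# Verify the reported trial arithmetic: 2/196 cases (treated) vs 10/198 (placebo).
treated_cases, treated_n = 2, 196
placebo_cases, placebo_n = 10, 198

risk_treated = treated_cases / treated_n    # ~0.0102 (1.0%)
risk_placebo = placebo_cases / placebo_n    # ~0.0505 (5.0%)

rrr = 1 - risk_treated / risk_placebo       # relative risk reduction
arr = risk_placebo - risk_treated           # absolute risk reduction

print(f"RRR = {rrr:.1%}, ARR = {arr:.1%}")  # RRR = 79.8%, ARR = 4.0%
```

Both values match the quoted abstract, so the summary statistics are at least internally consistent (the confidence intervals, of course, can't be re-derived from counts alone).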
I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the relationship that a correlation of 0 maps to a probability of 1/n. I came up with such a formula a while back, so I figured it shouldn't be hard. They all offered formulas, all of which were shown to be very much wrong when I actually graphed them to check.
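For reference, here is one illustrative construction with the stated property; this is a hypothetical example, not the author's formula: a piecewise-linear map sending r = -1 to 0, r = 0 to 1/n, and r = 1 to 1.

```python
def corr_to_prob(r: float, n: int) -> float:
    """Map a correlation r in [-1, 1] to a probability in [0, 1] such that
    r = 0 maps to the uniform baseline 1/n. One possible piecewise-linear
    construction, not a canonical formula."""
    baseline = 1.0 / n
    if r >= 0:
        return baseline + r * (1.0 - baseline)  # interpolate baseline -> 1
    return (1.0 + r) * baseline                 # interpolate 0 -> baseline

print(corr_to_prob(0.0, 4))   # 0.25 (= 1/n)
print(corr_to_prob(1.0, 4))   # 1.0
print(corr_to_prob(-1.0, 4))  # 0.0
```

Checking the three anchor points (and monotonicity) like this is exactly the kind of graph-and-verify step that caught the LLMs' wrong answers.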
Is it obvious that (and how) massages reduce stress? Are studies like https://www.semanticscholar.org/paper/Effects-of-Scalp-Massage-on-Physiological-and-Shimada-Tsuchida/9e3a7bc9745469a9333ebe493e79a44220111d0c and https://www.semanticscholar.org/paper/The-Effect-of-Self-Scalp-Massage-on-Adult-Stress-Kim-Choi/99d1999aa8d8776e55461882cc06c06905ca77b1 rare and mostly ignored? What actions would measurably promote their conclusions? (I mean more like: what strategies would promote more massaging for more wellbeing.)

Friday, 18 October 2024

Quick takes

We're really excited to announce the following sessions for EA Global: Boston, which kicks off in just two weeks' time:

- Fireside chat with Iqbal Dhaliwal, Global Executive Director of J-PAL.
- Rachel Silverman Bonnifield, Senior Fellow at the Center for Global Development, on the current state of the global movement to eliminate childhood lead poisoning.
- A workshop on Anthropic's Responsible Scaling Policy, led by Zac Hatfield-Dodds, Technical Staff at Anthropic.

Applications close Sunday! More info and how to apply on our website.
Are there estimates of per-animal suffering from animal consumption in different countries (holding the animal constant)?
I wanted to get some perspective on my life, so I wrote my own obituary (in a few different ways). They ended up being focused on my relationship with ambition. The first is below and may feel relatable to some here! See my other auto-obituaries here :)

Topic Page Edits and Discussion

Thursday, 17 October 2024

Frontpage Posts

Quick takes

I'm the co-founder and one of the main organizers of EA Purdue. Last fall, we got four signups for our intro seminar; this fall, we got around fifty. Here's what's changed over the last year:

* We got officially registered with our university. Last year, we were an unregistered student organization, and as a result lacked access to opportunities like the club fair and were not listed on the official Purdue extracurriculars website. After going through the registration process, we were able to take advantage of these opportunities.
* We tabled at club fairs. Last year, we did not attend club fairs, since we weren't yet eligible for them. This year, we were eligible and attended, and we added around 100 people to our mailing list and GroupMe. This is probably the most directly impactful change we made.
* We had a seminar sign-up QR code at the club fairs. This item actually changed between the club fairs, since we were a bit slow to get the seminar sign-up form created. A majority of our sign-ups came from the one club fair where we had the QR code, despite the other club fair being ~10-50x larger.
* We held our callout meeting earlier. Last year, I delayed the first intro talk meeting until the middle of the third week of school, long after most clubs finished their callouts. This led to around 10 people showing up, which was still more than I expected, but not as much as I had hoped. This year, we held the callout early in the second week of school, and ended up getting around 30-35 attendees. We also gave those attendees time to fill out the seminar sign-up form at the callout, and this accounted for most of the rest of our sign-ups.
* We brought food to the callout. People are more likely to attend meetings at universities if there is food, especially if they're busy and can skip a long dining court line by listening to your intro talk. I highly recommend bringing food to your regular meetings too - attendance at our general meetings doubled last year after I s
We're thinking of moving the Forum digest, and probably eventually the EA Newsletter to Substack. We're at least planning to try this out, hopefully starting with the next digest issue on the 23rd. Here's an internal doc with our reasoning behind this (not tailored for public consumption, but you should be able to follow the thread). I'm interested in any takes people have on this. I'm not super familiar with Substack from an author perspective so if you have any crucial considerations about how the platform works that would be very helpful. General takes and agree/disagree (with the decision to move the digest to Substack) votes are also appreciated.
NotebookLM is basically magic. Just take whatever Forum post you can't be bothered reading but know you should and use NotebookLM to convert it into a podcast. It seems reasonable that in 6 - 12 months there will be a button inside each Forum post that converts said post into a podcast (i.e. you won't need to visit NotebookLM to do it).
Simple Forecasting Metrics?

I've been thinking about the simplicity of explaining certain forecasting concepts versus the complexity of others. Take calibration, for instance: it's simple to explain. If someone says something is 80% likely, it should happen about 80% of the time. But other metrics, like the Brier score, are harder to convey: What exactly does it measure? How well does it reflect a forecaster's accuracy? How do you interpret it? All of this requires a lot of explanation for anyone not interested in the science of forecasting.

What if we had an easily interpretable metric that could tell you, at a glance, whether a forecaster is accurate? A metric so simple that it could fit within a tweet or catch the attention of someone skimming a report, someone who might be interested in platforms like Metaculus. Imagine if we could say, "When Metaculus predicts something with 80% certainty, it happens between X and Y% of the time," or "On average, Metaculus forecasts are off by X%". This kind of clarity could make comparing forecasting sources and platforms far easier.

I'm curious whether anyone has explored creating such a concise metric, one that simplifies these ideas for newcomers while still being informative. It could be a valuable way to persuade others to trust and use forecasting platforms or prediction markets as reliable sources. I'm interested in hearing any thoughts or seeing any work that has been done in this direction.
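The "when it says 80%, it happens X% of the time" summary is straightforward to compute from resolved forecasts. A minimal sketch, using synthetic data and an arbitrary choice of ten equal-width buckets:

```python
from collections import defaultdict

def calibration_table(forecasts, n_bins=10):
    """forecasts: list of (predicted probability, outcome in {0, 1}).
    Returns {bin midpoint: (mean predicted prob, observed frequency, count)}."""
    bins = defaultdict(list)
    for p, outcome in forecasts:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into the top bin
        bins[idx].append((p, outcome))
    table = {}
    for idx, items in sorted(bins.items()):
        preds = [p for p, _ in items]
        outcomes = [o for _, o in items]
        table[(idx + 0.5) / n_bins] = (
            sum(preds) / len(preds),        # average stated confidence in bin
            sum(outcomes) / len(outcomes),  # how often those events happened
            len(items),                     # sample size for the bin
        )
    return table

# Toy example: four forecasts around 80% confidence that resolved 3/4 "yes".
table = calibration_table([(0.8, 1), (0.82, 1), (0.85, 0), (0.81, 1)])
mid, (mean_pred, observed, count) = next(iter(table.items()))
print(f"~{mean_pred:.0%} forecasts happened {observed:.0%} of the time (n={count})")
```

Each row of the table is already in the tweet-sized form described above; the harder open question is condensing the whole table into one number without reinventing the Brier score's interpretability problem.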

Wednesday, 16 October 2024

Frontpage Posts

Quick takes

Some thoughts on future Debate Week topics: I would prefer that the next topic move away from financial allocation between cause areas, so maybe something like:

1. There are 100 young, smart, flexible recent university graduates who are open to ~any kind of work. What is the optimal allocation of those graduates between object-level work, meta work, earning to give, or something else?
2. Should EA move directionally toward being a more r-selected (higher growth, less investment in each offspring) or K-selected movement,[1] or stay roughly where it is?

Two advantages of these sorts of topics, vis-a-vis a financial cause-prio debate:

A. I think these kinds of issues are generally more likely to be action-relevant for Forum users. Even if I won a billion-dollar lottery prize and established a trust to give $50MM to effective animal welfare charities, the net effect on cause prio might be far less than $50MM because OP might reduce its spend by almost that amount. While there are niches in which this effect is absent or less pronounced, structuring a debate week with broad participation around them may be challenging.

B. These kinds of issues should be more accessible to those from a variety of cause perspectives. For various reasons, the last Debate Week was set up to have a predominant focus on a single cause area (AW). Cf. this discussion. That's not a bad thing, but I don't think all or most Weeks should be set up like that. Other questions may not have this effect -- for instance, I expect that the answers to questions 1 & 2 would differ substantially due to cause prio. So there's value in authoring discussion of these questions from a GH perspective, from an AW perspective, from a GCR perspective, and so on.[2]

More generally, it might be helpful to plan a Debate Season well in advance -- a "season" of (e.g.) one week each on a topic that is either specifically within a major cause area or for which it is expected to predominate, plus one or mo
@Toby Tremlett🔹 @Will Howard🔹 Where can I see the debate week diagram if I want to look back at it?
A thought about AI x-risk discourse and the debate on how "Pascal's Mugging"-like AIXR concerns are, and where this causes confusion between the concerned and the sceptical. I recognise a pattern where a sceptic will say "AI x-risk concerns are like Pascal's wager/are Pascalian and not valid" and then an x-risk advocate will say "But the probabilities aren't Pascalian. They're actually fairly large"[1], which usually devolves into a "These percentages come from nowhere!" "But Hinton/Bengio/Russell..." "Just useful idiots for regulatory capture..." discourse doom spiral.

I think a fundamental miscommunication here is that, while the sceptic is using/implying the term "Pascalian", they aren't concerned[2] with the percentage of risk being incredibly small but high impact; they're instead concerned about trying to take actions in the world - especially ones involving politics and power - on the basis of subjective beliefs alone. In the original wager, we don't need to know anything about the evidence record for a certain God existing or not; if we simply accept Pascal's framing and premisses, then we end up with the belief that we ought to believe in God. Similarly, when this term comes up, AIXR sceptics are concerned about changing beliefs/behaviour/enacting law based on arguments from reason alone that aren't clearly connected to an empirical track record. Focusing on which subjective credences are proportionate to act upon is not likely to be persuasive compared to providing the empirical goods, as it were.

1. ^ Let's say x>5% in the rest of the 21st century for sake of argument
2. ^ Or at least it's not the only concern; perhaps the use of EV in this way is a crux, but I think it's a different one

Tuesday, 15 October 2024

Frontpage Posts

Personal Blogposts

Quick takes

I think people working on animal welfare have more incentive to post during debate week than people working on global health. The animal space feels (when you are in it) very funding-constrained, especially compared to the global health and development space (and I expect it gets a higher % of funding from EA / EA-adjacent sources). So along comes debate week and all the animal folk are very motivated to post, make their case, and hopefully shift a few $. This could somewhat bias the balance of the debate. (Of course, the fact that one side of the debate feels it needs funding so much more is in itself relevant to the debate.)
A hack to multiply your donations by up to 102%

Disclaimer: I'm a former PayPal employee. The following statements are my opinion alone and do not reflect PayPal's views. Also, this information is accurate as of 2024-10-14 and may become outdated in the future.

More donors should consider using PayPal Giving Fund to donate to charities. To do so, go to this page, search for the charity you want, and donate through the charity's page with your PayPal account. (For example, this is GiveDirectly's page.) PayPal covers all processing fees on charitable donations made through their giving website, so you don't have to worry about the charity losing money to credit card fees. If you use a credit card that gives you 1.5 or 2% cash back (or 1.5-2x points) on all purchases, your net donation will be multiplied by ~102%.

I don't know of any credit cards that offer elevated rewards for charitable donations as a category (like many do for restaurants, groceries, etc.), so you most likely can't do better than a 2% card for donations (unless you donate stocks).

For political donations, platforms like ActBlue and Anedot charge the same processing fees to organizations regardless of what payment method you use.[1] So you should also donate using your 1.5-2% card.

1. ^ ActBlue: 3.95% on all transactions. Anedot: For non-501(c)(3) organizations, 4% + 30¢ on all transactions except Bitcoin and 1% on Bitcoin transactions. 501(c)(3) organizations are charged a much lower rate for ACH transactions.
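The "~102%" figure is just the ratio of what the charity receives to what the donation actually costs the donor. A sketch of that arithmetic, assuming a 2% cash-back card and a fee-covered platform (the 2.2% contrast fee is an illustrative number, not any platform's actual rate):

```python
def effective_multiplier(cashback: float, processing_fee: float) -> float:
    """Charity's receipt per dollar of donor out-of-pocket cost."""
    received = 1.0 - processing_fee   # what the charity gets per $1 charged
    out_of_pocket = 1.0 - cashback    # what the donation really costs the donor
    return received / out_of_pocket

# Fee-covered platform + 2% cash-back card:
print(round(effective_multiplier(cashback=0.02, processing_fee=0.0), 4))   # ~1.0204

# For contrast: a hypothetical 2.2% card-processing fee and no cash back:
print(round(effective_multiplier(cashback=0.0, processing_fee=0.022), 4))  # 0.978
```

So a dollar of donor cost turns into roughly $1.02 for the charity, matching the headline claim.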
The problem with AI safety policy is that if we don't specify and attempt to answer the technical concerns, then someone else will, and safety-wash the concerns away. CSOs need to understand what they themselves mean when they say "explainable" and "algorithmic transparency."
If I found a place that raised cows that had predictably net-positive lives, what would be the harm in eating beef from this farm? I've been ostrovegan for ~7 years but am open to changing my mind with new information.

Topic Page Edits and Discussion

Sunday, 13 October 2024

Frontpage Posts

Quick takes

The plant-based foods industry should make low-phytoestrogen soy products.

Soy is an excellent plant-based protein. It's also a source of the phytoestrogen isoflavone, which men online are concerned has feminizing properties (cf. "soy boy"). I think the effect of isoflavones is low for moderate consumption (e.g., one 3.5 oz block of tofu per day), but could be significant if the average American were to replace the majority of their meat consumption with soy-based products.

Fortunately, isoflavones in soy don't have to be an issue. Low-isoflavone products are around, but they're not labeled as such. I think it would be a major win for animal welfare if the plant-based foods industry could transition soy-based products to low-isoflavone and execute a successful marketing campaign to quell concerns about phytoestrogens (without denigrating higher-isoflavone soy products). More speculatively, soy growers could breed or bioengineer soy to be low in isoflavones, like other legumes. One model for this development would be how normal lupin beans have bitter, toxic alkaloids and need days of soaking, but in the 1960s, Australian sweet lupins were bred with dramatically lower alkaloid content and are essentially ready to eat.

Isoflavone content varies dramatically depending on the processing and growing conditions. This chart from Examine shows that 100 g of tofu can have anywhere from 3 to 142 mg of isoflavones, and 100 g of soy protein isolate can have 46 to 200 mg of isoflavones.
I went to a large event, and the organizers counted the number of attendees present and then ordered chicken for everyone's meal. Unfortunately I didn't have a chance to request a vegetarian alternative. What's the most efficient way to offset my portion of the animal welfare harm, and how much will it cost? I'm looking for information such as "XYZ is the current best charity for reducing animal suffering, and saves chickens for $xx each", but I'm open to donating to something that helps other animals - doesn't necessarily have to be chickens, if I can offset the harm more effectively or do more good per dollar elsewhere.
