All posts


Today, 25 January 2025

Quick takes

Larry Ellison, who will invest tens of billions in Stargate, said uberveillance via AGI will be great because then police and the populace would always have to be on their best behaviour. It is best to assume the people pushing 8 billion of us into the singularity have psychopathy (or similar disorders). This matters because we need to know who we're going up against: there is no rationalising with these people. They aren't counting the QALYs! Footage of Larry's point of view starts around 12:00 in Matt Wolf's video.

Friday, 24 January 2025

Frontpage Posts

Quick takes

Austin
Anthropic's donation program seems to have been recently pared down? I recalled it as 3:1 (see e.g. this comment from Feb 2023). But right now on https://www.anthropic.com/careers:

> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant

Curious if anyone knows the rationale for this -- I'm thinking through how to structure Manifund's own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration. I'm also wondering whether existing Anthropic employees still get the 3:1 terms, or whether the program has been changed for everyone going forward.

Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of EA giving going forward, so the precise mechanics of the giving program could change funding considerations by a lot. One (conservative imo) ballpark:

* If founders + employees broadly own 30% of outstanding equity
* 50% of that has been assigned and vested
* 20% of employees will donate
* 20% of their equity within the next 4 years

then $60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y. And the difference between a 1:1 and a 3:1 match is the difference between $180m/y of giving and $360m/y.
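A minimal sketch of the ballpark arithmetic above, using only the take's own illustrative assumptions (the valuation, 30%/50%/20%/20% figures, and 4-year window are the author's guesses, not real data):

```python
# Ballpark of annual Anthropic-employee donations under the take's stated assumptions.

valuation = 60e9          # rumored raise valuation (~$60b), per the take
insider_share = 0.30      # assumption: founders + employees own ~30% of outstanding equity
vested_fraction = 0.50    # assumption: ~50% of that has been assigned and vested
donor_fraction = 0.20     # assumption: ~20% of employees donate
donated_equity = 0.20     # assumption: they donate ~20% of their equity over the window
years = 4                 # assumption: donations spread over the next 4 years

employee_giving_per_year = (
    valuation * insider_share * vested_fraction * donor_fraction * donated_equity / years
)  # -> $90m/y of employee donations

# Totals including the company's match on top of employee donations:
total_with_1to1_match = employee_giving_per_year * 2  # employee + 1x match = $180m/y
total_with_3to1_match = employee_giving_per_year * 4  # employee + 3x match = $360m/y

print(f"Employee donations: ${employee_giving_per_year / 1e6:.0f}m/y")
print(f"Total with 1:1 match: ${total_with_1to1_match / 1e6:.0f}m/y")
print(f"Total with 3:1 match: ${total_with_3to1_match / 1e6:.0f}m/y")
```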
Quick thoughts on investing for transformative AI (TAI)

Some EAs/AI safety folks invest in securities that they expect to go up if TAI happens. I rarely see discussion of the future scenarios where it makes sense to invest for TAI, so I want to do that. My thoughts aren't very good, but I've been sitting on a draft for three years hoping I develop some better thoughts, and that hasn't happened, so I'm just going to publish what I have. (If I wait another 3 years, we might have AGI already!)

When does investing for TAI work?

Scenarios where investing doesn't work:

1. Takeoff happens faster than markets can react, or takeoff happens slowly but is never correctly priced in.
2. Investment returns can't be spent fast enough to prevent extinction.
3. TAI creates a post-scarcity utopia where money is irrelevant.
4. It turns out TAI was already correctly priced in.

Scenarios where investing works:

5. Slow takeoff, the market correctly anticipates TAI after we do but before it actually happens, and there's a long enough time gap that we can productively spend the earnings on AI safety.
6. TAI is generally good, but money still has value and there are still a lot of problems in the world that can be fixed with money.

(Money seems much more valuable in scenario #5 than #6.)

What is the probability that we end up in a world where investing for TAI turns out to work? I don't think it's all that high (maybe 25%, although I haven't thought seriously about this). You also need to be correct about your investing thesis, which is hard. Markets are famously hard to beat.

Possible investment strategies

1. Hardware makers (e.g. NVIDIA)? Anecdotally this seems to be the most popular thesis. This is the most straightforward idea, but I am suspicious that a lot of EA support for investing in AI looks basically indistinguishable from typical hype-chasing retail investor behavior. NVIDIA already has a P/E of 56. There is a 3x levered long NVIDIA ETP. That is not the sort of thin
Not a proper quick take and perhaps off-topic on this forum, but given that I know some people here are into health, I'll give it a shot. I would be very grateful if someone could point me to some excellent doctors around Europe, a website, or some sort of diet that could improve my health. I have been having some health issues:

* Getting sick frequently, feeling feverish most of the time
* Difficulty breathing, sometimes with pain when I am also sick
* It turns out I have recently developed asthma too. Initially a couple of doctors thought it was anxiety, and after a year and a half I really pushed to get a proper test.
* A burning sensation in my left chest, since I took the covid vaccine
* Slight hypertension
* And most importantly, continuous head confusion (similar to when I used to get a fever) for the past two years! Sometimes it comes with head tingling, with hands and feet tingling too, and sometimes a headache.
* I am fit, and try to work out when I can.[1]

All the doctors that I have met do not have a single clue, and do not seem interested in solving the problem or investigating.[2] I am not even looking for a cure now, just a diagnosis. I am not rich, so my budget is limited. But life has become so difficult.

1. ^ Even with my breathing problems I can run 10km at around 4:40 min/km, but I am suddenly not at the top of my game (4:10 min/km).
2. ^ If it is not an easy ibuprofen or similar, they quickly give up. I have a big suspicion that doctors are not trained well, so they might all be effectively incompetent at dealing with situations that are not solved by the usual medicines they prescribe. I plan one day to write a post, as methods of rationality and AI might help in diagnosing such situations.

Thursday, 23 January 2025

Frontpage Posts

Quick takes

The RSPCA is holding a "big conversation", culminating in a citizens' assembly. If you have opinions about how animals in the UK are treated (which you probably do), you can contribute your takes here. A lot of the contributions are very low quality, so I think EA voices have a good chance of standing out and having their opinions shared with a broader audience. 
Someone filled out my anonymous contact form earlier this week asking to talk, but didn't leave their contact info. If this was you, please let me know how to reach you!

Wednesday, 22 January 2025

Quick takes

Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people seem to be treating it as.
There's probably something that I'm missing here, but: given that dangerous AI capabilities are generally stated to emerge from general-purpose and agentic AI models, why don't people try to shift AI investment into narrower AI systems, or try to specifically regulate those systems?

Possible reasons:
* This is harder than it sounds
* General-purpose and agentic systems are inevitably going to outcompete other systems
* People are trying to do this, and I just haven't noticed, because I'm not really an AI person
* Something else

Which is it?
I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that:

1. AI will be a revolutionary technology that affects nearly every aspect of society.
2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.

I'm sorry to say, but EA really doesn't have that much of a say on the matter. The AI boosters have chosen their side, and it's on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big-business, pro-regulation ideas. If EA doesn't embrace this reality, probably some other left-wing anti-AI movement is going to pop up, and it's going to leave you in the dust.

Tuesday, 21 January 2025

Frontpage Posts

Quick takes

I just learned that Trump signed an executive order last night withdrawing the US from the WHO; this is his second attempt to do so.  WHO thankfully weren't caught totally unprepared. Politico reports that last year they "launched an investment round seeking some $7 billion “to mobilize predictable and flexible resources from a broader base of donors” for the WHO’s core work between 2025 and 2028. As of late last year, the WHO said it had received commitments for at least half that amount". Full text of the executive order below: 
Today was a pretty bad day for American democracy IMO. The guy below me got downvoted, and yeah, his comment wasn't the greatest, but I directionally agree with him.

Pardons are out of control: Biden starts the day pardoning people he thinks might be caught in the political crossfire (Fauci, Milley, others) and more of his family members. Then Trump follows it up by pardoning close to all the Jan 6 defendants. The ship has sailed on whatever "constraints" pardons supposedly had, although you could argue Trump already made that true 4 years ago.

Ever More Executive Orders: Trump signed ~25 executive orders (and even more "executive actions" - don't worry about the difference unless you like betting markets). This included withdrawing from the WHO and ending birthright citizenship, though the latter is unlikely to stick since it's probably unconstitutional. I haven't had time to wade through all the EOs, but like the pardons, this seems to be a cancerous growth of executive encroachment on the other branches with no clear end in sight.

Pre$idential $hitcoins: To be fair, the $TRUMP coin happened a few days ago, with $MELANIA following more recently. I'm not sure if people remember this, but it was a genuine scandal when Trump didn't release his tax returns in 2016. Now, 8 years later, he is at best using his office to scam American citizens. Less charitably, he has created a streamlined bribery pipeline. It was a blip in the news cycle.
AI Safety has less money, talent, political capital, tech and time. We have only one distinct advantage: support from the general public. We need to start working that advantage immediately.
Are you or someone you know: 1) great at building (software) companies, 2) someone who cares deeply about AI safety, and 3) open to talking about an opportunity to work together on something? If so, please DM me with your background. If someone comes to mind, also DM. I am thinking of a way to build companies in a way that funds AI safety work.

Monday, 20 January 2025

Quick takes

I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision making in their teams/area - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!
"wE sHoULd PaNdEr mOrE tO cOnsErvatives" Not 5 minutes in office, and they are already throwing the Nazi salutes. Congratulations, Edelweiss was not just a netflix show, it's reality. And a great reminder, apart from the jews, there were slavic, roma, gay and disabled people in the camps as well. We can't sit and just scoff at this, we need to fight back.

Topic Page Edits and Discussion

Sunday, 19 January 2025

Frontpage Posts

Quick takes

bruce
Reposting from LessWrong, for people who might be less active there:[1]

TL;DR
* FrontierMath was funded by OpenAI[2]
* This was not publicly disclosed until December 20th, the date of OpenAI's o3 announcement, including in earlier versions of the arXiv paper where this was eventually made public.
* There was allegedly no active communication about this funding to the mathematicians contributing to the project before December 20th, due to the NDAs Epoch signed, but also no communication after the 20th, once the NDAs had expired.
* OP claims that "I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation. I am not aware of an agreement between Epoch AI and OpenAI that prohibits using this dataset for training if they wanted to, and have slight evidence against such an agreement existing."

Tamay's response:
* Seems to have confirmed the OpenAI funding + NDA restrictions
* Claims OpenAI has "access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities."
* They also have "a verbal agreement that these materials will not be used in model training."

Edit (19/01): Elliot (the project lead) points out that the holdout set does not yet exist (emphasis added):

Edit (24/01): Tamay tweets an apology (possibly including the timeline drafted by Elliot). It's pretty succinct so I won't summarise it here! Blog post version for people without twitter. Perhaps the most relevant point:

Nat from OpenAI with an update from their side:

============

Some quick uncertainties I had:
* What does this mean for OpenAI's 25% score on the benchmark?
* What steps did Epoch take or consider taking to improve transparency between the time they were offered the NDA and the time of signing the NDA?
* What is Epoch's level of confidence that OpenAI will keep to their verbal agreement to not use these mat
It seems that part of the reason communism is so widely discredited is the clear contrast between neighboring countries that pursued more free-market policies. This makes me wonder: practicality aside, what would happen if effective altruists concentrated all their global health and development efforts into a single country, using similar neighboring countries as the comparison group? Given that EA-driven philanthropy accounts for only about 0.02% of total global aid, perhaps EA's approach could have more influence by definitively proving its impact than by trying to maximise the good it does directly.
Sofya Lebedeva has been so wonderfully kind and helpful, and today she suggested consolidating the plethora of links into a Linktree. I was expecting a very difficult setup process and a hefty cost, but the lifetime free plan took me 5 min to set up and I'd say it works amazingly to keep it all in one place. https://linktr.ee/sofiiaf I would say societies (e.g. EA uni groups) may benefit, and perhaps even paying the cost (around £40 a year) to be able to advertise events on Linktree may be worthwhile.

Topic Page Edits and Discussion

Saturday, 18 January 2025

Quick takes

EA Awards

1. I feel worried that the ratio of the amount of criticism one gets for doing EA stuff to the amount of positive feedback one gets is too high
2. Awards are a standard way to counteract this
3. I would like to explore having some sort of awards thingy
4. I currently feel most excited about something like: a small group of people solicit nominations and then choose a shortlist of people to be voted on by Forum members, and then the winners are presented at a session at EAG BA
5. I would appreciate feedback on:
   1. whether people think this is a good idea
   2. how to frame this - I want to avoid being seen as speaking on behalf of all EAs
6. Also, if anyone wants to volunteer to co-organize with me, I would appreciate hearing that

Friday, 17 January 2025

Frontpage Posts

Quick takes

EAG Bay Area Application Deadline extended to Feb 9th – apply now! We've decided to postpone the application deadline by one week from the old deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this will help with. If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline! You can find more information on our website.
I love how I come here, have a quick take about slave labor, something I have directly experienced and fought hard against, and have neo-liberal westerners downvote me because they think I am talking out of my ass. For the record, I know of worker rights violations that were squashed because a judge got a hefty payment, and never proven because the right people were greased. For hell's sake, as an activist I get threats on the daily; stop invalidating my experience when dealing with corruption.

Thursday, 16 January 2025

Frontpage Posts

Quick takes

Joseph
Best books I've read in 2024

(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post, so I'll do it as a quick take instead.)

People who know me know that I read a lot, and this is the time of year for retrospectives.[1] Of all the books I read in 2024, I'm sharing the ones that I think an EA-type person would be most interested in, would benefit the most from, etc.

Animal-Focused

There were several animal-focused books I read in 2024. This is the direct result of being a part of an online Animal Advocacy Book Club. I created the book club about a year ago, and it has been helpful in nudging me to read books that I otherwise probably wouldn't have gotten around to.[2]

* Reading Compassion, by the Pound: The Economics of Farm Animal Welfare was a bit of a slog, but I loved that there were actual data and frameworks and measurements, rather than handwavy references to suffering. The authors provided formulas, they provided estimates and back-of-the-envelope calculations, and they did an excellent job looking at farm animal welfare like economists and considering tradeoffs, with far less bias than anything else I've ever read on animals. They created and referenced measurements for pig welfare, cow welfare, and chicken welfare that I hadn't encountered anywhere else. I haven't even seen other people attempt to put together measurements to evaluate what the overall cost and benefit would be of enacting a particular change in how farm animals are treated.
* Every couple of pages in An Immense World: How Animal Senses Reveal the Hidden Realms Around Us I felt myself thinking "whoa, that is so cool." Part of the awe and pleasure in reading this book was a bunch of factoids about how different species of animals perceive the world in incredibly different ways, ranging from the familiar (sight, hearing, touch) to the exotic (vibration detection, taste buds all over the body, electrolocation, and more). The author does a great jo
One of my main frustrations/criticisms with a lot of current technical AI safety work is that I'm not convinced it will generalize to the critical issues we'll have at our first AI catastrophes ($1T+ damage).

From what I can tell, most technical AI safety work is focused on studying previous and current LLMs. Much of this work is very particular to specific problems and limitations these LLMs have. I'm worried that the future decisive systems won't look like "single LLMs, similar to 2024 LLMs." Partly, I think it's very likely that these systems will be ones made up of combinations of many LLMs and other software. If you have a clever multi-level system, you get a lot of opportunities to fix problems of the specific parts. For example, you can have control systems monitoring LLMs that you don't trust, and you can use redundancy and checking to investigate outputs you're just not sure about. (This isn't to say that these composite systems won't have problems - just that the problems will look different to those of the specific LLMs.)

Here's an analogy: imagine that researchers had 1960s transistors but not computers, and tried to work on cybersecurity in preparation for future cyber-disasters in the coming decades. They want to be "empirical" about it, so they go along investigating all the failure modes of 1960s transistors. They successfully demonstrate that in extreme environments transistors fail, and also that there are some physical attacks that could be done at the transistor level. But as we know now, almost all of this has either been solved on the transistor level, or on levels shortly above the transistors that do simple error management. Intentional attacks on the transistor level are possible, but incredibly niche compared to all of the other cybersecurity capabilities. So just as understanding 1960s transistors really would not get you far towards helping at all with future cybersecurity challenges, it's possible that understanding 2024 LLM details
