All posts

Today, 21 January 2025

No posts for January 21st 2025

Monday, 20 January 2025

Quick takes

I'm interested in chatting to any civil servants, ideally in the UK, who are keen on improving decision making in their teams/areas - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!
"wE sHoULd PaNdEr mOrE tO cOnsErvatives" Not 5 minutes in office, and they are already throwing the Nazi salutes. Congratulations, Edelweiss was not just a netflix show, it's reality. And a great reminder, apart from the jews, there were slavic, roma, gay and disabled people in the camps as well. We can't sit and just scoff at this, we need to fight back.

Sunday, 19 January 2025

Quick takes

bruce · 2d
Reposting from LessWrong, for people who might be less active there:[1]

TL;DR
* FrontierMath was funded by OpenAI[2]
* This was not publicly disclosed until December 20th, the date of OpenAI's o3 announcement, including in earlier versions of the arXiv paper where this was eventually made public.
* There was allegedly no active communication about this funding to the mathematicians contributing to the project before December 20th, due to the NDAs Epoch signed, but also no communication after the 20th, once the NDAs had expired.
* OP claims that "I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation. I am not aware of an agreement between Epoch AI and OpenAI that prohibits using this dataset for training if they wanted to, and have slight evidence against such an agreement existing."

Tamay's response:
* Seems to have confirmed the OpenAI funding + NDA restrictions
* Claims OpenAI has "access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities."
* They also have "a verbal agreement that these materials will not be used in model training."

Edit: Elliot (the project lead) points out that the holdout set does not yet exist (emphasis added):

Some quick uncertainties I had:
* What does this mean for OpenAI's 25% score on the benchmark?
* What steps did Epoch take or consider taking to improve transparency between the time they were offered the NDA and the time of signing the NDA?
* What is Epoch's level of confidence that OpenAI will keep to their verbal agreement not to use these materials in model training, both in some technically true sense and in a broader interpretation of the agreement? (see e.g. the bottom paragraph of Ozzie's comment)

[1] Epistemic status: quickly summarised + liberally copy pasted with ~0 additional fact checking given Tamay's response.
Sofya Lebedeva has been so wonderfully kind and helpful, and today she suggested replacing my plethora of links with a Linktree. I was expecting a very difficult setup process and a hefty cost, but the lifetime free plan took me 5 minutes to set up, and I'd say it works amazingly well to keep it all in one place: https://linktr.ee/sofiiaf. I would say societies (e.g. EA uni groups) may benefit too, and perhaps even paying (around £40 a year) to be able to advertise events on Linktree may be worthwhile.

Saturday, 18 January 2025

Quick takes

EA Awards

1. I feel worried that the ratio of the amount of criticism one gets for doing EA stuff to the amount of positive feedback one gets is too high.
2. Awards are a standard way to counteract this.
3. I would like to explore having some sort of awards thingy.
4. I currently feel most excited about something like: a small group of people solicits nominations and then chooses a short list of people to be voted on by Forum members, and then the winners are presented at a session at EAG BA.
5. I would appreciate feedback on:
   1. whether people think this is a good idea
   2. how to frame this - I want to avoid being seen as speaking on behalf of all EAs
6. Also, if anyone wants to volunteer to co-organize with me, I would appreciate hearing that.

Friday, 17 January 2025

Quick takes

EAG Bay Area Application Deadline extended to Feb 9th – apply now! We've decided to postpone the application deadline by one week from the old deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline! You can find more information on our website.
I love how I come here, post a quick take about slave labor - something I have directly experienced, and something I fought hard against - and have neo-liberal Westerners downvote me because they think I am talking out of my ass. For the record, I know of workers' rights violations that were quashed because a judge got a hefty payment, never proven because the right people were greased. For hell's sake, I as an activist get threats on the daily; stop invalidating my experience when dealing with corruption.

Thursday, 16 January 2025

Quick takes

Best books I've read in 2024

(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post. So I'll do it as a quick take instead.)

People who know me know that I read a lot, and this is the time of year for retrospectives.[1] Of all the books I read in 2024, I'm sharing the ones that I think an EA-type person would be most interested in, would benefit the most from, etc.

Animal-Focused

There were several animal-focused books I read in 2024. This is the direct result of being part of an online Animal Advocacy Book Club. I created the book club about a year ago, and it has been helpful in nudging me to read books that I otherwise probably wouldn't have gotten around to.[2]

* Reading Compassion, by the Pound: The Economics of Farm Animal Welfare was a bit of a slog, but I loved that there were actual data and frameworks and measurements, rather than handwavy references to suffering. The authors provided formulas, estimates, and back-of-the-envelope calculations, and did an excellent job looking at farm animal welfare like economists and considering tradeoffs, with far less bias than anything else I've ever read on animals. They created and referenced measurements for pig welfare, cow welfare, and chicken welfare that I hadn't encountered anywhere else. I haven't even seen other people attempt to put together measurements to evaluate the overall cost and benefit of enacting a particular change in how farm animals are treated.
* Every couple of pages in An Immense World: How Animal Senses Reveal the Hidden Realms Around Us I felt myself thinking "whoa, that is so cool." Part of the awe and pleasure in reading this book was the wealth of factoids about how different species of animals perceive the world in incredibly different ways, ranging from the familiar (sight, hearing, touch) to the exotic (vibration detection, taste buds all over the body, electrolocation, and more). The author does a great job
One of my main frustrations/criticisms with a lot of current technical AI safety work is that I'm not convinced it will generalize to the critical issues we'll have at our first AI catastrophes ($1T+ damage).

From what I can tell, most technical AI safety work is focused on studying previous and current LLMs. Much of this work is very particular to specific problems and limitations these LLMs have. I'm worried that the future decisive systems won't look like "single LLMs, similar to 2024 LLMs." Partly, I think it's very likely that these systems will be made up of combinations of many LLMs and other software. If you have a clever multi-level system, you get a lot of opportunities to fix problems of the specific parts. For example, you can have control systems monitoring LLMs that you don't trust, and you can use redundancy and checking to investigate outputs you're just not sure about. (This isn't to say that these composite systems won't have problems - just that the problems will look different from those of the specific LLMs.)

Here's an analogy: imagine that researchers had 1960s transistors but not computers, and tried to work on cybersecurity in preparation for the cyber-disasters of the coming decades. They want to be "empirical" about it, so they go about investigating all the failure modes of 1960s transistors. They successfully demonstrate that transistors fail in extreme environments, and also that some physical attacks could be carried out at the transistor level. But as we know now, almost all of this has since been solved either at the transistor level or at the levels just above the transistors that do simple error management. Intentional attacks at the transistor level are possible, but incredibly niche compared to all the other cybersecurity capabilities. So just as understanding 1960s transistors really would not get you far towards helping at all with future cybersecurity challenges, it's possible that understanding 2024 LLM details won't get you far towards helping with future AI safety challenges.
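To make the "composite system" idea concrete, here is a minimal sketch of the monitoring-plus-redundancy pattern the take describes. It is purely illustrative: `untrusted_llm` and `monitor_llm` are hypothetical placeholders rather than any real API, and majority voting with escalation is just one simple way such a wrapper could work.

```python
from collections import Counter

def untrusted_llm(prompt: str) -> str:
    """Placeholder for a capable but untrusted model call (hypothetical)."""
    raise NotImplementedError

def monitor_llm(prompt: str, answer: str) -> bool:
    """Placeholder for a weaker, trusted monitor that approves or flags answers (hypothetical)."""
    raise NotImplementedError

def composite_answer(prompt: str, n_samples: int = 5) -> str | None:
    # Redundancy: sample several independent answers from the untrusted model.
    answers = [untrusted_llm(prompt) for _ in range(n_samples)]
    best, votes = Counter(answers).most_common(1)[0]
    # Checking: if there is no clear majority, or the trusted monitor flags
    # the majority answer, return None so a human or fallback system reviews it.
    if votes <= n_samples // 2 or not monitor_llm(prompt, best):
        return None
    return best
```

The point of the sketch is that failures of the underlying LLM matter less than the behavior of the wrapper around it, which is where, on this view, the interesting problems may live.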

Wednesday, 15 January 2025

Quick takes

A minor personal gripe I have with EA is that the vast majority of the resources seem geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford. For instance, opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, i.e. AI policy at RAND, or AI safety researcher at Anthropic, or something similar that I suspect less than the top 0.001% of human beings would be remotely relevant for.

Someone like myself, who graduated from less prestigious schools, or who struggles in small ways to be as high-functioning and successful, can feel like we're not competent enough to be useful to the cause areas we care about. I personally have been rejected in the past from both 80,000 Hours career advising and the Long-Term Future Fund. I know these things are very competitive, of course. I don't blame them for it. On paper, my potential and proposed project probably weren't remarkable. The time and money should go to those who are most likely to make a good impact. I understand this. I guess I just feel like I don't know where I should fit into the EA community. Even many people on the Forum seem incredibly intelligent, thoughtful, kind, and talented. The people at the EA Global I attended in 2022 were clearly brilliant. In comparison, I just feel inadequate. I wonder if others who don't consider themselves exceptional also find themselves intellectually intimidated by the people here.

We do probably need the best of the best to be involved first and foremost, but I think we also need the average, seemingly unremarkable, EA-sympathetic person to be engaged in some way if we really want to be more than a small community and to be as impactful as possible. Though maybe I'm just biased to believe that mass movements are historically what led to progress. Maybe a small group of elites leading the charge is actually what is needed.
My donation strategy:

It seems that we have some great donation opportunities in at least some cases, such as AI safety. This has made me wonder what donation strategies I prefer. Here are some thoughts, also influenced by Zvi Mowshowitz's:

1. Attracting non-EA funding to EA causes: I prefer donating to opportunities that may bring external or non-EA funding to causes that EA may deem relevant.
2. Expanding EA funding and widening career paths: Similarly, if possible, fund opportunities that could increase the funds or skills available to the community in the future. For this reason, I feel highly supportive of Ambitious Impact's project to create on-ramps for careers with impact in earning to give, for instance. This is in contrast to incubating new charities (Charity Entrepreneurship), which is slightly harder to motivate unless you have strong reasons to believe your impact is more cost-effective than typical charities. I am a bit wary that uncertainty might be too large to clearly distinguish charities at the frontier.
3. Fill the gap left by others: Aim to fund medium-sized charities in their 2nd to 5th years of life: they are not small and young enough to rely on Charity Entrepreneurship seed funding, but they are also not large enough to get funding from large funders. One could similarly argue that you should fund causes that non-EAs are less likely to fund (e.g. animal welfare), though I would find this argument stronger if non-EA funding came close to fully funding those other causes (e.g. global health) or if the full support of the former (animal welfare) fully depended on the EA community.
4. Value stability for people running charities: By default, and unless there are clearly better opportunities, keep donating to the same charities as before, and do so with unrestricted funds. This allows some stability for charities, which is very much welcomed. Also, do not push too hard on marginal cost-effectiveness.
I was confronted with the fact that EA is not as big as I think it is, and that agriculture, as well as systemic changes, are not directly possible for EA; with that I agree in part. What is available to EA, at least in terms of underdeveloped rural agricultural economies, is knowledge. We have knowledge, and I believe the transfer of knowledge is crucial when conversing with farmers. So how do we transfer the knowledge we have to the farmers? Mini-courses? No, mini-courses would only work for people who have an internet connection. Maybe we could conduct large-scale mini-courses where a local could help us devise a classroom-type learning setting, where we could engage a large population of people while keeping costs down. Yes.

On the topic of mini-courses, the most beneficial way to go is to divide them into two types of production, animal and plant, because most of the people I read about today were in one of the two. I believe that with good practices we could address both poverty in rural populations and the comfort of the animals: proper feed, proper water, and the like, which contribute to animal wellbeing.

I have a lot of things to write about, but I'll keep it short. I'll make a more defined outline of how we can do this, and maybe you (the community) can help me guide my efforts. I also think EA should focus a bit more on agronomy as a whole, because food production is a large and unaddressed topic.
Protesting its slow death to the bitter end, Bing launched its AI-assisted search engine in 2023, hoping to carve out a use case against Google. In 2024, Google hit back, integrating Gemini into its search function. Arguably, Gemini is now the front page of the internet: much of the time when I shoot out a Google query, Gemini's answer pops up at the top. In fact, if I want to find an answer written by a human, I have to scroll down, because Gemini's answer occupies my entire screen.

I have an inkling about what is motivating this choice architecture: for now there is no ad placement, but surely soon there will be. For now, attention is being directed away from websites that host their own ads and towards Google's own Gemini box. This is a little concerning. Nora Lindemann (https://tinyurl.com/chatbotsearch) writes on chatbots as search engines, introducing the term "sealed knowledges": she is getting at how a question can have a plurality of answers that are all meaningful, something a chatbot doesn't convey when it gives a short, structured answer written in a hyper-plausible tone. There are questions with simple answers, and there are those that warrant struggle and rumination. Well-packaged chatbot answers make me less likely to accidentally learn things as I try to answer my non-linear question.

I wonder: will websites lose revenue? Do chatbot search engines help or hinder learning? Mediated by chatbots, will we relate to information more objectively?

Tuesday, 14 January 2025

Quick takes

TsviBT · 7d
"The Future Loves You: How and Why We Should Abolish Death" by Dr Ariel Zeleznikow-Johnston is now available to buy. I haven't read it, but I expect it to be a definitive anti-deathist monograph. https://www.amazon.com/Future-Loves-You-Should-Abolish-ebook/dp/B0CW9KTX76
https://www.sciencedirect.com/science/article/pii/S2451902224003811#bib121 - enhancing equanimity through tFUS is very low-hanging fruit for improving QALYs and overall human lived experience (and is upstream of better decision making and all other good outcomes). @NickCammarata is also very into this; he says that helping people there out >>> undergrad research just to get into neuro positions.

Monday, 13 January 2025

Frontpage Posts

Quick takes

Many heads are more utilitarian than one by Anita Keshmirian et al. is an interesting paper I found via Gwern's site.

Gwern's summary of the key points:

Abstract:

I wonder if this means that individual EAs might find EA principles more emotionally challenging than group-level surveys might suggest. It also seems a bit concerning that group judgments may naturally skew utilitarian simply by virtue of being groups, rather than through improved moral reasoning (and I say this as someone for whom utilitarianism is the largest "party" in my moral parliament).
You westerners have no idea how much corruption there is in the East. Like seriously. 

Sunday, 12 January 2025

Quick takes

While reading The Economist yesterday, I found that an article in their fantastic "The Africa gap" series felt strangely familiar - I'd read these ideas last year in @Karthik Tadepalli's fantastic series on economic growth in LMICs. I appreciated this section:

"Instead of many large firms with salaried staff, Africa has lots of micro-enterprises and informal workers. More than 80% of employment in Africa is informal, according to the International Labour Organisation. Roughly half of informal workers in cities are self-employed, doing everything from crafting Instagram advertising to fixing roofs. Many Africans mix formal work with informal hustles, which are often poorly paid. Most would love a steady job. Mr Tadepalli suggests that many of the 'self-employed' may just be the unemployed 'in disguise'."

I shouldn't have been surprised to see Karthik's quotes and research referred to directly in the article itself! Nice work, Karthik, and great to see your work get recognised in the mainstream as well as in the neglected global development corners of the EA Forum ;).
Commented on an article, but expanding into a (very) quick take: the absolute rabbit holes I've gone down - from "hmm, I should check which diseases in dogs to keep an eye on" to "wow, mosquito-borne diseases are very high" to "oh my goodness, why do I have so many papers saved on impeding the ventral nerves in mosquitos to test inhibition of blood-hunting mechanisms..." - have nearly all converged on genetically engineered mosquitos, whether via Ago2 gene deactivation or induced susceptibility to infection symptoms. Genetic engineering to evade diseases by making the vector susceptible - almost the flip side of the usual approach - strikes me as carrying the same ethical, biological, and genetic risks, plus the huge issues of bioweaponry and dual-use tech.

It seems the non-technical media stories from 30 mainstream sources (e.g. BBC News, The Times, The Guardian), from all sides of the spectrum, are favourable towards genetic engineering to prevent spread but make no mention of the information hazards or dual use. I wonder if that's just journalism not favouring nuance, or perhaps also some intentional silence...
THE GENTRIFIED EA ORGS DOWNVOTED ME. Seriously though, have you seen anyone in an EA organization who isn't a top-10 attendee? Or someone really heavy on volunteering experience? Also, if you disagree with me and downvote me, come into the comments and call me a dumbass - I don't bite, I just like being provocative; I like getting people going.
