Quick takes

Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried's behaviour years before the FTX collapse?

I would like to estimate how effective free hugs are. Can anyone help me?

5
Joseph Lemien
13h
Haha. Well, I guess I would first ask: effective at what? Effective at giving people additional years of healthy & fulfilling life? Effective at creating new friendships? Effective at making people smile? I haven't studied it at all, but my hypothesis is that it is the kind of intervention similar to "awareness building," but without any call to action (such as a donation). So it is probably effective in giving people a nice experience for a few seconds, and maybe improving their mood for a period of time, but it probably doesn't have longer-lasting effects. From a cursory glance at Google Scholar, it looks like there hasn't been much research on free hugs.

Hmm, I'm a little confused. If I cook a meal for someone, it doesn't seem to mean much. But if no one is cooking for someone at all, that is a serious problem and we need to help. Of course, I'm not sure whether we're suffering from that kind of "skinship hunger."

Would love for orgs running large-scale hiring rounds (say 100+ applicants) to provide more feedback to their (rejected) applicants. Given that in most cases applicants are already being scored and ranked on their responses, maybe just tell them their scores, their overall ranking and what the next round cutoff would have been - say: prompt 1 = 15/20, prompt 2 = 17.5/20, rank = 156/900, cutoff for work test at 100.

Since this is already happening in the background (if my impression here is wrong please lmk), why not make the process more transparent and release scores - with what seems to be very little extra work required (beyond some initial automation). 
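For illustration, here's a minimal sketch of the sort of automation this could involve, assuming scores already sit in a spreadsheet export. The file name, column names, and cutoff are all hypothetical, not taken from any org's actual process:

```python
# Minimal sketch: turn an exported spreadsheet of applicant scores into
# per-applicant feedback lines. File name, columns, and cutoff are hypothetical.
import csv

CUTOFF_RANK = 100  # hypothetical rank cutoff for advancing to the work test

def feedback_lines(rows):
    # Rank applicants by combined prompt score, highest first.
    ranked = sorted(rows, key=lambda r: float(r["prompt_1"]) + float(r["prompt_2"]), reverse=True)
    total = len(ranked)
    for rank, row in enumerate(ranked, start=1):
        yield (
            f"{row['email']}: prompt 1 = {row['prompt_1']}/20, "
            f"prompt 2 = {row['prompt_2']}/20, rank = {rank}/{total}, "
            f"cutoff for work test = top {CUTOFF_RANK}"
        )

with open("applicant_scores.csv", newline="") as f:
    for line in feedback_lines(list(csv.DictReader(f))):
        print(line)  # in practice these lines would feed a mail-merge template
```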


(I run hiring rounds with ~100-1000 applicants.) I agree with Jamie here. However, if someone was close to a cutoff, I do specifically include "encourage you to apply to future roles" in my rejection email. I also always respond when somebody proactively asks for feedback.

Is revealing scores useful to candidates for some other reason not covered by that? It seems to me the primary reason (since it sounds like you aren't asking for qualitative feedback to also be provided) would be to inform candidates as to whether applying for future similar roles is worth the effort.

2
John Salter
10h
I view our hiring process as a constant work in progress: we look back at the application process of everyone after their time with us, potatoes and gems alike, and try to figure out how we could have told ahead of time. Part of that is writing up notes. We use ChatGPT to phrase the notes more sensitively and send them to the applicant. Caveat: we only do this for people who show some promise of future admission.
2
Joseph Lemien
14h
Jamie, I've been contemplating writing up a couple of informal "case study"-type reports of different hiring practices. My intention/thought process would be to allow EA orgs to learn about how several different orgs do hiring, to highlight some best practices, and generally to allow/encourage organizations to improve their methods. How would you feel about writing up a summary or having a call with me to allow me to understand how you tried giving feedback and what specific aspects caused challenges?

The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.

I wonder if anyone else will get a thinly veiled counterpart -- given that the show's lead character seems somewhat based on MacKenzie Scott, this seems to be something the show does.

What are some historical examples of a group (like AI safety folk) getting something incredibly wrong about an incoming technology? Bonus question: what led to that group getting it so wrong? Maybe there is something to learn here.


This is probably a good exercise. I do want to point out a common bias about getting existential risks wrong. If someone was right about doomsday, we would not be here to discuss it. That is a huge survivorship bias. Even catastrophic events which lessen the number of people are going to be systematically underestimated. This phenomenon, the anthropic shadow, is relevant to an analysis like this.

2
Habryka
16h
Do you have links to people being very worried about gray goo stuff? (Also, the post you link to makes this clear, but this was a prediction from when Eliezer was a teenager, or just turned 20, which does not make for a particularly good comparison, IMO)
0
yanni kyriacos
1d
Thanks!

Why are April Fools' jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago and there are still two April Fools' posts on the front page. I think it should be clarified that they are April Fools' jokes so people can differentiate EA weird stuff from EA weird stuff that's a joke more easily. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or fi... (read more)

3
Ian Turner
13h
The posts do have the “April Fool’s Day” tag right at the beginning?

I think adding [April Fools] to the title might be good, since the tag is hard to see.

Many organizations I respect are very risk-averse when hiring, and for good reasons. Making a bad hiring decision is extremely costly, as it means running another hiring round, paying for work that isn't useful, and diverting organisational time and resources towards trouble-shooting and away from other projects. This leads many organisations to scale very slowly.

However, there may be an imbalance between false positives (bad hires) and false negatives (passing over great candidates). In hiring as in many other fields, reducing false positives often means ... (read more)


It looks like there are two people who disagree-voted on this. I'm curious as to what they disagree with. Do they disagree with the claim that some organizations are "very risk-averse when hiring"? Do they disagree with the claim that "reducing false positives often means raising false negatives"? That this causes organisations to scale slowly? Or perhaps that "the costs of a bad hire are somewhat bounded"? I would love for people who disagree-voted to share what it is they disagree with.

2
Joseph Lemien
16h
Forgive my rambling. I don't have much to contribute here, but I generally want to say A) I am glad to see other people thinking about this, and B) I sympathize with the difficulty. The "reducing false positives often means raising false negatives" issue is one of the core challenges in hiring. Even the researchers who investigate the validity of various hiring methods and criteria don't have a great way to deal with this problem. Theoretically we could randomly hire 50% of the applicants and reject the other 50%, and then look at how the new hires perform compared to the rejects one year later. But this is (of course) infeasible. And so much of what we care about is situationally specific: if John Doe thrives in Organizational Culture A performing Role X, that doesn't necessarily mean he will thrive in Organizational Culture B performing Role Y.

I do have one suggestion, although it isn't as good a suggestion as I would like. Ways to "try out" new staff (such as 6-month contracts, 12-month contracts, internships, part-time engagements, and so on) let you assess how the person will perform in your organization in that particular role with much greater confidence than a structured interview, a 2-hour work trial test, or a carefully filled out application form. And if you want to have a conversation with people who are more expert in this stuff, I could probably put you in touch with some Industrial-Organizational Psychologists who specialize in selection methods. Maybe a 1-hour consultation session would provide some good directions to explore?

I've shared this image[1] with many people, as I think it is a fairly good description of the issue. I generally think of one of the goals of hiring as "squeezing" this shape to get as much of the area as possible into the upper right and lower left, and as little as possible into the upper left and lower right. We can't squeeze it infinitely thin, and there is a cost to any squeezing, but t
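To make the trade-off concrete, here's a toy simulation (all numbers and the noise model are invented for illustration, not drawn from any real hiring data): raising the bar reduces bad hires but screens out more genuinely strong candidates.

```python
# Toy simulation of the false-positive / false-negative trade-off in hiring.
# Assessments are modelled as true ability plus noise; parameters are illustrative.
import random

random.seed(0)
candidates = [random.gauss(0, 1) for _ in range(10_000)]   # true ability
assessed = [a + random.gauss(0, 1) for a in candidates]    # noisy interview score
GOOD = 1.0  # hypothetical bar for "would actually perform well"

for threshold in (0.5, 1.0, 1.5, 2.0):
    hires = [a for a, s in zip(candidates, assessed) if s >= threshold]
    false_pos = sum(1 for a in hires if a < GOOD)           # bad hires
    false_neg = sum(1 for a, s in zip(candidates, assessed)
                    if a >= GOOD and s < threshold)         # strong candidates rejected
    print(f"threshold {threshold}: bad hires {false_pos}, missed strong candidates {false_neg}")
```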
3
Sarah Levin
2d
This depends a lot on what "eventually" means, specifically. If a bad hire means they stick around for years—or even decades, as happened in the organization of one of my close relatives—then the downside risk is huge.  OTOH my employer is able to fire underperforming people after two or three months, which means we can take chances on people who show potential even if there are some yellow flags. This has paid off enormously, e.g. one of our best people had a history of getting into disruptive arguments in nonprofessional contexts, but we had reason to think this wouldn't be an issue at our place... and we were right, as it turned out, but if we lacked the ability to fire relatively quickly, then I wouldn't have rolled those dice.  The best advice I've heard for threading this needle is "Hire fast, fire fast". But firing people is the most unpleasant thing a leader will ever have to do, so a lot of people do it less than they should.

Could it be more important to improve human values than to make sure AI is aligned?

Consider the following (which is almost definitely oversimplified):

| | ALIGNED AI | MISALIGNED AI |
| --- | --- | --- |
| HUMANITY GOOD VALUES | UTOPIA | EXTINCTION |
| HUMANITY NEUTRAL VALUES | NEUTRAL WORLD | EXTINCTION |
| HUMANITY BAD VALUES | DYSTOPIA | EXTINCTION |

For clarity, let’s assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let's assume neutral world is equivalent to extinc... (read more)
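One way to make the comparison concrete is a toy expected-value calculation over the table above. All probabilities and payoff numbers below are invented purely for illustration and are not part of the original take:

```python
# Toy expected-value comparison over the values x alignment matrix above.
# All probabilities and payoffs are invented for illustration.
VALUE = {"utopia": 1.0, "neutral": 0.0, "extinction": 0.0, "dystopia": -1.0}
# (matches the take's assumptions: neutral world ~ extinction, dystopia worse than extinction)

OUTCOME = {
    ("good", True): "utopia",     ("good", False): "extinction",
    ("neutral", True): "neutral", ("neutral", False): "extinction",
    ("bad", True): "dystopia",    ("bad", False): "extinction",
}

def expected_value(p_aligned, p_good, p_neutral, p_bad):
    p_values = {"good": p_good, "neutral": p_neutral, "bad": p_bad}
    return sum(
        pv * pa * VALUE[OUTCOME[(values, aligned)]]
        for values, pv in p_values.items()
        for aligned, pa in ((True, p_aligned), (False, 1 - p_aligned))
    )

baseline = expected_value(p_aligned=0.5, p_good=0.3, p_neutral=0.5, p_bad=0.2)
more_alignment = expected_value(0.6, 0.3, 0.5, 0.2)  # +10pp chance of aligned AI
better_values = expected_value(0.5, 0.4, 0.5, 0.1)   # shift 10pp from bad to good values
print(baseline, more_alignment, better_values)       # 0.05, 0.06, 0.15 with these made-up numbers
```

Under these particular made-up numbers, shifting values helps more than shifting alignment odds, but the conclusion is entirely driven by the inputs.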

Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?

Which of these is the correct analogy?

  1. "Biology is to science as AI safety is to x-risk," or 
  2. "Immunology is to biology as AI safety is to x-risk"

EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI).

The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) ... (read more)

Does anyone else consider the case of Verein KlimaSeniorinnen Schweiz and Others v. Switzerland (application no. 53600/20) before the European Court of Human Rights possibly useful for GCR litigation?

I am planning to write a post about happiness guilt. I think many EAs have it. Can you share resources or personal experiences?

"Detach the grim-o-meter" comes to mind. I think that post helped me a little bit.

Thoughts on a project or research auction: it is very cumbersome to apply for funds one by one from Open Phil or EA Funds. Wouldn't it be better for a major EA organization to auction off the opportunity to participate in a project and let others buy it? It would be similar to a tournament, but you would be able to sell many more projects at a lower price and reduce the resources wasted on having many people compete for the same project.

2
harfe
4d
I think this requires more elaboration on how exactly the suggested system is supposed to work.

I wrote the post

Is nobody interested in resuming the EA and LW post summaries? Discontinuing them was a very unfortunate choice, I think.

In July 2022, Jeff Masters wrote an article (https://yaleclimateconnections.org/2022/07/the-future-of-global-catastrophic-risk-events-from-climate-change/) summarizing findings from a United Nations report on the increasing risks of global catastrophic risk (GCR) events due to climate change. The report defines GCRs as catastrophes that kill over 10 million people or cause over $10 trillion in damage. It warned that by increasingly pushing beyond safe planetary boundaries, human activity is boosting the odds of climate-related GCRs.

The article argued that ... (read more)

I feel like this would be a good post. It might get unfairly buried as a quick take.

A friend asked me which projects in EA I thought deserved more money, especially ones that seemed to be held back by insufficient charisma of the founders. After a few names he encouraged me to write it up. This list is very off the cuff and tentative: in most cases I have pretty minimal information on the project, and they’re projects I incidentally encountered on EAF. If you have additions I encourage you to comment with them.

The main list

The bar here is "the theory of change seems valuable, and worse projects are regularly funded". ... (read more)

3
anormative
3d
Can you elaborate on what you mean by “the EA-offered money comes with strings?”

Not well. I only have snippets of information, and it's private (Habryka did sign off on that description). 

I don't know if this specifically has come up in regard to Lightcone or Lighthaven, but I know Habryka has been steadfastly opposed to the kind of slow, cautious, legally-defensive actions coming out of EVF. I expect he would reject funding that demanded that approach (and if he accepted it, I'd be disappointed in him, given his public statements).

8
Saul Munn
4d
i've been working at manifund for the last couple months, figured i'd respond where austin hasn't (yet). here's a grant application for the meta charity funders circle that we submitted a few weeks ago, which i think is broadly representative of who we are & what we're raising for. tldr of that application:

  • core ops
  • staff salaries
  • misc things (software, etc)
  • programs like regranting, impact certificates, etc, for us to run how we think is best[1]

additionally, if a funder was particularly interested in a specific funding program, we're also happy to provide them with infrastructure. e.g. we're currently facilitating the ACX grants, we're probably (70%) going to run a prize round for dwarkesh patel, and we'd be excited about building/hosting the infrastructure for similar funding/prize/impact cert/etc programs. this wouldn't really look like [funding manifund core ops, where the money goes to manifund], but rather [running a funding round on manifund, where the funding mostly[2] goes to object-level projects that aren't manifund].

i'll also add that we're less funding-crunched than when austin first commented; we'll be running another regranting round, for which we'll be paid another $75k in commission. this was new info between his comment and this comment. (details of this are very rough/subject to change/not firm.)

  1. ^ i'm keeping this section intentionally vague. what we want is [sufficient funding to be able to run the programs we think are best, iterate & adjust quickly, etc], not [this specific particular program in this specific particular way that we're tying ourselves down to]. we have experimentation built into our bones, and having strings attached breaks our ability to experiment fast.

  2. ^ we often charge a fee of 5% of the total funding; we've been paid $75k in commission to run the $1.5mm regranting round last year.

In conversations about x-risk, one common mistake seems to be to suggest that we have yet to invent something that kills all people, and so the historical record is not on the side of "doomers." The mistake is survivorship bias, and Ćirković, Sandberg, and Bostrom (2010) call this the Anthropic Shadow. Using base rate frequencies to estimate the probability of events that reduce the number of people (observers) will result in bias.

If there are multiple possible timelines and AI p(doom) is super high (and soon), then we would expect a greater frequency o... (read more)
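A toy simulation of that bias (parameters are arbitrary, chosen only to illustrate the direction of the effect): observers end up concentrated on timelines with fewer catastrophes, so the frequency they observe understates the true rate.

```python
# Toy illustration of the anthropic shadow: weight each timeline by how many
# observers survive on it, and the observed catastrophe frequency falls below
# the true per-period rate. Parameters are arbitrary.
import random

random.seed(0)
TRUE_RATE = 0.3           # true per-period probability of a catastrophe
PERIODS = 20
TIMELINES = 100_000
SURVIVAL_PER_EVENT = 0.5  # each catastrophe halves the observer population

total_events = 0
observer_weighted_events = 0.0
observer_weight = 0.0
for _ in range(TIMELINES):
    events = sum(random.random() < TRUE_RATE for _ in range(PERIODS))
    observers = SURVIVAL_PER_EVENT ** events   # relative surviving observers on this timeline
    total_events += events
    observer_weighted_events += observers * events
    observer_weight += observers

print("true frequency:    ", total_events / (TIMELINES * PERIODS))                    # ~0.30
print("observed frequency:", observer_weighted_events / (observer_weight * PERIODS))  # ~0.18
```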

The latest episode of the Philosophy Bites podcast is about Derek Parfit.[1] It's an interview with his biographer (and fellow philosopher) David Edmonds. It's quite accessible and only 20 mins long. Very nice listening if you fancy a walk and want a primer on Parfit's work.

  1. ^

    Parfit was a philosopher who specialised in personal identity, rationality, and ethics. His work played a seminal role in the development of longtermism. He is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries.

... (read more)

Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1] it seems surprisingly rare to me that people actually do the hard work of:

  1. (Systematically) exploring cause areas
  2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
  3. Sharing their list and reasons publicly.[2]

The lists I can think of that do this best are 80,000 Hours', Open Philanthropy's, and CEARCH's.

Related things I appreciate, but aren't quite what I'm envisioning:

  • Tools and m
... (read more)

Sorry, it wasn't clear. The reference class I had in mind was cause prio-focused resources on the EA Forum.

2
Jamie_Harris
3d
Thank you! I understand the reasons for ranking relative to a given cost-effectiveness bar (or by a given cost-effectiveness metric). That provides more information than constraining the ranking to a numerical list so I appreciate that. Btw, if you had 5-10 mins spare I think it'd be really helpful to add explanation notes to the cells in the top row of the spreadsheet. E.g. I don't know what "MEV" stands for, or what the "cost-effectiveness" or "cause no." columns are referring to. (Currently these things mean that I probably won't share the spreadsheet with people because I'd need to do a lot of explaining or caveating to them, whereas I'd be more likely to share it if it was more self-explanatory.)
4
Joel Tan
3d
Hi Jamie, I've updated to clarify that the "MEV" column is just "DALYs per USD 100,000". Have hidden some of the other columns (they're just for internal administrative/labelling purposes).
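For readers unfamiliar with the metric, here's a hypothetical worked example of what a "DALYs per USD 100,000" figure means (numbers invented for illustration, not taken from the spreadsheet):

```python
# Hypothetical worked example of the "DALYs per USD 100,000" metric.
# Both numbers are invented for illustration.
program_cost_usd = 250_000  # total spend on a hypothetical intervention
dalys_averted = 400         # estimated disability-adjusted life years averted

dalys_per_100k = dalys_averted / (program_cost_usd / 100_000)
print(dalys_per_100k)  # 160.0 DALYs averted per USD 100,000 spent
```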

Resolved unresolved issues

One of the things I find difficult about discussing problem solving with people is that they often fall back on shallow causes. For example, if politician A's corruption is the problem, you can kick him out. Easy. Problem solved! But this is the problem: the immediate problem was solved, yet the underlying problem was not. The natural assumption is that politician B will cause a similar problem again. In the end, that's the advice people give: "Kick A out!!" Whatever it was. Whether it's your weird friends, your bad grades, or your... (read more)

Mini EA Forum Update

You can now subscribe to be notified when posts are added to a sequence. You can see more details in GitHub here.

We’ve also made it a bit easier to create and edit sequences, including allowing users to delete sequences they’ve made.

I've been thinking a bit about how to improve sequences, so I'd be curious to hear:

  1. How you use them
  2. What you'd like to be able to do with them
  3. Any other thoughts/feedback

It could be useful to have some sort of "sequence of sequences", similar to a basic version of https://forum.effectivealtruism.org/handbook

For the intro program, I used to link people to https://forum.effectivealtruism.org/users/ea-italy and tell them to scroll down and start from "1. La mentalità dell'efficacia" but many people got confused and started from the latest posts. (So I moved to sending the first sequence directly for the first week)

Edit: I've been told they don't use this anymore and switched to Google Docs
