Elizabeth
I still think the question of "who is the job board aimed at?" is relevant here, and would like to hear your answer.

I don't think the dishonesty entirely rules out working at OpenAI. Whether or not OpenAI safety positions should be on the 80k job board depends on the exact mission of the job board. I have my models, but let me ask you: who is it you think will have their plans changed for the better by seeing OpenAI safety positions[1] on 80k's board?

  1. ^

    I'm excluding IS positions from this question because it seems possible someone skilled in IS would not think to apply to OpenAI. I don't see how anyone qualified for OpenAI safety positions could need 80k to inform them the positions exist.

No argument from me that it's sometimes worth it to take low-paying or miserable jobs. But low pay isn't a surprise fact you learn years into working for a company; it's written right on the tin[1]. The issue for me isn't that OpenAI paid under-market rates, it's that it lied about material facts of the job. You could put up a warning that OpenAI equity is ephemeral, but the bigger issue is that OpenAI can't be trusted to hold to any deal.

 

  1. ^

    The power PIs hold can be a surprise, and I'm disappointed 80k's article on PhDs doesn't cover that issue. 

Alignment concerns aside, I think a job board shouldn't host companies that have taken already-earned compensation hostage, especially without noting this fact. That's a basic requirement for good employers: they don't retroactively steal stock they already gave you.


 

The arguments you give all sound like reasons OpenAI safety positions could be beneficial. But I find them completely swamped by all the evidence that they won't be, especially given how much evidence OpenAI has hidden via NDAs.

But let's assume we're in a world where certain people could do meaningful safety work at OpenAI. What are the chances those people need 80k to tell them about it? OpenAI is the biggest, most publicized AI company in the world; if Alice only finds out about OpenAI jobs via 80k, that's prima facie evidence she won't make a contribution to safety.

What could the listing do? Maybe Bob has heard of OAI but is on the fence about applying. An 80k job posting might push him over the edge to applying or accepting. The main way I see that happening is via a halo effect from 80k. The mere existence of the posting implies that the job is aligned with EA/80k's values. 

I don't think there's a way to remove that implication with any amount of disclaimers. The job is still on the board. If anything disclaimers make the best case scenarios seem even better, because why else would you host such a dangerous position?

So let me ask: what do you see as the upside to highlighting OAI safety jobs on the job board? Not of the job itself, but the posting. Who is it that would do good work in that role, and the 80k job board posting is instrumental in them entering it?

Off the top of my head: in maybe half the cases I already had the contact info. In one or two cases one of my beta readers passed on the info. For the remainder it was maybe <2m per org, and it turns out they all use info@domain.org, so it would be faster next time.

Your post reflects a general EA attitude that emphasizes the negative aspects [...]

 

Something similar has been on my mind for the last few months. It's much easier to criticize than to do, and criticism gets more attention than praise. So criticism is oversupplied and good work is undersupplied. I tried to avoid that in this post by giving positive principles and positive examples, but it sounds like it still felt too negative to you.

Given that, I'd like to invite you to be the change you wish to see in the world by elaborating on what you find positive and who is implementing it[1].

  1. ^

    This goes for everyone: even if you agree with the entire post, it's far from comprehensive.

EA organizations frequently ask people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.

I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic.  This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.

Of those 14, 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.

It’s hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive, and I hadn’t worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I’d realized ahead of time that I was going to cut an example.

Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them). Of those, only 80k's mention was negative.

I didn’t keep as close track of changes, but at a minimum, replies led to 2 examples being removed entirely, 2 clarifications, and some additional information that made the post better. So overall I'm very glad I solicited comments, and found the process easier than expected.

My model is that at least one of the following must be true: you're one factor among many that caused the change, the change is not actually that big, or attrition will be much higher than among standard pledge takers.

Which is fine. Accepting the framing around influencing others[1]: you will be one of many factors, but your influence will extend past one person. But I think it's good to acknowledge the complexity. 

  1. ^

    I separately question whether the pledge is the best way to achieve this goal. Why lock in a decision for your entire life instead of, say, taking a lesson in how to talk about your donations in ways that make people feel energized instead of judged?

1. Assigns 100% of their future impact to you, not counting their own contribution and the other sources that caused this change. It's the same kind of simplification as "every blood donation saves 3 lives", when what they mean is "your blood will probably go to three people, each of whom will receive donations from many people."

2. Assumes perfect follow-up. This isn't realistic for a median pledger, and we might expect people who were tipped into pledging by a single act by a single person to have worse follow-up than people who find it on their own. You could argue it isn't actually one action, there were lots of causes and that makes it stickier, but then you run into #1 even harder.

3. Reifies signing the pledge as the moment everything changes, while vibing that this is a small deal you can stop when you feel like it.

4. Assumes every pledger you recruit makes exactly the same amount. Part of me thinks this is a nitpick: you could assume people recruit people who on average earn similar salaries, or think it's just not worth doing the math on the likely income of secondary recruits. Another part thinks it's downstream of the same root cause as the other issues, and any real fix to those will fix this as well.

5. The word "effective" is doing a lot of work. What if they have different tastes than I do? What if they think PlayPumps are a great idea?

6. Treats the counterfactual as 0.


As I write this out I'm realizing my objection isn't just the bad math. It's closer to treating pledge-takers as the unit of measurement, with all pledges, or at least all dollars donated, being interchangeable. People who are recruited/inspired by a single person are likely to have different follow-through and charitable targets than people inspired by many people over time, who are different from people driven to do this themselves.
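To make the bad-math point concrete, here's a toy calculation. All numbers are hypothetical and purely illustrative, not estimates of any real pledger: it just shows how much the naive "you get 100% of their lifetime donations" figure shrinks once you apply partial attribution, imperfect follow-through, and a nonzero counterfactual.

```python
def naive_estimate(annual_donation, years):
    """Naive credit: full attribution, perfect follow-up, zero counterfactual."""
    return annual_donation * years

def adjusted_estimate(annual_donation, years,
                      attribution=0.25,    # you were one of several influences (#1)
                      follow_through=0.5,  # imperfect follow-up / attrition (#2)
                      counterfactual=0.3): # some would have pledged anyway (#6)
    """Same figure after discounting for the issues above (made-up discounts)."""
    return annual_donation * years * attribution * follow_through * (1 - counterfactual)

# A hypothetical pledger donating $5,000/year for 40 years:
naive = naive_estimate(5_000, 40)        # $200,000 credited to the recruiter
adjusted = adjusted_estimate(5_000, 40)  # $17,500 under these assumptions
```

None of the individual discount factors is defensible on its own; the point is that multiplying even moderately sized corrections together moves the answer by an order of magnitude, so the naive number mostly measures the assumptions.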
