
Charlie_Guthmann

634 karma · Joined

Bio

Talk to me about cost-benefit analysis!

Comments (148)

I don't know if they're doing the ideal thing here, but they are doing way better than I imagined from your comment. 

Yep, after walking through it in my head plus re-reading the post, it doesn't seem egregious to me.

I think you might have replied on the wrong subthread, but a few things:

This is the post I was referring to. At the time of extension, they claim they had ~3k applicants. They also imply that they had way fewer applicants (in quantity or quality) for the fish welfare and tobacco taxation projects, but I'm not sure exactly how to interpret their claim.
 

Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of the new participants?

Using some pretty crude math and assuming both applicant pools are the same quality, each additional applicant has a ~0.7% chance of being one of the 20 best applicants (I think they take 10 or 20), so it takes roughly 150 extra applicants to get one candidate replaced. If they had to internalize the costs to the candidates, and let's be conservative and say 20 bucks per candidate, then that would be about $3k per extra candidate replaced.

And this doesn't include the fact that the returns consistently diminish. They also have to spend more time reviewing candidates, and even if a candidate is actually better, that doesn't guarantee they will correctly pick them. You can probably add another couple thousand for these considerations, so maybe we go with ~$5k?

Then you get into issues of fit vs. quality: grabbing better-quality candidates might help CE's counterfactual value but doesn't help the EA movement much, since you're pulling from the same talent pool. And lastly, it's sort of unfair to the people who applied on time, but that's hard to quantify.

And I think 20 bucks per candidate is really, really conservative. I value my time closer to $50 an hour than $2, and I'd bet most people applying would say something above $15.

So my very general and crude estimate is that they are implicitly saying they value replacing a candidate at $2k-$100k, most likely somewhere between $5k and $50k. I wonder what they would have said if, at the time they extended, we had asked them how much they would pay to have one candidate replaced.
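Here's the back-of-envelope version of that, where every input is just my own guess rather than anything CE has published:

```python
# Crude estimate of the value CE implicitly places on replacing one admitted candidate.
# Every number here is my own guess, not a figure from CE.

existing_applicants = 3000      # ~3k applicants at the time the deadline was extended
slots = 20                      # I think they take 10 or 20

# If late applicants are drawn from the same pool, a marginal applicant has
# roughly a slots / existing_applicants chance of displacing someone.
p_displace = slots / existing_applicants            # ~0.7%
applicants_per_replacement = 1 / p_displace         # ~150 extra applicants per replacement

for cost_per_applicant in (20, 50):                 # $ of applicant time: conservative vs. my guess
    implied_value = applicants_per_replacement * cost_per_applicant
    print(f"${cost_per_applicant}/applicant -> ~${implied_value:,.0f} per candidate replaced")

# Add a few more $k for diminishing returns, extra review time, and imperfect selection,
# and the implied figure lands somewhere around $5k-$50k.
```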

If anyone thinks I missed super obvious considerations or made a mistake, lmk.

Hi Peter, thanks for the response - I am/was disappointed in myself also.

I assumed RP had thought about this, and I hear what you are saying about the trade-off. I don't have kids or anything like that, and I can't really relate to struggling to sit down for a few hours straight, but I totally believe this is an issue for some applicants and I respect that.

What I am more familiar with is doing school during COVID. My experience left me with a strong impression that even relatively high-integrity people will cheat in this version of the prisoner's dilemma. Moreover, it will cause them tons of stress and guilt, but they are way less likely to bring it up than someone who has trouble taking the test in one sitting, because no one wants to out themselves as a cheater, or even as someone who thought about cheating.

I will say that in school there is something additionally frustrating or tantalizing about seeing your math tests, which usually have a 60% average, come back in the 90%s and getting that confirmation that everyone in your class is cheating. But given that the people applying are thoughtful and smart, they would probably assign this a high probability anyway.

If I had to bet, I would guess a decent chunk (>20%) of the current employees at RP who took similar tests did go over the time limit, but of course this is pure speculation on my part. I do think a significant portion of people will cheat in this situation (10-50%), and given a random split between the cheaters and non-cheaters, the people who cheat are going to have better essays, so you are more likely to select them.

(To be clear, I'm not saying that even if the above is true you should definitely time the tests; I could still understand it not being worth it.)

Two (barely) related thoughts that I’ve wanted to bring up. Sorry if it’s super off topic.

The Rethink Priorities application for a role I applied to two years ago told applicants it was a timed application and not to take over two hours. However, there was no actual verification of this; it was simply a Google form. In the first round I “cheated” and took about 4 hours. I made it to the second round. I felt really guilty about this, so I made sure not to go over on the second round. I didn't finish all the questions and did not get to the next round. I was left with the unsavory feeling that they were incentivizing dishonest behavior, and it could have easily been solved by doing something similar to tech companies, where a timer starts when you open the task. I haven't applied for other stuff since, so maybe they've fixed this.

Charity Entrepreneurship made a post a couple months back extending their deadline for the incubator because they thought it was worth it to get good candidates. I decided to apply and made it a few rounds in. I would say I spent 10-ish hours doing the tasks. I might be misremembering, but at the time of the extension I'm pretty sure they already had 2,000-4,000 applicants. Considering the time it took me, assuming other applicants were similar, and the number of applicants they already had, I'm not sure extending the deadline was actually positive EV.

Neither of these things is really that big of a deal, but I thought I'd share.

Curious how it would do on chess 960.

Would be interesting to compare my likes on the EA Forum with other people's. I feel like what I up/downvote is way more honest than what I comment. If I could compare with someone the posts/comments where we had opposite reactions, i.e. they upvoted and I downvoted, I feel like it could start some honest and interesting discussions.

Fantastic post/series. The vocab words have been especially useful to me. A few mostly disjunctive thoughts, even though I overall agree.

  • I wonder what you think would happen if an economically valuable island popped up in the middle of the ocean today? 
    • My guess is it would become international territory in some way, and no country would let or want another country to claim the land.
    • I don't think this is super analogous, but I think there is some crossover.
  • The generalization of the first bullet point is that under the right political circumstances, HV (or otherwise) governments can prevent unlicensed outward colonization from within their society without themselves colonizing. 
    • Some obvious objections here, e.g. as soon as the government can't lock things down for a period of time, it could be impossible to stop the outward expansion.
      • But this honestly depends a lot on the technology levels of the relevant players
  • Governments could also theoretically do this to other civilizations. They could do a military version of von Neumann probes, locking down areas and stopping evolution from occurring while not actually colonizing the land in any sentience-adding sense.
  • I'm concerned that it's easy to handwave a lot of stuff with claims of AGI being able to do XYZ. While I often buy these claims myself, it would be nice to condition this question on like 5-10 different levels of maximum technology, or on the technology differential between a ruling state and everyone else. I think that's where a lot of the disconnect comes from between the current-day island scenario and your post.
    • At the very least, it would be nice to have a section where you say roughly what your estimate for the tech level is.
  • The Expanse is a show about a similar concept. I don't think it's necessarily a great prediction of what life will be like, but it's cool to see a fleshed-out version of the tension between the expanders and non-expanders.
    • It being fleshed out might give you a slightly different perspective / help you see that there are perhaps a few more details or considerations needed.
  • If PU society isn't asymmetric on the action-omission axis, then they should still have some level of concern about just expanding like crazy, since they need to consider the fact that they are locking in a worse conversion of physical resources to positive utility still. 
  • I don't fully agree with Will's claim about deleting the lightcone. It depends on the ratio at which the suffering-focused agents value pleasure relative to pain and where they fall on the action-omission axis. Nonetheless, if spreading good lives is nearly as easy as spreading at all, then spreading while destroying everything as you go is probably in between the two in difficulty, or, if something like false vacuum decay is possible, even easier than spreading.

Yep, I was about to comment on the same thing. Would like to see what OP has to say.

3. If humans become grabby, their values are unlikely to differ significantly from the values of the civilization that would've controlled it instead.

I think this is phrased incorrectly. I think the correct phrasing is:

3.  If humans become grabby, their values (in expectation) are ~ the mean values of a grabby civilization. 

Not sure if it's what you meant, but let me explain the difference with an example. Let's say there are three societies:

[humans | Zerg | Protoss]

For simplicity, let's say the winner takes all of the lightcone.

  • EV[lightcone | Zerg win] = 1
  • EV[lightcone | humans win] = 2
  • EV[lightcone | Protoss win] = 3

Then if humans become grabby, their values are guaranteed to differ from whoever else would have won, yet for utilitarian purposes we don't care, because the expected value is the same given we don't know whether the Zerg or Protoss would win.
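To spell out the arithmetic, assuming for illustration that the Zerg and Protoss are equally likely to win if humans don't:

$$E[\text{lightcone} \mid \text{humans don't win}] = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 3 = 2 = E[\text{lightcone} \mid \text{humans win}]$$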

I think you might have meant this? But it's somewhat important to distinguish because my updated (3) is a weaker claim than the original one, yet still enough to hold the argument together. 

Moreover, even in the face of strong selection pressure, systems don't seem to converge on similar equilibria in general.

I like this thought, but to push back a bit - nearly every species we know of is incredibly selfish or at best only cares about its very close relatives. Sure, crabs are way different than lions, but OP is describing a much lower-dimensional property, which seems more likely to generalize regardless of context.

If you asked me to predict what (animal) species live in the rainforest just by showing me a picture of the rainforest, I wouldn't have a chance. If you asked me whether the species in the rainforest would be selfish or not, that would be significantly easier. For one, it's easier to predict one dimension than all the dimensions, and second, we should expect some dimensions to be much less elastic to the set of possible inputs.
