utilistrutil

181 karma

Bio

Forum? I'm against 'em!

Comments (38)

Is that lognormal distribution responsible for

> the cost-effectiveness is non-linearly related to speed-up time.

If yes, what's the intuition behind this distribution? If not, why is cost-effectiveness non-linear in speed-up time?
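To make the question concrete, here is a toy model; this is purely my guess at the shape of the argument, not the post's actual setup. If AGI arrival time T is lognormal and a speed-up pays off only when safety work finishes before T, then the benefit of s years of speed-up is a difference of CDF values, which is non-linear in s.

```python
# Toy sketch (assumed model, not the post's): AGI arrives at a lognormally
# distributed time T; a speed-up of s years pays off iff safety research,
# otherwise finished at `deadline`, now finishes before T.
from scipy.stats import lognorm

sigma, median = 1.0, 20.0            # illustrative parameters only
T = lognorm(s=sigma, scale=median)   # lognormal with median `median` years

deadline = 25.0  # hypothetical years until safety research would finish
for s in [1, 2, 4, 8]:
    # extra P(success) = P(T > deadline - s) - P(T > deadline)
    benefit = T.cdf(deadline) - T.cdf(deadline - s)
    print(f"speed-up {s:>2} yr -> extra P(success) = {benefit:.3f}")
```

With these parameters, doubling the speed-up more than doubles the payoff, so cost-effectiveness per year of speed-up is not constant. If the post's model is different, I'd love to see the actual mechanism.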

Something I found especially troubling when applying to many EA jobs is the sense that I am p-hacking my way in. Perhaps I am never the best candidate, but the hiring process is sufficiently noisy that I can expect to be hired somewhere if I apply to enough places. This feels like I am deceiving the organizations that I believe in and misallocating the community's resources. 

There might be some truth in this, but it's easy to take the idea too far. I like to remind myself:

  1. The process is so noisy! A lot of the time the best applicant doesn't get the job, and sometimes that will be me. I ask myself, "Do I really think they understand my abilities based on that cover letter and work test?" (See the toy simulation after this list.)
  2. A job is a high-dimensional object, and it's hard to screen for many of those dimensions. This means that the fact that you were rejected from one job might not be very strong evidence that you are a poor fit for another (even superficially similar) role. It also means that you can be an excellent fit in surprising ways: maybe you know that you're a talented public speaker, but no one ever asks you to prove it in an interview. So conditional on getting a job, I think you shouldn't feel like an imposter but rather eager to contribute your unique talents. My old manager was fond of saying "in a high-dimensional sphere, most of the points are close to the edge," by which he meant that most people have a unique skill profile: maybe I'm not the best at research or ops or comms, but I could still be the best at (research x ops x comms).
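A toy simulation of point 1, with invented numbers: if each org scores applicants with noise comparable to the true spread in ability, the genuinely best applicant loses most of the time.

```python
# Toy simulation (illustrative numbers only): true ability is fixed, but
# each hiring round observes it with independent noise. How often does
# the genuinely best applicant actually get the job?
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_trials = 20, 10_000
noise_sd = 1.0  # assumed comparable to the spread in true ability

best_wins = 0
for _ in range(n_trials):
    ability = rng.normal(0.0, 1.0, n_candidates)
    observed = ability + rng.normal(0.0, noise_sd, n_candidates)
    best_wins += int(ability.argmax() == observed.argmax())

print(f"best candidate hired in {best_wins / n_trials:.0%} of rounds")
```

Under these assumptions the best candidate is hired well under half the time, so applying widely isn't deception; it's the rational response to a noisy filter.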

Thanks for the references! Looking forward to reading :)

Thanks for this! I would still be interested to see estimates of, e.g., mice per acre in forests vs farms, and I'm not sure yet whether this deforestation effect is reversible. I'll follow up if I come across anything like that.

I agree that the quality of life question is thornier.

> Under CP and CKR, Zuckerberg would have given higher credence to AI risk purely on observing Yudkowsky’s higher credence, and/or Yudkowsky would have given higher credence to AI risk purely on observing Zuckerberg’s lower credence, until they agreed.

Should the second "higher" say "lower", instead?

> Decreasing the production of animal feed, and therefore reducing crop area, which tends to: Increase the population of wild animals

Could you share the source for this? I've wondered about the empirics here. Farms do support wild animals (mice, birds, insects, etc.), and there is precedent for farms being paved over when they shut down, which prevents the land from being rewilded.

Suppose someone is an ethical realist: the One True Morality is out there, somewhere, for us to discover. Is it likely that AGI will be able to reason its way to finding it? 

What are the best examples of AI behavior we have seen where a model does something "unreasonable" to further its goals? Hallucinating citations?

What are the arguments for why someone should work in AI safety over wild animal welfare? (Holding constant personal fit, etc.)

  • If someone thinks wild animals live positive lives, is it reasonable to think that AI doom would mean human extinction while leaving ecosystems intact? Or does AI doom threaten animals as well?
  • Does anyone have BOTECs on numbers of wild animals vs numbers of digital minds? (A placeholder sketch of the comparison follows this list.)
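Every number in the sketch below is an invented placeholder, not an estimate; it's only meant to show the structure of the BOTEC I'd like to see.

```python
# Placeholder BOTEC: all quantities are invented assumptions, included
# only to illustrate how the comparison might be set up.
wild_vertebrates = 1e13   # assumed count of wild vertebrates
insects = 1e18            # assumed count of insects
digital_minds = 1e16      # assumed count of future digital minds

w_vertebrate = 0.1        # assumed moral weight relative to a human
w_insect = 1e-4           # assumed moral weight relative to a human
w_digital = 1.0           # assumed, and hugely uncertain

wild_total = wild_vertebrates * w_vertebrate + insects * w_insect
digital_total = digital_minds * w_digital

print(f"weighted wild-animal total:  {wild_total:.2e}")    # 1.01e+14
print(f"weighted digital-mind total: {digital_total:.2e}") # 1.00e+16
```

With these arbitrary inputs either side can dominate, depending entirely on the counts and moral weights, which is why I'd like to see someone's actual numbers.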

> At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia.

We should also think about these on the margin, i.e., the deaths averted might correspond to lives that are shorter than average and consume less meat than average.
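A placeholder sketch of that marginal adjustment (all numbers invented, not Zambian data):

```python
# Toy arithmetic with placeholder numbers: average figures can overstate
# the marginal effect if averted deaths correspond to shorter-than-average
# lives with below-average meat consumption.
avg_meat_kg_per_year = 15.0    # assumed national average consumption
life_expectancy_years = 62.0   # assumed life expectancy at birth

avg_estimate = avg_meat_kg_per_year * life_expectancy_years

meat_fraction = 0.6    # assumed: marginal lives eat 60% of the average
years_fraction = 0.8   # assumed: marginal lives last 80% of the average

marginal_estimate = avg_estimate * meat_fraction * years_fraction

print(f"average-based lifetime meat: {avg_estimate:.0f} kg")     # 930 kg
print(f"marginal-adjusted estimate:  {marginal_estimate:.0f} kg") # 446 kg
```

The two fractions are the whole question; I don't know their true values, only that they're plausibly below 1.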
