Forum? I'm against 'em!
Something I found especially troubling when applying to many EA jobs is the sense that I am p-hacking my way in. Perhaps I am never the best candidate, but the hiring process is sufficiently noisy that I can expect to be hired somewhere if I apply to enough places. This feels like I am deceiving the organizations that I believe in and misallocating the community's resources.
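The "hired somewhere if I apply enough" intuition can be made concrete with a toy simulation (my own illustrative sketch, not from any source: skill numbers, noise level, and candidate counts are all made up). A candidate whose true skill is slightly below every rival's still wins at least one noisy evaluation fairly often once the number of applications grows:

```python
import random

random.seed(0)

def p_hired_somewhere(my_skill, n_rivals, noise, n_orgs, trials=10_000):
    """Estimate the chance a slightly-weaker candidate tops at least one
    of n_orgs independent, noisy hiring processes (rivals' true skill
    fixed at 1.0; all scores get Gaussian evaluation noise)."""
    hits = 0
    for _ in range(trials):
        for _ in range(n_orgs):
            my_score = my_skill + random.gauss(0, noise)
            rival_best = max(1.0 + random.gauss(0, noise) for _ in range(n_rivals))
            if my_score > rival_best:
                hits += 1
                break  # hired somewhere; stop applying this trial
    return hits / trials

# One application rarely succeeds; ten applications often do,
# even though this candidate is never the best on true skill.
print(p_hired_somewhere(my_skill=0.8, n_rivals=5, noise=1.0, n_orgs=1))
print(p_hired_somewhere(my_skill=0.8, n_rivals=5, noise=1.0, n_orgs=10))
```

Whether that outcome is "deceptive" is the ethical question above; the simulation only shows the base mechanism is real when evaluation noise is comparable to skill differences.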
There might be some truth in this, but it's easy to take the idea too far. I like to remind myself:
Decreasing the production of animal feed, and therefore reducing crop area, tends to increase the population of wild animals.
Could you share the source for this? I've wondered about the empirics here. Farms do support wild animals (mice, birds, insects, etc.), and there is precedent for farmland being paved over when farms shut down, which prevents the land from being rewilded.
Suppose someone is an ethical realist: the One True Morality is out there, somewhere, for us to discover. Is it likely that AGI will be able to reason its way to finding it?
What are the best examples of AI behavior we have seen where a model does something "unreasonable" to further its goals? Hallucinating citations?
What are the arguments for why someone should work in AI safety over wild animal welfare? (Holding constant personal fit etc)
Is that lognormal distribution responsible for the non-linearity? If yes, what's the intuition behind this distribution? If not, why is cost-effectiveness non-linear in speed-up time?