I build web apps (e.g. viewpoints.xyz) and make forecasts. I currently have spare capacity.
Talking to those in forecasting to improve my forecasting question generation tool.
Writing forecasting questions on EA topics.
Meeting EAs I become lifelong friends with.
Connecting them to other EAs.
Writing forecasting questions on Metaculus.
Talking to them about forecasting.
I note that in some sense I have lost trust that the EA community gives me a clear prioritisation of where to donate.
Some clearer statements:
Where can I see the debate week diagram if I want to look back at it?
Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don't have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone:
Given how rarely we discuss this issue, it really ought to be worth someone's time to write up these supposed strong arguments. To the extent that no one has, even after a well-publicised week of discussion, I will believe it more likely that they don't exist.
If you publish a bad piece and share it with millions of people, I don't really feel obliged to talk to you or listen to other things you write until you correct the inaccurate piece. I don't think any other community would, and I think it's a bad use of our time to extend this absurd level of charity.
People are free to tell me the Wired article wasn't inaccurate or lazy, but scanning it, it looks that way.
Here are quotes I could find in 15 minutes from your first article that leave the reader with an inaccurate impression. I have not read this new article.
I could go on.
Leif, we do not owe you our time. You had the same social credit that all critics have and a large platform. You could have come here and argued your case. I am sure people would have engaged. But for me, you have burned that credit by sharing inaccuracies with millions of people. Your piece started a news cycle about the harms of bednets based on inaccurate information. That has real harms. So I don't care to read your piece.
I don't know whether I am the hero in my own story - I have done many things I regret - but I do know a thing or two about dealing with those I disagree with. I would not publish a piece with this many errors, and if I did, I wouldn't expect people to engage with me again. I do not understand why you think we would.
I hope you are well, genuinely.
I think there is something here about the kinds of people who are steady hands not necessarily having great leverage, either in terms of pay or status. But realistically such a person may be very costly to replace or may fill a very valuable role.
In that way, a sensible organisation would increase their pay and (to the extent possible) status by reflecting not on the change in their output from year to year, but on how difficult they are to replace: weeks of hiring, months of training, months of management time, and perhaps years before the function works as well as it previously did.
It is tricky to see how such negotiations can take place properly, but it seems likely to me that the sort of person who is a steady hand might not be agitating for this, and that in turn means those who would stay if paid more or appreciated more don't see that option as available to them.
I sort of think this is a reason not to have EA-endorsed politicians unless someone has really done the due diligence. This is a pretty high-trust community, and people expect something someone says confidently to be robustly tested, but political recommendations (and some charity ones, to be fair) seem much less well researched than general discussions on policy etc.
Interesting take. I don't like it.
Perhaps because I like saying overrated/underrated.
But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".
Over/underrated requires about 3 mental queries: "Is it better or worse than my ingroup thinks?" "Is it better or worse than the population at large thinks?" "Am I gonna have to be clear about what I mean?"
Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4, will people think I'm crazy?"...
Like in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.
Also would you prefer people used over/underrated less or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than some epistemic thing. And if that's the case, I think we should have a different discussion.
I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try and broaden my answer from over/underrated - just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing, often used because giving exact sizes for status is hard. Perhaps we should have numbers and words for it, but we don't.