Hey there~ I'm Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
Hi Omega, I'd be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you've critiqued, Apollo is very new and hasn't received the requisite >$10m, but it's easy to imagine them becoming a major TAIS lab over the next few years!
Yeah idk, this just seems like a really weird nitpick, given that you both like Holly's work...? I'm presenting a subjective claim to begin with: "Holly's track record is stellar", as based on my evaluation of what's written in the application plus external context.
If you think this shouldn't be funded, I'd really appreciate the reasoning; but I otherwise don't see anything I would change about my summary.
3b. As a clarification: for a period of time we auto-enrolled people in a subset of groups we considered broadly appealing (Econ/Tech/Science/Politics/World/Culture/Sports), so those group size metrics are not super indicative of user preferences. We've since stopped doing this, but did not unenroll those users.
One theory is that EA places unusual weight on issues in the long-term future, compared to existing actors (companies, governments) who are more focused on eg quarterly profits or election cycles. If you care more about the future, you should be differentially excited about techniques to see what the future will hold.
(A less-flattering theory is that forecasting just seems like a cool mechanism, and people who like EA also like cool mechanisms.)
Thanks for the thoughts (and for your posts on Futarchy from years ago; I found them to be a helpful review of the literature!)
"I'm a bit suspicious of metrics that depend on a vote 5 years from now."
I am too, though perhaps for different reasons. Long-term forecasting has slow feedback loops, and fast feedback loops are important for designing good mechanisms. Getting futarchy to be useful probably involves a lot of trial-and-error, which is hard when it takes you 5 years to assess "was this thing any good?"
Thanks for the writeup, Nathan; I am indeed excited about the possibility of making better grants through forecasting/futarchic mechanisms. So I'll start from the other direction: instead of reaching for futarchy as a hammer, start with the question, what major problems do grantmakers currently face?
The problem that seems most important to solve: "finding projects that turn out to be orders of magnitude more successful/impactful than the rest". Paul Graham describes funding seed-stage startups as "farming black swans", which rings true to me. To look at two example rounds from ACX Grants, which I've been involved in:
So right now, I'm most interested in mechanisms that help us find such founders/projects. Just daydreaming here: is there any kind of prediction mechanism that could produce a report as informative as the ACX Grants 1-year project update? The information value of most prediction markets is the "% chance given by the market", which misses out on the valuable qualitative sketches a retroactive writeup provides.
Other promising things:
Haha, I think you meant this sarcastically but I would actually love to find Republican, or non-college-educated, or otherwise non-"traditional EA" regrantors. (If this describes you or someone you know, encourage them to apply!)
I really appreciated your assessments of the alignment space, and would be open to paying out a retroactive bounty and/or commissioning reports for 2022 and 2023! Happy to chat via DM or email (austin@manifund.org).