Metaculus currently gives a 16% chance to the claim that total deaths before 2021 will be greater than 11.6 million.
I suggest the question you've linked has an artificially low upper bound.
The question has an upper bound of 100 million deaths, not cases. I don't think that is "artificially low".
Maybe you are confusing Hurford's link with this old question, which does have an artificially low upper bound and deals with cases instead of deaths.
All Metaculus questions are about cases, not deaths.
Most of them are, but the one Hurford linked to is explicitly about the number of deaths: "How many people will die as a result of the 2019 novel coronavirus (2019-nCoV) before 2021?".
I am not sure where you found the claim you cite.
If you look at the bottom of the page, it says that the community predicts a ~3% chance of greater than 100 million deaths. Previously, it said 2% for the same number of deaths.
Just to be absolutely clear about what I am referring to, here is a screenshot of the relevant part of the UI.
The opposite trend occurred for SARS (in the same class as 2019-nCoV), which initially had a naive case fatality rate of around 2-5% (deaths divided by confirmed cases) but ended up above 10% once all cases had run their course.
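To illustrate why the naive deaths/cases ratio tends to understate the eventual rate while an outbreak is still running, here is a minimal sketch with made-up illustrative numbers (none of these figures come from actual SARS or 2019-nCoV data): dividing deaths by all confirmed cases dilutes the estimate with open cases whose outcomes are not yet known, whereas restricting to resolved cases gives a figure closer to what the final rate will be.

```python
# Minimal sketch with illustrative (made-up) numbers showing why the naive
# deaths/cases ratio understates the final case fatality rate mid-outbreak.

confirmed = 8000   # hypothetical cumulative confirmed cases
deaths = 300       # hypothetical cumulative deaths so far
recovered = 2200   # hypothetical cumulative recoveries so far

# Naive CFR: deaths divided by all confirmed cases, including the ~5,500
# still-open cases that could yet end in death.
naive_cfr = deaths / confirmed

# Resolved-case CFR: only count cases whose outcome is already known.
resolved_cfr = deaths / (deaths + recovered)

print(f"naive CFR:         {naive_cfr:.1%}")    # ~3.8%
print(f"resolved-case CFR: {resolved_cfr:.1%}") # ~12.0%
```

As the open cases resolve, the naive ratio climbs toward the resolved-case figure, which matches the pattern described above for SARS.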
In a comment from October 2019, Ben Pace stated that there is currently no actionable policy advice the AI safety community could give to the President of the United States. I'm wondering to what extent you agree with this.
If the US President or an influential member of Congress were willing to talk one-on-one with you for a couple of hours on the issue of AI safety policy, what advice would you give them?
The founders of PETRL include Daniel Filan, Buck Shlegeris, Jan Leike, and Mayank Daswani, all of whom were students of Marcus Hutter. Brian Tomasik coined the name.
Of these five people, four are busy doing AI safety-related research. (Filan is a PhD student involved with CHAI, Shlegeris works for MIRI, Leike works for DeepMind, and Tomasik works for FRI. OTOH, Daswani works for a cybersecurity company in Australia.)
So, my guess is that they became too busy to work on PETRL and lost interest. It's kind of a shame, because PETRL was (to my knowledge) the only organization focused on the ethics of AI qua moral patient. However, it seems pretty plausible to me that the AI safety work the PETRL founders are doing now is more effective.
In July 2017, I emailed PETRL asking them if they were still active:
Dear PETRL team,
Is PETRL still active? The last blog post on your site is from December 2015, and there is no indication of ongoing research or academic outreach projects. Have you considered continuing your interview series? I'm sure you could find interesting people to talk to.
The response I received was:
Thanks for reaching out. We're less active than we'd like to be, but have an interview in the works. We hope to have it out in the next few weeks!
That interview was never published.
I'm interested.