Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Scott's analogy is correct: the problem with the criticism is that the failed prediction was on a different topic. It's not reasonable to conclude that a climate scientist is bad at predicting the climate because they're bad at predicting mass shootings. Likewise, if a thousand climate scientists predicting the climate a hundred years from now all died in an earthquake yesterday, it wouldn't be reasonable to conclude their climate models were wrong because they failed to predict something outside the scope of those models.
Hey Andreas! The conference capacity is around 500, we've admitted 404 people so far, and CEA have told me that usually 95% of accepted applicants register and 95% of registered attendees show up to the conference. That gives 500 − (404 × 0.95 × 0.95) ≈ 135 spots left; and since each additional admit likewise only yields 0.95 × 0.95 ≈ 0.9 expected attendees, we'd ideally like to admit another 135 / 0.9 ≈ 150 people.
Our acceptance rate is currently 80%, with another 10% waitlisted and 9% rejected.
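A quick sketch of that arithmetic in Python, using the figures above (the variable names are just illustrative):

```python
# Back-of-the-envelope conference capacity arithmetic (figures from the comment above).
CAPACITY = 500     # approximate venue capacity
ADMITTED = 404     # applicants admitted so far
P_REGISTER = 0.95  # share of accepted applicants who register (per CEA)
P_SHOW_UP = 0.95   # share of registered attendees who show up

yield_rate = P_REGISTER * P_SHOW_UP          # ~0.9025 expected attendees per admit
expected_attendees = ADMITTED * yield_rate   # ~364.6
spots_left = CAPACITY - expected_attendees   # ~135.4

# Each additional admit also only yields ~0.9 attendees,
# so to fill the remaining spots we'd want to admit:
additional_admits = spots_left / yield_rate  # ~150

print(f"Expected attendees so far: {expected_attendees:.0f}")  # 365
print(f"Spots left: {spots_left:.0f}")                         # 135
print(f"Additional admits needed: {additional_admits:.0f}")    # 150
```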
Yeah, legibility = ease of understanding: the ease with which someone unfamiliar with a thing can look at it and comprehend it.
Seeing Like A State is about how governments want to simplify and homogenize things to make them more legible, which in turn facilitates state control over citizens' actions.
The EA usage seems to focus on understanding in general, without a goal of control, and not necessarily via simplification or homogenization.
Right, the focus here is legibility. Control and homogenization can be related, but are different.
This usage might have been popularized among EA-adjacent people by the Slate Star Codex book review, and trickled to the EA Forum from there.
I love the idea of blind speaker selection in principle, but how do you then ensure you're selecting talks from people who are passably good at public speaking? You might get a really interesting outline from someone whose delivery is bad or who doesn't bother to rehearse.
When the organizers of EAGxBerkeley 2022 were selecting lightning talks, they had prospective speakers send in slides and then, if those were good enough, deliver the presentation over a video call to a reviewer responsible for evaluating all of them and selecting the best. The first part of that process can be anonymized, but the second part can't.
Isn't social media approximately not a problem at all, at least on the scale of other EA causes? There are some disputed findings that it may increase anxiety, depression, or suicide in some demographic groups (e.g., Jonathan Haidt claims it is responsible for mental illness in teenage girls, and there is ongoing scientific debate about this). But even if these findings all hold, social media seems very low priority compared to neglected diseases, and nowhere near the scale of problems involving digital minds, if digital minds have moral value equal to people's and you don't discount lives in the far future.