I'm the Founder and Co-director of The Unjournal. We organize and fund public, journal-independent feedback, rating, and evaluation of hosted papers and dynamically presented research projects. We focus on work that is highly relevant to global priorities (especially in economics, social science, and impact evaluation). We aim to encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
Previously I was a Senior Economist at Rethink Priorities, and before that an Economics lecturer/professor for 15 years.
I'm working on projects to improve EA fundraising and marketing; see https://bit.ly/eamtt
I also work on projects bridging EA, academia, and open science; see bit.ly/eaprojects
My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.
Podcasts: "Found in the Struce" https://anchor.fm/david-reinstein
and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)
Twitter: @givingtools
I added the research agendas tag -- I think this tag is very helpful for keeping track of these, avoiding overlap, finding relevant research questions, etc.
At The Unjournal (unjournal.org), we're assessing whether this research is high-impact enough to commission for evaluation. We don't prioritize research for evaluation based on its quality, credibility, etc. -- that's the evaluators' role. Instead, we consider its potential for global impact; its current influence on funding, policy, and thinking; whether we see room for fruitful evaluation; and whether it fits our team's wheelhouse and our field scope.
I'm giving a take here to elicit responses from the authors, others in the field, and stakeholders, and also to give readers some insight into the things we consider when prioritizing research for evaluation (and to get possible feedback on this).
Note: we've mainly covered research in economics and impact measurement. But we do have a 'psychology and attitudes' field specialist team, and I'd like us to be evaluating more research in this area (though there are some particular challenges I won't discuss here).
Below are some considerations after a quick skim; your feedback is welcome. As it's a skim, some of my takes below may be naive. And it's a bit of red-teaming.
They “Study how laypeople reason about human extinction.” They mostly use https://www.prolific.com/ samples and, with various frames, ask people to rank triads like:
(A) There is no catastrophe.
(B) There is a catastrophe that immediately kills 80% of the world’s population.
(C) There is a catastrophe that immediately kills 100% of the world’s population.
And then ask:
“In terms of badness, which difference is greater: the difference between A and B, or the difference between B and C?”
Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery.
I.e., people tend to state that the difference between B and A is much bigger than the difference between B and C.
I would interpret this as evidence that laypeople's quick takes/intuitions/gut attitudes are neither total-population utilitarian nor particularly extinction-averse.
Looking across frames, the 'gradient' is higher (relatively more people find C-B bigger than B-A) for animal extinction vs. animal crisis, and for sterilization of everyone vs. nearly everyone.
According to the authors (and this seems reasonable to me), this is because people
focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences.
(d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.
This is the 'Utopia Condition':
So, they find extinction uniquely bad in cases where the comparison is framed in particular ways to highlight how uniquely bad it is? But isn't this a bit like leading the witness or driving agreeability bias? Are people in this condition really given alternate reasonable ways of considering this? It seems a bit forced, although I guess that it tells you that people are at least not extremely resistant to this argument.
This study was funded by BERI, CEA, and others. Impact-oriented funders thought it was worth investing in. This also suggests it might have the potential to influence their future thinking and choices.
To understand 'whether it could be impactful' for myself, I would want a better sense of the goal, or of how the authors and people funding this study intended it to be used, or thought it could reasonably be used.
Is the goal here to:
1. Survey people’s attitudes to get at something normative; i.e., to understand the will of the people to be able to fulfill it?... Or is it
2. To understand what could sway people to support x-risk reduction initiatives and legislation to avoid extinction-level risks?
I’m not fully convinced that any particular empirical result here would have led to a meaningfully different conclusion, implication, or recommendation. To be clear, I'm not doubting that this is the case; I just don't see it obviously, and I'm very willing and eager to be convinced.
Are the methods... hypothetical ~quick choices on a Prolific sample, the ranking and relative-difference elicitation, between-subject comparisons (I think)... 'reasonable' enough to tell us something useful?
Are the Prolific samples (IMO the best we can do for things like these, but probably not representative of the US or UK “general public”) representative enough of the groups we care about for either goal 1 or 2 above?
I'm trying to understand... what does "exempt" mean in the phrase "exempt, salaried employee"?
Do you mean that your salary is part of the expenses of a tax-exempt nonprofit, so people who donate to PauseAI (partly to pay your salary) can deduct this from their taxes if they itemize their returns? I'm also trying to understand the connection between this and the idea of claiming pro-bono hours. Thanks!
I just wanted to make sure The Unjournal was eligible @Toby Tremlett🔹 . We made this post and tagged it but you state, "only projects that post or answer + message me are eligible for next week's Donation Election". I hadn't seen that earlier, so I'm messaging you now. (Maybe other orgs also overlooked that?)
The Unjournal (unjournal.org) commissioned the evaluation of one of the biosecurity-relevant papers you mention (Barberio et al., 2023). See our evaluation package here, with links to each evaluation within.
The evaluators generally agree about the importance and promise of this work, but also express substantial doubts about its credibility and usefulness. (They also make specific suggestions for improvements and extensions, as well as requests for clarification.) The evaluation manager echoes this, noting that the “limitations of the paper as it stands make it far less valuable than it could be.”
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a ‘how much does it cost to save a life’ quiz and calculator?
This could be adjustable/customizable (in my country, around the world, for an infant/child/adult, counting ‘value-added life years’, etc.) … with the aim of making it go viral (or at least bacterial), as with the ‘how rich am I’ calculator.
The case
GiveWell has a page with a lot of technical detail, but it’s not compelling or interactive in the way I suggest above, and I doubt they market it heavily.
GWWC probably doesn't have the design/engineering time for this (not to mention refining it for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork, I think they might be very happy to host it.
It could also mesh well with academic-linked research, so I may have some ‘Meta academic support ads’ funds that could go toward this.
Tags/backlinks (~testing out this new feature)
@GiveWell @Giving What We Can
Projects I'd like to see
EA Projects I'd Like to See
Idea: Curated database of quick-win tangible, attributable projects