Just finished Semple (2021), 'Good Enough for Government Work? Life-Evaluation and Public Policy,' which I found fascinating for its synthesis of philosophy + economics + public policy, and potential relevance to EA (in particular, improving institutional decisionmaking).
The premise of the paper is essentially, "Normative policy analysis—ascertaining what government should do—is not just a philosophical exercise. It is (or should be) an essential task for people working in government, as well as people outside government who care about what government does. Life-evaluationist welfare-consequentialism is a practical and workable approach."
Some things that are potentially EA-relevant:
It gives a brief policy analysis using a prioritarian welfare-consequentialist lens.
It mentions unborn people, foreign residents, and animals as worthy of government + moral concern under welfare-consequentialism.
It avoids having to define welfare (and, implicitly, addresses a limitation of QALYs: the difficulty of comparing one's current and alternate lives).
For my 'Psychology of Negotiation' (PSYC 25700) class, I'm going to write up one/two-line summaries for research articles that feel plausibly relevant to policy, community-building, or really any interpersonal-heavy work. These are primarily informed by behavioral science.
Hopefully, this will also allow me to better recall these studies for my final.
Epley et al. (2006), 'When perspective taking increases taking'
The common tendency towards naive cynicism suggests that, in competitive contexts, perspective-taking may cause people to behave MORE egoistically (reactive egoism) despite also reducing self-serving judgments of fairness (i.e., they understand they are 'objectively' not behaving fairly). This is because people assume that others will also behave egoistically.
Galinsky & Mussweiler (2001), 'First offers as anchors'
The person who makes the first offer gains a robust distributive advantage, but the other bargainer can counteract this anchoring by focusing on their own target price or on the first-mover's reservation (minimum) price.
I'm (still!!!) thinking about my BA thesis research question and I think my main uncertainty/decision point is what specific policy debate to investigate. I've narrowed it down to two so far - hopefully I don't expand - and really welcome thoughts.
Context: I am examining the relationship between narratives deployed by experts on Twitter and the Biden Administration's policymaking process re: COVID-19 vaccine diplomacy. Specifically, I want to examine a debate on an issue wherein EA-aligned experts have generally coalesced around one stance.
Motivating questions/insights:
However, the U.S. public's trust in experts has varied; it may have peaked last year and may now be declining.
Vaccine diplomacy (along with all health policy) is not solely an issue of 'following the science'.
This is not to say that data or rationality is not important. In fact, I would be extremely interested in investigating whether the combination of scientific evidence + thematic framing is more effective than either alone.
However, that would require an experimental study, which is not something I am interested in running.
This suggests I might want to investigate the presence of scientific vs. thematic elements in expert narratives. Not sure, though... it's not what I'm immediately drawn to.
Evidence/science alone is insufficient. Experts need to be able to tell stories/persuade/make a moral or emotional appeal. (Extrapolated from the claim that narratives can be influential in policymaking)
At the very least, experts should make clear that no decision is value-neutral and state the specific values they are prioritizing in their recommendations.
Now that I think about it, the fact that I'm 'not sure' about this re: COVID-19 might mean this would make for a good RQ? Or maybe I'm just not thinking of the relevant literature right now.
The two debates below, including general thoughts:
The COVID-19 TRIPS Waiver (waiving IP)
What most excites me about this: The Biden Admin did a strong 'about-face' on this and the discourse around this was very rich (involved many actors with strong opinions, and entwines with debates around vaccine sharing etc.).
Main hesitation: I don't know how to think about experts as an actor here. Should they be considered a coalition, per the Advocacy Coalition Framework? Or should I look at a specific set of aligned expert organizations/individuals? Or should I look at all experts on Twitter?
But ACF emphasizes long-term policymaking and shared beliefs - and it seems like there was no singular expert consensus on whether the TRIPS waiver would be a net good. Now that I think about it, this might be due to a lack of transparency over what is being [morally] prioritized...
But why focus on aligned orgs/individuals? How can I justify that? How generalizable is that even?
But if I include all experts, including experts who might have other avenues to policy influence (e.g. big think tanks or former officials), then why not also examine non-expert narratives?
Specifically, the rationale behind examining Twitter is that it provides a highly accessible advocacy platform to people who do not otherwise have much visibility/leverage.
Also, looking at a wide range of Tweets helps get a sense of the general narrative.
Vaccinating children domestically vs. donating doses to the global poor
What most excites me about this: There is an explicit non-epistemic debate here (prioritizing children domestically vs the global poor), and that is what I care the most about. There still remains a scientific/epistemic component, too: "Are children safe without vaccines?"
Additionally, there is an added controversial non-epistemic element: anti-maskers.
Main hesitation: But the Biden administration hasn't really 'made a policy' on this. So what policy process would I be examining?
This also straddles the line between domestic and international, in that the debate is primarily about picking between the two (in contrast to the first debate), which could be tricky
*edited for clarity - was in a rush when I posted!
These both seem like great options! Of the two, I think the first has more to play with as there is a pretty clear delineation between the epistemic vs. moral elements of the second, whereas I think debates about the first have those all jumbled up and it's thus more interesting/valuable to untangle them. I don't totally understand your hesitation so I'm afraid I can't offer much insight there, but with respect to long-term policymaking/shared beliefs, it does seem like the fault lines mapped onto fairly clear pro-free-market vs. pro-redistributive ideologies that drew the types of advocates one would have predicted given that divide.
*edit 3: After reading more on Epistemic Communities, I think I'm back where I started.
*edit 4: I am questioning, now, whether I need a framework of how experts influence policymaking at all... Maybe I should conceptualize my actors more broadly but narrow the topic to, say, the use of evidence in narratives?
I really appreciate your response, Ian! I think it makes sense that the more convoluted status of the first debate would make it a more valuable question to investigate.
My hesitation was not worded accessibly or clearly - it was too grounded in the specific frameworks I'm struggling to apply - so let me reword: it doesn't seem accurate to claim that there was one expert consensus (i.e. primarily pro-/anti-waiver). Given that, I am not sure a) how to break down the category of 'expert' - although you provide one suggestion, which is helpful - and b) how strongly I can justify focusing on experts, given that there isn't a clear divide between "what experts think" and "what non-experts think."
Non-TL;DR:
My main concern with investigating the debate around the TRIPS waiver is that there doesn't seem to be a clear expert consensus. I'm not even sure there's a clear EA-aligned consensus, although the few EAs I saw speak on this (e.g. Rob Wiblin) seemed to favor donating over waiving IP (which seems like a common argument from Europe). Given that, I question
1. the validity of investigating 'expert narratives' because 'experts' didn't really agree there
However, I don't know whether it would be valid or invalid per the theories I want to draw from (e.g. the Advocacy Coalition Framework (ACF) or Epistemic Communities), so checking that would be one of my next steps.
This particular description worries me: "Advocacy coalitions are all those defined by political actors who share certain ideas and who coordinate among themselves in a functional way to suggest specific issues to the government and influence in the decision-making process."
This would be subverted by your suggestion, though, as I note in point 3!
2. the validity of investigating expert narratives specifically instead of the general public—if experts didn't coalesce around a specific stance, what's my justification for investigating them specifically instead of getting a sense of the public generally? ACF explicitly notes that "common belief systems bind members of a coalition together." Given that the pro-/anti-waiver coalitions are defined by common beliefs held by both experts and non-experts (e.g. pro-free-market), how can I justify exclusively focusing on experts?
This is probably not a valid concern, now that I think about it. After all, my thesis hinges upon the idea that experts help inform policymakers + policymaking, so it makes sense to focus on their narratives rather than looking at the public as a whole...
However, it seems like focusing exclusively on two expert groups is valid at least within the Epistemic Community framework, so perhaps this would work if it turns out that certain kinds of experts advocated for the same stance.
3. whom I should focus on—without being able to lump all experts together, how should I break them down?
Perhaps I could subdivide experts into coalitions - e.g. experts for the waiver and experts against the waiver? (This is akin to the fault lines you mention)
I still feel kind of iffy about investigating experts specifically here, instead of the general public, particularly because I could use the same coalitional divide (pro-/anti-waiver)
Or should I focus on EA-aligned experts specifically?
But I don't know how to justify this... It doesn't seem like the smartest research practice
Suggestion: use an expert lens, but make the division you're looking at [experts connected to/with influence in the Biden administration] vs. ["outside" experts].
Rationale: The Biden administration thinks of and presents itself to the public as technocratic and guided by science, but as with any administration politics and access play a role as well. As you noted, the Biden administration did a clear about-face on this despite a lack of a clear consensus from experts in the public sphere. So why did that happen, and what role did expert influence play in driving it? Put another way, which experts was the administration listening to, and what does that suggest for how experts might be able to make change during the Biden administration's tenure?
Hmm! Yes, that's interesting - and aligns with the fact that many different policy influencers weighed in, ranging from former to current policymakers. Thank you very much for this!
I think something I'm worried about is how I can conceptualize [inside experts] vs. [outside experts] ... It seems like a potentially arbitrary divide and/or a very complex undertaking given the lack of transparency into the policy process (i.e. who actually wields influence and access to Biden and Katherine Tai, on this specific issue?).
It also complicates the investigation by adding in the element of access as a factor, rather than purely thinking about narrative strategies - and I very much want to focus on narratives. On one hand, I think that could be interesting - e.g. looking at narrative strategies across levels of access. On the other, I'm uncertain that looking at narrative strategies would add much compared to just analyzing the stances of actors within the sphere of influence.
What do you think of this alternate RQ: "How did pro/anti-waiver coalitions use evidence in their narratives?"
Moves away from the focus on experts but still gets to the scientific/epistemic component.
(I'm also wondering whether I am being overly concerned with theoretically justifying things!)
I think I would agree with this. It seems like you're trying to demonstrate your knowledge of a particular framework or set of frameworks through this exercise and you're letting that constrain your choices a lot. Maybe that will be a good choice if you're definitely going into academia as a political scientist after this, but otherwise, I would structure the approach around how research happens most naturally in the real world, which is that you have a research question that would have concrete practical value if it were answered, and then you set out to answer it using whatever combination of theories and methods makes sense for the question.
Thanks! I'll take a break from thinking about the theory - ironically, I am fairly confident I don't want to go into academia.
Again, appreciate your thoughts on this. Hope I'll hear from you again if I post another Shortform about my thesis!
A big concern that's cropped up during my current work trial is whether I'm actually just not agentic/strategic/have-good-judgment-enough to take on strategy roles at EA orgs.
I think part of this is driven by low self-confidence, but part of this is the very plausible intuition that not everyone can be in the heavy tail and maybe I am not in the heavy tail for strategy roles. And this feels bad, I guess, because part of me thinks "strategy roles" are the highest-status roles within the meta-EA space, and status is nice.
But not nice enough to sacrifice impact! It seems possible, though, that I actually could be good at strategy and I'm bottlenecked by insecurity (which leads me to defer to others & constantly seek help rather than being agentic).
My current solution is to flag this for my future manager and ensure we are trialling both strategy and operations work. This feels like a supportive way for me to see where my comparative advantage lies - if I hear, "man, you suck at strategy, but your ops work is pretty good!", then I would consider this a win!
My brain now wants to think about the scenario where I'm actually just bad at both. But then I'll have to take the advice I give my members: "Well, then you got really valuable information - you just aren't a great fit for these specific roles, so now you get to explore options which might be great fits instead!"
One approach I found really helpful in transitioning from asking a manager to making my own strategic decisions was going to my manager with a recommendation and asking for feedback on it (or, failing that, a clear description of the problem and any potential next steps I can think of, like ways to gain more information).
This gave me the confidence to learn how my organisation worked and know I had my manager's support for my solution, but pushed me to develop my own judgment.
Thanks, this is a good tip! Unfortunately, the current options I'm considering seem more hands-off than this (i.e., the expectation is that I would start with little oversight from a manager), but this might be a hidden upside because I'm forced to just try things. : )
Thing I should think about in the future: is this "enough" question even useful? What would it even mean to be "agentic/strategic enough?"
edit: Oh, this might be insidiously following from my thought around certain roles being especially important/impactful/high-status. It would make sense to consider myself as falling short if the goal were to be in the heavy tail for a particular role.
But this probably isn't the goal. Probably the goal is to figure out my comparative advantage, because this is where my personal impact (how much good I, as an individual, can take responsibility for) and world impact (how much good this creates for the world) converge. In this case, there's no such thing as "strategic enough" - if my comparative advantage doesn't lie in strategy, that doesn't mean I'm not "strategic enough", because I was never 'meant to' be in strategy anyway!
So the question isn't, "Am I strategic enough?" But rather, "Am I more suited for strategy-heavy roles or strategy-light roles?"
Optionality cost is a useful reminder that option value consists not only of minimising opportunity cost but also of increasing your options (which might require committing to an opportunity).
This line in particular feels very EA:
I know that carbon offsets (and effective climate giving) are a fairly common topic of discussion, but I've yet to see any thoughts on the newly-launched Climate Vault. It seems like a novel take on offsetting: your funds go to purchasing cap-and-trade permits which will then be sold to fund carbon dioxide removal (CDR).
I like it because a) it uses (and potentially improves upon) a flawed government program in a beneficial way, and b) it lets me fund both the limitation of carbon emissions and their removal, unlike other offsets which only do the latter.
However, I recognize that I have a blind spot because I respect Michael Greenstone. Some doubts:
The CDR funding will be allocated based on an RFP rather than directly funding existing solutions (e.g. Climeworks), which lowers my confidence in their ability to definitely find + fund CDR equivalent to the value of the permits they hold. However, this is a pretty minor concern, in part because they shouldn't sell the permits until they find a solution they are confident in; although even then, I am concerned they might pick something without a great track record.
Is it really most efficient to buy and later sell these permits, rather than simply investing the funds and later funding the most efficient CDR?
This is super new so there's basically no data or public vetting, afaik.
If anyone has thoughts, would appreciate them!
"How do you convert a permit into CO2 removal using CDR technologies without selling them back into the compliance market – in effect negating the offset?
We will sell the permits back into the market, but only when we’re ready to use the proceeds to fund carbon removal projects equivalent to the number of permits we’re selling, or more. So, in effect, the permits going back onto the market are negated by the tons of carbon we are paying to remove."
Once credible CDR is cheap enough that this works, the value of additional CDR tech support will be pretty low, because the learning curve will already have been brought down. (Credible CDR now costs > USD 100/t, with most approaches over USD 600, cf. Stripe Climate, while current carbon prices are around USD 20.)
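A rough sketch of this arithmetic, with purely illustrative numbers (the permit and CDR prices below are assumptions for the sake of the example, not Climate Vault's actual figures):

```python
# Back-of-the-envelope: does selling vaulted permits fund equivalent removal?
permit_price_usd = 20    # assumed resale price per allowance (~1 tCO2)
cdr_cost_usd = 600       # assumed cost per tonne of credible CDR today
permits_vaulted = 1000   # tonnes of emissions the vaulted permits cover

proceeds_usd = permits_vaulted * permit_price_usd  # 20,000 USD at resale
tonnes_removed = proceeds_usd / cdr_cost_usd       # ~33 tCO2 of CDR funded

print(f"Permits re-released: {permits_vaulted} t; CDR funded: {tonnes_removed:.0f} t")
# Removal only matches the re-released permits once cdr_cost_usd <= permit_price_usd.
```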
Am I missing something?
It seems like a good way to buy allowances, which, when the cap is fixed (also addressed in the FAQ, though not 100% convincingly), is better than buying most offsets; but it seems unlikely to work in the way intended.
Hmm okay! Thanks so much for this. So I suppose the main uncertainties for me are
Really appreciate you helping clarify this for me!
Realizing that what drove me to EA was largely wanting to "feel like I could help people" and not to "help the most beings." This leads me to, for example, really be into individually helping as many people as I can to flourish (at the expense of selecting for people who might be able to make the most impact)*.
This feels like a useful specification of my "A" side and how/why the "E" side is something I should work on!
*A more useful reframing of this is to put it into impact terms. Do I think the best way to make impact is to
(1) find the right contexts/problems wherein a given person can have an outsized impact
or (2) focus on specific people that I think have the highest chance of having an outsized impact?
I wonder if anyone has read these books here? https://www.theatlantic.com/books/archive/2022/04/social-change-books-lynn-hunt/629587/?utm_source=Sailthru&utm_medium=email&utm_campaign=Future%20Perfect%204-19-22&utm_term=Future%20Perfect
In particular, 'Inventing Human Rights: A History' seems relevant to Moral Circle Expansion.
edit: I should've read the list fully! I've actually read The Honor Code. I didn't find it that impressive but I guess the general idea makes sense. If we can make effective altruism something to be proud of - something to aspire to for people outside the movement, including people who currently denigrate it as being too elitist/out-of-touch/etc. - then we stand a chance at moral revolution.
Two thoughts inspired by UChicago EA's discussion post-Ben-Todd's-talk at EAG:
1. I am aware that there have been some efforts targeted towards high schoolers (I believe Stanford EA ran a workshop/program). Has there been any HS outreach targeting debaters specifically, e.g. a large-scale debate tournament? I'm thinking of, say, introducing EA-relevant debate topics to a big tournament or group.
On #1: There has been a large-scale EA-themed debate tournament targeting debaters (mainly undergraduates, I believe) organized by Dan Lahav from EA Israel, talked about here!
Very useful, thank you! Apparently they did a trial with high schoolers, so I've reached out : )
At work so have no mental space to read this carefully right now, but wonder if anyone has thoughts - specifically about whether there's any EA-relevant content: MIT Predicted in 1972 That Society Will Collapse This Century. New Research Shows We’re on Schedule. (vice.com)
These models predicted growth followed by collapse. The first part has been proven correct, but there is little evidence for the second. Acting like past observations of growth are evidence of future collapse seems like an unusual example of Goodman's New Riddle of Induction in the wild.
Thank you, so helpful!
To clarify - does "little evidence" mean that you consider observations of current conditions aligning with model predictions (e.g. "Previous studies that attempted to do this found that the model’s worst-case scenarios accurately reflected real-world developments") to be weak evidence?
Would it be useful to compile EA-relevant press?
Inspired by me seeing this Vice article on wet-bulb conditions (a seemingly unlikely route for climate change to become an existential risk): Scientists Studying Temperature at Which Humans Spontaneously Die With Increasing Urgency
If so, what/how? I don't think full-time monitoring makes sense (first rule of comms: do everything with a full comms strategy in mind!) but I wonder if a list or Airtable would still be useful for organizations to pull from or something...
I think David Nash does something similar with his EA Updates (here is the most recent one). While most of the links are focused on EA Forum and posts by EA/EA-adj orgs, he features occasional links from other venues.
Good flag, thanks!
My hope is that people who see EA-relevant press will post it here (even in Shortform!).
I also track a lot of blogs for the EA Newsletter and scan Twitter for any mention of effective altruism, which means I catch a lot of the most directly relevant media. But EA's domain is the entire world, so no one person will catch everything important. That's what the Forum is for :-)
I'm not sure whether you're picturing a project specific to stories about EA or one that covers many other topics. In the case of the former, me and others at CEA know about nearly everything (though we don't have it in a database; no one ever asks). In the case of the latter, the "database" in question would probably just be... Google? I'm having trouble picturing the scenario where an org needs to pull from a list of articles they wouldn't find otherwise. (But I'm open to being convinced!)