To demonstrate CGD's cherished principle of not taking organisational positions, here is a response from a couple of us in the health team to our colleague Justin Sandefur's recent(ish) blog on cost-effectiveness evidence and PEPFAR.
Our concern was that readers might come away from Justin's blog thinking that cost-effectiveness evidence wasn't useful in the original PEPFAR decision and wouldn't be useful in similar decisions about major global health initiatives. We disagree and wanted to make the case for cost-effectiveness as well as addressing some of Justin's specific points along the way.
https://www.cgdev.org/blog/did-economists-really-get-africas-aids-epidemic-analytically-wrong-reply
--
A recent, thought-provoking blog by our colleague Justin Sandefur, titled “How Economists got Africa’s AIDS Epidemic Wrong”, has sparked a debate about the historical role of cost-effectiveness analysis in assessing the investments of the President's Emergency Plan for AIDS Relief (PEPFAR) and, implicitly, the value of such analysis in making similar global health decisions. Justin tells the story of PEPFAR and concludes that economists who raised concerns over the cost-effectiveness of antiretroviral therapies got PEPFAR “analytically wrong”, a conclusion that some readers may interpret as a reason to discard cost-effectiveness analysis for such decisions in the future. The original blog draws three lessons:
Lesson #1. What persuaded the White House was evidence of feasibility and efficacy, not cost-effectiveness
Lesson #2. The budget constraint wasn’t fixed; PEPFAR unlocked new money
Lesson #3. Prices also weren’t fixed, and PEPFAR may have helped bring them down
In this blog we argue that while Justin’s observations hold some truth, they do not discredit the value of cost-effectiveness analysis in decision-making. Specifically, we contend that:
- Because there were many feasible and effective options at the time, feasibility and efficacy alone were not sufficient criteria for such a large decision. The decision should also have considered the cost-effectiveness of those alternatives, to explore their relative impact.
- PEPFAR may have unlocked some new money, but it wasn’t all new money, and it will have had short- and long-term opportunity costs. Moreover, we cannot be certain that PEPFAR was uniquely able to increase available funding. The decision could therefore have used cost-effectiveness analysis to reveal the likely trade-offs.
- Price reductions could have been analytically explored for PEPFAR and for alternative options as part of cost-effectiveness analysis during decision-making.
The bigger lesson, we conclude, is that when the next PEPFAR-sized decision happens, our systems and their stakeholders must strive for higher standards, embracing analysis that models a range of good options and assesses them against key criteria. Cost-effectiveness analysis is a necessary component of this, but it is not sufficient, and additional analysis and scenarios should be considered through a deliberative process, before settling on a final decision.
Below we offer reflections on each of Justin’s three lessons, in order, then draw out the overall conclusions.
Response 1: Feasibility and efficacy are not enough
Justin uses an analogy of giving to a homeless person to invite the reader to agree that cost is not really the relevant issue when considering whether to do a good deed. True enough: if something is not effective or not feasible then it’s a non-starter, and we don’t need to trouble ourselves over cost or cost-effectiveness. But when there are multiple feasible and effective options with different levels of effectiveness and cost, understanding which does the most good for the money is absolutely worth knowing. Indeed, we agree that there is a moral imperative to consider such evidence, since millions of lives are at stake. This was the scenario when PEPFAR was introduced: there were many worthy global health initiatives, such as malaria prevention and childhood vaccines, that were feasible, scalable, and effective. Whether or not the political process was willing to consider them, the opportunities were there.
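To make the logic concrete, here is a minimal sketch of the kind of comparison we have in mind. All figures are hypothetical and purely illustrative — they are not actual PEPFAR-era estimates — but they show how, once several options clear the feasibility and efficacy bar, a simple cost-effectiveness ratio (dollars per disability-adjusted life year, or DALY, averted) lets decision-makers rank them:

```python
# Illustrative only: hypothetical costs and health effects, NOT real PEPFAR-era figures.
# Each option is (name, total_cost_usd, dalys_averted) — all options are assumed
# to have already passed the feasibility/efficacy test.
options = [
    ("antiretroviral treatment", 15_000_000_000, 30_000_000),
    ("malaria prevention",        2_000_000_000, 20_000_000),
    ("childhood vaccination",     1_000_000_000, 12_500_000),
]

def cost_per_daly(cost, dalys):
    """Cost-effectiveness ratio: dollars spent per DALY averted (lower is better)."""
    return cost / dalys

# Rank options from most to least cost-effective.
ranked = sorted(options, key=lambda o: cost_per_daly(o[1], o[2]))
for name, cost, dalys in ranked:
    print(f"{name}: ${cost_per_daly(cost, dalys):,.0f} per DALY averted")
```

The point is not the specific numbers but the structure of the exercise: with several feasible, effective options on the table, the ranking is exactly the information that "feasibility and efficacy" alone cannot supply.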
Response 2: Was PEPFAR the only thing that could have unlocked new development money?
In his second lesson, Justin suggests that, because PEPFAR unlocked new money, the options may have been either PEPFAR or nothing, implying that the burden of considering trade-offs is unnecessary. PEPFAR had a significant budget, and we acknowledge that it may have increased net Official Development Assistance (ODA). But in order to throw out the idea of opportunity cost (at least from a development perspective), as Justin implies, we would have to be sure that PEPFAR was the only thing that could have unlocked this additional spend. Can we be sure this was the case? Could we ever be? Certainly those making the case for greater focus on HIV prevention didn’t think so, and it seems plausible that the sheer volume of financing drawn into PEPFAR meant other, smaller initiatives struggled to win funding. This can perhaps be seen in figure 3 of Justin’s blog, where there appears to be a reduction in aid to areas with lower HIV prevalence, such as the Middle East and North Africa. In addition, as Justin noted, PEPFAR’s $15 billion included only $10 billion of new money, suggesting at least an immediate, short-term $5 billion opportunity cost to the development sector.
Assuming that there was no alternative to PEPFAR means accepting the political preferences of the time as immovable. We argue instead that analysis showing the value for money of different options is useful for informing political debates. There are, of course, reasonable cases where constrained optimisation makes sense. For example, in their current forms, Gavi and the Global Fund to Fight AIDS, Tuberculosis and Malaria have clear remits, and analysis designed to inform their decisions understandably optimises within their areas of focus while ignoring other potential health spending. However, when deciding whether to create Gavi, or when evaluating its value as a global health initiative, it’s clear that we must compare it with alternatives, regardless of the local political climate at the time of inception. The counterfactual to PEPFAR therefore clearly wasn’t nothing, and it will have had a substantial opportunity cost in both the short and the long term. Rigorous cost-effectiveness analysis of the relevant counterfactuals, along with other evidence and a deliberative appraisal process, could have helped inform its creation. If there are political dynamics that prevent certain choices, this can, in a sense, be seen as the choice of that decision-making system, but it doesn’t mean that alternative options were never there.
Response 3: Prices aren’t fixed. But they won’t be for comparators either
Justin notes that prices dramatically declined over time, suggesting PEPFAR may have contributed to this trend. Even if we accept that PEPFAR was responsible for the price reductions, we still don't know the counterfactual: could equivalent investment have influenced the price of other treatments or vaccines? It is hard to be sure, but over PEPFAR’s lifetime, active intervention in health technology markets has delivered wide-ranging benefits, including a 43 percent reduction in the price of pneumococcal vaccines and a 90 percent cheaper hepatitis C treatment. With the benefit of hindsight, the scale of the price reductions of course makes the PEPFAR decision look better than it would have done at the time to analysts such as Emily Oster; but given this factor wasn’t a key part of the original decision-making, the decision wasn’t fully informed, and the world just got lucky. The important question is really this: how can good decision-making take the endogenous, dynamic, and negotiated nature of drug prices into account in future? Shouldn’t it model them, using the best available evidence and considering appropriate counterfactuals and their likely price trajectories? Standard cost-effectiveness analysis doesn't predict price changes, but sensitivity analysis can and should allow for different price scenarios, with input from market-shaping experts and industry. Indeed, in many cases cost-effectiveness analysis is precisely the tool used to exert downward pressure on prices, and there are ways to deploy scale incentives and value-for-money assessments together.
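The sensitivity analysis we are describing can be sketched very simply. The figures below are hypothetical (an assumed year-one programme cost and a fixed health effect, chosen only for illustration): the exercise is to recompute the cost-effectiveness ratio under different assumed annual price-decline trajectories, so that a decision-maker sees how much the conclusion hinges on the price assumption:

```python
# Illustrative sensitivity analysis: how assumed annual drug-price declines change
# a cost-effectiveness ratio over a 10-year programme. All numbers are hypothetical.
def total_cost(initial_annual_cost, annual_price_decline, years=10):
    """Sum of annual drug costs when the price falls by a fixed fraction each year."""
    return sum(initial_annual_cost * (1 - annual_price_decline) ** t for t in range(years))

dalys_averted = 1_000_000          # hypothetical health effect, held fixed across scenarios
initial_annual_cost = 500_000_000  # hypothetical year-one programme cost (USD)

# Static prices vs. modest vs. steep annual declines.
for decline in (0.0, 0.10, 0.25):
    ratio = total_cost(initial_annual_cost, decline) / dalys_averted
    print(f"{decline:.0%} annual price decline -> ${ratio:,.0f} per DALY averted")
```

Under these toy assumptions, a steep price decline makes the intervention several times more cost-effective than the static-price case — which is precisely why an analysis done at decision time should present the range, with the trajectories informed by market-shaping experts and industry, rather than a single static-price point estimate.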
Conclusion
In conclusion, if PEPFAR was the best decision for development, then decision-makers got lucky, because it was not a well-informed decision. It would have been reasonable for decision-makers to expect PEPFAR to have substantial opportunity costs, and cost-effectiveness analysis was therefore a necessary part of good decision-making. But we acknowledge that cost-effectiveness analysis was not sufficient either, and that, in reality, decisions are made for a range of important reasons. We believe the correct lesson here isn't merely that we need better economics for major decisions such as PEPFAR, but that decision-makers should invest in transparent deliberative processes, similar to health technology assessment, where all relevant factors can be appropriately considered: feasibility, efficacy, equity, and, where appropriate, market-shaping potential. This would also enable the views of recipients to be heard. Such a process should embrace complexity and strive to maximize the good derived from money spent.
This matters because, sooner or later, another PEPFAR will be proposed, and we need better systems in place to integrate evidence, politics, and social values to make these decisions wisely. It is also possible that we are entering a period of global health disinvestment; if so, we hope that key decisions on whether to wind down major global health initiatives use rigorous, deliberative processes with wise use of both cost-effectiveness evidence and supplementary analysis. Ironically, with antiretrovirals now becoming better value for money compared to alternatives in many settings, it could be that cost-effectiveness evidence saves PEPFAR in the future.
I liked this recent interview with Mark Dybul who worked on PEPFAR from the start: https://www.statecraft.pub/p/saving-twenty-million-lives
One interesting contrast with the conclusion in this post is that Dybul thinks PEPFAR's success was a direct consequence of not involving too many people and departments early on — the negotiations would have been too drawn out, and too many parties would have tried to get pieces of control. So maybe a transparent process that embraced complexity wouldn't have achieved much in practice.
(At other points in the process he leaned farther towards transparency than was standard — sharing a ton of information with Congress.)
Thanks for sharing - it’s an interesting interview. My first reaction is that interdepartmental bureaucracy is quite a different beast to an evidence-to-policy process. I agree that splitting development policy/programmes across multiple government depts causes lots of problems and is generally to be avoided if possible (I’m thinking about the UK system but imagine the challenges are similar in the US and elsewhere).
Of course you do need some bureaucracy to facilitate evidence-to-policy too, but on the whole I think it’s absolutely worth the time. For public policy we should aim to make a small number of decisions really well. The idea of a small, efficient group who just know what to do and crack on is appealing; it’s a more heroic narrative than a careful weighing of the evidence. Though I can’t imagine the users of this forum need persuading of the importance of using evidence to do better than our intuitions and overcome our biases.
Incidentally, I feel this kind of we-know-what-to-do-let’s-crack-on instinct is more acceptable in development policy than in domestic policy, and in my view development policy would benefit from being much more considered. We cause a lot of chaos and harm to systems in LMICs in the way we offer development assistance, even through programmes that support valuable services. I think all of the major GHIs do great work, but all could benefit from substantial reforms. Though again, this is somewhat separate from the point about interdepartmental bureaucracy.
My conclusion from this, and other situations I've seen, is that you can arrive at a robust modelling or scientific conclusion (e.g. under standard assumptions, PEPFAR isn't cost-effective). But working out the policy or other action-relevant implications of that conclusion is likely at least as hard as coming to the conclusion itself.
Thanks for your comment and I agree. Modelling (even rigorous modelling) is just that, a model. It's a simplification of a more complex reality. We should not mistake the map for the territory, but equally, not using a map would be foolish.