
Karthik Tadepalli

Economics PhD @ UC Berkeley
3511 karma · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences (1)

What we know about economic growth in LMICs

Comments (434)

One thing I've never seen people who are bullish on management interventions (including myself) address is why the corresponding interventions for microenterprises are so much less effective (McKenzie 2014, McKenzie 2020). Microenterprises and self-employed entrepreneurs also don't adopt simple business practices like keeping accounts, but even heavy-touch interventions pushing them to do so have zero to small effects. What's going on?

I think this is a bit of a roundabout argument. From the Philippines study:

We examine the effects of the policy changes on enrollment and graduation in other degree programs to determine whether increased migration prospects for nurses spurred new students to obtain postsecondary education or, instead, caused students to shift from other fields of study. While these results are relatively imprecise, they suggest that nursing enrollees primarily switched to nursing from other fields. This result helps to explain our large enrollment effects by clarifying that we are not estimating the elasticity of overall education to migration opportunities. Rather, the policy changes examined here were occupation specific, and individuals might be more elastic in switching between fields of study than making the extensive margin decision to enroll in higher education... While the enrollment effects were driven primarily by students switching from other degree types, students persisted to graduation at higher rates, leading to an overall increase in college graduates in the Philippines.

So they are finding exactly what you suggest, that people switch into nursing from other fields. But they also find that those students were more likely to graduate from college than they would have been otherwise, so the overall stock of college graduates rose. If you count an increase in that stock as a positive effect, the program did have an overall positive effect.

But in general, I don't think you even need to appeal to that kind of reasoning, because brain drain usually occurs in jobs that are among the most valuable possible jobs for the country. (Likely because those are both the jobs that rich countries want to import and jobs that must be well paid for people to have the means to emigrate.) Medical workers are extremely valuable; so are engineers. It seems a little contrived to imagine that the sectors that lost out were comparably socially valuable.

(xpost)

Really excited to see where this substack goes, but I have to start off with some disagreements! The remittances point is fine, as is return migration. But the literature on brain gain has always seemed pretty uncompelling.

The most obvious problem is that increasing the supply of skilled workers requires both increasing the demand for education (which emigration possibilities do) and increasing the supply of education. The latter is not a given in any country. Expanding college enrollment is hard. New colleges need staff, instructors, and administrators, all of whom are scarce. Government colleges need to be established by a bureaucracy, and private colleges need to be regulated and quality-controlled, both of which require a lot of governance capacity from the country. We can't just handwave the claim that if more people want to become doctors, more people can become doctors.

So I'm concerned that there's a site selection bias in the countries studied in this literature. People are writing papers about the countries that did manage to successfully pull off a large educational expansion, so they find that emigration boosted human capital. But for countries that can't pull it off, emigration really might be a brain drain.

How large could this site selection bias be? I pulled some data on college enrollment rates and emigration (net, not broken down by skill group) and compared India and the Philippines to the rest. (There was no data on Cabo Verde, the other country you cited.) Among the top 20 emigrant-sending developing countries, India had one of the highest increases in college enrollment (21 pp) between 1990 and 2015, while the Philippines was a bit above average (13 pp). As a specific contrasting example, Nigeria had only a 7 pp increase in college enrollment over this period. (data, graph)

A similar picture emerges when comparing India and the Philippines to developing countries as a whole: India has close to the highest enrollment growth over this period, the Philippines is still above average, and Nigeria is still below average. (graph) So we should not expect Nigeria's brain gain to be anywhere close to that of India or the Philippines.

You could argue that India and the Philippines had higher growth because emigration incentives increased the supply of education. But:

  1. The emigration incentives studied by Khanna/Morales and Abarcar/Theoharides are not India-specific or Philippines-specific (this fact is necessary for their estimates to be causal!), so we shouldn't expect these two countries to have larger increases in enrollment just from emigration incentives.

  2. Even if emigration incentives had a causal effect on the supply of colleges that was for some reason larger in India and the Philippines, I would expect that effect to be small relative to the other factors that make governments want to supply more colleges (domestic political projects, trying to attract foreign companies, trying to spur industrial growth). So heterogeneous effects of emigration incentives can't explain much of the difference between these two countries and other developing countries.

In general, I wish there was more nuance around the brain gain hypothesis. I would speculate it has such immediate acceptance because it resolves our conflicting commitments as cosmopolitans: we want people to be able to pursue a better life, we want high-income countries to have more open immigration policy, and we want low-income countries to grow faster. The brain gain hypothesis is alluring because it promises that we can have all of the above. But I think that relies on other things going right that absolutely don't have to go right. And I wish there was more acceptance of that nuance.

I don't mean to say that risk preferences in general are unimpeachable and beyond debate. I was only saying that I personally do not put my risk preferences up for debate, nor do I try to convince others about their risk preferences.

In any debate about different approaches to ethics, I place a lot of weight on intuitionism as a way to resolve debates. Considering the implications of different viewpoints for what I would have to accept is the way I decide what I value. I do not place a lot of weight on whether I can refute the internal logic of any viewpoint.

Great points!

I feel there’s a bit of tension in you stating that “I don't think we should sidestep the philosophical aspect of this debate” while later concluding that “Worldview diversification is a useful and practical way for the EA community to make decisions.”

I say the former as a justification for not making an assumption (diminishing returns to money across causes) that would automatically support a balanced allocation of money without any other normative judgments. But I personally place a high premium on decisions being "robustly" good, so I do see worldview diversification as a useful and practical way to make decisions (for someone who places a premium on robustness).
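To spell out why that assumption does so much work, here is a minimal sketch of the allocation problem (the log functional form below is purely illustrative, not something from the post):

```latex
\max_{x_1,\dots,x_n \ge 0} \; \sum_i u_i(x_i)
\quad \text{s.t.} \quad \sum_i x_i = B,
\qquad u_i' > 0, \;\; u_i'' < 0
```

With strictly diminishing returns everywhere, the first-order condition u_i'(x_i*) = λ equalizes marginal returns across funded causes, so the optimum is interior. Under the illustrative form u_i(x) = w_i log x, it is x_i* = w_i B / Σ_j w_j: every cause with positive weight gets money. Drop the concavity and the optimum jumps to a corner, with all money going to the single best cause.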

In economics we’re used to treating basically any functional form for utility as permissible, so this is somewhat strange, but here we’re thinking about normative ethics rather than consumption choices.

I appreciate the push, since I didn't really mount a defense of risk aversion in the post. I don't really have a great interest in doing so. For one thing, I am axiomatically risk-averse and I don't put that belief up for debate. Risk aversion leads to the unpalatable conclusion that marginal lives are less worth saving, as you point out. But risk neutrality leads to the St Petersburg paradox. Both of them are slightly contrived scenarios, but not so contrived that I can easily dismiss them as irrelevant edge cases. I don't have solutions in mind (the papers you linked look interesting, but I find them hard to parse). So I don't feel passionately about arguing the case for risk-averse decisionmaking, but I still believe in it.
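To make the St Petersburg point concrete, here is a quick numerical sketch (log utility is just one illustrative risk-averse choice; none of this comes from the papers linked above):

```python
import math

# St Petersburg game: flip a fair coin until the first heads.
# If the first heads arrives on flip k, the payoff is 2**k.
# A risk-neutral agent values the game at sum_k (1/2)**k * 2**k,
# which grows without bound; a log-utility valuation converges.

ev_terms = [(0.5 ** k) * (2 ** k) for k in range(1, 31)]
log_utility_terms = [(0.5 ** k) * math.log(2 ** k) for k in range(1, 31)]

print(sum(ev_terms))           # 30.0, and it keeps growing with more terms
print(sum(log_utility_terms))  # ~1.386, converging to 2 * ln(2)
```

Risk neutrality values the game at infinity, while even mild risk aversion caps it at a few dollars; that asymmetry is the bullet each side has to bite.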

In reality I don't think anyone who practices worldview diversification (allocating resources across causes in a way that's inconsistent with any single worldview) actually places a really high premium on tight philosophical defenses of it. (See the quote at the start of the post!) I wrote this more for my own fun.

I want to be clear that I see risk aversion as axiomatic. In my view, there is no "correct" level of risk aversion. Various attitudes to risk will involve biting various bullets (St Petersburg paradox on the one side, concluding that lives have diminishing value on the other side), but I view risk preferences as premises rather than conclusions that need to be justified.

I don't actually think moral weights are premises. However, I think in practice our best guesses on moral weights are so uninformative that they don't admit any better strategy than hedging, given my risk attitudes. (That's the view expressed in the quote in my original comment.) This is not a bedrock belief. My views have shifted over time (in 2018 I would have scoffed at the idea of THL and AMF being even in the same welfare range), and will probably continue to shift.

If it is hard to answer these questions, is there a risk of your risk aversion not being supported by seemingly self-evident assumptions, and instead being a way of formalising/rationalising your pre-formed intuitions about cause prioritisation?

Yes, I am formalizing my intuitions about cause prioritization. In particular, I am formalizing my main cruxes with animal welfare: risk aversion and moral weights. (These aren't even cruxes with "we should fund AW"; they are cruxes only with "AW dominates GHD". I do think we should reallocate funding from GHD to AW on the margin.)

Is my risk aversion just a guise for my preference that GHD should get lots of money? I comfortably admit that my choice to personally work on GHD is a function of my background and skillset. I was a person from a developing country, and a development economist, before I was an EA. But descriptively, risk aversion is a universal preference; it shouldn't be a high bar to believe that I'm actually just a risk-averse person.

At the end of the day, I hold the normie belief that good things are good. Children not dying of malaria is good. Chickens not living in cages is good. Philosophical gotchas and fragile calculations can supplement that belief but not replace it.

I avoid it if it's not necessary but I have a low bar for "necessary". I don't find it morally wrong.

The amount of global spending on each cause is basically irrelevant if you think most of it is non-impactful. Imagine that John Q Warmglow donates $1 billion to global health, but he stipulates that the billion can only be spent on PlayPumps. Then global spending on GHD is up by $1 billion, but the actual marginal value of money to GHD is unchanged, because that $1 billion did not go to the best opportunities, the ones that would move down the marginal utility of money to the whole cause area. I understand you're aware of this, which is why your Fermi estimates focus on the marginal value of money to each cause by comparing the best areas within each cause. But the level of global spending on a cause tells you very little about the marginal value of money if most of that spending is low-impact.
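If it helps, here is a toy version of that logic (every name and number below is invented for illustration):

```python
# Toy model of a cause area: opportunities sorted from best to worst,
# each with a value per dollar and a funding capacity.
opportunities = [
    ("best bednet program", 10.0, 50e6),   # (name, value per $, capacity in $)
    ("median program",       3.0, 500e6),
    ("PlayPumps",            0.1, 10e9),
]

def marginal_value(funded):
    """Value of the next unrestricted dollar: it goes to the best
    opportunity that still has unfilled capacity."""
    for name, value, capacity in opportunities:
        if funded.get(name, 0) < capacity:
            return value
    return 0.0

funded = {"best bednet program": 50e6, "median program": 50e6}
print(marginal_value(funded))  # 3.0: the next dollar tops up the median program

# John Q Warmglow's restricted $1 billion raises total spending elevenfold...
funded["PlayPumps"] = 1e9
# ...but the value of the next unrestricted dollar is unchanged:
print(marginal_value(funded))  # still 3.0
```

Total spending went from $100m to $1.1b, yet the marginal value of money didn't move, because the new money was parked in the worst opportunity.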

I don't have a satisfying answer to what x is for me. I will say somewhere between 0.5 and 1.5, corresponding to the intuition that neither GHD nor FAW dominates the other. I would guess my cruxes with you come from two sources:

  1. My median moral weight on chickens is much less than 0.33, ~2 OOMs less.[1] This is a difficult inferential gap to cross.
  2. I think the quality of FAW cost-effectiveness estimates is vastly lower than that of GHD cost-effectiveness estimates, making the comparison apples-to-oranges. Saulius's estimates are a good start on a hard problem, but
    • There are a lot of made-up numbers based on intuition (e.g. the assumption of 24% compliance with pledges in the absence of follow-up pressure is wildly out of line with my intuitions).
    • There are likely steeply declining returns to effort, given that campaigns will initially target the lowest-hanging fruit and things will eventually get much harder. A cost-effectiveness estimate based on early successful campaigns is not representative of the value of future funding.

This is not a knock on people who are doing the best they can with limited data. I am just not comfortable taking these as unbiased estimates and I put a pretty high premium on having more certain evidence.

I see my views as consistent with expected utility maximization coupled with risk aversion, but not with expected value maximization (which, as it's commonly defined, implies risk neutrality). The more uncertainty you have about a cause area, the more a risk-averse decisionmaker will want to hedge. (Edit: I also really like this argument for having a preference for certainty.)
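To illustrate the hedging claim, here is a toy sketch (the square-root utility and all payoff numbers are illustrative assumptions, not anything from the post):

```python
import math

# Allocate a budget between a certain cause and a risky cause with twice
# the expected value. Concave utility over total impact = risk aversion.

certain_outcomes = [(1.0, 1.0)]             # (probability, value per dollar)
risky_outcomes = [(0.5, 0.0), (0.5, 4.0)]   # expected value 2.0 per dollar

def expected_utility(risky_share, budget=100.0):
    eu = 0.0
    for p_c, v_c in certain_outcomes:
        for p_r, v_r in risky_outcomes:
            impact = (1 - risky_share) * budget * v_c + risky_share * budget * v_r
            eu += p_c * p_r * math.sqrt(impact)  # concave => risk-averse
    return eu

best_share = max(range(101), key=lambda s: expected_utility(s / 100)) / 100
print(best_share)  # ~0.67: an interior split, even though the risky cause
                   # has twice the expected value per dollar
```

A risk-neutral agent would put everything into the risky cause; the risk-averse one splits, and the split shifts toward the certain cause as the risky cause's variance grows.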


  1. I understand RP is estimating welfare ranges rather than moral weights, but I think you have to do some sneaky philosophical equivalences to use them as weights in a cost-effectiveness estimate. I'm open to being wrong about that. ↩︎

I was thinking mainly about the "no systemic change" thing; no specific articles come to mind, just a general vibe.

It is a data point against a different kind of criticism, one that sounds more like "EA is a bunch of 20-something dilettantes running around having urgent conversations instead of doing anything in the world". I hear that flavor of criticism more than "EA might build the world destroyer", and I suspect it is the more common one in the world at large.
