Currently enrolled in an MA in Economics at the University of Texas at Austin. I used to teach a high school philosophy class, which included a unit on ethics, meaning that I got to teach my high school students about Effective Altruism. I'm currently transitioning to a career in AI policy.
I donated a kidney altruistically in April 2020.
The charities I donate to regularly include GiveWell's Maximum Impact Fund and the Clean Air Task Force. I've taken the Giving What We Can pledge.
Sometimes I write things down here: ordinaryevents.substack.com
I'm pleased to hear that you're running this fellowship again and extremely excited about applying!
A question about the application process: For the think tank tracks, you require a writing sample. "Applicant should be the sole or main author, ≤5 pages, can be an excerpt. Required for think tank track, optional for congressional and federal track. Please do not create new material." Can you give more guidance on what you're looking for, especially as far as content and style go? Would a well-researched EA Forum post qualify, or more of an academic paper? Should it relate to tech policy explicitly?
"I think this high success rate [at receiving meeting requests] was due to a few key things:"
It might not be due to such key things at all! I was at EAGx Boston this weekend and I also had quite a high success rate at scheduling 1:1 meetings. And I don't have much in common with your experience - most of my messages were sent only a couple of days before the conference, I mostly asked people how they could help me, and I have no full-time EA projects at the moment.
Plausibly, EAs who attend such conferences are simply inclined to meet with you - whether for their own selfish reasons (perhaps more common than you think!), out of altruistic inclinations, or because the vibe of an EAGx is more geared toward students and early-career professionals.
My point here is that people should plausibly expect a fairly high degree of success with 1:1 meeting requests at EA conferences, even without being diligent about making such requests ahead of time or feeling like they have much to offer their conferees.
Thanks Akhil!
There are a couple of good reasons to think that more fine-grained monitoring could be effective. For one thing, PM2.5 levels are often much more localized than we realize, so some neighborhoods and microregions are exposed to much higher concentrations than others. They are also time-dependent, meaning that some days and times are much worse than others. So this finer-grained data can improve our understanding of the hardest-hit regions at the neighborhood level, while giving local residents better information as well - imagine if everyone had the kind of understanding of air quality conditions that Bay Area residents have during wildfires.
I also think it’s possible that better local monitoring creates its own momentum, since local residents now have quantifiable proof of their air quality conditions. It’s possible that this kind of information would elevate the issue to a more pressing political priority in the hardest-hit areas, though I am still uncertain about that.
I love that you are celebrating your successes here! Your parenthetical apologizing for potentially sounding self-congratulatory made me think, "Huh, I'd actually quite like to see more celebration of when theory turns to action." The fact that your work influenced FP to start the Patient Philanthropy Fund is a clear connection demonstrating the potential impact of this kind of research; if you were to shout that from the rooftops, I wouldn't begrudge you! If anything, clarity about the real-world impacts of transformational research into the long-term future would likely inspire others to pursue the field (citation needed).
I'm quite sympathetic to your mission of developing a robust understanding of the parameters of cause prioritization. I do have a maybe-dumb question: what is your Theory of Change? You write,
"In GPI’s first few years, we have made a good start on producing high-quality and mission-aligned research papers. In 2022 we are planning to continue the momentum and have set ourselves ambitious targets on the number of papers we want to get through different stages of the publishing pipeline, as well as that we want to post as working papers on our website."
What do you plan on doing with your research output? What would you like to see others do with it, concretely? Is the goal to let your research percolate throughout EA-space/academia and maybe influence others' work? Is there a more direct policy or philanthropic goal of your research?
I suppose you answer some of these questions here:
"In 2021, we commenced a project to design and then begin tracking more sophisticated progress metrics. This project was put on hold, for reasons of capacity constraint, with the resignation of our Research Manager. We plan to continue the project once we have succeeded in hiring the successor of this role."
But I'm still interested in, like, your top-level thinking around your theory of change, or maybe your gut-check.
Open questions:
What's the incentive structure here? If I'm following the money, it seems likely that the expected return is much higher if you hype up your plausibly-really-important product, and if you believe in the hype yourself. I don't see why Musk or Zuckerberg should ask themselves the hard questions about their mission given that there's not, as far as I can see, any incentive for them to do so. (Which seems bad!)
What can be done? Presumably we could fund two FTE in-house at any given EA research organization to red-team any given massive corporate effort like SpaceX. But I don't have a coherent theory of change as to what that would accomplish. Pressure the SEC to require annual updates to SEC filings? Might be closer...
To my mind, the piece is a welcome response to the recent (imo) irresponsible hyping of cross-strait risk by influential US actors. To the extent that anyone's expectation of the risk of cross-strait violence was influenced by such voices, this piece should help recalibrate down. But of course the fundamental risk remains, even if there are reasons to doubt its imminence as represented by China hawks.
You could do a Straussian reading of this piece such that it is in fact saying 'China won't bomb TW next week so let's calm down' in order to "avoid having the US government kick-starting a nuclear war by pre-emptive strikes or panic-induced miscalculations." To the extent that Tim Heath is respected and that WotR is widely read by US decision-makers, I think this reading makes some sense (although ofc there are very strong incentives for the US gov't to not start a war with China that have nothing to do with whether they're reading WotR or not). Your mileage may vary.
Your broader point, though, that we should take a longer/less-temporally-bound/more structural view of the risk, is one that I agree with.