Eva
568 karma · 3 posts · 14 comments

Thanks, Ozzie! This is interesting. There could well be something there. Could you say more about what you have in mind?

Eva

Thanks for these questions. This probably falls on me to answer, though I am leaving GPI (I have tendered my resignation for personal reasons; Adam Bales will be Acting Director in the interim).

The funding environment is not as free as it was previously. That does suggest some recalibration and different decisions on the margin. However, I'm afraid your message paints a misleading picture, and I'm concerned about the potential for inaccurate gossip. I won't correct every inaccuracy, but funding was never as concentrated as your citation of OP's website suggests: they are not our only funder, grants support activities that can be many years in the future, and their timeline does not correspond to when we applied for or received funds.

To give another example, the decision about the Global Priorities Fellowship was taken not because of money but because of a) the massive amount of researcher time it took to run (which is not easily compensated for, because it was concentrated at one time of year), and b) a judgment that the program could run less frequently and still capture most of the benefits by raising the bar and being more selective. We had observed that, as is common in recruitment, the "returns" from the top few participants were much higher than the "returns" from the average participant. PhD students are in school for many years (in my field, economics, six years is common), and while in some of those years they may be more focused on their dissertations, running the program only occasionally still leaves ample opportunity to catch the students who might be a particularly good fit while they are in their PhD. Running it less frequently certainly implies lower monetary costs, but in this case that was a side benefit rather than the main consideration.

To return to your main question, the broadening of the agenda is a natural result of both a) broadening the team and b) trying to build an agenda for global priorities research that can inform external researchers. As we've engaged with more and more exceptional researchers at other institutions, it has shifted our overall strategy and the extent to which we try to produce research "in-house" vs. build and support an external network for global priorities research. This varies somewhat by discipline, but think of GPI as having just "stubs" for economics and psychology, with most of the work that we support done outside of it. I don't mean "support" monetarily (though we have benefited from an active visitors program), but support with ideas, convenings, and discussion. In the past few years, we have been actively expanding our external network, mostly in economics and psychology but also in philosophy, and we anticipate that this external engagement will be the main way through which we have impact.

I can talk at length about the structural reasons why this approach makes sense in academia, but that is probably a discussion for another day. (Some considerations: faculty tend to be naturally spread out, and while one might like to have agglomeration effects, these happen more through workshops, the free exchange of ideas, or collaborations, because faculty are tied to institutions whose own incentives are to diversify. You can try to make progress with just focused work by postdocs and pre-docs, but that misses a huge swath of people, and even postdocs and pre-docs become faculty over time. In the long run, if you are being successful, the bulk of your impact has to come from external places. The fact that this research is mostly done at other institutions now is a sign that global priorities research has matured.)

As a final note, consider the purpose of this research agenda. It takes time to write a good research agenda: we embarked on it when I arrived almost two years ago, did some initial brainstorming quickly, and then continued with a longer, more thorough deliberative process, such as working groups focused on exploring whether a certain topic appeared promising. In each of the growth areas you highlight, AI and psychology, we made a few hires. Those hires hit the ground running with their own research, but they also helped further develop and refine the agenda. Developing it helped shape our internal priorities (though not every topic on the agenda is something we want to pursue ourselves), but its main purpose is external; we wouldn't have needed to put nearly so much effort into it if it were for internal use. The agenda is simultaneously forward-looking and naturally driven by the hires we already made.

Hope this helps. With apologies, I'm not likely to get into follow-ups as it takes a long time to respond.

Eva

There's another point I don't quite know how to put but I'll give it a go.

Despite the comments above about having many ideas and getting feedback early on one's projects (both of which point to having and abandoning ideas quickly), there's another sense in which what one actually needs is the ability to stick with things, and the good taste to evaluate when to try something else and when to keep going. (This is less about specific projects and more about larger shifts, like whether to stay in academia or a certain line of work at all.)

I feel like sometimes people get too much advice to abandon things early. It's advice that has intuitive appeal (if you can't pick winners, at least cut your losses early), and it's good advice in a lot of situations. But my impression is that while some people would do better failing faster, others would do better being more patient. At least for myself, I started having more success when I stuck with things for longer. The longer you stick with a thing, the more expertise you build in it. That may not matter much in some fields, but it matters in academia.

Now, obviously, you want to be very selective about what you stick with. That's where having good taste comes in. But I'd start by looking honestly at yourself and at the people near you whom you see doing well in your chosen field, and asking which side of the impatient-patient spectrum you fall on compared to them. Some people are too patient; some are too impatient. I was too impatient and improved with more patience, and for some people it's the opposite. Which advice applies most to you depends on your starting point, your field, and of course your outside options.

For econ PhDs, I think it's worth having a lot of ideas and discarding them quickly, especially in grad school, because a lot of them are bad at first. But I also think there are people who jump ship from an academic career too early, such as when they are on the market or in the first few years after. I suspect this is generally true in academia, where expertise really matters and you need to make a long-term investment, though I can't speak with certainty about fields beyond economics. And I've definitely met many academics who played it too safe for maximizing impact, and many people who didn't leave quickly enough. What I'm trying to emphasize is that it's possible to make mistakes in both directions, and you should put effort into figuring out which type of error you personally are more likely to make.

Eva

Thanks. A quick, non-exhaustive list:

  • Get feedback early on. Talking to people can save a lot of time
  • You should have a very clear idea of why your project is needed. Good ideas sound obvious after the fact
  • That's not to say people won't disagree with you. If your idea takes off you will need to have a thick skin
  • A super-easy way to have more impact is to collaborate with others. This doesn't help for job market papers, where people tend to want to have solo-authored work. But you can get a lot more done collaborating with others and the outputs will be higher-quality, too
  • Apart from collaborating with people on the actual project, do what you can to get buy-in from other people who have no relationship to the project. Other people can magnify the impact in big ways and small
  • It can take a while before early-career researchers find a good idea. Have more ideas than you think you'll need

Eva

I've stayed at a (non-EA) professional contact's house before when they'd invited me to give a talk and later very apologetically realized they didn't have the budget for a hotel. They likely felt obliged to offer; I felt like it would be awkward to decline. We were both at pains to be extremely, exceedingly, painstakingly polite given the circumstances and turn the formality up a notch.

I agree the org should have paid for a hotel; I'm only mentioning this because, if baseline formality is a 5, I would think it would be more normal to kick it up to a 10 under the circumstances. That makes this situation all the more bizarre.

Eva

I would really like to read a summary of this book. The reviews posted here (edit: in the original post) do not actually give much insight as to the contents. I'm hoping someone will post a detailed summary on the forum (and, as EAs love self-criticism, fully expect someone will!).

Eva

I'm not going to deal with the topic of the post, but there's another reason to not post under a burner account if it can be avoided that I haven't seen mentioned, which this post indirectly highlights.

When people post under burner accounts, it makes it harder to be confident in the information the posts contain, because there is ambiguity: it could be the same person posting repeatedly. To give one example (not the only one), if you see X burner accounts posting "I observe Y", that could mean anywhere from 1 to X actual observations of Y, and it's hard to get a sense of the true frequency. Posting under burners thus undermines the posters' own message, because some of their information will be discounted.

In this post, the poster writes "Therefore, I feel comfortable questioning these grants using burner accounts," which suggests in fact that they do have multiple burner accounts. I recognize that using the same burner account would, over time, aggregate information that would lead to slightly less anonymity, but again, the tradeoff is that it significantly undermines the signal. I suspect it could lead to a vicious cycle for those posting, if they repeatedly feel like their posts aren't being taken seriously.

Thanks for mentioning the Social Science Prediction Platform! We had some interest from other sciences as well.

With collaborators, we outlined some other reasons to forecast research results here: https://www.science.org/doi/10.1126/science.aaz1704. In short, forecasts can help to evaluate the novelty of a result (a double-edged sword: very unexpected results are more likely to be suspect), mitigate publication bias against null results / provide an alternative null, and over time help to improve the accuracy of forecasting. There are other reasons, as well, like identifying which treatment to test or which outcome variables to focus on (which might have the highest VoI). In the long run, if forecasts are linked to RCT results, it could also help us say more about those situations for which we don't have RCTs - but that's a longer-term goal. If this is an area of interest, I've got a podcast episode, EA Global presentation and some other things in this vein... this is probably the most detailed.
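To make the "alternative null" idea concrete, here is a minimal hypothetical sketch (not SSPP code; all the numbers are made up for illustration): instead of testing a study's estimate only against zero, you can also test it against the mean of collected forecasts, which tells you whether the result is surprising relative to expert expectations.

```python
import statistics

def z_against_null(estimate, se, null_value):
    """Z-statistic for an estimated effect against a chosen null value."""
    return (estimate - null_value) / se

# Hypothetical numbers: five expert forecasts of a treatment effect,
# plus the realized estimate and its standard error from the study.
forecasts = [0.10, 0.15, 0.08, 0.12, 0.20]
forecast_mean = statistics.mean(forecasts)  # alternative null, here 0.13

estimate, se = 0.14, 0.05

z_vs_zero = z_against_null(estimate, se, 0.0)                # ≈ 2.8: "significant"
z_vs_forecast = z_against_null(estimate, se, forecast_mean)  # ≈ 0.2: unsurprising
```

On these made-up numbers, the estimate is conventionally significant against zero but almost exactly what forecasters expected, which is the sense in which forecasts provide an alternative null.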

I agree that there's a lot of work in this area and decision makers actively interested in it. I'll also add that there's a lot of interest on the researcher side, which is key.

P.S. The SSPP is hiring web developers, if you know anyone who might be a good fit.

As a small note, we might get more precise estimates of the effects of a program by predicting magnitudes rather than whether something will replicate (which is what we're doing with the Social Science Prediction Platform). That said, I think a lot of work needs to be done before we can have trust in predictions, and there will always be a gap between how comfortable we are extrapolating to other things we could study vs. "unquantifiable" interventions.

(There's an analogy to external validity here, where you can do more if you can assume the study you predict is drawn from the same set as those you have studied, or the same set if weighted in some way. You could in principle make an ordering of how feasible something is to be studied, and regress your ability to predict on that, but that would be incredibly noisy and not practical as things stand, and past some threshold you don't observe studies anymore and have little to say without making strong assumptions about generalizing past that threshold.)
