Jona

622 karma · Joined
cfactual.com
Interests:
Forecasting

Comments (37)

  1.  Thanks for creating this post! 
  2. I think it could be worth clarifying how you operationalize EA epistemics. In this comment, I focus mostly on epistemics at EA-related organizations, treating "improving decision-making at organizations" as a concrete outcome of good epistemics. 
  3. I think I can potentially provide value by adding anecdotal data points from my work on improving the epistemics of EA-related organizations. For context, I work at cFactual, supporting high-impact organizations and individuals during pivotal times. So far we have done 20+ projects partnering with 10+ EA-adjacent organizations. 
    1. Note that there might be several sampling biases and selection effects; e.g., organizations that work with us are likely not representative of all high-impact organizations.
    2. So please read this for what it is: mixed-confidence thoughts based on weak anecdotal data, gathered over almost two years of projects on important decisions. 
  4. Overall, I agree with you that epistemics at EA orgs tend to be better than what I have seen while doing for-profit-driven consulting in the private, public and social sectors. 
    1. For example, I have never seen anything in the non-EA world like the simple decision-document structure some EA orgs follow: epistemic status, RAPID, current best guess, main alternatives considered, best arguments for and against the best guess, key uncertainties and cruxes, most likely failure mode, and things we would do if we had more time. 
  5. The services we list under "Regular management and leadership support for boards and executives" address gaps we see; closing them often ultimately improves organizational decision-making.
    1. Note that clients pay us; hence we are not listing things that could be useful but don't have a business model (like writing a report on improving risk management by considering base rates and how risks link and compound). 
    2. I think many of the gaps we are seeing are more about getting the basics right in the first place and don't require sophisticated decision-making methods, e.g.:
      1. spending more time developing goals (incl. OKRs), plans, theories of change, impact measurement and risk management 
        1. Quite often it is hard for leaders to spend time on the important but non-urgent things instead of the urgent and important ones; e.g., more sophisticated risk management still seems neglected at some organizations even after the FTX fallout
      2. improving executive- and organization-wide reflection, prioritization and planning rhythms 
      3. asking the right questions and doing the right, time-effective analysis at a decision-relevant level of detail
      4. getting an outside view on important decisions and CEO performance from well-run boards, advisors and coaches
      5. improving the executive team structure and hiring the right people to spend more time on the topics above
    3. Overall, I think the highest variance in whether an organization has good epistemics is explained by hiring the right people and those people simply spending more time on the prioritized, important topics. There are various possible tweaks to culture (e.g., rewarding people who change someone's mind, introducing an obligation to dissent and Watch team backup), to processes (e.g., having a structured and regular retro and prioritization session, making forecasts when launching a new project), and targeted upskilling (e.g., there are great existing calibration tools that could be included in the onboarding process), but the main thing seems to be something simple: having the right people, in the right roles, spending their time on the things that matter most. 
      1. I think simply creating a structured menu of things organizations currently do to improve epistemics (i.e., a Google Doc) could be a cost-effective MVP for improving epistemics at organizations
  6. To provide more concrete, anecdotal data on improving the epistemics of key organizational decisions: the comments I leave most often when red-teaming Google Docs of high-impact orgs are roughly the following:
    1. What are the goals?
    2. Did you consider all alternatives? Are there some shades of grey between Option A and Option B? Did you also consider postponing the decision?
    3. What is the most likely failure mode? 
    4. What are the main cruxes and uncertainties which would influence the outcome of the decision and how can we get data on this quickly? What would you do if you had more time?
    5. Part X doesn't seem consistent with part Y.
  7. To be very clear, 
    1. I also think that I am making many of these prioritization and reasoning mistakes myself! Once a month, I imagine providing advice to cFactual as an outsider, and every time I shake my head at the obvious mistakes I am making. 
    2. I also think there is room to use more sophisticated methods like forecasting for strategy, impact measurement and risk management, or the other tools mentioned here and here.

Thanks, Ollie! I thought this was helpful.

Jona

Thanks for creating this post! +1 to the general notion, including the uncertainty around whether it is always the most impactful use of time. On a similar note, after working with 10+ EA organizations on theories of change, strategies and impact measurement, I was surprised that there is even more room for prioritizing the highest-leverage activities across the organization (e.g., based on the results of decision-relevant impact analysis). For example, at cFactual, I don't think we have nailed how we allocate our time. We should probably deprioritize even more activities, double down even more aggressively on the most impactful ones, and spend more time exploring new impact growth areas that could outperform existing ones. 

FWIW, I also think one key consideration is the likelihood of organizations providing updates, and making sure the data means the same thing across organizations (see the caveats in the report for more).

Registered. It also seems valuable to talk to impact-driven people who seriously considered quitting but then decided to finish their PhD, as (a) it is not obvious to me that quitting is always the right choice, and (b) it might be useful to know the common reasons why people decided to continue working on their PhD. 

Jona

Thanks for creating this post! Sharing some thoughts on the topic based on my experience creating and red-teaming theories of change (ToCs) with various EA orgs, partly echoing your observations and partly adding new points (two concrete project examples can be found here and here).

  1. Neglectedness of ToC work (basically echoing your claim). Due to its non-pressing nature and the senior input required, ToC/strategy work seems to get deprioritized very often; e.g., I have been deprioritizing updating our own ToC for three months due to more pressing work. I think the optimal time spent thinking about your ToC/prioritization/strategy depends on the maturity of your project and is hard to get right, but based on my experience, most of us spend too little time on it (just as we tend to spend too little time exploring our career options: it is worth investing 800 hours in career planning if doing so increases the career's expected impact by 1%). Assuming your org has ten staff who work 200 days a year for 8 hours per day, that is 16k hours of staff time per year, so it would be worth investing up to 160 hours a year in figuring out how staff best spend their time if doing so is likely to result in a 1% impact increase (see the worked calculation after this list)
  2. More than one ToC. I think most orgs should have a ToC at the org, team and individual level, as well as for each main program/activity. It seems suboptimal to work on something without having spent even three minutes thinking through how it will change the world (and, if there are alternatives, how you could achieve the same with less work)
  3. Different levels of granularity. Depending on the purpose and context of your ToC, you can have a three-row ToC (e.g., for small projects you are exploring), a flow chart (e.g., to communicate the ToC of your org clearly; see examples in this post) and/or an exhaustive document showing lots of reasoning transparency, alternatives you considered, etc. (e.g., to lay out the ToC of your research agenda)
  4. Developing a ToC. One simplified approach to developing a ToC at the org level, which has worked well with some clients but always needs tailoring, looks very roughly like this: (1) map out all potential sources of value today (and potentially in the future), (2) prioritize them, (3) create a flow chart for the most promising source of value (potentially including other sources of value, or creating several flow charts), (4) think through the flow of impact/value end to end as a sanity check, (5) collect data (e.g., talk to experts, run small experiments) to reduce uncertainties, and (6) iterate. See more here
  5. Strategic implications and influencing decisions (previously mentioned, but I think this is a point many people found useful and one that is not stressed enough in the typical ToC literature). Your ToC should inform key decisions and ultimately how you allocate resources (your staff's time or money). I have never experienced a case where we were certain about all the sources of value and the causal relationships between each step, or where we had no hypothesis on how to have even more impact with an adapted or new program, so the ToC work always had at least some implications for allocating time. One great example is the Fish Welfare Initiative, which included resolving uncertainties around its ToC in its yearly priorities (see slides 13 and 24)
  6. Areas for improvement. Among other things: (1) trying to map every causal pathway rather than focusing on the most important ones, (2) deferring too much to others (e.g., the target group) on what they perceive as valuable, and not doing enough first-principles/independent thinking and data collection, (3) not considering counterfactuals at all and/or not considering that there are likely several counterfactual worlds, and (4) not laying out key assumptions and uncertainties (previously mentioned in the post, but it seems valuable to highlight that this also reflects my experience)
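
To make the time-investment arithmetic in point 1 explicit, here is a minimal worked version. The 1% expected impact gain is the assumption doing all the work, and the break-even comparison values ToC hours and regular staff hours equally:

```latex
% Total annual staff time for a ten-person org:
\[ T = 10 \times 200 \times 8 = 16{,}000 \ \text{hours/year} \]
% Break-even time investment for an expected 1% impact gain:
\[ 0.01 \times T = 160 \ \text{hours/year} \]
% Same ratio as the career example: 0.01 x 80,000 hours = 800 hours.
```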

Note that I likely have a significant sampling bias, as organizations are unlikely to reach out to me if they have enough time to think through their ToC. Additionally, please read this as "random thoughts which came to Jona's mind when reading the article" and not as "these are the X main things EA orgs get wrong based on a careful analysis". I expect to update my views as I learn more.

Jona

Hmm. Obviously, career advice depends a lot on the individual and the specific context. All else equal, I tentatively agree that there is some value in having seen a large "functioning" org. But I think many of these orgs also have dysfunctional aspects (e.g., I think most orgs struggle with sexual harassment and with concentration of formal and informal power), and that working at normal orgs has quite high opportunity costs. I also think that many of my former employers were net negative for some skills which I think are highly relevant, e.g., high-quality decision-making. 

Thanks for clarifying! I think Training for Good looked into "scalable management trainings" but had a hard time identifying a common theme they could work on (this is my understanding based on a few informal chats; it might be outdated, and I am sure they have a more nuanced take). Based on my experience, different managers seem to have quite different struggles, which change over time, and good coaching and peer support seemed to be the most time-effective interventions for managers (this is based on me chatting occasionally to people, not on proper research or deep thinking about the topic). 

Jona

What specifically do you mean by "maturing in management, generally"? I have noticed that people tend to have very different things in mind when they talk about "Improving management in EA", so it could be worth clarifying.

Jona

Some shameless self-promotion, as this might be relevant to some readers: I work at cFactual, a new EA strategy consultancy, where one of our three initial services is optimizing ToCs and KPIs together with organizations. Illustrative project experience includes evaluating the ToC and designing a KPI for GovAI's fellowship program, building a quantitative impact and cost-effectiveness model for a global health NGO, internally benchmarking the impact potential of two competing programs of an EA meta organization against each other, coaching a co-founder of a successful longtermist org on Fermi estimates and prioritization of activities, and red-teaming the impact evaluation of a program of a large EA organization.


 
