SebastianSchmidt

Co-founder @ Impact Academy
396 karma

Bio

Impact Academy is a non-profit organization that enables people to become world-class leaders, thinkers, and doers who are using their careers and character to solve our most pressing problems and create the best possible future.

I also work as an impact-driven and truth-seeking coach for people who are trying to do the most good.

I'm also a medical doctor, author, and former visiting researcher (biosecurity) at Stanford.

Comments (86)

Yeah, this could be the case. I'm just not sure that GPT-4 can be given enough context for it to be a highly user-friendly chatbot in the curriculum. But it might be the better of the two options.

Hi Peter, thanks for your work. I have several questions:

  1. Most organizations within EA are relatively small (<20). Why do you think that's the case and why is RP different?
  2. How do you decide on which research areas to focus on and, relatedly, how do you decide how to allocate money to them?
  3. What do you focus on within civilizational resilience?
  4. How do you decide whether something belongs to the longtermism department (i.e., whether it'll affect the long-term future)?

Thanks for running with the idea! This is a major thing within education these days (e.g., Khan Academy). This seems reasonably successful, although Peter's example and the tendency to hallucinate make me a bit concerned.

I'd be keen on attempting to fine-tune available foundation models (e.g., GPT-3.5) on the relevant data and seeing how good a result one might get.

Hi Riley,
Thanks a lot for your comment. I'll mainly speak to our (Impact Academy) approach to impact evaluation but I'll also share my impressions with the general landscape.

Our primary metric (*counterfactual* expected career contributions) explicitly attempts to take this into account. To give an example of how we roughly evaluate the impact:

Take an imaginary fellow, Alice. Before the intervention, based on our surveys and initial interactions, we expected that she may have an impactful career, but that she is unlikely to pursue a priority path based on IA principles. We rate her Expected Career Contribution (ECC) as 2. After the program, based on surveys and interactions, we rate her as 10 (ECC) because we have seen that she's now applying for a full-time junior role in a priority path guided by impartial altruism. We also asked her (and ourselves) to what extent that change was due to IA and estimate that to be 10%. To get our final Counterfactual Expected Career Contribution (CECC) for Alice, we subtract her initial ECC score of 2 from her final score of 10 to get 8, then multiply that score by 0.1 to get the portion of the expected career contribution which we believe we are responsible for. The final score is 0.8 CECC. As a formula: (10 (ECC after the program) − 2 (ECC before the program)) × 0.1 (our counterfactual influence) = 0.8 CECC.
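The calculation above can be sketched in a few lines of Python. This is only an illustration of the arithmetic described in the comment; the function name and signature are my own, not part of Impact Academy's actual tooling.

```python
def cecc(ecc_before: float, ecc_after: float, counterfactual_influence: float) -> float:
    """Counterfactual Expected Career Contribution:
    the change in expected career contribution, scaled by the share
    of that change attributed to the program."""
    return (ecc_after - ecc_before) * counterfactual_influence

# Alice's example: ECC of 2 before, 10 after, 10% attributed to IA.
print(cecc(2, 10, 0.1))  # → 0.8
```

Note that the subtraction happens before the multiplication: only the *change* in ECC is scaled by the counterfactual influence, not the final score alone.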

You can read more here: https://docs.google.com/document/d/1Pb1HeD362xX8UtInJtl7gaKNKYCDsfCybcoAdrWijWM/edit#heading=h.vqlyvfwc0v22

I have the sense that other orgs are quite careful about this too. E.g., 80,000 Hours seems to think that they only caused a relatively modest number of significant career changes because they discovered that the people had updated significantly for reasons not related to 80,000 Hours.


Thanks for this. I think it could've been more awesome by having a stronger statement on the importance of the EA ideas, values, and mindsets. I recognize that you somewhat mention this under reasons 2 and 5 but I would've liked to see it stated even more strongly.

Thanks so much for doing this. I'm very happy to see that the general public and university students seemed to be mostly unaware of and unaffected by FTX. In saying I'm happy, I don't mean to imply that we should take the situation lightly and not learn from it. I'm curious about other groups, such as young professionals. However, I am somewhat shocked to see the massive drop in trust in leadership (1/3 distrusting leaders). This is definitely a significant effect which might yield good consequences - e.g., people being more likely to develop their own views and less inclined to defer to certain individuals.

Thanks for the model - I think it's useful. 
I think it'd probably be more appropriate to say that wave 2 was x-risk (and not broad longtermism) and/or that longtermism became x-risk. Before reading your thoughts on the possibilities for the third wave, I spent a few seconds developing my thoughts. The thoughts were:
1. Target audience: More focus on Global South/LMIC.
2. Culture: Diversification and more ways of living (e.g., the proportion of Huel drinkers goes down).
3. Call-to-action: A higher level community/set of ideas (e.g., distilling and formalizing the method of EA and/or longtermism) and cause-specific community (AI safety, etc.).
4. Other: EA as a label gets reduced (e.g., if CEA changes its name).

Thanks for this! How many people did you interview for this?

Thanks a lot for this. I eagerly read it last year and found several valuable takeaways. Looking forward to reading the foundation handbook!

Just inserting a high-level description for other readers:
I expected that their perspective would be too rigid (e.g., overly reliant on rigorous research on average effects and generalizing too strongly), cynical (as opposed to humanistic and altruistic), and overly focused on intelligence. Fortunately, my expectations were off. In fact, they were highly nuanced (emphasizing the importance of judgment and context), considerate (e.g., devoting a full chapter to women and minorities), and deemphasized intelligence (taking a multiplicative model of success - although, to be clear, they still claim that intelligence is very important). That said, their theory of change is quite distinct from ours (e.g., innovation and creativity are emphasized substantially more than morality and doing the most good).

I also appreciated their discussion of the evidence around intelligence, role models, and talent search in sports.

Thanks for this! We'll soon be vetting talent - are there any resources you'd recommend for understanding and selecting talent?
