constructive

549 karma · Joined

Bio

RA in compute governance

Comments (73)

Thanks for contributing these examples! Added a link to your comment in the main text.

Thanks for the hint. Skimming this, it sounds somewhat exaggerated; I'd like to see a more rigorous investigation (e.g., how strong can flares get, and which equipment would be damaged?). This article suggests flares are much less harmful (I only read the first few paragraphs).

(Uncertain) My guess would be that a global conflict would increase AI investment considerably, as (I think) R&D spending typically increases in wartime. And AI may turn out to be particularly strategically relevant.

Though you need to consider the counterfactual in which the talent currently at OAI, DM, and Anthropic all work at Google or Meta instead, with a much weaker safety culture.

I think a central idea here is that a superintelligence could innovate and thus find more energy-efficient means of running itself. We already see a trend of language models with the same capabilities becoming more energy efficient over time through algorithmic improvements and better parameter/data ratios. So even if the first superintelligence requires a lot of energy, the systems developed in the period after it will probably need much less.

Weakly held opinion that you could be investing too much into this. I'd expect to hit diminishing returns after ~50-100 hours (though I have no expertise whatsoever).

Sounds good! Thanks for the reply.
I think I would additionally find it helpful to get some insight into what you prioritize when giving feedback on project plans (or maybe your future post on impact metrics will cover that?). But I know communication takes a lot of effort, so it may be easier to just receive that feedback as you roll out the projects. Looking forward to your next post!

Thanks!
I think even community building is a rather broad category, though most of the efforts I know of are online, so they probably don't apply here. It would be helpful to have some examples.

Thanks for writing this! 

Your high-level strategy sounds great but is also fairly generic, so it's hard to criticize. One theme throughout seems to be an emphasis on creating solid structures and making sure everything is legally backed. I think this is important, especially after the FTX crisis, though I sense a risk of overemphasizing security. EA Germany is still a small community and has so much room to grow! I think we need a large number of capable people to solve some serious problems fairly soon. EA Germany should thus emphasize ambition and growth, primarily through targeted outreach and better international connections for the community.

I was hoping to learn more about the concrete projects that you are working on or about to start. Here are some ideas that I see as priorities (most are already happening to some degree):

  • Further growing Berlin as a hub, with the goal of making it attractive for people working full-time on EA causes. (E.g., organizing speaker events, creating housing for temporary stays, attracting talent)
  • Connecting the German community better internationally: inviting leading EAs to give talks in major German cities and encouraging Germans to attend international events. (I think one reason German EAs seem less ambitious than EAs from other countries is that they aren't well connected to senior EAs and thus receive less mentorship and worse access to opportunities.)
  • Targeted outreach, particularly at national student fellowships such as Studienstiftung
  • Creating pipelines for talent, e.g., by running AI alignment camps or local iterations of the AGISF

Thanks for all the work you are already doing! It's always easy to suggest a lot of projects without having to start them myself :D

Looking forward to your strategy playing out.