I've tried joining the Matrix chat room, but got the following error:
MatrixError: [403] You do not belong to any of the required rooms/spaces to join this room. (https://matrix-client.matrix.org/_matrix/client/r0/join/!kTsOmBGiyQWKmETKhS%3Aone.ems.host?server_name=one.ems.host)
What do I need to do to join?
Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back.
I received a travel grant from the FTX regranting program after August 11. Does this mean I will likely have to pay this money back, or does this depend on when the FTX Future Fund itself received the money from FTX?
Yes, that makes sense. How about stating that reasoning and thereby nudging participants to post in the EA forum/LessWrong/Alignment Forum, but additionally have a non-public submission form? My guess would be that only a small number of participants would then submit via the form, so the amount of additional work should be limited. This bet seems better to me than the current bet where you might miss really important contributions.
… if only they had allowed people not to publish on EA Forum, LessWrong, and Alignment Forum :)
Honestly, it seems like a mistake to me not to allow other ways of submission. For example, some people may not want to publicly apply for a prize or be associated with our communities. An additional submission form might help with that.
Thanks for these interesting points!
About the first three statements on existential risks, takeoff scenarios, and how influential our time is: how much does your view reflect the general wisdom of experts in the corresponding research fields (I'm not sure what this field would be for assessing our influence on the future), and how much is it something like your own internal view?
Conditional on this happening, is there a non-negligible likelihood that humanity would never create AGI, because the stigma grows faster than developing AGI becomes easier?