
Larks

14135 karma · Comments: 1350 · Topic contributions: 1

If that is the case then the post seems shockingly disingenuous, even within the category of 'denounce people for tolerating controversial people' posts. It really seems like the OP was trying to let readers assume that the speakers' strong opinions in question were pro-holocaust or pro-holocaust-denial, especially given that the post was also calling them racist. If those strong opinions actually included opposition to the genocide ... well, what would the OP prefer? Speakers with mixed and equivocal views on the holocaust?

These statements seem at least generally consistent with the complaint at first glance. Could someone point out where they are not?

Sure, they hedged in some places. But the literal title just states it outright:

Sam Bankman-Fried funded a group with racist ties.

Now I know people often say that writers do not choose their titles. But the Guardian as a newspaper did, so I think they can fairly be criticized for it, and somehow I doubt the author registered any objection to the title.

Nor did the article in any way alert the reader, who is likely less knowledgeable about bankruptcy procedures than you, to the potential fallibility of bankruptcy complaints or the strategic issues involved. Even after the corrections made to the article, they wait until the 10th paragraph to mention the fact that Habryka denies it, and not until the 50th paragraph do we learn that he presented evidence that the allegations are false.

Larks

To me that actually seems like an argument against self-identification as a criterion.

I think the underlying idea is a good idea, but I'm pretty pessimistic about actual implementation. My impression is that presidential administrations of both flavours attempt to expand executive power, and "if we do this it might be abused by the other side" arguments have generally not been effective. For example, when I look at recent SCOTUS cases the current administration has generally sided with expanding government/executive power every time (e.g. Chevron, censorship, firearms, debt modification). I'm not sure if this is due to myopia or being much less concerned about future authoritarianism than people often claim.

Unrelatedly, I'm not sure why this post is tagged 'community' as it is not really about the EA community.

In a new interview Trump again brought up the risk of artificial intelligence, discussed the connection to nuclear risks, and mentioned that some people think it will take over the human race.

I think the issue isn't so much a constant -10%, but that some specific life-saving interventions might save lives yet leave people with unusually low quality of life, and for those interventions the error term might be much larger than 10%.

Answer by Larks

In 2021-2022 GiveWell rolled over funds into the next year because they felt that the top charities had enough. They've said they no longer do this, but it suggests to me that they tend to be in the ballpark of fully funded.

I think I would draw the opposite conclusion from this specific piece of evidence - it suggests that, unlike most charities, they're willing to say "yeah we got enough for now", so we should infer that when they don't say this they actually could use some more.

Answer by Larks

You might find the existing discussion on this topic helpful; you can find it via the tag here, which I have also added to this post.

  • I have around 1 life of value left, whereas I calculated an expected value of the future of 1.40*10^52 lives.
  • Ensuring the future survives over 1 year, i.e. over 8*10^7 lives (= 8*10^(9 - 2)) for a lifespan of 100 years, is analogous to ensuring I survive over 5.71*10^-45 lives (= 8*10^7/(1.40*10^52)), i.e. over 1.80*10^-35 seconds (= 5.71*10^-45*10^2*365.25*86400).
  • Decreasing my risk of death over such an infinitesimal period of time says basically nothing about whether I have significantly extended my life expectancy. In addition, I should be a priori very sceptical about claims that the expected value of my life will be significantly determined over that period (e.g. because my risk of death is concentrated there).
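
As a quick sanity check, the quoted figures are internally consistent; here is a minimal sketch in Python reproducing them (the 8 billion world population and 100-year lifespan are the assumptions implied by the quote's own parentheticals):

```python
# Reproduce the quoted figures (inputs taken from the bullets above).
expected_future_lives = 1.40e52   # quoted expected value of the future, in lives
world_population = 8e9            # implied by 8*10^(9 - 2): ~8 billion people
lifespan_years = 100              # assumed lifespan used in the quote

lives_per_year = world_population / lifespan_years        # ~8e7 lives per year
my_share = lives_per_year / expected_future_lives         # ~5.71e-45 lives
seconds_per_life = lifespan_years * 365.25 * 86400        # ~3.16e9 seconds
equivalent_seconds = my_share * seconds_per_life          # ~1.80e-35 seconds

print(f"{lives_per_year:.2e} lives/year -> {my_share:.2e} lives -> {equivalent_seconds:.2e} s")
```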

10^-35 seconds is such a short period of time that basically nothing can happen during it - light itself travels far less than the width of an atom in that time, and even a laser couldn't cut through your body that quickly. But it seems intuitive to me that ensuring someone survives the next one second, if they would otherwise be hit by a bullet during that one second, could dramatically increase their life expectancy.

To explicitly do the calculation, let's assume a handgun bullet hits someone at around ~250 m/s, and decelerates somewhat, taking around 10^-3 seconds to pass through them. Assuming they were otherwise a normal person who didn't often get shot at, intervening to protect them for ~10^-3 seconds would give them about 50 years ~= 10^9 seconds of extra life, or 12 orders of magnitude of leverage.
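
Spelling that out (a rough sketch in Python; the ~0.25 m body depth is my own assumption, chosen to reproduce the ~10^-3 second transit time):

```python
import math

# Order-of-magnitude check of the bullet example above.
bullet_speed_m_s = 250.0                           # assumed handgun bullet speed
body_depth_m = 0.25                                # assumed depth the bullet crosses
exposure_time_s = body_depth_m / bullet_speed_m_s  # ~1e-3 s of protection needed
extra_life_s = 50 * 365.25 * 86400                 # ~1.6e9 s (~50 years of extra life)

leverage = extra_life_s / exposure_time_s          # ~1.6e12
print(f"exposure ~{exposure_time_s:.0e} s, leverage ~10^{math.log10(leverage):.1f}")
```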

This example seems analogous to me because I believe that transformative AI basically is a one-time bullet and if we can catch it in our teeth we only need to do so once.
