
Larks

Maybe ask how he chooses which issues to focus on?

Good analysis, thanks for writing this up! It does seem that in general our political/regulatory system has little to no sensitivity to the dollar cost of fulfilling requirements and avoiding identifiable but small harms.

And I think drawing the line at we're not going to allow hypotheticals about murdering discernible people

Do you think it is acceptable to discuss the death penalty on the forum? Intuitively this seems within scope - historically we have discussed criminal justice reform on the forum, and capital punishment is definitely part of that.

If so, is the distinction state violence vs individual violence? This seems not totally implausible to me, though it does suggest that the offending poster could simply re-word their post to be about state-sanctioned executions and leave the rest of the content untouched.

The baby analogy seems a bit forced to me because babies do not drink blood (and babies in utero do not choose to be there). But if an adult came along and started biting me hard enough to break the skin, potentially infecting me with some disease, I'd consider myself justified in whacking them as hard as it takes to get them off. I guess to your point I'd try to hit them non-lethally though, unlike with a mosquito. 

Answer by Larks

I have no qualms about killing something that literally chose to try to steal my blood.


In a Nov 2023 speech Harris mentioned she’s concerned about x-risk and risks from cyber & bio. She has generally put more emphasis on current harms but so far without dismissing the longer-term threats.

This seems like a very generous interpretation of her speech to me. I feel like you are seeing what you want to see.

For context, this was a speech given when she came to the UK for the AI Safety Summit, which was explicitly about existential safety. She didn't really have a choice but to mention existential risks unless she wanted to give a major snub to an important US ally, so she did:

But just as AI has the potential to do profound good, it also has the potential to cause profound harm.  From AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bio-weapons that could endanger the lives of millions, these threats are often referred to as the “existential threats of AI” because, of course, they could endanger the very existence of humanity. (Pause)

These threats, without question, are profound, and they demand global action.

... and that's it. That's all she said about existential risks. She then immediately derails the conversation by offering a series of non-sequiturs:

But let us be clear.  There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential.

Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?

When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her?

When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family?

I think it's pretty clear that these are not the sorts of things you say if you are actually concerned about existential risks. No-one genuinely motivated by fear of the deaths of every human on earth, and all future generations, goes around saying "oh yeah and a single person's health insurance admin problems, that is basically the same thing".

I won't quote the speech in full, but I think it is worth looking at. She repeatedly returns to potential harms of AI, but never - once the bare necessities of diplomatic politeness have been met - does she bother to return to catastrophic risks. Instead we have:

... make sure that the benefits of AI are shared equitably and to address predictable threats, including deep fakes, data privacy violations, and algorithmic discrimination. 

and

... establish a national safety reporting program on the unsafe use of AI in hospitals and medical facilities.  Tech companies will create new tools to help consumers discern if audio and visual content is AI-generated.  And AI developers will be required to submit the results of AI safety testing to the United States government for review. 

and

... protect workers’ rights, advance transparency, prevent discrimination, drive innovation in the public interest, and help build international rules and norms for the responsible use of AI. 

and

the wellbeing of their customers, the safety of our communities, and the stability of our democracies. 

and

... the principles of privacy, transparency, accountability, and consumer protection. 

My interpretation here, that she is basically rejecting AI safety, is not unusual. You can see for example Politico here calling it a 'rebuke' to Sunak and the focus on existential risks, and making clear that it was very deliberate.

Overall this actually makes me more pessimistic about Kamala. You clearly wrote this post with a soldier mindset and looked for the best evidence you could find to show that Kamala cared about existential risks, so if this speech, which I think basically suggests the opposite, is the best you could find then that seems like a pretty big negative update. In particular it seems worse than Trump, who gave a fairly clear explanation of one causal risk pathway - deepfakes causing a war - and he did this without being explicitly asked about existential risks and without a teleprompter. Are there any examples of Kamala, unprompted, bringing up in an interview the risk of AI causing a nuclear war, or taking over the human race?

I agree with your point that the record of the Biden Administration seems fairly good here, and she might continue out of status quo bias, continuity of staff, and so on. But in terms of her specific views she seems significantly less well aligned than Biden or Rishi were, and maybe less than Trump.


(I previously wrote about this here)

I assume organisations and groups considering signing up for this program have been doing donor due diligence on the legal and reputational risks of this funding opportunity

Perhaps I am being dense here but... do you literally mean this? Like you actually think it is more likely than not that most groups and orgs considering signing up have been doing legal due diligence? Given the relatively small amounts of dollars at play, and the fact that your org might get literally zero, I would expect very few orgs to have done any specific legal due diligence. Charitable organisations need to be prudent with their resources, and paying attorneys because of the possibility you might get a small grant that might have some unspecified legal issue - though you are not aware of any specific red flags - does not seem like a good use of donor resources to me.

A recent RCT in Liberia evaluated by IPA found strong effects ten years post-intervention, suggesting long-lasting reductions in crime.


Presumably referencing this from Innovations for Poverty Action:

Men offered CBT and cash were much less likely to commit thefts and robberies. In the long run, those in the therapy-only group reported 61 percent fewer crimes compared to men in the comparison group, while those who participated in STYL reported a 57 percent decrease in crimes committed. Interpolating, this translates to roughly 338 fewer crimes per subject over 10 years—$1.50 per crime avoided, given the low program cost.

Financial workers' careers are not directly affected very much by populist prejudice because 1) they provide a useful service and ultimately people pay for services and 2) they are employed by financial firms who do not share that prejudice. Likewise in short timeline worlds AI companies are probably providing a lot of services to the world, allowing them to pay high compensation, and AI engineers would have little reason to want to work elsewhere. So even if they are broadly disliked (which I doubt) they could still have very lucrative careers.

Obviously it's a different story if antipathy towards the financial sector gets large enough to lead to a communist takeover.

I believe the scale of unemployment could be much higher. E.g. 5% ->15% unemployment in 3 years.

During the financial crisis U-3 rose from 5% to 10% over about two years, suggesting that on this metric at least the two scenarios are comparable (2.5%/year then vs 3.3%/year in your example).
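Spelling out the arithmetic behind those figures:

$$\frac{10\% - 5\%}{2\ \text{years}} = 2.5\ \text{pp/year} \qquad \text{vs} \qquad \frac{15\% - 5\%}{3\ \text{years}} \approx 3.3\ \text{pp/year}$$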

and the [X] in question is extremely toxic to most everyone on the outside

That seems not true to me? Trump and Kamala are roughly equally popular.

It's like if I made a piece "the feminist case for Benito Mussolini" where I made clear that I am not a feminist but feminists should be supporting Mussolini.

I guess I don't share your intuition there. Obviously you should try to accurately represent feminist premises and draw sound inferences, and object-level criticisms would be very appropriate if you failed in this, but writing such a post itself seems fine to me if it passed the ideological Turing test. It reminds me of how students and lawyers often have to write arguments for something from the perspective of someone else, even if they don't believe it.

It seems very strange to me to think that this post is bad, but a word-for-word identical post would be good if the author self-identified as an EA. The title is meant to describe the content of the post, and the post is about how EA premises might support Trump.
