pretraining data safety; responsible AI/ML
It also worries me, in the context of marginal contributions, when some people (not all) start to treat "marginal" as a sentiment rather than as actual measurement (getting to know the areas in question, the resources available, the amount of spending, and what the actual needs and problems may be) when reasoning about cause prioritization and donations. Sentiment toward a cause area does not always mean the cause area actually received the attention or resources it was asking for.
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
I find it surprising when people (people in general, not EA-specific) do not seem to understand the moral perspective of "do no harm to other people". This is confusing to me, and I wonder what aspects or experiences contribute to some people understanding this while others do not.
Great initiative; thanks! Would "This is a Draft Amnesty Week draft." also apply to quick notes?
From some expressions of extinction risk I have observed, extinction risks might actually be suffering risks: the expectation of death may itself be a form of torture. All risks might be suffering risks.
Having read some other comments, career coaching from 80k sounds like a good suggestion!
Some other thoughts:
A few thoughts:
It is so sad to see "humans creating suffering for humans" amplified right now.