Hi Sonia,
You may not have the whole picture.
Quoting from a first-person account published by Vox (source linked below):
In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.
What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
Source: https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy
Absolutely true that it was ultimately not used, and that AI safety is a higher priority for leadership. But proposals like this, especially from CEA organizers, are condescending and disrespectful, and not an appropriate way to treat fellow EAs working on climate change, poverty, animal welfare, or other important cause areas.
The recent fixation of certain EAs on AI / longtermism renders everything else less valuable by comparison, and treating EAs who don't work on AI safety as "NPCs" (people who ultimately don't matter) is completely unacceptable.