PabloAMC 🔸

Quantum algorithm scientist @ Xanadu.ai
1056 karma · Joined · Working (6-15 years) · Madrid, Spain

Bio

Participation
5

Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)

Comments
113

I believe the thing people would be most willing to change their behaviour for is a feeling of being in-group. E.g., when people know they are expected to do X, and that people around them will know if they do not. But that is very hard to implement.

Commenters are also confusing 'should we give PauseAI more money?' with 'would it be good if we paused frontier models tomorrow?'

I think it is reasonable to assume that we should only give PauseAI more money if (as necessary conditions) (1) we think pausing AI is desirable, and (2) PauseAI's methods are relatively likely to achieve that outcome, conditional on having the resources to do so. I would argue that many of the comments highlight that neither assumption is clear to many forum participants. In fact, I think it is reasonable to stress disagreement with (2) in particular.

This reminds me of quantum computers or fusion reactors — we can build them, but the economics are far from working.

Quantum research scientist here: I would actually argue that is a misleading model for quantum computing. The main issue right now is technical, not economic. We still have to figure out error correction, without which you are limited to roughly 1,000 gates. That is far too few to do anything interesting.
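A back-of-the-envelope way to see where that roughly 1,000-gate limit comes from (a minimal sketch; the 10^-3 per-gate error rate is an assumed figure, typical of current hardware, not something stated in the comment):

```python
import math

# With a per-gate error rate p, the probability that a circuit of N gates
# runs with no error at all is (1 - p)**N. Solving (1 - p)**N = 1/2 for N
# gives the circuit size at which half of all runs are already corrupted.
p = 1e-3  # assumed per-gate error rate, typical of current devices

n_half = math.log(0.5) / math.log(1 - p)
print(f"Gates before success probability drops to 50%: {n_half:.0f}")
# ~693 gates, i.e. on the order of 1,000, matching the estimate above.
```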

But they also gave $0.5 million to research, which is roughly 14%.

I would say they also do a fair amount to help foster an alternative protein market; see e.g. the $1 million in Science and Technology spending (https://animalcharityevaluators.org/charity-review/the-good-food-institute/2021-nov/), and they also have (or had) a research grant program (https://gfi.org/wp-content/uploads/2023/01/Research-Grant-Program-RFP-2023.pdf).

Hi! I wonder if there is a reason why all recommendations are in the area of outreach/advocacy (with the exception of wild animal welfare). The Good Food Institute, which works on research and development, used to be recommended by ACE, but it is no longer recommended. I am curious about why this might be the case, though perhaps it is simply that other organizations have more pressing funding needs.

I tend to dislike treating all AI policy as equal: the type of AI policy that affects AI safety is unlikely to represent a significant burden when developing frontier models. Thus, reducing red tape on AI more broadly might actually be pretty positive.

Actually, something I am confused about is whether the AI academics are counted per person-year, as the technical researchers in the various fields are.

Hi there! Some minor feedback on the webpage: instead of starting with the causes, I'd argue you should start with the value proposition: "your euro goes further", or something along those lines. You may want to look at ayudaefectiva.org for an example. Congratulations on the new org!

Thanks, Chris, that's very much true. I've clarified that I meant donations.
