David T
469 karma · Joined Dec 2023
Comments (80)
Strongly agree. A discount rate calculator with accompanying explanations could emphasize that - where practical - giving money away now is better because of inflation and compounding returns on saving lives or solving problems (and uncertainty about whether you'll stick to your pledge!), while letting people trade that off against the reality that they'd have a lot more disposable income after paying off significant interest on loans/mortgages, or after realistic near-term career progression.

(obviously discount rate calculation isn't for everyone and isn't something I'd put on the main page, but for some people it's illuminating)
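To illustrate the kind of trade-off such a calculator would surface, here's a minimal sketch in Python. The parameter names, default rates, and follow-through probability are all hypothetical assumptions chosen for illustration, not figures from any actual GWWC tool or study.

```python
# A minimal sketch of a give-now vs give-later comparison.
# All default values below are hypothetical assumptions for illustration only.

def impact_of_giving_later(amount, years, investment_return=0.05,
                           impact_discount=0.07, follow_through=0.9):
    """Impact (in today's terms) of investing `amount` for `years`, then donating.

    investment_return: assumed annual return if you invest instead of donating now.
    impact_discount:   assumed annual rate at which a donation's impact shrinks,
                       reflecting compounding returns on problems solved earlier.
    follow_through:    assumed probability you actually keep the pledge.
    """
    grown = amount * (1 + investment_return) ** years
    discounted = grown / (1 + impact_discount) ** years
    return follow_through * discounted


def impact_of_giving_now(amount):
    """Baseline: donate the full amount today."""
    return amount


if __name__ == "__main__":
    donation = 1000
    for years in (1, 5, 10):
        later = impact_of_giving_later(donation, years)
        print(f"£{donation} given in {years:2d} years ≈ £{later:,.0f} of today's impact "
              f"(vs £{impact_of_giving_now(donation):,.0f} given now)")
```

With these illustrative numbers the impact discount outpaces investment returns, so giving now wins; someone facing high mortgage interest or steep expected career progression could plug in their own rates and reach the opposite conclusion, which is exactly the trade-off the calculator would make explicit.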

I agree that movement building strategy may vary in specific fields, including your specific examples (and I hinted as much about recruitment in my first post!), so I don't think our differences are irreconcilable!

But your original post conveyed - perhaps more strongly than you intended - the sentiment that it didn't make much sense to try to persuade people [who in most cases don't even hear about EA] outside a small number of elite universities to pledge or do direct work [versus the CEA tweaking priorities to direct even more funding to generally already well-established and well-funded student orgs]. If the context were that for every student at $randomuni asked to pledge, someone at Oxford never heard about EA, maybe there would be some truth to the claim that it made less sense to fund outreach to them, but I don't think that represents the reality at all. If anything, the OP and various others have suggested that the funding available in some circumstances is sufficient to have a negative impact on incentives; on the other hand, there's probably a high return to reaching people who would otherwise never have heard of EA and getting them to take Giving Pledges or to contemplate the many areas of direct work that don't need elite academic credentials.

As for taking the opposite stance and actively trying to spread funding to more universities or workplaces, I recognise there are many other challenges to incubating organizations without the people and institutions already in place and don't claim to have a solution, but I suspect it would be net positive, and generally more net positive than lowering the funding bar for groups already best positioned to access EA resources. But tbh my comment was less making a particular case for funding and more pushing back on the negative framing of the idea of funding outreach to "poorer students" in a subthread provoked by someone talking about how the original decision was a setback to their attempts to defend the movement against accusations of elitism.

(I also broadly endorse the third David's interpretation of my argument, FWIW :D)

I think it's inaccurate to claim that only people at top universities are likely to have outsized influence, or to dismiss everyone else as "poorer students" whom it "doesn't make as much sense" to encourage to engage in altruistic activity. The university Sorting Hat really isn't that good.

And more specifically, from a movement building perspective it usually makes sense to prioritise reaching more people over ensuring a small group of [already advantaged] people have access to particularly lavish allowances. Elite university students' ability to achieve outsized impact later in life probably isn't closely linked to the size of the stipend the current organizer of their well-established EA group can claim from central funding bodies, whereas actually having some outreach at other universities is going to have more impact, even if fewer of those students achieve outsized impact and the median earning-to-give amounts are a little lower.

Edit: not really sure what's so controversial here, though I've amended the quote just in case it's because my representation of DavidNash's original comment was considered uncharitable. 

Whilst I sympathise with the desire to see more of this kind of information, particularly given that EA jobs are notoriously competitive, I'd be concerned that the signals sent by raw numbers might be misleading and deter suitable applicants.

The classic example is LinkedIn, which does display applicant numbers. Having seen the other side of LinkedIn job ads, I'm well aware that a job with 30+ applicants probably has about 25 who one-click apply to everything vaguely related to their field, even when they lack basic qualifying criteria such as visa status. If I hadn't seen that side of things, I'd probably be deterred from applying because I didn't meet a bullet point or two, when actually I'd probably be in the top 10% of qualified candidates.

What I think would be valuable to some people is for organizations with relatively complex processes involving exercises and application forms to indicate roughly how many people completed the exercises for similar jobs in the past (as a marginal candidate, I'm a lot more likely to fancy my chances of standing out if it's 10 than if it's 70). But that's information that helps before people devote considerable time to a process, rather than something to search for.

I think it's elitist (and inaccurate) to assume that only attendees of a small number of elite universities will have the future funds to give away. 

And ultimately it's not a straight decision between whether to fund a student group at Oxford or one at Oxford Brookes; it's a decision between paying student society leaders at a small number of target universities so much they feel uncomfortable about it and funding expensive retreats for them, or spreading the movement building budget more widely to support outreach in more places (that's not to suggest there aren't other challenges to setting up student groups in places that don't have an existing community). I can see the argument that focusing resources on a handful of courses at a handful of elite universities makes sense for recruitment into a small number of highly specialised positions, but not for maximising future fundraising capacity.

I'd class those comments as mostly a disagreement around ends. The emphasis on not getting the credit from his own support base, and on Republicans not wanting to talk about it, is the most revealing. A sizeable fraction of his most committed support base are radically antivax, to the point that there was audible booing at his own rally when he recommended they get the vaccine, even after he'd very carefully worded it in terms of their "freedoms". It's less a narrow disagreement about a specific layer of Biden bureaucracy and more a recognition that his base sees less government involvement in healthcare, less reaction to future pandemics, and in some cases even rejection of evidence-based medicine as valuable ends in themselves. And whilst he clearly doesn't reject evidence-based medicine himself, above all Trump loves adulation from that fanbase.

Either way, his position is quite different from that of those EAs who see pandemic preparedness as an extremely important permanent priority rather than a reactive thing.

And I can't believe it needs saying, but a "Torres exception" is not a good idea here. Even completely disregarding Torres' own feelings, there are a lot of people who are not Emile Torres whom those lines of attack stigmatise.

Also, when the fundamental complaint about someone is that they repeatedly make uncharitable and probably false claims about people's true motivations and engage in odd personal attacks on people they might legitimately be unimpressed by, adding a drive-by pop-diagnosis of a mental health condition and a nasty observation about their gender identity doesn't strengthen that complaint; it just sets off the irony meter...

I don't disagree that these are also factors, but if tech leaders are pretty openly stating they want the regulation to happen and they want to guide the regulators, I think it's accurate to say that they're currently more motivated to achieve regulatory capture (for whatever reason) than they are to ensure that x-risk concerns don't become a powerful political argument as suggested by the OP, which was the fairly modest claim I made. 

(Obviously far more explicit and cynical claims about, say, Sam Altman's intentions in founding OpenAI exist, but the point I made doesn't rest on them)

Because their leaders are openly enthusiastic about AI regulation and saying things like "better that the standard is set by American companies that can work with our government to shape these models on important issues" or "we need a referee", rather than arguing that their tech is too far away from AGI to need any regulation or arguing the risks of AI are greatly exaggerated, as you might expect if they saw AI safety lobbying as a threat rather than an opportunity. 

I'm not sure that I buy that critics lack motivation. At least in the space of AI, there will be (and already are) people with immense financial incentive to ensure that x-risk concerns don't become very politically powerful.

The current situation still feels like one where the incentives are relatively small compared with the incentive to create the appearance that the existence of anthropogenic climate change is still uncertain. Over decades, advocates have succeeded in actually reducing fossil fuel consumption in many countries as well as securing less-likely-to-be-honoured commitments to Net Zero, and direct and indirect energy costs are a significant part of everyone's household budget.

Not to mention that Big Tech companies whose business plans might be most threatened by "AI pause" advocacy are currently seeing more general "AI safety" arguments as an opportunity to achieve regulatory capture...
