I'd be interested to reread that, but on my version p41 has the beginning of the 'civilisational virtues' section and end of 'looking to our past', and I can't see anything relevant. 

I may have forgotten something you said, but as I recall, the claim is largely that there'll be leftover knowledge and technology which will speed up the process. If so, I think it's highly optimistic to say it would be faster:

1) The blueprints left over by the previous civilisation will at best get us as far as they did, but to succeed we'll necessarily need to develop substantially more advanced technology than they had.

2) In practice they won't get us that far - a lot of modern technology is highly contingent on the exigencies of currently available resources. E.g. computers would presumably need a very different design in a world without access to cheap plastics.

3) The second time around isn't the end of the story - we might need to do this multiple times, creating a multiplicative drain on resources (e.g. if development is slowed by the absence of fossil fuels, we'll spend that much longer using up rock phosphorus). Lessons from previous civilisations, by contrast, will be at best additive, and probably not even that - we'll likely lose most of the technology of earlier civilisations when dissecting it to build the current one. So even if the second time around were faster, it would move us one civilisation closer to a state where it's impossibly slow.

Thanks Toby, that's good to know. As I recall, your discussion (much of which was in footnotes) focussed very strongly on effects that might be extinction-oriented, though, so I would be inclined to put more weight on your estimates of the probability of extinction than your estimates of indirect effects. 

E.g. a scenario you didn't discuss that seems plausible to me is approximately "reduced resource availability slows future civilisations' technical development enough that they have to spend a much greater period in the time of perils, and in practice become much less likely to ever successfully navigate through it" - even if we survive as a semitechnological species for hundreds of millions of years.

Very strong agree. The 'cons' in the above list are not clearly negatives from an overall 'make sure we actually do the most good, and don't fall into epistemic echo chambers' perspective.

I don't know if they're making a mistake - my question wasn't meant to be rhetorical.

I take your point about capacity constraints, but if no-one else is stepping up, it seems like it might be worth OP expanding their capacity.

I continue to think the EA movement systematically underestimates the x-riskiness of nonextinction events in general, and nuclear risk in particular, by ignoring much of the increased difficulty of becoming interstellar after the destruction or exploitation of key resources. I gave some example scenarios of this here (see also David's results) - not intended to be taken too seriously, but nonetheless incorporating what I think are significant factors that other longtermist work omits. E.g. in The Precipice, Ord defines x-risk very broadly, but when he comes to estimate the x-riskiness of 'conventional' GCRs, he discusses them almost entirely in terms of their probability of making humans immediately go extinct, which I suspect constitutes a tiny fraction of their EV loss.

You might be right, but that might also just be a failure of imagination. 20 years ago, I suspect many people would have assumed that by the time we got AI at the level of ChatGPT, it would basically be agentic - as I understand it, the Turing test was basically predicated on that idea, and ChatGPT has pretty much nailed it while having very few characteristics that we might recognise in an agent. I'm less sure, but also have the sense that people would have believed something similar about calculators before they appeared.

I'm not asserting that this is obviously the most likely outcome, just that I don't see convincing reasons for thinking it's extremely unlikely.

It doesn't seem too conceptually murky. You could imagine a super-advanced GPT which, when you ask it questions like 'how do I become world leader?', gives in-depth practical advice, but which never itself outputs anything other than token predictions.

nuclear security is getting almost no funding from the community, and perhaps only ~$30m of philanthropic funding in total.

Do we know why OP aren't doing more here? They could double that amount and it would barely register on their recent annual expenditures.

I'm curious which direction the disagree voters are disagreeing in - are they expressing the view that quantifying people like this at all is bad, or that, if you're going to do it, this is a more effective way?

For what it's worth, I sympathise with the need to make some hard prioritisation decisions - that's what EA is about, after all. Nonetheless, it seems like the choice to focus on top universities has been an insufficiently examined heuristic. After all, the following claim...

top universities are the places with the highest concentrations of people who ultimately have a very large influence on the world.

... is definitely false unless the only categorisation we're doing of people is 'the university they go to'. We can subdivide people into any categories we have data on, and while 'university' provides a convenient starting point for a young impact-focused organisation, it seems like a now-maturing impact-focused organisation should aspire to do better. 

For a simple example, staying focused on universities, most university departments receive their own individual rankings, which are also publicly available (I think the final score for the university is basically some weighted average of these, possibly with some extra factors thrown in). 

I'm partially motivated to write this comment because I know of someone who opted to go to the university with the better department for their subject, and has recently found out that, because that university has a lower overall ranking, they're formally downgraded by both immigration departments and EA orgs.

So it seems like EA orgs could do better simply by running a one-off project that pooled departmental rankings and prioritised based on those. It would probably be a reasonably substantial (but low-skill) one-off cost with a slight ongoing maintenance cost, but if 'finding the best future talent' is so important to EA orgs, it seems worth putting some ongoing effort into doing it better. [ETA - apparently there are some premade rankings that do this!]

This is only one trivial suggestion - I suspect there are many more sources of public data that could be taken into account to make a fairer and (which IMO is equivalent) more accurate prioritisation system. Since, as the OP points out, selecting for the top 100 universities is a form of strong de facto prejudice against people from countries that don't host one, it might also be worth adding some multiplier for people at the top departments in their country - and so on. There might be quantifiable considerations that have nothing to do with university choice.
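To make the suggestion above concrete, here is a minimal sketch of what a department-level score with a country multiplier might look like. Everything here is an illustrative assumption - the inverse-rank scoring, the multiplier value, and the function names are hypothetical, not any org's actual model - but it shows how small and open-sourceable such a weighting scheme could be.

```python
def department_score(dept_rank: int,
                     top_dept_in_country: bool,
                     country_multiplier: float = 1.25) -> float:
    """Convert a subject-specific departmental rank (1 = best globally)
    into a score in (0, 1].

    If the department is the best in a country that hosts no globally
    top-ranked department, apply a boost so candidates there aren't
    penalised purely by geography. The 1.25 multiplier is an arbitrary
    placeholder an org would want to calibrate against real data.
    """
    base = 1.0 / dept_rank  # simple inverse-rank scoring (assumption)
    if top_dept_in_country:
        base = min(1.0, base * country_multiplier)
    return base


# Candidate at the 3rd-best department globally for their subject:
print(department_score(3, top_dept_in_country=False))   # 0.333...

# Candidate at the top department in a country with no top-100 university:
print(department_score(40, top_dept_in_country=True))   # 0.03125
```

The point isn't these particular numbers; it's that the whole model fits in a dozen lines, so publishing it (as suggested below for transparency) would cost almost nothing.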

Having said that, if CEA or any other org does do something like this, I hope they'll

a) have the courage to make unpopular weighting decisions when the data clearly justifies them and

b) do it publicly, open sourcing their weighted model, so that anyone interested can see that the data does clearly justify it - hopefully avoiding another PELTIVgate.
