Owen Cotton-Barratt
This isn't about the ways they explicitly care about and work on global poverty. It's a holistic sense that the existence of extreme poverty in the world drives a feeling of fraughtness, nationalism, and poor decision-making in rich countries (cf. attitudes towards immigration today, and how past eras with more extreme poverty tended to have more war). If we could choose between developing AGI in a world without extreme poverty and one with it, I wouldn't be confident of a better outcome, but I do think it would be a meaningful edge (enough to bet on). I think the corresponding effects for factory farming are quite a bit weaker (though for sure there are still effects there).

I don't disagree with you that rich countries are likely to have disproportionate influence; but I think the presence or absence of extreme poverty in the world they're living in will have more influence on their implicit decision algorithms than you're suggesting. I think eliminating global poverty would have a significantly bigger effect in reducing the risk of AI catastrophe than eliminating factory farming would.

I do think I hadn't properly considered the impact of potentially short AI timelines on this question, and that pushes in favour of animals (since there's more room for value shifts to happen quickly than for economic fundamentals to shift quickly).

I think of this question mostly in terms of the trajectory I think this nudges us towards. It feels like there's something of a hierarchy of needs for humanity as a whole, and getting out of the zone where we have extreme poverty feels like the right first step, in a way that makes me feel more optimistic about wise decision processes being able to rise to the top thereafter.

I'm not certain what current spending looks like; that might make me change my mind here. (I think it's definitely right to start ramping up spending on animal welfare at some point before poverty is entirely eliminated.)

The judging process should be complete in the next few days. I expect we'll write to winners at the end of next week, although it's possible that will be delayed. A public announcement of the winners is likely to be a few weeks after that.

I think that, in some generality, scandals tend to be "because things aren't transparent enough": with greater transparency, issues people would be unhappy with would typically have been caught and responded to earlier. (My case had elements of "too transparent", but also definitely had elements of "not transparent enough".)

Anyway, I agree that this particular type of transparency wouldn't help in most cases. But it doesn't seem hard to imagine cases, at least in the abstract, where it would help somewhat? (E.g. imagine EA culture was pushing a particular lifestyle choice, and it then turned out that the owner of the biggest manufacturer in that industry was being invited to core EA events.)

I think a proper account of this needs to explain why there appear to be arguments for an anthropic shadow effect, why there appear to be arguments against one, and how to reconcile them.

In my view, Teru Thomas's paper is the first piece that succeeds in doing that.

(My historical position is something like: "I always found anthropic shadow arguments fishy, but didn't bottom that concern out." I found Toby Crisford's post helpful in highlighting what might be a reason not to expect anthropic shadow effects, but it left things feeling gnarly, so I wasn't confident in it -- again, without investing a great deal of time in trying to straighten it out. I missed Jessica Taylor's post, but looking at it now I think I would have felt similarly about it as I did about Toby Crisford's analysis.)

I'm nigh-certain that Wytham was never under the control of CEA's Executive Director.

I think that this litmus test is pretty weak, though, as a response to Arepo's suggestion that CEA was the primary beneficiary of Wytham. However, I also think that this suggestion is mistaken. I believe that CEA hosted <10% of the events at Wytham (maybe significantly less; I don't know precisely, and am giving 10% as a round threshold that I'm relatively confident using as an upper bound).

I agree with this. (While noting that some forms of scaffolding will work noticeably better with humans than others will, so there are still capabilities boosts to be had for organizations of humans from e.g. procedures and best practices.)

But if our plan were to align organizations of human-like entities that we were gradually training to be smarter, I'd be very into working out how to get value out of them by putting them into organizations during the training process, as I expect we'd learn important things about organizational design along the way (and this would better position us to ensure that the eventual organizations were pretty safe).

Oh I see. I definitely wasn't expecting anything that zoomed in. Rather, I was thinking maybe you had some abstract model which separated out capabilities-from-base-model from capabilities-from-scaffolding, and could explain something about the counterfactual of advancing the latter, and how it all interacted with safety.
