MichaelStJules

I'm writing a quick piece on the scale, in case you (or anyone else) are interested in giving feedback before I post it (probably next week).

Well fuck, I guess this probably explains it. Yao & Li, 2018:

Mandarin fish have unusual feeding habits. The fish only eat live fish and shrimps, and do not consume dead prey or artificial diets during all lifecycle stages (Chiang 1959; Li et al. 2014a; Yao and Liang 2015). In nature it is completely carnivorous, and has been found to capture live fry of other fish species from the first feeding stages (Chiang 1959).

This also makes substitutes for fish fry seem not very promising; they'd probably have to be other animals too. But maybe we could find some that matter much less per kg.

Otherwise, we'd probably just want to reduce mandarin fish production, which could be hard to target specifically, especially since production is concentrated in China.


Some different fry numbers in Hu et al., 2021:

According to data from the China Fishery Statistical Yearbook, the fry number of freshwater fish increased from 59.51 billion in 1981 to 1.252 trillion in 2019, while the fry number of marine fish increased from 167 million in 1996 to 11.44 billion in 2019 [6,7].
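
For a sense of the growth rates these endpoints imply, here's a minimal sketch (assuming smooth compound growth between the two endpoints, which is a simplification):

```python
# Implied compound annual growth rates (CAGR) from the fry numbers quoted above.
# Uses the two endpoints only and assumes smooth compound growth between them.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoints."""
    return (end / start) ** (1 / years) - 1

freshwater = cagr(59.51e9, 1.252e12, 2019 - 1981)
marine = cagr(167e6, 11.44e9, 2019 - 1996)

print(f"Freshwater fry: {freshwater:.1%}/year")  # ~8.3%/year
print(f"Marine fry: {marine:.1%}/year")          # ~20.2%/year
```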


Appendix: huge numbers of juveniles raised for an unknown reason


I suspect they're raised as feed for other farmed fish (and maybe other farmed aquatic species). Maybe they could also be released into wild fisheries as feed for wild-caught aquatic animals.

From Li and Xia, 2018, first some updated numbers:

Aquaculture production was 28.02 million tonnes in 2013, and the corresponding production of artificially propagated fry was 1,914.3 billion.

And they have a figure (not reproduced here).

They have a section "6.4.4.2 Fry Production of Prey Fish as Feed for Predatory Fish". They specifically discuss mud carp fry as feed for mandarin fish (Siniperca chuatsi). They write:

6.4.4.2.3 The Relationship Between the Production of Mud Carp Fry and Mandarin Fish

Mud carp is the favorite prey fish of mandarin fish. The production of mandarin fish has increased in relation to the growth of mud carp culture. The production of mud carp per growth cycle is about 7500 kg/ha, while mandarin fish was about 6000–7500 kg/ha, and feed coefficient was about 1:3–4. When the feed coefficient is 3.5, the production of mandarin fish was 284,780 tonnes in 2013, and required a prey fish production of mud carp of about 996,730 tonnes. Almost all prey for mandarin fish is provided through artificial propagation. The production of mandarin fish has increased over the years, and is significantly positively correlated with fry availability (Figure 6.4.9) (Pearson correlation = 0.70, P < 0.01) (China Fishery Statistical Yearbook 1996–2014). As a high‐quality food for mandarin fish, the variation in production of mud carp is directly related to the aquaculture scale of mandarin fish, as shown from the example in Guangdong Province (Figure 6.4.10). Here again, there is a significant linear correlation between the production of mandarin fish and mud carp (y = 0.348x – 48057, R² = 0.765, P < 0.05) (Yao 1999).

(Though looking at Figures 6.4.9 and 6.4.10, fry production and mandarin fish production don't look very closely related, and the correlation could just reflect aquaculture as a whole going up.)
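
For concreteness, here's the feed-coefficient arithmetic from the quoted passage as a minimal sketch (the 2013 production figure and the 3.5 feed coefficient are from Li and Xia, 2018; the rest is just multiplication):

```python
# Tonnes of live prey fish (mud carp) required for a given mandarin fish
# harvest, using the feed coefficient (kg of prey per kg of mandarin fish).

def prey_required(mandarin_tonnes: float, feed_coefficient: float) -> float:
    """Prey fish production needed to support a given mandarin fish production."""
    return mandarin_tonnes * feed_coefficient

mandarin_2013 = 284_780   # tonnes (Li and Xia, 2018)
feed_coefficient = 3.5    # kg mud carp per kg mandarin fish

print(prey_required(mandarin_2013, feed_coefficient))  # 996730.0 tonnes, matching the quote
```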


https://thefishsite.com/articles/cultured-aquatic-species-mandarin-fish, https://www.fao.org/fishery/affris/species-profiles/mandarin-fish/faqs/en/, and https://www.fao.org/fishery/affris/species-profiles/mandarin-fish/natural-food-and-feeding-habits/en/ also discuss fish fry fed live to mandarin fish.

Ah, I definitely saw your post before, but it looks like I forgot about it. Thanks for the reminder.

(I started building the table here for another piece that uses it, and decided to spin off a separate piece with the table. That next post should be up in the next few days.)

I guess I'll add humans for comparison, like you.

Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.

If we go extinct, they won't exist, so they won't be real people or have any valid moral claims. I also consider compassion, by definition, to be concerned with suffering, harms or losses. People who don't come to exist don't experience suffering or harm and have lost nothing. They also don't experience injustice.

Longtermists tend to seem focused on ensuring future moral patients exist, e.g. through extinction risk reduction. But, as above, ensuring moral patients come to exist is not a matter of compassion or justice for those moral patients. Still, such efforts may help (or harm!) other moral patients, including other humans who would exist anyway, animals, aliens or artificial sentience.

On the other hand, longtermism is still compatible with a primary concern for compassion or justice, including through asymmetric person-affecting views and wide person-affecting views (e.g. Thomas, 2019; these would probably focus on s-risks and quality improvements), negative utilitarianism (focus on s-risks) and perhaps even narrow person-affecting views. However, utilitarian versions of most of these views still seem prone, at least in principle, to endorsing killing everyone to replace us and our descendants with better-off individuals, even if each of us and our descendants would have had an apparently good life and would object. I think some (symmetric and perhaps asymmetric) narrow person-affecting views can avoid this, and maybe these are the ones that fit best with compassion and justice. See my post here.

That being said, empathy could mean more than just compassion or justice and could endorse bringing happy people into existence for their own sake, e.g. Carlsmith, 2021. I disagree that we should create people for their own sake, though, and my intuitions are person-affecting.

Other issues people have with longtermism are fanaticism and ambiguity; the probability that any individual averts an existential catastrophe is usually quite low at best (e.g. 1 in a million), and the numbers are also pretty speculative.

He could have said different nice things or just left out the bit about safety. Do you think he's straightforwardly lying to the public about what he believes?

Or maybe he's just being (probably knowingly) misleading? "confident that OpenAI will build AGI that is both safe and beneficial" might mean 95% credence in safe and beneficial AGI from OpenAI, and 5% that it kills everyone.

Worth noting he said he's "confident that OpenAI will build AGI that is both safe and beneficial under [current leadership]".

There are descriptions of and opinions on some animal welfare certifications here and here. It seems Animal Welfare Approved, Certified Humane and Animal Welfare Certified (level 5 and up, maybe level 4, too?) should be pretty good.

GAP was funded by Open Phil for its Animal Welfare Certified program back in 2016, and this was one of the first grants Open Phil made in farm animal welfare.

Bitcoin is only up around 20% from its peaks in March and November 2021. It seems far riskier in general than just Nvidia (or SMH) when you look over longer time frames. Nvidia has been hit hard in the past, but not as often, nor usually as hard.

Smaller cap cryptocurrencies are even riskier.
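
One rough way to compare that kind of long-horizon risk is maximum drawdown; here's a minimal sketch (the price series below are hypothetical placeholders, not real Bitcoin or Nvidia data):

```python
# Maximum drawdown: the largest peak-to-trough fractional loss in a price series.
# A crude but common proxy for how badly an asset can get hit over long horizons.

def max_drawdown(prices: list[float]) -> float:
    """Largest fractional peak-to-trough decline (returned as a negative number)."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = min(worst, (p - peak) / peak)
    return worst

# Hypothetical illustration only:
crypto_like = [100, 160, 60, 140, 45, 120]
stock_like = [100, 130, 90, 150, 120, 180]
print(max_drawdown(crypto_like))  # -0.71875: deeper drawdowns
print(max_drawdown(stock_like))   # ~-0.3077: milder drawdowns
```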

I also think the case for outperformance of crypto in general is much weaker than for AI stocks, and it has gotten weaker as institutional investment has increased, which should increase market efficiency. I think the case for crypto has mostly been greater fool theory (and partly its use as an inflation hedge), because it's not a formally productive asset and its actual uses seem overstated to me. And even if crypto were better, you could substantially increase (risk-adjusted) returns by also including AI stocks in your portfolio.

I'm less sure about private investments in general, and they need to be judged individually.

I don't really see why your point about the S&P500 should matter. If I buy 95% AI stocks and 5% other stuff and don't rebalance between them, AI will also have a relatively smaller share if it does relatively badly, e.g. due to regulation.

Maybe there's a sense in which market cap-weighting from across sectors and without specifically overweighting AI/tech is more "neutral", but it really just means deferring to market expectations, market time discount rates and market risk attitudes, which could differ from your own. Equal-weighting (securities above a certain market cap or asset classes) and rebalancing to maintain equal weights seems "more neutral", but also pretty arbitrary and probably worse for risk-adjusted returns.
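
To make the equal-weighting-and-rebalancing idea concrete, a minimal sketch (the holdings and dollar amounts are hypothetical):

```python
# Rebalancing back to equal weights: sell what has grown past its target share,
# buy what has fallen below it.

def equal_weight_targets(values: dict[str, float]) -> dict[str, float]:
    """Target dollar value per holding so all holdings end up equally weighted."""
    target = sum(values.values()) / len(values)
    return {name: target for name in values}

# Hypothetical portfolio after AI stocks rallied:
holdings = {"AI_basket": 6000.0, "other_equities": 3000.0, "bonds": 1000.0}
targets = equal_weight_targets(holdings)
trades = {name: targets[name] - holdings[name] for name in holdings}
print(trades)  # AI_basket: ~-2666.67, other_equities: ~+333.33, bonds: ~+2333.33
```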

Furthermore, I can increase my absolute exposure to AI with leverage on the S&P500, like call options, margin or leveraged ETFs. Maybe I assume non-AI stocks will do roughly neutrally or in line with the past, or that the market as a whole will if AI progress slows. Then leverage on the S&P500 could really just be an AI play.
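
A minimal sketch of that logic (all numbers here are illustrative assumptions; the ~30% big-tech weight echoes the S&P500 figure mentioned further down):

```python
# If the non-AI part of an index is assumed roughly flat, a leveraged index
# position behaves approximately like a direct AI bet.

def levered_return(leverage: float, ai_weight: float, ai_return: float,
                   non_ai_return: float = 0.0, financing_rate: float = 0.0) -> float:
    """Approximate return on equity for a leveraged index position."""
    index_return = ai_weight * ai_return + (1 - ai_weight) * non_ai_return
    return leverage * index_return - (leverage - 1) * financing_rate

# Illustrative: 2x leverage, 30% AI weight, AI +50%, non-AI flat, 5% financing.
print(levered_return(2.0, 0.30, 0.50))                       # 0.30 with free leverage
print(levered_return(2.0, 0.30, 0.50, financing_rate=0.05))  # 0.25 after financing
# Effective AI exposure: leverage * ai_weight = 60% of equity.
```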

How much impact do you expect such a COI to have compared to the extra potential donations?

For reference:

  1. You could have more than doubled your investments over the past year by investing in the right AI companies, e.g. Nvidia, which seemed like a predictably good investment based on market share and % exposure to AI, and is up +200% (3x). SMH is up +77%.
  2. Even the S&P500 is around 30% Microsoft, Apple (maybe not much of an AI play now), Nvidia, Amazon, Meta, Google/Alphabet and Broadcom, and these big tech companies have driven most of its gains recently (e.g. this and this); a rough sketch of that contribution arithmetic follows below.
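
Below is a minimal sketch of the contribution arithmetic (the weights and returns are illustrative placeholders, not actual index data):

```python
# Per-holding contribution to an index's return: weight * return.
# Shows how a heavily weighted tech block can drive most of the index's gain.

def contributions(weights: dict[str, float], returns: dict[str, float]) -> dict[str, float]:
    """Each holding's contribution to the total index return."""
    return {name: weights[name] * returns[name] for name in weights}

# Illustrative only: ~30% big tech, 70% everything else.
weights = {"big_tech": 0.30, "rest_of_index": 0.70}
returns = {"big_tech": 0.60, "rest_of_index": 0.05}

contrib = contributions(weights, returns)
total = sum(contrib.values())
print(contrib)  # {'big_tech': 0.18, 'rest_of_index': 0.035}
print(f"big tech: {contrib['big_tech'] / total:.0%} of the gain")  # ~84%
```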

And how far do you go in recommending divestment from AI to avoid COIs?

  1. Do you think people should avoid the S&P500 because its exposure to AI companies is so high? (Maybe equal-weight ETFs, or specific ETFs missing these companies, or other asset classes.)
  2. Do you think people should short or buy put options on AI companies? That way, they'd be even more incentivized to see them do badly.