
GideonF

1862 karma

Bio


Independent researcher on SRM and GCR/XRisk and on pluralisms in existential risk studies

How I can help others

Reach out to me if you have questions about SRM/Solar geoengineering

Comments
178

I assume this is an accidental misspelling of Quakerism


There seems to be this belief that arthropod welfare is some ridiculous idea only justified by extreme utilitarian calculations, and that loads of EA animal welfare money goes to it at the expense of many other things, and this just seems really wrong to me. Firstly, arthropods hardly get any money at all; they are possibly the most neglected, and certainly amongst the most neglected, areas of animal welfare. Secondly, the argument for arthropod welfare is essentially exactly the same as your classic anti-speciesist arguments: there aren't morally relevant differences between arthropods and other animals that justify not giving equal consideration to their interests (or, if you want to be non-utilitarian, to them). Insects can feel pain (or at least the evidence is probably strong enough that they would pass the bar of sentience under UK law) and have other sentient experiences, so why would we not care about their welfare? Indeed, non-utilitarian philosophers also take this idea seriously: Christine Korsgaard, one of the most prominent Kantian philosophers today, sees insects as part of the circle of animals that are under moral consideration, and Nussbaum's capabilities approach is restricted to sentient animals, which I think we have good reason to believe includes insects. Many insects seem to have potentially rich inner lives, and have things that go well and badly for them, things they strive to do, feelings of pain, etc. What principled reason could we give for their exclusion that wouldn't be objectionably speciesist? Also, all arthropod welfare work at present is about farmed animals; those farmed animals just happen to be arthropods!

Some useful practical ideas that could emerge:

  • Inform what welfare requirements ought to be put into law when farming insects
  • Inform and lobby the insect farming industry to protect these welfare requirements (eg corporate campaigns); do this in a similar way to how decapod welfare research has informed the work of the Shrimp Welfare Project
  • Understand the impacts of pesticides on insect welfare, and use this to lobby for pesticide substitutes
  • Improve the evidence base on insect sentience such that insects can be incorporated into law (although I think the evidence is probably at least as strong as for decapods, which are already recognised as sentient under UK law).

Insect suffering is here now and real, and there are a lot of practical things we could do about it; dismissing it as 'head in the cloud philosophers' seems misguided to me.

I think it's probably important to note that some people (i.e. me) do in fact think a unilateral pause by one of the major players (e.g. USA, China, UK, EU) may actually be pretty effective if done in the right way with the right messaging (likely to be useful in pushing towards a coordinated or uncoordinated global pause). Particularly if the US paused, I could very much see this starting a chain reaction.

I think this is untrue with regard to animal protests. My impression is that a decently significant percentage of EA people working on animals have participated in protests.

As another former fellow and research manager (climate change), I find this perhaps a bit of a strange justification.

The infrastructure is here - similar to Moritz's point, whilst Cambridge clearly has very strong AI infrastructure, the comparative advantage of Cambridge over any other location would, at least to my mind, be the fact that it has always been a place of collaboration across different cause areas and of consideration of the intersections and synergies involved (i.e. through CSER). It strikes me that other locales, such as London (which probably has one of the highest concentrations of AI Governance talent in the world), may in fact have been a better location than Cambridge. The idea that Cambridge is best suited to purely AI work seems surprising, when many fellows (me included) commented on the usefulness of having people from lots of different cause areas around, and the events we managed to organise (largely thanks to the Cambridge location) were mostly non-AI yet got good attendance across the cause areas.

Success of AI-safety alumni - similar to Moritz, I remain skeptical of this point (there is a closely related point which I probably endorse, which I will discuss later). It doesn't seem obvious that, when accounting for career level and whether participants were currently in education, AI safety actually scores better. Firstly, you have the problem of differing sample sizes. Take climate change: there have only been 7 climate change fellows (5 of whom were last summer, and of those, depending on how you judge it, only 3 have been available for job opportunities for more than 3 months after the fellowship), so the sample size is much smaller than for AI Safety and Governance (and they have achieved a lot in that time). It's also, ironically, not clear that the AI Safety and Governance cause areas have been more successful on the metric of 'engaging in AI Safety projects'; for example, 75% of one of the non-AI cause areas' fellows from 2022 are currently employed in, or have offers for PhDs in, AI XRisk-related projects, which seems a similar rate of success to AI in 2022.

I think the bigger thing that acts in favour of making it AI-focused is that it is much easier for junior people to get jobs or internships in AI Safety and Governance than in XRisk-focused work in some other cause areas; there are simply more roles available for talented junior people that are clearly XRisk-related. This might be one clear reason to make ERA about AI. However, whilst I mostly buy this argument, it's not 100% clear to me that this means counterfactual impact is higher. Many of the people entering the AI safety part of the programme may have gone on to fill these roles anyway (I know of something similar being the case with a few rejected applicants), or the person they got the role over may have been only marginally worse. Whereas, for some of the other cause areas, the participants leaned less XRisk-y by background, so ERA's counterfactual impact may be stronger, although it also may be higher variance. On balance, I think this does seem to support the AI switch, but I am by no means sure of this.

It seems that the successful opposition to previous technologies was indeed explicitly against those technologies, so I'm not sure the softening of the message you suggest is actually necessarily a good idea. @charlieh943's recent case study into GM crops highlighted some of this (https://forum.effectivealtruism.org/posts/6jxrzk99eEjsBxoMA/go-mobilize-lessons-from-gm-protests-for-pausing-ai - he suggests emphasising the injustice of the technology might be good); anti-SRM activists have been explicitly against SRM (https://www.saamicouncil.net/news-archive/support-the-indigenous-voices-call-on-harvard-to-shut-down-the-scopex-project), anti-nuclear activists are explicitly against nuclear energy, and so on. Essentially, I'm just unconvinced that 'it's bad politics' is necessarily supported by the case studies that are most relevant to AI.

Nonetheless, I think there are useful points here about what concrete demands could look like, who useful allies could be, and what more diversified tactics could look like. Certainly, a call for a moratorium is not necessarily the only thing that could be useful in pushing towards a pause. I also think you make a fair point that a 'pause' might not be the best message for people to rally behind, although I reject the opposition to it. Like @charlieh943, I think emphasising injustice may be one good message that can be rallied around. I also think a more general 'this technology is dangerous and allowing companies to make it is dangerous' may be a useful rallying message, which I have argued for in the past: https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different

I feel that in a number of areas this post relies on AI being constructed/securitised in ways that seem contradictory to me. (By 'constructed', I am referring to the way the technology is understood, perceived and anticipated, what narratives it fits into and how we understand it as a social object. By 'securitised', I mean brought into a limited policy discourse centred around national security, one that justifies the use of extraordinary measures (e.g. mass surveillance or conflict) and is concerned narrowly with combatting the existential threat to the state, which is roughly equal to the government, the state's territory and its society.)

For example, you claim that hardware would be unlikely to be part of any pause effort, which would imply that AI is constructed as important but not necessarily exceptional (perhaps akin to climate change). This is also likely what would allow companies to easily relocate without major issues. You then claim that international tensions and conflict would likely occur over the pause, which would imply securitisation thorough enough that breaching a pause would be considered a sufficient threat to national security that conflict could be countenanced; therefore exceptional measures to combat the existential threat are entirely justified (perhaps akin to nuclear weapons, or even more severe). Many of your claims of what is 'likely' seem to oscillate between these two conditions, which in a single jurisdiction seem unlikely to occur simultaneously. You then need a third construction of AI as a technology powerful and important enough to your country to risk conflict with the country that has thoroughly securitised it. Similarly, there must be powerful elements in the paused country that also believe it is a super-important technology that can be very useful, despite its thorough securitisation (or because of it; I don't wish to present securitisation as necessarily safe or good! Indeed, the links to military development, which could be facilitated by a pause, may be very dangerous indeed).

You may argue back on two points. The first is that whilst all the points couldn't occur simultaneously, they are all plausible; here I agree, but then the confidence of your language would need to be toned down. The second is that these different constructions of AI may differ across jurisdictions, meaning that all of these outcomes are likely. This also seems unlikely, as countries are influenced by each other; narratives do spread, particularly in an interconnected world and particularly if they are held by powerful actors. Moreover, if powerful states were anywhere close to risking conflict over this, other economic or diplomatic measures would be utilised first, likely meaning the only countries that would continue to develop it would be those who construct it as super important (those who didn't would likely give in to the pressure). In a world where the US or China construct the AI pause as a vital matter of national security, middle-ground countries in their orbit allowing its development would not be countenanced.

I'm not saying a variety of constructions is not plausible. Nor am I saying that we necessarily fall to the extreme painted in the above paragraph (honestly this seems unlikely to me, but if we don't, then a pause by global cooperation seems more plausible). Rather, I am suggesting that, as they stand, your 'likely outcomes' are, taken together, very unlikely to happen, as they rely on different worlds from one another.

I think the most likely explanation is that on a post like this the downvote vs disagree-vote distinction isn't very strong. It's a suggestions thread, so one would upvote the suggestions one likes most and downvote those one likes least (to affect visibility). If this is the case, I think it's pretty fair, to be honest.

If not, then I can only posit a few potential reasons, but these all seem bad enough to me that I would assume the above is true:

  • People think 80K platforming people who think climate change could contribute to XRisk would be actively harmful (eg by distracting people from more important problems)
  • People think 80K platforming Luke (due to his criticism of EA- which I assume they think is wrong or bad faith) would be actively harmful, so it shouldn't be considered
  • People think having a podcast specifically talking about what EA gets wrong about XRisk would be actively harmful (perhaps it would turn newbies off, so we shouldn't have it)
  • People think suggesting Luke is trolling, because they think there is no chance that 80K would platform him (this would feel very uncharitable towards 80K imo)
Answer by GideonF

Christine Korsgaard on Kantian approaches to animal welfare / her recent-ish book 'Fellow Creatures'
