Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3220 karma · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (331)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), and beneficial policies in third-world countries, like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: several years later you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

Thinking about my point #3 some more (how do you launch a satellite after a nuclear war?), I realized that if you put me in charge of making a plan to DIY this (instead of lobbying the US military to do it for me, which would be my first choice), and if SpaceX also wasn't answering my calls to see if I could buy any surplus Starlinks...

You could do worse than partnering with Rocket Lab, a satellite and rocket company based in New Zealand, and developing the emergency satellite on their "Photon" platform (the design has flown before, and it's small enough to still be kinda cheap but big enough to generate much more power than a cubesat). Then Rocket Lab could launch their Electron rocket from New Zealand in the event of a nuclear war, and (in a real crisis like that) the whole company would help make sure the mission happened. The idea of partnering with someone rather than just buying a satellite is key, IMO, because then it's mostly THEIR end-of-the-world plan, and in a crisis it would benefit from their expertise / workforce.

I'd try to talk to the CEO and get him on board. This seems like the kind of flashy, Elon-esque, altruistic-in-a-sexy-way mission that could help make Rocket Lab seem "cool" and recruit eager mission-driven employees. (Rocket Lab's CEO currently has ambitions to do some similarly flashy missions, like sending their own probe to Venus.)

But this would definitely be more like a $30M project than a $300K project.

Kind of a funny selection effect going on here, where if you pick sufficiently promising / legible / successful orgs (like the Against Malaria Foundation), isn't that just funging against OpenPhil funding?  This leads me to want to upweight new and not-yet-proven orgs (like the several new AIM-incubated charities), plus things like PauseAI and Wild Animal Initiative that OpenPhil feels it can't fund for political reasons.  (The same argument would apply to invertebrate welfare, but I personally don't really believe in invertebrate welfare.  Sorry!)

I'm also somewhat saddened by the inevitable popularity-contest nature of the vote; I feel like people are picking orgs they've heard of and picking orgs that match their personal cause-prioritization "team" (global health vs x-risk vs animals).  I like the idea that EA should be experimental and exploratory, so (although I am a longtermist myself), I tried to further upweight some really interesting new cause areas that I just learned about while reading these various posts:
- Accion Transformadora's crime-reduction stuff seems like a promising new space to explore for potential effective interventions in medium-income countries.
- One Acre Fund is potentially neat, I'm into the idea of economic-growth-boosting interventions and this might be a good one.
- It's neat that Observatorio de Riesgos Catastroficos is doing a bunch of cool x-risk-related projects throughout Latin America; their nuclear-winter-resilience-planning stuff in Argentina and Brazil seems like a particularly well-placed bit of local lobbying/activism.

But alas, there can only be three top-three winners, so I ultimately spent my top votes on Team Popular Longtermist Stuff (Nucleic Acid Observatory, PauseAI, MATS) in the hopes that one of them, probably PauseAI, would become a winner.

(longtermist stuff)
1. Nucleic Acid Observatory
2. Observatorio de Riesgos Catastroficos
3. PauseAI
4. MATS

(interesting stuff in more niche cause areas, which I sadly doubt can actually win)
5. Accion Transformadora
6. One Acre Fund
7. Unjournal

(if longtermism loses across the board, I prefer wild animal welfare to invertebrate welfare)
8. Wild Animal Initiative
9. Faunalytics

I don't know anything about One Acre Fund in particular, but it seems plausible to me that a well-run intervention of this sort could potentially beat cash transfers (just as many Givewell-recommended charities do).

  • Increasing African agricultural productivity has been a big cause area for groups like the Bill & Melinda Gates Foundation for a long time.  Hannah Ritchie, of OurWorldInData, explains here why this cause seems so important -- it just seems kinda mathematically inevitable that if labor productivity doesn't improve, these regions will be trapped in poverty forever.  (But improving productivity seems really easy -- just use fertilizer, use better crop varieties, use better farming methods, etc.)  So this seems potentially similar to cash transfers, insofar as if we did cash transfers instead, we'd hope to see people spending a lot of the money on better agricultural inputs!
  • Notably, people who are into habitat / biodiversity preservation and fighting climate change really like the positive environmental externalities of improving agricultural productivity.  (The more productive the world's farmland gets, the less pressure there is to chop into jungle and farm more land.)  So if you are really into the environment, maybe those positive eco externalities make a focused intervention like this much more appealing than cash transfers, which are more about the benefits to the direct recipients and the local economy.
  • One could look at this as a kind of less-libertarian, more top-down alternative to cash transfers, which makes it look bad.  (Basically -- give people the cash, and wouldn't they end up making these agricultural improvements themselves eventually?  Wouldn't cash outperform, since central planning underperforms?)  But you could also look at it as a very pro-libertarian, economic-growth-oriented intervention designed to provide public goods and create stronger markets, which makes it look good.  (Hence all the emphasis on educating farmers to store crops and sell when prices are high, or preemptively transporting agricultural inputs to local villages where they can then be sold.  Through this lens I feel like "they're solving coordination problems and providing important information to farmers.  Of course a sufficiently well-run version of this charity has the potential to outperform cash!")  This is basically me rephrasing your second bullet point.
  • Just a feeling, but I think your first bullet point (loans are more efficient because the money is paid back) doesn't obviously make this more efficient than cash transfers.  (Maybe you are alluding to this with your use of "you believe".)  Yes, making loans is "cheaper than it first seems" because the money is paid back.  But giving cash transfers is also "better than it first seems" because the money (basically stimulus) has a multiplier effect as it percolates through the local economy.  Whether it's better for people to buy farming tools with cash they've been loaned (and then you get the money back and make more loans to more people who want to buy tools), versus cash they've been given (and then the cash percolates around the local economy and again other people make purchases), seems like a complicated macroeconomics question that might vary based on the local unemployment and inflation rates, etc.  It's not clear to me that one strategy is obviously better -- the toy model sketched just below shows how the comparison flips depending on what you assume.
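
To make that concrete, here's a minimal toy model of the two funding styles. Every parameter (repayment rates, the share of each transfer re-spent locally, the number of rounds) is a made-up illustrative assumption, not a claim about One Acre Fund's actual economics:

```python
# Toy comparison: re-lending repaid loans vs. the local-spending
# multiplier of a cash transfer. All parameters are hypothetical.

def loan_total(budget, repayment_rate, rounds):
    """Total value lent out when repaid money is re-lent each round."""
    total, fund = 0.0, budget
    for _ in range(rounds):
        total += fund
        fund *= repayment_rate  # only the repaid fraction can be re-lent
    return total

def cash_total(budget, local_spend_share, rounds):
    """Total local spending when each recipient re-spends a share locally."""
    total, money = 0.0, budget
    for _ in range(rounds):
        total += money
        money *= local_spend_share  # simple Keynesian-style multiplier
    return total

budget = 100.0
# One set of assumptions favors loans...
print(loan_total(budget, 0.9, 10), cash_total(budget, 0.6, 10))  # ~651 vs ~249
# ...another set favors cash.
print(loan_total(budget, 0.5, 10), cash_total(budget, 0.8, 10))  # ~200 vs ~446
```

The ranking depends entirely on the assumed repayment rate and local-spending share (and this ignores admin costs, inflation, and who captures the benefit), which is why I don't think "the money is paid back" settles the question by itself.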

But these are all just thoughts, of course -- I too would be curious if One Acre Fund has some real data they can share.

Hi!  Jackson Wagner here, former aerospace engineer -- I worked as a systems engineer at Xona Space Systems (which is trying to develop next-gen GPS technology, and has recently gotten involved in a military program to create a kind of backup for GPS).  I am also a big fan of the ALLFED concept.

Here are some thoughts on the emergency satellite concept mentioned -- basically, I think this is a bad idea!  I am sorry that this is a harsh and overly negative rant that just harps on one small detail of the post; I think the other ideas you mention are pretty good:

1. No way will you be able to build and launch a satellite for $300K??  Sure, if you are SpaceX, with all the world's most genius engineers, and you can amortize your satellite design costs over tens of thousands of identical Starlink copies, then maybe you can eventually get marginal satellite construction cost down to around $300K.  But for the rest of us mere mortals, designing and building individual satellites, that is around the price of building and launching a tiny cubesat (like the pair I helped build at my earlier job at a tiny Virginia company called SpaceQuest).

2. I'm pretty skeptical that a tiny cubesat would be able to talk directly to cellphones.  I thought direct-to-cell satellites were especially huge due to the need for large antennas.  Although I guess Lynk Global's satellites don't seem so big, and probably you can save on power when you're just transmitting the same data to everybody instead of trying to send and receive individual messages.  Still, I feel very skeptical that a minimum-viable cubesat will have enough power to do much of use (many cubesats can barely fit enough batteries to stay charged through eclipse!).  A back-of-envelope link budget, sketched after point 6 below, shows how thin the margins are.

3. How are you going to launch and operate this satellite amid a global crisis??  Consider that even today's normal cubesat projects, happening in a totally benign geopolitical / economic environment, have something like a 1/3 rate of instant, "dead on arrival" mission failure (ie the ground crew is never able to contact the cubesat after deployment).  In the aftermath of nuclear war or other worldwide societal collapse, you are going to have infinitely more challenges than the typical university cubesat team.  Many ground stations will be offline because they're located in countries that have collapsed into anarchy!  Who will be launching rockets, aside from perhaps the remnants of the world's militaries?  Your satellite's intended orbit will be much more radioactive, so failure rates of components will be much higher!  Basically, space is hard, and your satellite is not going to work.  At the very least, you'd want to make three satellites -- one to launch and test, another to keep in underground storage for a real disaster (maybe buy a rocket, like a Rocket Lab Electron, to go with it!), and probably a spare.
(If the disaster is local rather than global, then you'd have an easier time launching from eg the USA to help address a famine in Africa.  But in this scenario you don't need a satellite as badly -- militaries can airdrop leaflets, neighboring regions can set up radio stations, we can ship food aid around on boats, etc.)

4. Are you going to get special permission from all the world's governments and cell-network providers so that you can just broadcast texts to anyone on earth at any time?  Getting FCC-licensed for the right frequencies, making partnerships with all the cell-tower providers (or doing whatever else is necessary so that phones are pre-configured to be able to receive your signal), etc, seems like a big ask!

5. Superpower militaries are already pretty invested in maintaining some level of communications capability through even a worst-case nuclear war.  (Eg, the existing GPS satellites are some of the most radiation-hardened satellites ever, in part because they were designed in the 1980s to remain operational through a nuclear war.  Modern precision ASAT weapons could take out GPS pretty easily -- but hence the linked Space Force proposal for backup "resilient GPS" systems.  I know less about military comms systems, but I imagine the situation is similar.)  Admittedly, most of these communications systems aren't aimed at broadcasting information to a broad public.  But still, I expect there would be some important communications capability left even during/after an almost inconceivably devastating war, and I would bet that crucial information could be disseminated surprisingly well to places like major cities.

6. Basically, instead of building satellites yourselves, somebody should just double-check with DARPA (or Space Force or whoever) that we are already planning on keeping a rocket's worth of Starlink satellites in reserve in a bunker somewhere.  This would have the benefit of building on an already-important global system (many Starlink terminals all around the world), reliable engineering, etc.
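
Here's the back-of-envelope math behind points 2 and 3, as a rough sketch. All the inputs (altitude, frequency, transmit power, antenna gains, the 1/3 failure rate) are round-number assumptions for illustration, not specs of any real satellite or phone:

```python
import math

# Point 2: free-space link budget for a cubesat broadcasting to phones.
def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB (standard Friis formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

altitude_km = 550    # assumed LEO altitude, satellite directly overhead
freq_ghz = 0.9       # assumed cellular-band downlink frequency
tx_power_dbm = 30    # 1 W transmitter -- already generous for a cubesat
tx_gain_dbi = 6      # small patch antenna; real direct-to-cell sats use huge arrays
rx_gain_dbi = 0      # a phone antenna has roughly zero gain

rx_dbm = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(altitude_km, freq_ghz)
print(f"Signal at the phone: {rx_dbm:.1f} dBm")
# ~ -110 dBm: right at the edge of what a phone can detect, with no margin
# left for slant range, pointing error, interference, or being indoors.

# Point 3: if ~1/3 of cubesat missions are dead on arrival even in good times,
# how many copies do you need for decent odds that one works?
p_fail = 1 / 3
for n in (1, 2, 3):
    print(f"{n} satellite(s): P(at least one works) = {1 - p_fail**n:.0%}")
```

Under these assumptions the downlink barely closes even in the friendliest geometry, and a single satellite has a worryingly high chance of simply never phoning home -- hence wanting a much bigger bus than a minimal cubesat, plus spares.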

Okay, hopefully the above was helpful rather than just seeming mean!  If you are interested in learning more about satellites (or correcting me if it turns out I'm totally wrong about the feasibility of direct-to-cellphone from a cubesat, or etc), feel free to message me and we could set up a call!  In particular, I've spent some time thinking about what a collapse of just the GPS system would look like (eg if China or Russia did a first strike against western global-positioning satellites as part of some larger war), which might be interesting for you guys to consider.  (Losing GPS would not be totally devastating to the world by itself -- at most it would be an economic disruption on the scale of COVID-19.  But the problem is that if you lose GPS, you are probably also in the middle of a world war, or maybe an unprecedented worst-case solar storm, so you are also about to lose a lot of other important stuff all at once!)

Concluding by repeating that this was a hastily-typed-out, kinda knee-jerk response to a single part of the post, which doesn't impugn the other stuff you talk about! 

Personally, of the other things you mentioned, I'd be most excited about both of the "#1" items you list -- continuing research on alternative foods themselves, and lobbying naturally well-placed-to-survive-disaster governments to make better plans for resiliency.  Then #4 and #5 seem a little bit like "do a bunch of resiliency-planning research ourselves", which initially struck me as less good than "lobbying governments to do resiliency planning" (since I figure governments will take their own plans more seriously).  But of course it would also be great to be able to hand those governments detailed, thoughtful information to start from and use as a template, so that makes #4 and #5 look good again to me.  Finally, I would be really hyped to see some kind of small-scale trials of ideas like seaweed farming, papermill-to-sugar-mill conversions, etc.


Cross-posting a LessWrong comment where I argue (in response to another commenter) that not only did NASA's work on rocketry probably benefit military missile/ICBM technology, but their work on satellites/spacecraft also likely contributed to military capabilities:

Satellites were also plausibly a very important military technology.  Since the 1960s, some applications have panned out, while others haven't.  Some of the things that have worked out:

  • GPS satellites were designed by the Air Force in the 1980s for guiding precision weapons like JDAMs, and only later incidentally became integral to the world economy.  They still do a great job guiding JDAMs, powering the style of "precision warfare" that has given the USA a decisive military advantage since the first Iraq war in 1991.
  • Spy satellites were very important for gathering information on enemy superpowers, tracking army movements, etc.  They were especially good for helping both nations feel more confident that their counterpart was complying with arms agreements about the number of missile silos, etc.  The Cuban Missile Crisis was kicked off by U-2 spy-plane flights photographing partially assembled missiles in Cuba.  For a while, planes and satellites were both in contention as the most useful spy-photography tool, but eventually even the U-2's successor, the incredible SR-71 Blackbird, lost out to the greater utility of spy satellites.
  • Systems for instantly detecting the characteristic gamma-ray flashes of nuclear detonations that go off anywhere in the world (I think such systems are included on GPS satellites), and for giving early warning by tracking ballistic missile launches during their boost phase.  (The Soviet version of this system famously misfired and almost caused a nuclear war in 1983, which was fortunately forestalled by one Lieutenant Colonel Stanislav Petrov.)

Some of the stuff that hasn't:

  • The Air Force initially had dreams of sending soldiers into orbit, maybe even operating a military base on the moon, but could never figure out a good use for this.  The Soviets even test-fired a machine gun built into one of their Salyut space stations: "Due to the potential shaking of the station, in-orbit tests of the weapon with cosmonauts in the station were ruled out.  The gun was fixed to the station in such a way that the only way to aim would have been to change the orientation of the entire station.  Following the last crewed mission to the station, the gun was commanded by the ground to be fired; some sources say it was fired to depletion".
  • Despite some effort in the 1980s, we were unable to figure out how to make "Star Wars" missile defense systems work anywhere near well enough to defend us against a full-scale nuclear attack.
  • Fortunately we've never found out if in-orbit nuclear weapons, including fractional orbital bombardment weapons, are any use, because they were banned by the Outer Space Treaty.  But nowadays maybe Russia is developing a modern space-based nuclear weapon as a tool to destroy satellites in low-Earth orbit.

Overall, lots of NASA activities that developed satellite / spacecraft technology seem like they had a dual-use effect advancing various military capabilities.  So it wasn't just the missiles.  Of course, in retrospect, the entire human-spaceflight component of the Apollo program (spacesuits, life support systems, etc) turned out to be pretty useless from a military perspective. But even that wouldn't have been clear at the time!

Rethink's weights unhedged in the wild: the most recent time I remember seeing this was when somebody pointed me towards this website: https://foodimpacts.org/, which uses Rethink's numbers to set the moral importance of different animals. They only link to where they got the weights in a tiny footnote on a secondary page about methods, and they don't mention any other ways that people try to calculate reference weights, or anything about what it means to "assume hedonism" or etc. Instead, we're told these weights are authoritative and scientific because they're "based on the most elaborate research to date".

IMO it would be cool to be able to swap between Rethink's weights, versus squared neuron count or something, versus everything-is-100%. As is, they do let you edit the numbers yourself, and also give a checkbox that makes everything equal 100%. Which (perhaps unintentionally) is a pretty extreme framing of the discussion!! "Are shrimp 3% as important as a human life (30 shrimp = 1 person)? Or 100%? Or maybe you want to edit the numbers to something in-between?"

I think the foodimpacts calculator is a cool idea, and I don't begrudge anyone an attempt to make estimates using a bunch of made-up numbers (see the ACX post on this subject) -- indeed, I wish the calculator went more out on a limb by trying to include the human health impacts of various foods (despite the difficulties / uncertainties they mention on the "methods" page). But this is the kind of thing that I was talking about re: the weights.
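
To illustrate why the choice of weighting scheme matters so much, here's a tiny sketch of the calculation such a site is implicitly doing. All the numbers below (animals per kg, and all three sets of weights) are hypothetical placeholders I made up, not Rethink Priorities' actual welfare-range estimates or foodimpacts' actual data:

```python
# How a foodimpacts-style calculator's output pivots on the weighting scheme.
# Every number here is a made-up placeholder for illustration.

animals_per_kg = {"chicken": 0.5, "shrimp": 20.0}  # hypothetical animals affected per kg of food

weight_schemes = {
    "rethink_style":  {"chicken": 0.33, "shrimp": 0.03},  # fractional moral weights
    "neuron_squared": {"chicken": 5e-5, "shrimp": 1e-9},  # toy (neuron count)^2 proxy
    "all_equal":      {"chicken": 1.0,  "shrimp": 1.0},   # the "everything = 100%" checkbox
}

for scheme, weights in weight_schemes.items():
    print(scheme)
    for animal, count in animals_per_kg.items():
        # harm per kg = animals affected x assumed moral weight per animal
        print(f"  {animal}: {count * weights[animal]:.2e} human-equivalents per kg")
```

Swapping schemes moves the shrimp figure across many orders of magnitude, which is exactly why presenting only one scheme (plus a 100% checkbox) frames the debate so strongly.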

Animal welfare feeling more activist & less truth-seeking:

  • This post is specifically about vegan EA activists, and makes much stronger accusations of non-truthseeking-ness than I am making here against the broader animal welfare movement in general: https://forum.effectivealtruism.org/posts/qF4yhMMuavCFrLqfz/ea-vegan-advocacy-is-not-truthseeking-and-it-s-everyone-s
  • But I think that post is probably accurate in the specific claims that it makes, and indeed vegan EA activism is part of overall animal welfare EA activism, so perhaps I could rest my case there.
  • I also think that the broader animal welfare space has a much milder version of a similar ailment. I am pretty "rationalist" and think that rationalist virtues (as expounded in Yudkowsky's Sequences, or Slate Star Codex blog posts, or Secular Solstice celebrations, or just sites like OurWorldInData) are important. I think that global health places like GiveWell do a pretty great job embodying these virtues, that longtermist stuff does a medium-good job (they're trying! but it's harder since the whole space is just more speculative), and animal welfare does a worse job (but still better than almost all mainstream institutions, eg way better than either US political party). Mostly I think this is just because a lot of people get into animal EA without ever first reading rationalist blogs (which is fine, not everybody has to be just like me); instead they sometimes find EA via Peter Singer's more activist-y "Animal Liberation", or via the yet-more-activist mainstream vegan movement or climate movements. And in stuff like climate protest movements (Greta Thunberg, Just Stop Oil, Sunrise, etc), being maximally truthseeking and evenhanded just isn't a top priority like it is in EA! Of course, the people who come to EA from those movements are often coming specifically because they recognize that, and they prefer EA's more rigorous / rationalist vibe. (Kinda like how when Californians move to Texas, they actually make Texas more Republican and not more Democratic, because California is very blue but Californians-who-choose-to-move-to-Texas are red.) But I still think that (unlike the CA/TX example?) the long-time overlap with those other activist movements makes animal welfare less rationalist and thereby less truthseeking than I'd like.
  • (Just to further caveat... Not scoring 100/100 on truthseekingness isn't the end of the world. I love the idea of Charter Cities and support that movement, despite the fact that some charter city advocates are pretty hype-y and use exaggerated rhetoric, and a few, like Balaji, regularly misrepresent things and feel like outright hustlers at times. As I said, I'd support animal welfare over GHD despite truthseeky concerns if that was my only beef; my bigger worries are some philosophical disagreements and concern about the relative lack of long-term / ripple effects.)

David Mathers makes a similar comment, and I respond, here.  Seems like there are multiple definitions of the word, and EA folks are using the narrower definition that's preferred by smart philosophers.  Whereas I had just picked up the word based on vibes, and assumed the definition by analogy to racism and sexism, which does indeed seem to be a common real-world usage of the term (eg, supported by top google results in dictionaries, wikipedia, etc).  It's unclear to me whether the original intended meaning of the word was closer to what modern smart philosophers prefer (and everybody else has been misinterpreting it since then), or closer to the definition preferred by activists and dictionaries (and it's since been somewhat "sanewashed" by philosophers), or if (as I suspect) it was mushy and unclear from the very start -- invented by savvy people who maybe deliberately intended to link the two possible interpretations of the word.

Good to know!  I haven't actually read "Animal Liberation" or etc; I've just seen the word a lot and assumed (by the seemingly intentional analogy to racism, sexism, etc) that it meant "thinking humans are superior to animals (which is bad and wrong)", in the same way that racism is often used to mean "thinking Europeans are superior to other groups (which is bad and wrong)", and sexism about men > women. Thus it always felt to me like a weird, unlikely attempt to shoehorn a niche philosophical position (are nonhuman animals' lives of equal worth to humans?) into the same kind of socially enforced consensus whereby things like racism are near-universally condemned.

I guess your definition of speciesism means that it's fine to think humans matter more than other animals, but only if there's a reason for it (like that we have special quality X, or we have Y percent greater capacity for something, therefore we're Y percent more valuable, or because the strong are destined to rule, or whatever).  Versus it would be speciesist to say that humans matter more than other animals "because they're human, and I'm human, and I'm sticking with my tribe".

Wikipedia's page on "speciesism" (first result when I googled the word) is kind of confusing and suggests that people use the word in different ways, with some people using it the way I assumed, and others the way you outlined, or perhaps in yet other ways:

The term has several different definitions.[1] Some specifically define speciesism as discrimination or unjustified treatment based on an individual's species membership,[2][3][4] while others define it as differential treatment without regard to whether the treatment is justified or not.[5][6] Richard D. Ryder, who coined the term, defined it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species".[7] Speciesism results in the belief that humans have the right to use non-human animals in exploitative ways which is pervasive in the modern society.[8][9][10] Studies from 2015 and 2019 suggest that people who support animal exploitation also tend to have intersectional bias that encapsulates and endorses racist, sexist, and other prejudicial views, which furthers the beliefs in human supremacy and group dominance to justify systems of inequality and oppression.

The 2nd result on a google search for the word, this Britannica article, sounds to me like it is supporting "my" definition:

Speciesism, in applied ethics and the philosophy of animal rights, the practice of treating members of one species as morally more important than members of other species; also, the belief that this practice is justified.

That makes it sound like anybody who thinks a human is more morally important than a shrimp is, by definition, speciesist, regardless of their reasons.  (Later on, the article talks about something called Singer's "principle of equal consideration of interests".  It's unclear to me if this thought experiment is supposed to imply humans == shrimps, or if it's supposed to be saying the IMO much more plausible idea that a given amount of pain-qualia is equally bad whether it's in a human or a shrimp.  So you could say something like: humans might have much more capacity for pain, making them morally more important overall, but every individual teaspoon of pain is the same badness, regardless of where it is.)

Third google result: this 2019 philosophy paper debating different definitions of the term -- I'm not gonna read the whole thing, but its existence certainly suggests that people disagree.  Looks like it ends up preferring to use your definition of speciesism, and uses the term "species-egalitarianists" for the hardline humans == shrimp position.

Fourth: Merriam-Webster, which has no time for all this philosophical BS (lol) -- speciesism is simply "prejudice or discrimination based on species", and that's that, apparently!

Fifth: this animal-ethics.org website -- long page, and maybe it's written in a sneaky way that actually permits multiple definitions?  But at least based on skimming it, it seems to endorse the hardline position that not giving equal consideration to animals is like sexism or racism: "How can we oppose racism and sexism but accept speciesism?" -- "A common form of speciesism that often goes unnoticed is the discrimination against very small animals." -- "But if intelligence cannot be a reason to justify treating some humans worse than others, it cannot be a reason to justify treating nonhuman animals worse than humans either."

Sixth google result is PETA, who says "Speciesism is the human-held belief that all other animal species are inferior... It’s a bias rooted in denying others their own agency, interests, and self-worth, often for personal gain."  I actually expected PETA to be the most zealously hard-line here, but this page definitely seems to be written in a sneaky way that makes it sound like they are endorsing the humans == shrimp position, while actually being compatible with your more philosophically well-grounded definition.  Eg, the website quickly backs off from the topic of humans-vs-animals moral worth, moving on to make IMO much more sympathetic points, like that it's ridiculous to think farmed animals like pigs are less deserving of moral concern than pet animals like dogs.  And they talk about how animals aren't ours to simply do absolutely whatever we please with zero moral consideration of their interests (which is compatible with thinking that animals deserve some-but-not-equal consideration).

Anyways.  Overall it seems like philosophers and other careful thinkers (such as the editors of the EA Forum wiki) would like a minimal definition, whereas perhaps the more common real-world usage is the ill-considered maximal definition that I initially assumed it had.  It's unclear to me what the intention was behind the original meaning of the term -- were early users of the word "speciesism" trying to imply that humans == shrimp and you're a bad person if you disagree?  Or were they making a more careful philosophical distinction, and then, presumably for activist purposes, just deliberately chose a word that was destined to lead to this confusion?

No offense meant to you, or to any of these (non-EA) animal activist sources that I just googled, but something about this messy situation is not giving me the best "truthseeking" vibes...
