Alexander Herwix 🔸

586 karma

Participation
4

  • Organizer of Effective Altruism Cologne
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group

Posts
14


Comments
125

I don’t agree with this sentiment. At least for me, I really do not see any real cost associated with being vegan that would keep me from earning more or from being a better person in any meaningful way.

For example, I am pretty sure I wouldn’t work more if I ate more meat; why would I? There really doesn’t seem to be a causal pathway here. Maybe if you really crave beef and can’t help thinking about it all the time… yeah, that could be distracting and reduce your performance, but I am not sure something like this occurs all that often. It has never happened to me, at least.

I would argue it’s actually quite the opposite. Being vegan is normally quite a healthy lifestyle with positive effects all around. And don’t underestimate the impact of having to live with the cognitive dissonance of being directly responsible for the unnecessary suffering of harmless animals.

But I guess there are different preferences, and maybe you see things differently. I just wanted to flag that you are not really presenting knock-down arguments here. To me it seems more like a self-justificatory move to somehow “absolve” yourself from doing the right thing.

Maybe I am naive, but what is the cost associated with not eating meat? Not having the taste of it? What motivates you to donate money to reduce animal suffering if you believe that your taste is more valuable than the life of the animal in the first place? Or are you at a point where you believe that animals matter enough to warrant some small donations but not enough to deprive you of their taste?

I mean, of course it’s good to donate, but I don’t see why this means you should continue the practice that you want to offset if you can help it. Or am I missing something?

Similarly, if I offset pollution, I do not turn around and pollute more because that would defeat the purpose?!

Reading your comments, I think we come from different perspectives when reading such a post. 

I read the post as an attempt to highlight a blind spot in "orthodox" EA thinking, one that simply tries to make a case for revisiting some deeply ingrained assumptions in light of alternative viewpoints. This tends to make me curious about the alternative viewpoints offered, and if I find them at least somewhat plausible and compelling, I try to see what I can do with them on their own terms. I do not necessarily see it as the job of the post to anticipate all the questions that a person coming from the "orthodox" perspective may come up with. Certainly, it's nice if a post is well written and can anticipate some objections, but this forum is not a philosophical journal (far from it).

So, what concerns me about your reaction is that it gives me the impression that you may be applying the same standards to people who share your "orthodox" understanding that "only sentient beings count" and to those who question the viability of this understanding. You seem to take the "orthodox" understanding as given and demand that the other person make arguments that are convincing from this "orthodox" perspective. This can be very difficult if the other side questions the very fundamental assumptions of your position. There is a huge gap between noticing inconsistencies and problems with an "orthodox" framework and being able to offer viable alternatives that make sense to people looking at the issue through the lens of that framework. A seminal reading for appreciating the nature of this situation would probably be Thomas Kuhn (2012), The Structure of Scientific Revolutions: 50th Anniversary Edition.

The whole reason I commented in the first place is that I am sometimes disappointed by people downvoting critical posts that challenge "orthodoxy" and in the next breath triumphantly declaring how open-minded EA is and how curiosity and critique are at the heart of the movement. "EA is an open-ended question", they say, and then go downvote the post that questions some of their core assumptions (not saying this is you, but there must be some cases of this given what I have seen happen here in the forum). Isn't it in this community's best interest, and part of its stated self-understanding, that it should be a welcoming place for people who are well-meaning and able to articulate their questions or critiques in a coherent manner, even if they go against prevailing orthodoxy? Isn't this where EA itself came from?

Moving out of slight rant mode, let me try to reply to your substantive question about practical differences. I think my previous comment, and also this, provide some initial directions. If your fundamental assumptions change, it does not necessarily make sense to keep everything else as is. In this way, it's a starting point for the development of a new "paradigm", and that can take time. For example, EA arguably still has a mostly modern understanding of "progress", which may need to be revisited in a more systemic paradigm. There are some efforts ongoing in this direction, for example, under the label of "metamodernism".

I personally also find the work of Daniel Schmachtenberger and the Civilization Research Institute quite interesting. They have a new article on this very topic that may be worth a read: https://consilienceproject.org/development-in-progress/.

However, there are many more people active in this space. The "Great Simplification" podcast by Nate Hagens has some interesting episodes with quite a few of them. Disclaimer: I am not naively endorsing all of the content on the podcast (e.g., I don't really listen to the "Frankly" episodes), but I think it provides an interesting, useful, and often inspiring window onto this emerging systemic perspective. If you are not too familiar with the planetary boundaries framework, there is a recent episode with Johan Rockström that discusses it in broad strokes.

I think the post was already acknowledging the difference in perspective and trying to make the case that the position you are advocating seems shortsighted from the post's own point of view.

The key point here seems to be the consideration given to interconnectedness. Whereas “traditional” EA assumes stability in the Earth system and focuses “only” on marginal improvements ceteris paribus, the ecological perspective highlights the interconnectedness of “everything” and the need for a systemic focus on sustaining the entire Earth system rather than simply assuming its continued functioning in the face of ongoing disruption and destruction.

I think the argument is sound and does show a pretty big blind spot in “traditional” EA thinking. The post itself probably could have made the point in a way that is easier to digest for people with contrary beliefs, but the level of downvoting seems pretty harsh and ultimately self-defeating to me.

In terms of practical consequences, I would first of all expect more recognition of systemic perspectives in EA discourse and more openness to considering the value of ecosystems and the Earth system in general. This seems worthwhile even on purely instrumental grounds.

I have never said that how we treat nonhuman animals is “solely” due to differences in power. The point that I have made is that AIs are not humans and I have tried to illustrate that differences between species tend to matter in culture and social systems.

But we don’t even have to go to species differences; ethnic differences are already enough to create quite a bit of friction in our societies (e.g., racism, caste systems, etc.). Why don’t we all engage in mutually beneficial trade and cooperate to live happily ever after?

Because while we have mostly converging needs in a biological sense, we have different values and beliefs. It still roughly works out in the grand scheme of things because cultural checks and balances have evolved in environments where we had strongly overlapping values and interests. So most humans have comparable degrees of power or are kept in check by those checks and balances. That was basically our societal process of getting to value alignment, but as you can probably tell by looking at the news, this process has not reached a satisfactory state yet. We have come far, but it’s still a shit show out there. The powerful take what they can get and often only give a sh*t to the degree that they actually feel consequences from it.

So, my point is that your “loose” definition of value alignment is an illusion if you are talking about super powerful actors that have divergent needs and don’t share your values. They will play along as long as it suits them but will stop as soon as an alternative more aligned with their needs and values becomes more convenient. And the key point here is that AIs are not humans and that they have very different needs from us. If they become much more powerful than us, only their values can keep them in check in the long run.

But what makes you think that this can be a long-term solution if the needs and capabilities of the involved parties are strongly divergent, as in human vs. AI scenarios?

I agree that trading can probably work for a couple of years, maybe decades, but if the AIs want something different from us in the long term, what should stop them from getting it?

I don’t see a way around value alignment in the strict sense (ironically, this could also involve AIs aligning our values to theirs, similar to how we have aligned dogs).

The difference is that a superintelligence or even an AGI is not human, and it will likely need a very different environment from ours to truly thrive. Ask factory-farmed animals or basically any other kind of nonhuman animal whether our world is in a state of violence or war… As soon as strong power differentials and diverging needs show up, the value co-creation narrative starts to lose its magic. It works great for humans, but it doesn’t really work with other species that are not very close to and aligned with us. Dogs and cats have arguably fared quite well, but only at the price of becoming strongly adapted to OUR needs and desires.

In the end, if you don’t have anything valuable to offer, there is not much more you can do besides hoping for, or ideally ensuring, value alignment in the strict sense. Your scenario may work well for some time, but it’s not a long-term solution.

This reminds me of the work on the Planungszelle (planning cell) in Germany, but with some more bells and whistles. One difference I see is that, afaik, the core idea in more traditional deliberation processes is that the process itself is also understandable by the average citizen. This gives it some grounding and legitimacy, in that all people involved in the process can cross-check each other and make sure that the outcome is not manipulated. You seem to diverge from this ideal a little bit, in the sense that you seem to require the use of sophisticated statistical techniques, which potentially cannot be understood or cross-checked by a general cross-section of the population.

Maybe it would make sense to use a two-stage procedure where, in the first (preparation) stage, you gain general agreement on what process to run in the second (work) stage? Or, looking at your model, to have the citizens' assembly actually be involved in managing and controlling the expert modeling process, or at least have multiple different expert teams provide models to the citizens' assembly. Otherwise, it seems like you have a single point of failure where the democratic aspect of the process could be neutralized quite easily.

I am just speculating though, haven't had time to look at the white paper in detail. Maybe/probably you have thought about those aspects already!

The key point that I am trying to make is that you seem to argue against our common-sense understanding that animals are sentient because they are anatomically similar to us in many respects and also demonstrate behavior that we would expect sentient creatures to have. Instead, you come up with your own elaborate requirements that you argue are necessary for being able to say something about qualia in other beings. But then, at some point (maybe the point where you feel comfortable with your conclusions), you stop following your own line of argument through to the end (i.e., qualia existing somewhere in the causal structure != other humans having qualia) and simply revert back to "common sense", which you had just argued is insufficient in this case. So, your position seems somewhat selective and potentially self-serving with respect to supporting your own beliefs, rather than intellectually superior to the common-sense understanding.

But how can you assume that humans in general have qualia if all the talking about qualia only tells you that qualia exist somewhere in the causal structure? Maybe all talking about qualia derives from a single source? How would you know? To me, this seems like a reductio ad absurdum of your entire line of argument.
