Alexander Herwix 🔸

580 karma · Joined

Participation
4

  • Organizer of Effective Altruism Cologne
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group

Posts
14


Comments
123

Reading your comments, I think we approach such a post from different perspectives.

I read the post as an attempt to highlight a blind spot in "orthodox" EA thinking: it simply makes a case for revisiting some deeply ingrained assumptions in light of alternative viewpoints. This tends to make me curious about the alternative viewpoints offered, and if I find them at least somewhat plausible and compelling, I try to see what I can do with them on their own terms. I do not necessarily see it as the job of the post to anticipate every question that a person coming from the "orthodox" perspective might raise. Certainly, it's nice if a post is well written and anticipates some objections, but this forum is not a philosophical journal (far from it).

So, what concerns me about your reaction is the impression that you may be applying the same standards to people who share your "orthodox" understanding that "only sentient beings count" as to those who question the viability of this understanding. You seem to take the "orthodox" understanding as given and demand that the other person make arguments that are convincing from this "orthodox" perspective. This can be very difficult when the other side questions very fundamental assumptions of your position. There is a huge gap between noticing inconsistencies and problems with an "orthodox" framework and being able to offer viable alternatives that make sense to people looking at the issue through the lens of that framework. A seminal reading for appreciating the nature of this situation is probably Thomas Kuhn's The Structure of Scientific Revolutions (50th Anniversary Edition, 2012).

The whole reason I commented in the first place is that I am sometimes disappointed by people down-voting critical posts that challenge "orthodoxy" and then in the next breath triumphantly declaring how open-minded EA is and how curiosity and critique are at the heart of the movement. "EA is an open-ended question," they say, and then go and down-vote the post that questions some of their core assumptions (I am not saying this is you, but there must be some cases of this given what I have seen happen on this forum). Isn't it in this community's best interest, and in line with its stated self-understanding, to be a welcoming place for people who are well-meaning and able to articulate their questions or critiques coherently, even if they go against prevailing orthodoxy? Isn't this where EA itself came from?

Moving out of slight rant mode and trying to reply to your substantive question about practical differences: I think my previous comment and also this provide some initial directions. If your fundamental assumptions change, it does not necessarily make sense to keep everything else as is. In this way, it's a starting point for the development of a new "paradigm", and that can take time. For example, EA still arguably has a mostly modern understanding of "progress", which may need to be revisited in a more systemic paradigm. There are some ongoing efforts in this direction, for example, under the label of "metamodernism".

I personally also find the work of Daniel Schmachtenberger and the Civilization Research Institute quite interesting. They have a new article on this very topic that may be worth a read: https://consilienceproject.org/development-in-progress/.

However, there are many more people active in this space. The "Great Simplification" podcast by Nate Hagens has some interesting episodes with quite a few of them. Disclaimer: I am not naively endorsing all of the content on the podcast (e.g., I don't really listen to the "Frankly" episodes), but I think it provides a useful and often inspiring window on this emerging systemic perspective. If you are not too familiar with the planetary boundaries framework, there is a recent episode with Johan Rockström that discusses it in broad strokes.

I think the post already acknowledged the difference in perspective and tried to make the case that the view you are advocating seems shortsighted from its vantage point.

The key point here seems to be the consideration given to interconnectedness. Whereas "traditional" EA assumes stability in the Earth system and focuses "only" on marginal improvements ceteris paribus, the ecological perspective highlights the interconnectedness of "everything" and the need for a systemic focus on sustaining the entire Earth system rather than simply assuming its continued functioning in the face of ongoing disruption and destruction.

I think the argument is sound and does point to a pretty big blind spot in "traditional" EA thinking. The post itself probably could have made the point in a way that is easier to digest for people who hold contrary views, but the level of downvoting seems pretty harsh and ultimately self-defeating to me.

In terms of practical consequences, I would first of all expect more recognition of systemic perspectives in EA discourse and more openness to considering the value of ecosystems and Earth systems in general. This seems worthwhile even on purely instrumental grounds.

I never said that how we treat nonhuman animals is "solely" due to differences in power. The point I made is that AIs are not humans, and I have tried to illustrate that differences between species tend to matter in culture and social systems.

But we don't even have to go to species differences; ethnic differences are already enough to create quite a bit of friction in our societies (e.g., racism, caste systems, etc.). Why don't we all engage in mutually beneficial trade and cooperate to live happily ever after?

Because while we have mostly converging needs in a biological sense, we have different values and beliefs. It still roughly works out in the grand scheme of things because cultural checks and balances have evolved in environments where we had strongly overlapping values and interests. So most humans have comparable degrees of power or are kept in check by those checks and balances. That was basically our societal process of getting to value alignment, but as you can probably tell by looking at the news, this process has not yet reached a satisfactory quality. We have come far, but it's still a shit show out there. The powerful take what they can get and often only give a sh*t to the degree that they actually feel consequences from it.

So, my point is that your "loose" definition of value alignment is an illusion if you are talking about super powerful actors that have divergent needs and don't share your values. They will play along as long as it suits them, but they will stop as soon as an alternative that is better aligned with their needs and values becomes more convenient. And the key point here is that AIs are not humans and that they have very different needs from us. If they become much more powerful than us, only their values can keep them in check in the long run.

But what makes you think that this can be a long-term solution if the needs and capabilities of the involved parties are strongly divergent, as in human vs. AI scenarios?

I agree that trading can probably work for a couple of years, maybe decades, but if the AIs want something different from us in the long term, what should stop them from getting it?

I don't see a way around value alignment in the strict sense (ironically, this could also involve AIs aligning our values to theirs, similar to how we have aligned dogs to ours).

The difference is that a superintelligence or even an AGI is not human and will likely need very different environments from us to truly thrive. Ask factory-farmed animals, or basically any other kind of nonhuman animal, whether our world is in a state of violence or war… As soon as strong power differentials and diverging needs show up, the value co-creation narrative starts to lose its magic. It works great for humans, but it doesn't really work with other species that are not very close to and aligned with us. Dogs and cats have arguably fared quite well, but only at the price of becoming strongly adapted to OUR needs and desires.

In the end, if you don't have anything valuable to offer, there is not much more you can do besides hoping for, or ideally ensuring, value alignment in the strict sense. Your scenario may work well for some time, but it's not a long-term solution.

This reminds me of the work on the Planungszelle (planning cell) in Germany, but with some more bells and whistles. One difference I see is that, as far as I know, the core idea in more traditional deliberation processes is that the process itself is also understandable by the average citizen. This gives it some grounding and legitimacy, in that all people involved in the process can cross-check each other and make sure that the outcome is not manipulated. You seem to diverge from this ideal a little bit, in the sense that you seem to require the use of sophisticated statistical techniques, which potentially cannot be understood or cross-checked by a general cross-section of the population.

Maybe it would make sense to use a two-stage procedure, where in the first (preparation) stage you gain general agreement on what process to run in the second (work) stage? Or, looking at your model, you could have the citizens' assembly involved in managing and controlling the expert modeling process, or at least have multiple different expert teams provide models to the citizens' assembly. Otherwise, it seems like you have a single point of failure where the democratic aspect of the process could be neutralized quite easily.

I am just speculating, though; I haven't had time to look at the white paper in detail. Maybe (probably) you have thought about those aspects already!

The key point I am trying to make is that you seem to argue against our common-sense understanding that animals are sentient because they are anatomically similar to us in many respects and also exhibit behavior we would expect sentient creatures to have. Instead, you come up with your own elaborate requirements that you argue are necessary for being able to say anything about qualia in other beings. But then at some point (maybe the point where you feel comfortable with your conclusions) you stop following your own line of argument through to the end (i.e., qualia existing somewhere in the causal structure does not imply that other humans have qualia) and simply revert back to "common sense", which you had just argued is insufficient in this case. So your position seems somewhat selective and potentially self-serving with respect to supporting your own beliefs, rather than intellectually superior to the common-sense understanding.

But how can you assume that humans in general have qualia if all the talk about qualia tells you only that qualia exist somewhere in the causal structure? Maybe all talk about qualia derives from a single source? How would you know? To me, this seems like a kind of reductio ad absurdum of your entire line of argument.

Thanks for sharing your thoughts! I think you are onto an interesting angle here that could be worth exploring if you are so inclined.

One interesting line of work that you do not seem to be considering at the moment is the work done in the "metacrisis" (or polycrisis) space. See this presentation for an overview, but I recommend diving deeper to get a better sense of the space. This perspective tries to understand and address the underlying patterns that create the wicked situation we find ourselves in. People in this space work a lot with concepts like "Moloch" (i.e., multi-polar traps in coordination games), the risk-accelerating role of AI, and different types of civilizational failure modes (e.g., dystopia vs. catastrophe) that we should guard against.

You might also be interested in a working paper I am writing with ALLFED, where we look at the digital transformation as a driver of systemic catastrophic risks. We base this on a simulation model of specific scenarios and then generalize to a framework in which we suggest that the key features that make digital systems valuable also make them an inherent driver of what we call "the risk of digital fragility". Our work does not yet elaborate on the role of AI, only the pervasive use of digital systems and services in general. My next steps are to work out the role of AI more clearly and to see if and how our digital fragility framework can be put to use to better understand how AI could contribute to systemic catastrophic risks. You can reach out via PM if you are interested in having a chat about this.

Hey Daniel,

As I also stated in another reply to Nick, I didn't really mean to diminish the point you raised but to highlight that it is really more of a "meta point" that is only tangential to the substance of the issue outlined. My critical reaction was not aimed at you or the point you raised, but at the more general community practice/trend of focusing on such points at the expense of engaging with the subject matter itself, in particular when the topic goes against mainstream thinking. This, I think, is somewhat demonstrated by the fact that your comment is by far the most upvoted on an issue that would have far-reaching implications if accepted as having some merit.

Hope this makes it clearer. I don't mean to criticize the object level of your argument; it's just coincidental that I picked out your comment to illustrate a problematic development that I see.

P.S.: There is also some irony in me posting a meta-critique of a meta-critique to argue for more object-level engagement, but that's life, I guess.
