Afaik it is pretty well established that you cannot really learn anything new without actually testing your new belief in practice, i.e., through experiments. How else would this work? Evidence does not grow on trees; it has to be created (i.e., data has to be carefully generated, selected, and interpreted to become useful evidence).
While it might be true that this experimenting can sometimes be done using existing data, the point is that if you want to learn something genuinely new about the universe, like “what is dark matter, and can it be used for something?”, existing data is unlikely to be enough to test any idea you come up with.
Even if you take data from published academic papers and synthesize new theories from it, it is still not guaranteed (or even likely) that the theory you come up with can be tested with already existing data, because every theory has its own requirements for what counts as evidence against it. That’s the whole reason we continue to do experiments rather than just meta-analyzing the sh*t out of all the papers out there.
Of course, advanced AI could trick us into doing certain experiments, or, looking at ChatGPT plugins, we may just give it wholesale access to anything on the internet in due time, so all of this may be just a short bump in the road. If we are lucky, we might still avoid a FOOM-style takeover, as long as advanced AI remains dependent on us to carry out its experiments, simply because of the time those experiments will take. So even if it could bootstrap to nanotech quickly thanks to a good understanding of physics based on our formulas and existing data, the first manufacturing machine or factory would still need to be built somehow, and that may take some time.
I am sorry, but I don’t really have time to check the document right now. Still, I would love to get your perspective on the potential value of simply giving all people standing to sue on behalf of future people, or even natural habitats, against policies that harm their interests. This seems fairly easy to implement but could have pretty big consequences if the legal system had to start considering and weighing those perspectives as well. Any thoughts or reactions?
I think the point is not whether it is conceivable that progress could continue with humans still being alive; the point is the game-theoretic dilemma that whatever we humans want to do is unlikely to be exactly what some super powerful advanced AI would want to do. And because the advanced AI does not need or depend on us, we simply lose and end up as ingredients for whatever that advanced AI is up to.
Your example with humanity fails because humans have always been, and continue to be, a social species whose members depend on each other. An unaligned advanced AI would not be. A more appropriate example would be the relationship between humans and insects: I don't know if you noticed, but a lot of them are dying out right now because we simply don't care about or depend on them. The point is that an advanced AI is potentially even more removed from us than we are from insects, and also much more capable of achieving its goals, so the competitive process we all engage in will become much harsher and faster once advanced AIs start playing the game.
I don't want to be the bearer of bad news, but I think it is not that easy to reject this analysis... it seems pretty simple and solid. I would love to know if there is some flaw in the reasoning. It would help me sleep better at night!
I would argue that an important component of your first argument still stands. Even though AlphaFold can predict structures to some level of accuracy based on training data sets that may already exist, an AI would STILL need to check whether what it learned is usable in practice for its intended purposes. This logically requires experimentation. Also keep in mind that most data that already exists was not deliberately prepared to help a machine "do X". Any intelligence, no matter how strong, will still need to check its hypotheses and, thus, prepare data sets that can actually deliver the evidence necessary for drawing warranted conclusions.
I am not really sure what the consequences of this are, though.
Hey @JohannaE,
interesting idea and project. Are you aware of other players in this space, such as http://metabus.org/ or, to some degree, https://elicit.org? I think metaBUS in particular aspires to do something similar to what you describe but seems much further along the curve (e.g., https://www.sciencedirect.com/science/article/abs/pii/S1053482216300675). However, when I interacted with them a couple of years ago, they were still struggling to gain traction. This may be a tough nut to crack!
To me it seems like you are starting from a wrong premise. A wellbeing-focused perspective explicitly highlights the fact that the Sentinelese and modern Londoners may have similar levels of wellbeing. That's the point! This perspective aims to get you thinking about what is really valuable in life and about the grounds for your own beliefs about what is important.
You seem to have a very strong opinion that something like technological progress is intrinsically valuable: living in a more technologically advanced society is "inherently better" and, thus, everyone who does not see this is "objectively wrong". That argument would seem strange even to the most orthodox utilitarian. Even if your argument is a little more nuanced, in the sense that you see technological progress only as instrumentally valuable for sustaining larger population sizes at similar levels of wellbeing, this perspective is still somewhat naive, because technological progress also has potentially devastating consequences such as climate change or AI risk. In that sense, one can actually make the case that the agricultural revolution was maybe the beginning of the end of the human race. So if there had been a way to grow our societies more deliberately and to optimize for wellbeing (rather than economic growth) from the beginning, maybe it wouldn't have been such a bad idea? I just want to illustrate that the whole situation is not as clear-cut as you make it out to be.
Altogether, I would encourage you to keep more of an open mind regarding other perspectives. Both the post and this comment of yours make it seem like you might be very quick to dismiss perspectives, and to be vocal about it, even when you have not really engaged with them deeply. This can make you come across as naive to a somewhat more knowledgeable person, which could put you at a personal disadvantage in the future and could also contribute to bad epistemics in your community if the people you are talking to are less informed and, thus, unable to spot where you might be cutting corners. I hope you don't resent me for this personal side note; it's meant in a constructive spirit.
Just a short follow-up: I just wrote a post on the hedonic treadmill and suggest that it is an interesting concept to reflect on in relation to life in general:
I think it may be helpful to unpack the nature of perceived happiness and wellbeing a little more than this post does. The idea of hedonic adaptation is pretty well known—most of us have probably heard of the hedonic treadmill (see Brickman & Campbell, 1971). The work on hedonic adaptation points to the fact that perceived happiness and wellbeing are relative constructs that largely depend on the reference points that are invoked. To oversimplify a little: if everyone around me is badly off, I may already be happy if I am only slightly better off than them. At the same time, I might be unhappy if I am doing quite well but everyone around me is much better off. As such, it is entirely reasonable to expect that hunter-gatherers, when asked about their lives, feel quite good and happy about them as long as they don't feel like everyone else around them is much better off.
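To make the reference-point idea a bit more concrete, here is a toy formalization (my own sketch, not something from the post or from Brickman & Campbell):

$$W_i = u(c_i) - \beta \, u(\bar{c}_{\mathrm{ref}})$$

where $W_i$ is person $i$'s reported wellbeing, $u(c_i)$ is the utility of their own circumstances, $\bar{c}_{\mathrm{ref}}$ is the typical circumstances of their reference group, and $\beta > 0$ is the weight placed on social comparison. On this toy model, a hunter-gatherer with a modest $c_i$ but an equally modest reference group can report wellbeing similar to a Londoner with a much larger $c_i$ whose reference group is better off still.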
The conclusion of this post should not be that perceived happiness and wellbeing cannot be used to compare the effects of interventions, but that they simply measure something different from "objective measures". They aim to measure how people feel about their life in general as they compare it to others, not how they score on a particular metric in isolation. Whether you prefer one or the other approach largely depends on your perspective on what is valuable in life. Some people may find that making progress on metrics they consider particularly valuable is the way to go, while others prefer a more self-organizing perspective in which the affected people themselves are more involved in determining what is valuable.
In sum, this post seems a little confused about what the WELLBY debate is about. I can recommend the cited article for getting some idea of why something like a WELLBY approach may be interesting to consider, even if one doesn't like it at first glance.
Brickman, P., & Campbell, D. (1971). Hedonic relativism and planning the good society. In M. H. Appley (Ed.), Adaptation-level theory: A symposium (pp. 287–305). Academic Press. https://archive.org/details/adaptationlevelt0000unse_x7d9/page/287/mode/2up
If you take this as your point of departure, I think it's worth highlighting that the boundaries between the community and its organizations can become very blurry in EA. Projects pop up all the time, and innocuous situations might turn controversial over time. The cases of second-order partners in polyamorous relationships being (more or less directly) involved in funding decisions are a prime example. There is probably no intent or planning behind this, but conflicts of interest are bound to arise if the community is tight-knit and highly “interconnected”.
While I think you have a good starting point for a discussion here, I would expect the whole situation to be less clear-cut and easy than your argument suggests. So I really agree with the post that getting to a state most people are happy with will require some muddling through.
I think the point of the virtue ethicist in this context would be that appropriate behavior is very much dependent on the situation. You cannot necessarily calculate the "right" way in advance. You have to participate in the situation and "feel", "live", or "balance" your way through it. There are too many nuances that cannot all be captured by language or explicit reasoning.