Hey, thanks for the reply; it looks like there is a lot of interesting and useful information there. Also, based on my notifications it looks like you replied twice to me, but I can only find this comment, so sorry if I missed something else.
With all due respect, I think there is a bit of a misunderstanding on your part (and others voting you up and me down).
If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data.
First of all, I am interested in rigorous and evidential efforts--what one could perhaps call "scientific" in method generally; however, I am not exclusively interested in such. It is therefore not correct to claim that Open_Thinker is not "interested in more speculative causes" with deep, far-reaching potential consequences, a position apparently similar to yours. The difference (e.g. with OpenAI) is that the EA Hotel is a relatively minor organization (at least based on its existing budget and staffing levels), and is thus less likely to have much of an effective impact compared to organizations such as DeepMind and OpenAI (as well as academic groups) that have concrete achievements and credentials--e.g. AlphaGo, and Elon Musk's achievements and former involvement, respectively. This should not be underestimated, as it is a critical component; the EA Hotel's list of achievements looks fairly non-existent by comparison, and from what I can tell by skimming its ongoing activities, it may remain so (at least for the near future). It is not only I who thinks this way, as this directly explains the funding gap.
So secondly, no, I do not think we have a fundamental disagreement about priorities, although we certainly do regarding implementations, details, specific methods, etc.
Actually, I am being quite on topic, because the topic of this thread is specifically funding the EA Hotel, which has been my focus throughout--specifically in the context of the top comment claiming that the EA Hotel is "the best use of EA money," which I was directly responding to and questioning. I am merely expressing skepticism and doing due diligence; is that wrong? For a specific claim like that, where is the evidence to support it?
So, just so there is clear understanding, I am really only interested here in whether or not I personally should help fund the EA Hotel; I am willing to do so if there is convincing logic or evidence for it. Up to this point I still do not see it. However, the EA Hotel now has the funding it needs from other sources, so perhaps we should just leave the matter--I am still willing to continue, though increasingly less so, as in my estimation there are diminishing apparent returns for all involved.
However, thank you again for the response and the information above; I will take some time to peruse it.
This point is reasonable, and I fully acknowledge that the EA Hotel cannot have much measurable data yet in its ~1 year of existence. However, I don't think it is a particularly satisfying counter-response.
If the nature of the EA Hotel's work is fundamentally immeasurable, how is one able to objectively quantify that it is in fact being altruistic effectively? If it is not fundamentally immeasurable but is not measured and could have been measured, then that is likely simply incompetence. Is it not? Either way, it would be impossible to evidentially state that the EA Hotel has good yield.
Further, the idea that the EA Hotel's work is immeasurable because it is a meta project or has some vague multiplier effects is fundamentally dissatisfying to me. There is a page full of attempted calculations in update 3, so I do not believe the EA Hotel assumes it is immeasurable either, or at least it originally did not. The more likely answer, a la Occam's razor, is that there has simply been insufficient effort in resolving the quantification. There are, after all, plenty of other more pressing and practical challenges to be met on a day-to-day basis; and [surprisingly] it does not seem to have been pressed much as a potential issue before (per the other response by Greg_Colbourn).
Even if it is difficult to measure, a project (particularly one which aspires to be effective--or greatly effective, or even the most effective) must, in my opinion, outline some clear goals against which its progress can be benchmarked, so that it can determine its performance and broadcast this clearly. It is simply best practice to do so. This has not been done as far as I can tell--if I am mistaken, please point me to it and I will revise my opinion accordingly.
There are a couple of additional points I would make. Firstly, as an EA Hotel occupant, you are highly likely to be positively biased in its favor, and therefore naturally inclined to calculate more generously on its behalf; certainly the article you wrote and linked to is strongly supportive. Is this refutable? It is also likely an objective fact that your interests align with the EA Hotel's, and someone whose interests were less aligned could easily weight the considerations you stated less heavily. You are therefore not an objective judge, nor the best judge, of the EA Hotel's value, despite (or because of) your first-hand experience.
The other point, which I think applies throughout the EA community, is that it is somewhat elitist to think that the EA way is the best (and perhaps only) way--I believe there is some credibility to this claim, as it was noted in the recent EA survey. For example, is Bill Gates an EA? He does not visit the EA Forum much AFAIK, focuses on efforts that differ somewhat from EA priorities, etc. But I would think that his net positive utility undeniably and vastly outweighs that of the entire EA Forum, even if he does not follow EA, or at least does not follow it strictly. Bill Gates does not (to my knowledge) support the EA Hotel, and if he does, it is not at a level that makes it financially sustainable in perpetuity. Should he--and if he does not, is he wrong for not doing so? If you believe that the EA Hotel is the best use of funds (as has been claimed at the top of this thread and is supported in your article), then yes, you would probably conclude that he is wrong, since his inaccurate allocation of resources results in a sub-ideal outcome in terms of ethical utility. This logic is misguided in my opinion.
Contrary to EA puritanism, the fact in my opinion is that there are plenty of EAs beyond the EA community's borders, e.g. celebrities like Bill Gates and Elon Musk, but also plenty of anonymous people in general. Is the "Chasm" you described real? I am not sure that it is, or at least not so acutely. In the EA Hotel's context in particular, there are already plenty of other richly-funded organizations, such as Google's DeepMind, that are active in and significantly contributing to the same fields the EA Hotel is interested in (from my understanding). In such an environment, the EA Hotel's contributions are therefore likely to be not a large multiplier but a relatively small one (although the opposite is not impossible, and I am open to that possibility). It is possible that contributing to the EA Hotel is actually suboptimal or even unethical, because its incremental contributions yield diminished returns relative to what could result via alternative avenues. This is not a definite conclusion, but I am noting it for completeness and inclusivity of contrary viewpoints.
To be clear, none of what I have written is intended as an insult in any way. The point is only that it is not clear that the EA Hotel is able to substantiate its claim to being effectively altruistic (e.g. due to the lack of measurability, which seems to be your argument), particularly "very" or even "the most" effective (in terms of output per resource input). Based on this lack of clarity, I find that I cannot personally commit to supporting the project.
However, it looks like the EA Hotel already has the funding it needs now, so perhaps we may simply go our separate ways at this point. My aim throughout was to be constructive. Hopefully some of it was useful in some way.
The fact that others are not interested in such due diligence is itself a separate concern, suggesting that support for the EA Hotel is perhaps not as rigorous as it should be; however, this is a concern not about you or the EA Hotel, but rather about your supporters. Due diligence seems to me like a basic requirement, particularly in the EA movement.
I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel, it is to help them help other people (and animals). AI Safety research, and other X-risk research in general, is ultimately about preventing the extinction of humanity (and other life). This is clearly a valuable thing to be aiming for. However, as I said before, it's hard to directly compare this kind of thing (and meta-level work) with shovel-ready object-level interventions like distributing mosquito nets in the developing world.
No, I recognize, understand, and appreciate this point fully; but I fundamentally do not agree, which is why I cannot in good conscience support this project currently. Because it is such a high-value and potentially profitable family of fields for society (AI research in particular), it has already attracted significant funding from entrenched institutions, e.g. Google's DeepMind. In general, there is a point of diminishing returns on incremental investments, which is a real possibility in such a richly-funded area as this one. Unless there is evidence, or at least logic, to the contrary, there is no reason to reject this concern or assume otherwise.
Also, as part of my review, I looked at the profiles of the EA Hotel's current and past occupants; the primary measurable output seems to be progress in MOOCs and posts on the EA Forum, which is frankly ~0 output in my opinion--again, this is not an insult at all, it is simply my assessment. It may be that the EA Hotel is actually fulfilling the role of a remedial school for non-competitive researchers who are not attracting employment from the richly-funded organizations in their fields; such an effort would likely be low-yield--again, this is not an insult, it is just a possibility from market-based logic. There are certainly other, more positive potential roles (which I am very open to, otherwise I would not bother continuing this discussion to this thread depth), but these have not yet been demonstrated.
Re: the measurement-bias response, this is an incomplete answer. It is fine to not have much supporting data, as the project is only ~1 year old at this point; however, some data should be generated, and more importantly the project should have a charter with an estimate or goal of the anticipated measurable effects, against which data can be recorded (whether successfully or unsuccessfully) to show how well the organization is doing in its efforts. How else will you know whether you are being effective, or successful?
How do you know that the EA Hotel is being effectively altruistic (again, particularly relative to competing efforts), in the context of your claim at the top about it being effectively "the best use of money"?
These issues still remain open in my opinion. Hopefully these critiques will at least be some food for thought to strengthen the EA Hotel and future endeavors.
The 3rd update was reviewed; that was what led me to search for Part 2, which is expected in update 10. Frankly, I am personally not interested in the 3rd update's calculations, because simply based on my personal time allocation I would prefer a more concise estimate over having to go through the calculations tediously myself.
Please understand that this is not an insult, and in fact I think it is a reasonable point--for example, with other [in some ways competing] efforts (e.g. startups), it would not be acceptable to most venture capitalists to present slides of incomplete calculations in a pitch deck and ask them to work through the numbers manually on their own time rather than having the conclusive points tidily summarized. It is likely just not worth the effort in most cases.
It looks like the EA Hotel has now obtained funding through 2019, so I congratulate your team on that. If you would like to continue the discussion, I suggest replying to my other comment (below) so that the threads do not diverge.
That is understandable; however, it is unsatisfying in my personal opinion--I cannot commit to funding on such indefinite and vague terms. You (and others) clearly think otherwise, but hopefully you can understand this contrary perspective.
Even if "the EA Hotel is a meta level project," which to be clear I can certainly understand, there should still be some understanding or estimate of what the anticipated multiplier should be, i.e. a range with a target within a +/- margin. From what I can see upon reviewing current and past guests' projects, I am not confident that there will be a high return of ethical utility for resource inputs.
Unless it can be demonstrated otherwise, my default inclination (similar to Bill Gates' strategy) is that projects in developing regions are generally (but certainly not always) significantly higher-yield than those in developed regions, which is not unlike the logic that animal welfare efforts are higher-yield than human-focused efforts. The EA Hotel is in a developed society and seems focused on field(s) that already have some of the highest funding, e.g. artificial intelligence (AI) research. Based on this, it seems perhaps incorrect (or even unethical) to allocate to this project. This is not necessarily conclusive, but evidence to the contrary has not been clear.
Hopefully you understand that what I am describing above is not meant as a personal insult by any means or the result of rash emotions, but rather the result of rational consideration along ethical lines.
Thanks for asking, that's a good question.
It basically comes down to yield, or return on investment (ROI). Utilitarianism and effective altruism seem to be commonly related, and the former involves some quantification of value and a ratio of output per unit of input; one might say that the most ethical stance is to find the min/max optimization that produces the highest return. Whether EA demands or requires such an optimal maximum, or whether suboptimal effectiveness is still ethical, is an interesting but separate discussion.
So in a lot of the animal welfare threads, there is commonly some idea that they produce superior yield in ethical utility, usually because there is simply more biomass, it is much cheaper, etc. Even if I usually don't agree, there is still the basic quantification that provides a foundation for such an ethical claim.
Another example is environmental organizations such as Cool Earth, which can point to x acres of rain forest preserved and y oxygen production or carbon sequestration per $ given. That is not exactly utility per se, but it is a good measure that could probably be converted into some units of generic utility.
For the EA Hotel, I am not sure what the yield is for $ given. In order to make a claim that x is the "best use" of resources, this sort of consideration is required and must be clear, IMO.
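To make the kind of comparison I have in mind concrete, here is a minimal sketch in my own notation (the symbols and the framing are mine, not figures the EA Hotel has published):

$$ i^* = \arg\max_i \; \frac{\mathbb{E}[U_i]}{c_i} $$

where $c_i$ is the cost of funding option $i$ (e.g. £ donated) and $\mathbb{E}[U_i]$ is the expected ethical utility it produces. For Cool Earth the numerator can at least be roughly approximated from acres preserved per £; for the EA Hotel, I do not see the inputs needed to fill this in at all.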
Consider that if the allocation of resources for yield is an ethical decision, then asking for some clarification is not rude or off-topic at all; it is simply due diligence. Even if I have the funds to donate to the EA Hotel, if the yield is lower than the utility that could be produced by an alternative (and in fact competing) effort, then it is my ethical obligation to fund the alternative. Is it not?
Perhaps there is still a misunderstanding that my ask is overly aggressive or impolite; however, what would be worse is for someone simply not to care and not to engage in the discussion. From my perspective, though, the EA Forum is seemingly giving a pass to one of our own. Again, my ask is simply due diligence. For context, at £5.7k/month, I could fund the EA Hotel for over a year. But does the EA Hotel provide more benefit than animal welfare efforts? Polio? Global warming? Political campaigns? Poverty alleviation?
The answer does not seem clear to me. Without that, it is difficult to proceed in making an ethical decision.
That is understandable; however, when presenting information (i.e. linking to it on your homepage), there is an implicit endorsement of said information--otherwise it should not be presented. This is irrespective of whether the source is formally affiliated or not; its mere presence is already an informal affiliation. The simple fact that the EA Hotel does not have a better presentation of information is itself meta-information about the state of the organization and project.
However, this was not really the main point; it was only a "little" concern, as I previously wrote. The more significant concern is that there does not seem to be a ready presentation of the effort's value (expected or actual), and that one is not expected until update 10--in my opinion a mistake, as I wrote before, since this should be one of the primary priorities for an EA organization.
Oh, found your 2nd reply to me.
This is an astute point, I fully acknowledge and recognize the validity of what you are saying.
However, it is not that simple; it depends on the expected yield curve of the specific effort and its specific context. In some cases an already "well-funded" effort generates high value that is still below its full potential and should be funded further, e.g. due to economies of scale; in other cases, there are diminishing returns and it should not be funded further.
The same is true for "not well-funded" efforts: some have high potential and should be lifted off the ground, and others should be neglected and left to die.
So that is the difference, in general terms. Determining which case a specific example falls into takes some careful consideration of the details.
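To put the same point in rough quantitative terms (again, my own sketch rather than anything stated in this thread): if $U_i(F_i)$ is the utility produced by effort $i$ as a function of its total funding $F_i$, then an additional £1 should go wherever the marginal return $dU_i/dF_i$, evaluated at the current funding level, is highest. A "well-funded" effort can still deserve more money if its marginal return remains high (e.g. economies of scale), and a "not well-funded" effort can deserve nothing if its marginal return is low. My concern about the EA Hotel is precisely where it sits on this curve.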