
MatthewDahlhausen

Research Engineer @ National Renewable Energy Laboratory
1013 karma · Joined · Working (6-15 years) · Denver, CO, USA

Bio

I develop software tools for the building energy efficiency industry. My background is in architectural and mechanical engineering (MS Penn State, PhD University of Maryland). I know quite a bit about indoor air quality and indoor infectious disease transfer, and closely follow all things related to climate change and the energy transition. I co-organize the local EA group in Denver, Colorado.

Comments
132

A useful test when moral theorizing about animals is to swap "animals" with "humans" and see if your answer changes substantially. In this example, if the answer changes, the relevant difference for you isn't about pure expected value consequentialism; it's about some salient difference between the rights or moral status of animals vs. humans. Vegans tend to give significant, even equivalent, moral status to some animals used for food. If you give near-equal moral status to animals, "offsetting meat eating by donating to animal welfare orgs" is similar to "donating to global health charities to offset hiring a hitman to target a group of humans". There is a series of rebuttals, counter-rebuttals, etc. to this line of reasoning. Not going to get into all of them. But suffice it to say that in the animal welfare space, an animal welfarist carnivore is trusted only hesitantly - it signals either a lack of commitment or discipline, a diet/health struggle, a discordant belief that animals deserve far fewer rights and less moral status than humans, or (much rarer) a fanatical consequentialist ideology that thinks offsetting human killing is morally coherent and acceptable. An earnest carnivore who cares a lot about animal welfare is incredibly rare.

Maximization is not as simple as choosing the single action that produces the most benefit; actions are not necessarily exclusive. If I go to the grocery store, I don't only buy beans because I think they have the highest nutritional value per dollar. I buy other things too, and need to, because beans alone are insufficient. One can donate to animal welfare charities and be vegan; those aren't exclusive.

Can you elaborate on why you think we will never eradicate factory farming? You point to near-term trends that suggest it will get worse over the coming decades. What about on a century-long time scale or longer? Factory farming has only been around for a few generations, and food habits have changed tremendously over that time.

I think it's important to consider how some strategies may make future work difficult. For example, Martha Nussbaum highlights how much of the legal theory in the animal rights movement has relied on showing similarities between human and animal intelligence. Such a "like us" comparison limits consideration to a small subset of vertebrates. It is impotent at helping animals like chickens, where much legal work is happening now. Other legal theories are much more robust to expansion and to the consideration of other animals as the science improves our understanding of their needs and behavior.

Using your line of argument applied to the analogy you provided would suggest that efforts like developing a malaria vaccine are misguided, because malaria will always be with us, and we should just focus on reducing infection rates and treatment.

ASHRAE has long had standards and working groups on UVC, and recently published standard 241 on Control of Infectious Aerosols. The goal is to reduce transmission risk, not to support any one particular technology. Filtration is usually cheaper than Far-UVC and easier to maintain for the same level of infection control. Far-UVC/UVC is better in some niches, particularly in healthcare settings that require high air flow rates.

I suggest getting involved in ASHRAE and the research community that has been working on and developing standards for infection control for over a century.

From an EA perspective, I think it is more effective to promote adoption of ASHRAE Std 241 than the adoption of Far-UVC specifically.

-perspective of a Ph.D. mechanical engineer and ASHRAE member with experience in IAQ

Academic freedom is not and has never been meant to protect professors on topics that have no relevance to their discipline: "Teachers are entitled to freedom in the classroom in discussing their subject, but they should be careful not to introduce into their teaching controversial matter which has no relation to their subject. Limitations of academic freedom because of religious or other aims of the institution should be clearly stated in writing at the time of the appointment."

If, say, a philosophy professor wants to express opinions on infanticide, that is covered under academic freedom. If they want to encourage students to drink bleach, saying it is good for their health, that is not covered.

We can and should have a strong standard of academic freedom for relevant, on-topic contributions. But race science is off topic and irrelevant to EA. It's closer to spam. Should the forum have no spam filter and rely on community members to downvote posts as the method of spam control?

A clear example of a post that would be banned under the rules: why-ea-will-be-anti-woke-or-die.

Reducing chronic health risks from indoor air pollution (mostly PM 2.5) generally entails different strategies than reducing infection risk from aerosols. Filtration can address both, but the airflow rates and costs can be quite different. UVC won't do anything about PM 2.5, and may contribute to it with ozone formation.

I recommend reading the supporting literature and history behind ASHRAE Std 62.1 and Std 241, which cover ventilation and control for infectious diseases in buildings. There are also several recent studies by the National Academies on air pollution and infectious aerosols. The indoor air and infectious disease communities are quite large - with their own funding sources and conferences. It seems a lot of the "gaps" presented here are not unknown to experts, but just to the EA community and amateur researchers.

Meta level question:

How does Manifest have anything to do with Effective Altruism, and why is this on the EA forum?

Shouldn't this post be on some other channel internal to Manifest and the forecasting community?

I get that there are some people who went to Manifest who are also in the EA movement, but it seems like the communities are quite distinct and have different goals. From comments and conversations, it seems pretty clear to me that this Manifest community has a strong hostility towards even considering the reputational risk that platforming racist speakers poses to the rest of the EA movement. Part of being a big tent movement means caring about not stinking up the tent for everyone else.

Let's please firewall the Manifest community from EA?

I think that longtermism relies on more popular, evidence-based causes like global health and animal welfare to do its reputational laundering through the EA label. I don't see any benefit to global health and animal welfare causes from longtermism. And for that reason I think it would be better for the movement to split into "effective altruism" and "speculative altruism" so the more robust global health and animal welfare cause areas don't have to suffer the reputational risk and criticism that is almost entirely directed at the longtermism wing.

Given the movement is essentially driven by Open Philanthropy, and they aren't going to split, I don't see such a large movement split happening. So I may be inclined towards some version of, as you say, "Stop doing stuff that looks weird, even if it is perfectly defensible by longtermist lights, simply because I have neartermist values and disagree with it." The longtermist stuff is maybe like 20% of funding and 80% of reputational risk, and the most important longtermist concerns can be handled without the really weird speculative stuff.

But that's irrelevant, because I think this ought to be a pretty clear case of the grant not being defensible by longtermist standards. Paying Bay Area software development salaries to develop a video game (why not a cheap developer literally anywhere else?) that didn't even get published is hardly defensible. I get that the whole purpose of the fund is to do "hits based giving". But it's created an environment where nothing can be a mistake, because it is expected most things would fail. And if nothing is a mistake, how can the fund learn from mistakes?

A butterfly flaps its wings and causes a devastating hurricane to form in the tropics. Therefore, we must exterminate butterflies, because there is some small probability X that doing so will avert hurricane disaster.

But it is just as easily the case that the butterfly's flaps prevent devastating hurricanes from forming. Therefore we must massively grow their population.

The point being, it can be practically impossible to understand the causal tree and get even the sign right around low-probability events.

That's what I take issue with - it's not just the numbers, it's the structural uncertainty of cause-and-effect chains when you consider really low-probability events. Expected value is a pretty bad tool for action-relevant decision making when you are dealing with such numerical and structural uncertainty. It's perhaps better to pick a framework like "it's robust under multiple decision theories" or "pick something that has the least downside risk".
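The sign problem can be sketched numerically (all numbers here are made up, purely to illustrate the point): when the stakes are astronomical but you are genuinely unsure of the direction of the effect, the expected value swings from hugely positive to hugely negative on a tiny change in your structural assumption.

```python
# Toy illustration of expected value under structural (sign) uncertainty.
# All numbers are hypothetical.

def expected_value(p_good: float, benefit: float, harm: float) -> float:
    """EV when the intervention helps with probability p_good and
    backfires (same causal chain, opposite sign) otherwise."""
    return p_good * benefit - (1 - p_good) * harm

# Astronomical stakes dominated by a near coin-flip over causal structure:
print(expected_value(0.51, 1e12, 1e12))  # slightly positive: looks great
print(expected_value(0.49, 1e12, 1e12))  # slightly negative: looks disastrous
```

A 2-percentage-point shift in a probability nobody can actually estimate flips the recommendation entirely, which is the sense in which the tool breaks down here.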

In our instance, two competing plausible structural theories among many are something like: "game teaches someone an AI safety concept -> makes them more knowledgeable or inspires them to take action -> they work on AI safety -> solve alignment problem -> future saved" vs. "people get interested in doing the most good -> see a community of people who claim to do that, but who fund rich people to make video games -> causes widespread distrust of the movement -> strong social stigma develops against people who care about AI risk -> greatly narrowed range of people/worldviews because people don't want to associate -> makes it near impossible to solve alignment problem -> future destroyed"

The justifications for these grants tend to use some simple expected value calculation of a singular rosy hypothetical causal chain. The problem is that it's possible to construct a hypothetical causal chain to justify any sort of grant. So you have to do more than just make a rosy causal chain and multiply numbers through. I've commented before on some pretty bad ones that don't pass the laugh test among domain experts in the climate and air quality space.
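The multiply-numbers-through move can be sketched with both chains at once (every probability and payoff below is invented for illustration): each chain is a product of step probabilities, and the conclusion depends entirely on which chain you chose to write down and which step probabilities you guessed.

```python
# Hypothetical numbers: multiplying step probabilities through two
# competing causal chains yields whatever conclusion the author prefers.
import math

def chain_probability(step_probs):
    """Probability the whole chain holds, assuming independent steps."""
    return math.prod(step_probs)

# game -> inspiration -> works on safety -> solves alignment (made up)
good_chain = [0.1, 0.2, 0.05, 0.01]
# game -> distrust -> stigma -> alignment unsolvable (equally made up)
bad_chain = [0.3, 0.2, 0.1, 0.02]

value_if_saved = 1e15        # invented astronomical stakes
value_if_destroyed = -1e15

ev = (chain_probability(good_chain) * value_if_saved
      + chain_probability(bad_chain) * value_if_destroyed)
print(ev)  # sign depends entirely on which chains and guesses you picked
```

Present only the first chain and the grant looks obviously worthwhile; include the second and the same arithmetic condemns it, which is why a single rosy chain proves nothing.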

The key lesson from early EA (evidence-based giving in global health) was that it is really hard to understand whether the thing you are doing is having an impact, and what the valence of the impact is, for even short, measurable causal chains. EA's popular causes now (longtermism) seem to jettison that lesson, when it is even more unclear what the impact and sign are through complicated low-probability causal chains.

So it's about a lot more than effect sizes.
