Indeed, 

A Safety Manager (in a small company) or a Safety Department (in a larger company) needs to be independent of the department whose safety they monitor, so that they are not conflicted between Safety and other objectives like, say, an urgent production deadline (of course, in reality they will know people and so on, it's never perfect). Typically, they will have reporting lines that meet higher up (e.g. CEO or Vice President), and this senior manager will be responsible for resolving any disagreements. If the Safety Manager says "it's not safe" and the production department says "we need to do this," we do not want it to become a battle of wills. Instead, the Safety Manager focuses exclusively on the risk, and the senior manager decides if the company will accept that risk. Typically, the resolution would not be "OK, we accept a 10% risk of a big explosion" but rather finding a way to do the work safely, even if that makes it much more expensive and slower. 

In a smaller company or a start-up, the Safety Manager will sometimes be a more experienced hire than most of the staff, and this too will give them a bit of authority. 

I think what you're describing as the people "put in charge of this stuff" are probably not the analogous people to Safety Managers. In every factory and lab, there would be junior people doing important safety work. The difference is that in addition to these, there would be a Safety Manager, one person who would be empowered to influence decisions. This person would typically also oversee the safety work done by more junior people, but that isn't always the case. 

Again, the difference is that people in engineering can point to historical incidents of oil rigs exploding with multiple casualties, of buildings collapsing, ... and so they recognise that getting Safety wrong is a big deal, with catastrophic consequences. If I compare this to, say, a chemistry lab, I see what you describe. Safety is still very much emphasised and spoken about - nobody would ever say "Safety isn't important" - but it would be relatively common for someone (say the professor) to overrule the safety person without necessarily addressing the concerns. 

Also in a lab, to some extent it's true that each researcher's risks mostly impact themselves - if your vessel blows up or your toxic reagent spills, it's most likely going to be you personally who will be the victim. So there is sometimes a mentality that it's up to each person to decide what risks are acceptable - although the better and larger labs will have moved past this. 

I imagine that most people in biolabs still feel like they're in a lab situation. Maybe each researcher feels that the primary role of Safety is to keep them and their co-workers safe (which I'm sure is something they take very seriously), but they're not really focused on the potential of global-scale catastrophes which would justify putting someone in charge. 

I again emphasise that most of what I know about safety in biolabs comes from your post, so I do not want to suggest that I know better - I'm only trying to make sense of it. Feel free to correct / enlighten me (anyone!).

Hi Rose,

To your second question first: I don't know if there are specific laws related to e.g. ASTM standards. But there are laws related to criminal negligence in every country. So if, say, you build a tank and it explodes, and it turns out that you didn't follow the appropriate regulations, you will be held criminally liable - you will pay fines and potentially end up in jail. You may believe that the approach you took was equally safe and/or that it was unrelated to the accident, but you're unlikely to succeed with this defence in court - it's like arguing "I was drunk, but that's not why I crashed my car."

And not just you, but a series of people, perhaps up to and including the CEO, and also the Safety Manager, will be liable. So these people are highly incentivised to follow the guidelines. And so, largely independently of whether there are actually criminal penalties for not following the standards even when there is no accident, the system kind of polices itself. As an engineer, you just follow the standards. You do not need further justification for a specific safety step or cost than that it is required by the relevant standard. 

But I'm very conscious that the situations are very different. In engineering, there are many years of experience, and there have been lots of accidents from which we've learned and based on which the standards have gradually been modified. And the risk of any one accident, even the very worst kind like a building collapse or a major explosion, tends to be localised. With biohazards, we can imagine a single incident, perhaps one that has never happened before, which could be catastrophic for humanity. So we need to be more proactive. 

Now to the specific points:

Reporting:

For engineering (factories in general), there are typically two important mechanisms. 

  1. Every incident must be investigated and a detailed incident report must be filed. This report would typically be written by the most qualified person (i.e. the engineer working directly on the system) and then reviewed over several steps by more senior managers, by the Safety Manager and eventually by a very senior manager, often the CEO. It would also be quite typical for the company to bring in external safety experts, even from a competitor (who might be best placed to understand the risks), to ensure the analysis is complete. The report would need to provide a full analysis of what went wrong, what could have gone worse, why it went wrong, and how to ensure that this incident never reoccurs and that similar incidents never happen. It would not be unusual for the entire production unit to be closed while an investigation is being carried out, and for it not to be allowed to re-open until any concerns have been fully addressed to the satisfaction of the Safety Manager. And all this is just the internal process; there will also frequently be external reviewers of the report itself, sometimes from state bodies. And this is what happens when there is an incident which does not lead to any criminal proceedings or negligence charges - if those are involved, the whole process becomes far more demanding. 
  2. Most companies have a "near miss" box (physical or virtual) in which any employee can report, anonymously if they choose, any case where an accident could have occurred, or any dangerous situation that was allowed to develop. These are taken very seriously, and typically the Safety Manager will lead a full investigation into the risks or hazards identified. The fact that the accident didn't actually happen is not really a mitigating factor, since this might have been just good fortune. An example of this: if an Operator notices that the reactor is overheating and could potentially overflow, and so she turns on the cooling water to avoid the risk, this would be considered a near miss if it was not that particular Operator's role to do this - the company was lucky that she noticed it, but what would have happened if she hadn't? (A rough sketch of what such a near-miss record might contain follows this list.)
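
To make that second mechanism a little more concrete, here is a minimal sketch of how a near-miss record and its follow-up could be represented. This is purely illustrative - the field names, workflow states and example data are my own assumptions, not any particular company's system.

```python
# Illustrative sketch only: field names, workflow states and example data are
# assumptions for this comment, not a real company's reporting system.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class NearMissReport:
    """A report of something that could have gone wrong but didn't."""
    description: str                      # what happened / what nearly happened
    date_observed: date
    reported_by: Optional[str] = None     # None = anonymous report
    status: str = "open"                  # open -> under_investigation -> closed
    corrective_actions: list = field(default_factory=list)

    def start_investigation(self, lead: str) -> None:
        # The Safety Manager leads the investigation even though no accident
        # occurred - good fortune is not a mitigating factor.
        self.status = "under_investigation"
        print(f"{lead} is investigating: {self.description}")

    def close(self, actions: list) -> None:
        # The report is only closed once corrective actions have been agreed.
        self.corrective_actions = actions
        self.status = "closed"


# Hypothetical example: the overheating-reactor near miss described above.
report = NearMissReport(
    description="Reactor overheating spotted by an operator outside her role; "
                "cooling water turned on manually",
    date_observed=date(2024, 1, 15),
)
report.start_investigation(lead="Safety Manager")
report.close(actions=["Add a high-temperature alarm",
                      "Clarify who monitors reactor temperature on each shift"])
```

The point, of course, is not the code but the discipline it represents: every near miss becomes a tracked item with an owner and agreed actions, rather than a story told over coffee.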

 

Proactive vs. Reactive:

  1. We start already with a very clear set of safety rules, which have been developed over the years. In my last company, I was one of the people who "developed" the initial safety procedures. But we didn't start from scratch - rather, we started from lots of excellent procedures which are available online and from government bodies, from material safety data sheets which are available for every chemical and give detailed safe-handling instructions, and so on. Even so, this process probably took about one month of my time, as Head of Engineering, and similar amounts from a couple of other employees, including the Safety Manager - and this was just for a lab and pilot-plant set-up. In a sense, you could consider this to be the "reactive" part: the set of rules that have been built up over years of experience. The rest of what I describe below is the proactive part, in which the people best qualified to judge evaluate what could potentially go wrong:
  2. Starting from this as a baseline, we then do a detailed hazard analysis (HAZAN, HAZOP analyses) in which we study all the risks and hazards that exist in our lab or pilot plant, whether they relate to equipment, chemicals or operations. So, for example, if we use liquid nitrogen, we'd already have (in step 1) a detailed set of rules for handling liquid nitrogen, and in step 2 we'd do a detailed analysis of what could possibly go wrong - what if the container gets dropped? what if an operator makes a mistake? what if a container has a leak? what if someone who isn't trained to work with liquid nitrogen were in the lab? etc. And we'd then need to develop procedures to ensure that none of these scenarios could lead to accidents or personal injuries. 
  3. Then, every time we want to do something new (e.g. we work with a new chemical, we want to try a new process experiment, ...) we need to do a detailed risk analysis of that and get it approved by the Safety Manager before we can start. 
  4. In addition to the above, for major risks (e.g. solvent-handling that could lead to explosions) we would sometimes do additional safety analyses - one example would be a "bowtie analysis" (a deceptively elegant name!) in which we create a picture shaped like a bowtie. 
    1. In the centre (the knot) is the event itself - let's say an ignition event, but in a biohazard context it could be, say, a researcher dropping a glass container holding a dangerous virus. 
    2. To the left are all the steps that are taken to prevent this event from occurring. For example, for an ignition/explosion risk, you need three things - an explosive atmosphere, a source of ignition (e.g. a spark) and the presence of oxygen - and typically we'd want to make sure at most one of these was present, so that we'd have two layers of security. So, on the bowtie diagram, you would start on the left, look at something that might happen (e.g. the solvent spills) and then look sequentially at what could happen next, to make sure that the safety measures in place would take care of it. An especially important concern would be any occurrence that could weaken more than one layer of security simultaneously - for example, if a researcher in a biolab takes a sample outside the safe working area, intentionally or by accident, this might mean that several layers of what appeared to be an impenetrable safety system are broken at once.
    3. To the right are the steps to minimise the consequences. If the explosion does occur, these are the steps that will minimise the injuries, the casualties, the damage: for example, appropriate PPE, fire doors, fire-handling procedures, a minimum number of people in the lab, etc. (A rough sketch of this left-event-right structure follows the list.)
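
To show just the shape of a bowtie analysis, here is a minimal sketch. The top event and the barriers listed are generic examples I've made up for illustration, not a real analysis.

```python
# Illustrative sketch of the bowtie structure only; the top event and barriers
# below are generic examples, not a real hazard analysis.
from dataclasses import dataclass, field


@dataclass
class Bowtie:
    top_event: str                                           # the "knot" in the middle
    preventive_barriers: list = field(default_factory=list)  # left-hand side
    mitigating_barriers: list = field(default_factory=list)  # right-hand side

    def describe(self) -> str:
        left = " | ".join(self.preventive_barriers)
        right = " | ".join(self.mitigating_barriers)
        return f"[{left}]  >>  ({self.top_event})  >>  [{right}]"


# Hypothetical example for the solvent-ignition risk discussed above.
ignition = Bowtie(
    top_event="Ignition of solvent vapours",
    preventive_barriers=[
        "Ventilation keeps vapour concentration below the explosive limit",
        "No ignition sources in the solvent area (rated equipment only)",
    ],
    mitigating_barriers=[
        "Fire doors and suppression system",
        "PPE and evacuation procedure",
        "Minimum number of people in the lab",
    ],
)
print(ignition.describe())
```

The useful discipline is the symmetry: every major event should have independent barriers on both sides, and laying them out this way makes it obvious when one side is thin, or when a single failure could knock out several barriers at once.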

I think the idea is clear. I am sure that someone, probably many people, have done this for bio-hazards and bio-security. But what is maybe different is that for engineers, there is such a wealth of documentation and examples out there, and such a number of qualified, respected experts, that when doing this risk analysis, we have a lot to build on and we're not just relying on first-principles, although we use that too. 

For example, I can go on the internet and find some excellent guides for how to run a good HAZOP analysis, and I can easily get experienced safety experts to visit my pilot plant and lead an external review of our HAZOP analysis, and typically these experts will be familiar with the specific risks we're dealing with. I'm sure people run HAZOPs for biolabs too (I hope!!), but I'm not sure they would have the same quality of information and expertise available to help ensure they don't miss anything. 

From your analysis of biolabs, it feels much more haphazard. I'm sure every lab manager means well and does what they can, but it's so much easier for them to miss something, or maybe just not to have the right qualified person or the relevant information available to them. 

What I've described above is not a check-list, but it is a procedure that works in widely different scenarios, where you incorporate experience, understanding and risk-analysis to create the safest working environment possible. And even if details change, you will find more or less the same approach anywhere around the world, and anyone, anywhere, will have access to all this information online. 

And ... despite all this, we still have accidents in chemical factories and buildings that collapse. Luckily these do not threaten civilisation the way a bio-accident could. 

Hope this helps - happy to share more details or answer questions if that would be useful. 



 

Wow! This is a great report. So much information. 

It's also quite scary, and IMHO fully justifies Biosecurity and Biosafety being priorities for EA. We do know what regulations, standards and guidance should look like, because in fields like engineering there are very clear ASTM or ISO rules, and as an engineer, you can be put in jail or sued if you don't follow them. I realise that these have been developed over literally centuries of experience and experiment - yet given the severity of biohazards, it's not OK to just say "well, we did our best". 

I'm particularly worried by two points you make, which are very telling:

  1. The lack of reporting. How can we overcome this? It seems absolutely critical if we want to get on top of this.
  2. The reactive nature of the regulatory procedures in a field where things are changing so fast, where we desperately need to get ahead of the risks. 

It really does seem like the people working in the field understand the seriousness of this (as they would) and might even welcome stronger regulation, if it were based on good science and applied more consistently globally. For example, if ISO 35001 is truly based on the best information available, it would seem very valuable to work towards getting it applied globally, including finding funding to enable labs in poorer countries to fulfil the obligations. 

You write that many people in the field view standards as an obstacle to research. In my experience, everyone in R&D has this impression to some extent. People working with explosive solvents complain about not being able to create explosive atmospheres. But mostly this is just words - deep down, they are happy that they are "forced" to work safely, even if sometimes the layers of safety feel excessive. And this is especially true when everyone is required to follow the same safety rules. 

This is not my field of expertise at all, but I know quite a bit about industrial chemical engineering safety, so I'm kind of looking at this with a bias. I'd love to hear what true experts in this field think! 

But thank you Rose for this really useful summary. I feel so much wiser than I was 30 minutes ago! 
 

This article was the "Classic Forum post" in the EA Forum Digest today. An excellent choice. Though an old post (in EA terms, 2017 is ancient history!), it asks a question that is fundamental to EA. If we want to measure and compare the impact (effectiveness) of two interventions quantitatively, we must multiply the objective impact measured on a group by some factor quantifying the relative value of that group - be it insects or future generations or chickens. 

Since joining EA, I've been impressed by how many people in this community do this, and how often it leads to surprising conclusions, for example in longtermism or animal rights. 

At the same time, I would hazard that the vast majority of people in the world today would essentially give "humans who are alive today" an infinitely larger value than animals or future generations. They wouldn't use those words, but that's how they'd view it. As in, they may be all in favour of animal rights, but would they be willing to sacrifice one human life to save one million cows? Most would not. Would they agree to sacrifice 100 people today to save  100 billion people who will live in the 24th century? Many would not. 

I struggle with questions like this - it seems to require a massive amount of confidence that I'm right, and I'm not sure I have that. 

So it's great that we look for opportunities (reducing x-risks, alternative protein, biosecurity, ...) which are win/win, but sometimes we'll be forced to choose. When I think of radical empathy, I don't just think of the "easy" part where we recognise the potential for suffering and the importance of quality of life, but also of the difficult part where we may have to make choices where one side of the balance has the lives of real, living human beings and the other side does not. 

Thanks Jason, I did not know this. On my tax returns, it is literally worth 45% of every donation. If I donate 100 euros, I get back 45 as a tax rebate. Or put another way, if I donate 200 euros, it only costs me 110 net. Which is quite a dramatic number. 

And while I agree with the other comments that this is less than the 100x efficiency differences we sometimes see, I would reply that the goal would be to donate the 200 euros to the 100x more efficient charity, and so combine the two effects - roughly 2x from the tax rebate times 100x from efficiency, i.e. roughly 200x the benefit - which is still valuable. 

But I fully agree with everyone who is saying that it's better to donate to a very efficient charity even without a tax deduction than to limit myself to donating to charities which are approved for tax deduction. 
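
To make the arithmetic explicit, here is a quick back-of-envelope comparison. The 45% rebate is the figure from my tax return above; the 100x effectiveness ratio is a purely hypothetical number for illustration.

```python
# Back-of-envelope comparison. The 45% rebate is from the comment above;
# the 100x effectiveness ratio is a hypothetical figure for illustration.
REBATE = 0.45

def net_cost(donation: float, deductible: bool) -> float:
    """What the donor actually pays after any tax rebate."""
    return donation * (1 - REBATE) if deductible else donation

print(net_cost(200, deductible=True))   # 110.0 - donating 200 EUR costs 110 EUR net

# For the same 110 EUR out of pocket:
#  - deductible charity at baseline (1x) effectiveness: 200 donated -> 200 units of good
#  - non-deductible charity at 100x effectiveness:      110 donated -> 11,000 units of good
deductible_benefit = 1 * 200
non_deductible_benefit = 100 * 110
print(non_deductible_benefit / deductible_benefit)   # 55.0 - effectiveness still dominates
```

So the rebate is well worth having, but it changes the answer by a factor of roughly two, not by the factor of 100 that effectiveness differences can.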

Thank you Luke for this great answer and so much valuable information. It may be that Belgium is just a few steps behind other countries, since most of the countries surrounding Belgium seem to offer the option to donate tax-deductibly. 

No, as far as I know it is NOT possible in general for Belgians to donate to Dutch charities, but there are schemes which enable donations to specific charities which are part of those schemes. 

For example, the website Transnationalgiving.eu seems to be a great resource on this, though it also highlights the complexity of cross-border donations within the EU. 

There is a process through which it is possible for someone from Belgium to donate, but only to charities which are registered with this scheme. There is GiveDirectly in the UK and I think other direct-donation charities (which I like), but I do not see any effective altruism charities on that list. If there are some, it would be great to make people aware of it. If there are none, maybe it would be worth contacting some EU / UK based charities to see if they can easily register. 

I am happy to try to do this - are there any specific charities you would recommend in the list of countries given on the website? 

Really appreciate your comment and advice, and sorry for the very slow reply, I had some heavy time-commitments recently. 

 

Thanks Amber and Juan,

This was really interesting for this chemical engineer looking at where I can best contribute. It's complex because there are so many urgent problems, and yet, while chemical engineers are quite a good fit for several of them, there isn't one obvious fit in the way that an IT expert might immediately lean towards AI Safety or a biologist might lean towards one of the food projects like the one you're working on. 

I also really like your last comment about the science / policy interface, and perhaps chemical engineers can have a big role there. 

 

IMHO this is a very personal, case-by-case calculation. 

A person will donate what they can rather than just a fixed percentage. But this can depend on many factors, not just jobs / income, but also expenditures (do they have kids? are they paying off college loans? a mortgage? ...) and potential risks (what if they lose their job? what if one of the kids gets sick? ...). 

That said, I believe there is a huge opportunity to maximise the "what they can donate" with a more structured approach. Today we have a very simplistic all-or-nothing donation model: for every dollar or euro you have, you either donate it (and lose it forever) or you don't donate it at all. I believe there could also be a happy medium - I've started a draft post on that ... 

This is a very interesting comment and reaction. 

I know what Kirsten means - it does feel like "friends doing stuff" compared to the way some other big movements are run. I didn't read it as being jarring and I don't think it was intended as a massive criticism. 

BUT "friends doing stuff" is good. We need to be trying stuff. And friends who know and trust each other and have a network and knowledge and understanding and who talk to each other and come up with ideas of things to try and actually try them: that is great. That is what so many large R&D organisations dream of but can never achieve, because they get stuck in formal structures and rigid policies. The EA movement is still very young, we need this mentality. 

The alternative would seem to be to make trying new things harder. I'm not convinced that would be helpful. 

The middle ground is probably where at a certain point in scaling up ideas (e.g. based on spend) there could be more scrutiny. 

HOWEVER, where I don't necessarily agree with Kirsten (based on my very limited experience) is on the questions of scrutiny or accountability. Having spent my career outside the EA environment, I can honestly say I have never before seen a group of people who more actively seek scrutiny, put their ideas out there and ask people to shoot at them. 

I see organisations putting their research or action plans on here and saying "guys, this is what we plan to do - before we start, please tell us anything that you disagree with" and then engaging actively and constructively with all the feedback. 

Maybe there are some formal accountability structures missing (because many organisations are like start-ups rather than big companies) - but I don't think you want that to start too early. I can't really comment on this, but I would imagine that most organisations would have some kind of review before investing a lot of money in scaling an idea - but might be happy to give someone $1000 and a few weeks to go and try something. 

The title of your post is very provocative and gets right to the point. 

A typical human hosts about 40 trillion microbes, and presumably large mammals have similar quantities - numbers which are beyond our ability to comprehend. 

If we treat each microbe as sentient, then we need to somehow demonstrate that my feelings are more than 40 trillion times more important than those of a microbe. That is very tough, because we have roughly 500 times fewer neurons in our brains than we have microbes in our bodies, so even if every neuron were united in suffering, how could we justify a factor of 40 trillion? Otherwise, we could end up calculating the importance of different species and their suffering purely in terms of the number of microbes they contain, on the assumption that when a mammal suffers and dies, some fraction of the microbes in and on its body will also suffer, and some will die. 
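
As a rough check on those orders of magnitude (both figures below are commonly cited estimates, not exact numbers):

```python
# Rough order-of-magnitude check; both figures are commonly cited estimates.
microbes_per_human = 4e13     # ~40 trillion microbes in and on a typical human
neurons_per_human = 8.6e10    # ~86 billion neurons in a human brain

print(microbes_per_human / neurons_per_human)   # ~465, i.e. roughly 500x more microbes than neurons
```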

In such a calculus, it seems highly unlikely that we could justify the continued existence of humans, if only based on the number of animals we harm, directly and indirectly, and the microbes in and on those animals.

I believe we will resolve this dilemma sometime in the future (existential risks permitting) with some experimentally and theoretically derived scale by which we can estimate sentience and the potential for suffering based on some quantitative, measurable parameters.

I tend to believe (without evidence) that there is a point below which suffering is not possible, possibly based on the minimum complexity required to create consciousness as an emergent phenomenon. (yes, I realise that sounds like a list of big words cobbled together randomly to give the illusion of understanding). 

We're not there yet, but we will reach a point where we can fully understand the workings of the simplest microbes in terms of chemical equilibria and chemical potential and thermodynamics - what appears as their "desire" to do X or Y will be shown to be no different to the "desire" of a positive ion to approach a negative ion, but without any reason to evolve consciousness. 

The assumption behind this is that one day we will understand consciousness in something other than a hand-waving manner. Right now, given that we don't, it is very difficult to quantify anything, and so we need to err on the side of caution. 
