
TL;DR: When we're unsure about what to do, we sometimes naively take the "average" of the obvious options — despite the fact that a different strategy is often better. For example, if you're not sure if you're in the right job, continuing to do your job as before but with less energy ("going half-speed") is probably not the best approach. Note, however, that sometimes speed itself is the problem, in which case "half-speed" can be totally reasonable — I discuss this and some other considerations below.

I've referenced this phenomenon in some conversations recently, so I'm sharing a relevant post from 2016 — The correct response to uncertainty is *not* half-speed — and sketching out some examples I've seen. 

The correct response to uncertainty is *not* half-speed

The central example in the post is a time when the author was driving along a long stretch of road and started wondering if she’d passed her hotel. So she continued at half-speed, trying to decide if she should keep going or turn around. After a while, she realized: 

If the hotel was ahead of me, I'd get there fastest if I kept going 60mph.  And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction.  And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.  

Either way, full speed was best.  My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward.  So, since I'm uncertain, I should go forward at half-speed!"  But averages don't actually work that way.[1] 

[...] [From a comment] Often a person should hedge bets in some fashion, or should take some action under uncertainty that is different from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way is different in many particulars from the sort of "kind of working" that people often end up accidentally doing, from a naiver sort of average. Often it e.g. involves running info-gathering tests at full speed, one after another.

… 
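To make the arithmetic concrete, here's a toy model of the hotel example (a sketch of mine, not from the original post; the function name, distances, and probabilities are all made up for illustration, and I assume for simplicity that you know how far away the hotel is in each direction):

```python
# Toy model of the hotel example (illustrative assumptions, not from the post):
# the hotel is `ahead` miles in front of you with probability p, and
# `behind` miles in back of you otherwise. Strategy: drive forward n miles
# at speed v; if you haven't found the hotel by then, turn around.

def expected_hours(p: float, ahead: float, behind: float, n: float, v: float) -> float:
    """Expected travel time (hours) for 'check n miles forward first' at speed v."""
    assert n >= ahead, "n must be large enough to reach the hotel if it's ahead"
    time_if_ahead = ahead / v              # you find it on the way out
    time_if_behind = (2 * n + behind) / v  # out n miles, back n miles, then `behind` more
    return p * time_if_ahead + (1 - p) * time_if_behind

# 50/50 chance the hotel is 5 miles ahead or 5 miles behind:
full = expected_hours(p=0.5, ahead=5, behind=5, n=5, v=60)  # ~0.167 h (10 min)
half = expected_hours(p=0.5, ahead=5, behind=5, n=5, v=30)  # ~0.333 h (20 min)
print(f"full speed: {full:.3f} h, half speed: {half:.3f} h")
```

Whatever n you choose, the expected time scales as 1/v, so full speed beats half-speed for every value of p; the uncertainty affects the best choice of n (where to turn around), not the best speed.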

Opinions expressed here are mine, not my employer’s, not the Forum’s, etc. I wrote this fast, so it’s definitely not an exhaustive list of examples or considerations and is probably wrong in important places. 


Where I’ve seen the “half-speed” phenomenon recently[1]

I think that I’ve seen multiple instances of each of these in the past few months. I’m not sure that all of these directly stem from the phenomenon described above — there might be better descriptions for what’s going on — but they seem quite related.

  1. Jobs. Someone is unsure if their role is a good fit for them (or if it's their most impactful option, etc.). So they continue working in it, but put less energy into it. 
    • What you might do instead: set aside time to evaluate your options and fit (and switch jobs based on that), consider setting up some tests, see if you can change or improve things in your current job (talk to your manager, etc.), decide that it’s a bad time to think about this and that you'll re-evaluate at a set time (schedule it in), etc.
  2. Resting. Someone is tired and worried about burning out, but they also think it's important for them to work right now. So they do work-related things that they think are kind of relaxing, but which are neither as useful as the most useful work they could do nor as relaxing as actual rest. (Half-speed resting.)
    • They should probably decide one way or another, and then go all-in. Although it’s possible that this sort of middle ground is a useful hack for the person to actually take a break if they’re having trouble letting themselves do that.
    • See also: Rest in motion 
  3. Community-building. Someone is uncertain about the value of working on some kind of community-building or field-building. So they think that they or the people working on it should slow the community-building down.
    • I’d guess that it’s generally better to investigate (as much as possible) whether the community-building should be stopped altogether. (You could also decide that it shouldn't be stopped, but should be seriously modified.) If you're actively involved in community-building, this might mean that you should stop what you're currently doing in order to investigate. Alternatively, you might conclude that you're very unlikely to make progress on this investigation (one test: have your beliefs on this front actually been changing since you started thinking about it?), in which case I think you should decide one way or another now and commit to that until you have new information. 
    • Note that it is right to slow down if you think that the speed is the thing causing problems — I discuss this a bit below.
  4. Cause prioritization. Someone has changed their mind about cause prioritization and is now quite unsure if they’re working on the right problem. Maybe they’ve decided that animal welfare is probably significantly more important than they used to think, and they’re unsure about whether they should switch into that cause area. But they’re already on an unrelated path and it seems hard to switch, so they make their current work slightly animal-welfare-related.
    • This is similar to the role-switching example above; I think they should usually make time to explore what problem(s) they think they should be working on (from both a cause-oriented and fit-oriented perspective). 
    • Related writing on multipliers and aptitudes.

Some notes / adding nuance

  1. When speed itself is part of the problem, going “half-speed” can be totally reasonable
    • If going fast means being more careless or if it has some other costs, then you should absolutely consider slowing down. This isn’t the same as responding to uncertainty by doing something like averaging your options. 
    • For instance, if you’re unsure about the value of growing your organization in part because growing it fast might make it harder to maintain good team culture (or is straining management capacity, etc.), slowing down could be totally reasonable. 
    • (This is a sub-case of (2) below.)
  2. There are many things that can seem like “half-speed” that I think don’t face the problems described here
  3. Going “half-speed” could be a low-cost way of getting some more capacity for figuring out what to do. (Related to (1) in this list.)
    • Maybe you’re unsure if you’re in the right role, but it would be really costly for you to drop all of your current responsibilities to try a different job, or to take a month to investigate what you should do. It might make sense for you to reduce hours at your current job or put things into maintenance mode and make space for side projects and thinking about your career.
    • In the driving-to-a-hotel example above where you’ve just realized that you might have missed your hotel, half-speed might be reasonable if: 
      • It’s taking mental energy to drive fast (e.g. if you’re worried that you’ll miss the hotel)
      • It would be pretty hard to pull over (e.g. you’re on a big highway)
      • And you can make progress on the problem of “what’s the best way to get to my hotel?” just by thinking or doing something that doesn’t involve driving past where your hotel is (whether that’s in front of or behind you) — like pulling out a GPS, just being more attentive to your surroundings, or giving your brain more bandwidth to figure out what your next steps should be. (If you can’t, I think you should probably just keep driving fast, commit to some distance that you’ll drive forward, then go back if there’s no hotel.)
Image created with Midjourney
  1. ^

    There are more examples in the original post.

Comments (9)



This post really resonates with me. Over winter 2021/22 I went on a retreat run by folks in the CFAR, MIRI, Lightcone cluster, and came away with some pretty crippling uncertainty about the sign of EA community building.[1] In retrospect, the appropriate response would have been one of the following:

  1. stop to investigate
  2. commit to community building (but maybe stop to investigate upon encountering new information or after some predetermined period of time)
  3. switch jobs

Instead, I continued on in my community building role, but with less energy, and with a constant cloud of uncertainty hanging over me. This was not good for my work outputs, my mental health, or my interactions and relationships with community building colleagues. Accordingly, “the correct response to uncertainty is *not* half-speed” is perhaps the number one piece of advice I’d give to my start-of-2022 self. I’m happy to see this advice so well elucidated here.

  1. ^

    To be clear, I don’t believe the retreat was “bad” in any particular way, or that it was designed to propagate any particular views regarding community building, and I have a lot of respect for these Rationalist folks.

If you can, could you elaborate more on what caused this uncertainty at/after the retreat?

Firstly, I'll say that I chose not to elaborate in my initial comment because Lizka’s post here is about what to do when faced with uncertainty in general, and I didn’t wish to turn the comments section into a rehash of the various arguments on whether community building in particular – either as a whole in its current state, or specific parts of it – is net positive or negative and to what degree. I’ve also personally moved on from my period of somewhat-debilitating uncertainty, and so I didn’t really want to be faced with replies and thus something of an obligation to re-engage with this debate. On top of this, experience has taught me to tread lightly, since the EA community is tight-knit and many people in this community have jobs in or adjacent to community building.

However, in addition to your reply, I’ve received two direct messages since posting my comment from community builders who sound like they’re in similar situations to the one I was in, so perhaps there is value in me elaborating. I’ll try to do that now.

(Note: I think this retreat catalyzed my processing of considerations and related uncertainties that I'd already been harboring. In other words, I don’t think I was hit with a bunch of completely new considerations about community building, or that I overly deferred. Note also: As I mention above, I’ve personally moved on from this topic with respect to my own career choices, and I’m glad to no longer have this weight of uncertainty on my shoulders. Therefore, I probably won’t engage further in the comment thread, if there are more replies. I realize this could be frustrating from an epistemics standpoint, and for that I apologize. Also, the points I list out below are just what I can think of off the top of my head right now, and so they should be viewed as an unpolished part of the picture rather than anything that’s close to complete or authoritative. Finally, I notice that what I write below is pretty critical stuff, and I very much hope that I don’t come across as disparaging of the great efforts being made by many community builders.)

Elaboration:

  1. In a way, community building seemed to me to resemble a Ponzi scheme. My anecdata suggested that EA fellowships/intro programmes tended to disproportionately engage people who enjoy talking about moral philosophy and EA ideas, and who enjoy getting more people to think about EA ideas. A simplistic model here is that EA fellowships result in more EA community builders, which result in more EA fellowships, which result in more EA community builders, and so on. Meanwhile, the actual problems, such as factory farming and x-risk, haven’t gone away. Me-at-the-time began to feel skeptical that the community building cycle I was witnessing was an efficient way to make progress on solving the actual problems.
  2. Community building is a blast. I think this makes it easy to motivated-reason one’s way into pursuing it. At the time, my alternative to community building was a research role. Research, however, is hard work. (For me, at least. There may well be better researchers out there who don’t relate.) On the other hand, doing community building meant connecting with lots of cool new people and having interesting conversations and going on fun retreats around the world with other community builders. I also met my last two romantic partners through community building.[1]
  3. To me, the direction of community building while I was involved felt like a fairly indiscriminate “more programs, more participants, more events, more hype”. (I’ll avoid giving concrete examples publicly, since that feels like straying into personal attack territory.) My sense was that there wasn’t enough application of cause prio or enough serious thought going into understanding the pipelines and talent bottlenecks within cause areas. I felt deeply uncertain about whether the proxy goals I was being encouraged to aim for – running a certain number of workshops, or attracting and retaining a certain number of participants – mapped all that well onto solving the actual problems.
    1. My attempts to raise this concern with other community builders, including those above me, were mostly dismissed. This worried me. It seemed like the community building machine was not open to the hypothesis that (some of) what it was doing might be ineffective, or, worse, net negative. (More on the latter below.) On top of this, there seemed to be a tricky second-order effect at play: evaporative cooling whereby the community builders who developed concerns like mine exited, only to be replaced by more bullish community builders. The result: a disproportionately bullish community building machine. And there didn’t appear to be any countermeasures in place. For example, there was plenty of funding available if one wanted a paid role doing community building. But, in addition to the social disincentive, there was no funding available for evaluating/critiquing the impact of community building – at least, not that I was aware of.
  4. I’d grown uncertain about whether the EA and AI safety communities had done more good than harm to date. Therefore, based on reference class forecasting, I’d grown uncertain as to whether the sign of future EA and AI safety activities would be net positive. I had significant (maybe ~33%) credence in these communities, if they continued to exist in roughly the same form, being negative for the world. ‘Shutting Down the Lightcone Offices’ expresses similar thoughts to those I was having.

 

Edit (Nov 8, 2024): I no longer fully endorse points 1 and 3. To a degree, I was arguing in those points against ‘principles-first’ community building, and for ‘cause-first’ community building. However, I’ve since been persuaded (by Zachary Robinson’s ‘CEA will continue to take a “principles-first” approach to EA’ and Peter Wildeford’s ‘EA is three radical ideas I want to protect’) that the principles-first approach deserves a place in our portfolio.[2] (The strongest argument, for me, is that the principles-first approach encourages people to think hard about big picture questions re. how to do the most good, and leads to identification of new causes—like ECL and ASI governance—that are potentially even more important than the current top-ranked causes.)

  1. ^

    For a related comment I wrote, see here.

  2. ^

    Edit (Nov 10, 2024): Interestingly (to me), I just read this post, and it sounds like KuhanJ has independently made the same update in thinking as I have:

    Kuhan: I think since writing the post that Gabe was referring to, I've probably moved more toward thinking that EA community building has real value and creates something special that you don't get from cause-specific movement building. Specifically, I've been surprised at how much the people who have gotten really involved in our AI safety group also got into EA afterwards, or were already EAs (and that's what led to them getting involved in AI safety). I think there's something about taking ideas seriously and actually taking action on your beliefs that the EA community cultivates in a way that is pretty rare.

    Also, as I mentioned earlier, as AI becomes more mainstream, it might be the case that it's much more impactful to work on issues that aren't clearly important to everyone in the world, in the way that existential risk is. I'm not sure how true that is. But it wouldn't surprise me if major world governments and others become pretty concerned about AI existential risk, but very few people care about morally relevant digital sentience, and what to do about that (along with all the other issues).

Lizka - very interesting points, and generally good advice.

This seems metaphorically related to the principle of 'force concentration' in warfare. Wherever one thinks the enemy is likely to be massing, it's better to keep all of one's forces together, and go to where one thinks they are most likely to be -- rather than splitting one's forces, which often makes the likelihood of defeat much higher, wherever they end up being. This is especially true given firearms that can engage from a distance, as described by Lanchester's N-square law.
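A minimal statement of the law being referenced, for readers who haven't seen it (my gloss, not part of the comment above): with force sizes $A(t)$ and $B(t)$ and per-unit firing effectiveness $\alpha$ and $\beta$, aimed fire at a distance gives

$$\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A,$$

which integrates to the invariant $\alpha\,(A_0^2 - A^2) = \beta\,(B_0^2 - B^2)$. Fighting strength thus scales with the square of force size: with $\alpha = \beta$, a force of $N$ units beats an equal force split into two sequential groups of $N/2$, surviving the first engagement with $\sqrt{N^2 - N^2/4} \approx 0.87N$ units and the second with $\approx 0.71N$.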

One caveat to add: naive readers interested in AI X-risk could misinterpret you as saying that it's better to go full steam ahead with AI, and then reverse at full speed if we decide that's the wrong way to proceed, rather than going more slowly & cautiously along the way. I know that's not what you intended, but it may be worth emphasizing, as you mentioned, that 'When speed itself is part of the problem, going “half-speed” can be totally reasonable'.

Thanks for pointing this out — I've made some edits in response (mostly to the very beginning of the post). 

There is a corporate motto: "10% of decisions need to be right. 90% of decisions just need to be taken!" which resonates perfectly with this post. 

To put this in an EA context - if you're unsure which of two initiatives to work on, that probably means that (to the best of your available knowledge) they are likely to have similar impacts. So, in the grand scheme of things, it probably doesn't matter which you choose. But the time you spend deciding is time that you are NOT dedicating to either of the initiatives. 

This is a good rule of thumb, but you need to be wary of exceptions. There are those 10% of cases where your decision matters a lot. In my case, as a chemical engineer, decisions about safety would typically be in that 10%. In an EA context, maybe decisions where you really are not sure whether a particular initiative might be doing more harm than good fall into this 10%. 

How do you decide whether you can already take a decision?

  1. Does any decision have potentially very bad consequences? Not just wasted time, but actual harm, or major investments wasted or whatever. 
  2. How much of a difference is there likely to be depending on which decision you take?
  3. What new information are you likely to get (and when) that could help you make a better decision? (One rough way to quantify this is sketched after this list.) 
  4. Put the pros and cons on a sheet of paper and discuss them with a friend or colleague. Often, this exercise alone, even before you discuss, will enable you to make a decision. 
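Here is one rough way to quantify question 3 above (a minimal sketch of a value-of-information comparison; the function name and the numbers are hypothetical, not from this comment): delay the decision only if what you expect to learn is worth more than what the delay costs.

```python
# Rough value-of-information check (illustrative; all names and numbers
# are hypothetical): wait for more information only if the expected gain
# from possibly changing your decision exceeds the cost of delaying it.

def should_wait(p_info_flips_choice: float,
                gain_if_flipped: float,
                cost_of_delay: float) -> bool:
    """Return True iff gathering more information beats deciding now."""
    expected_gain = p_info_flips_choice * gain_if_flipped
    return expected_gain > cost_of_delay

# A 10% chance that new info flips your choice, worth 50 (in whatever units
# you value) if it does, against a delay costing 2 of those units:
print(should_wait(0.10, 50, 2))  # 0.1 * 50 = 5 > 2, so True: wait
```

Either way, the output is a decision taken at full speed (gather information deliberately, or commit now) rather than a lingering half-decision.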

Nice post!

Where I’ve seen the “half-speed” phenomenon recently

I think donations are another common example. If the goal is maximising the impact of a donation, it makes sense for small donors to donate to whatever we think has the highest cost-effectiveness at the margin, rather than splitting between options.

That post is one of my all-time favourites from LW. I've struggled a lot with cause uncertainty, but sometimes the best way to resolve it is just to go full speed on one thing until you discover that it was not for you. The compromise is usually less effective than any of the options. :)

I like this point, and I think it explains some of the mistakes I see people make (and have made myself). One additional related link: deliberate once.
