This is a special post for quick takes by Joseph Lemien. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm currently reading a lot of content to prepare for HR certification exams (from HRCI and SHRM), and in a section about staffing I came across this:

some disadvantages are associated with relying solely on promotion from within to fill positions of increasing responsibility:
■ There is the danger that employees with little experience outside the organization will have a myopic view of the industry

Just the other day I had a conversation about the tendency of EA organizations to over-weight how "EA" a job candidate is,[1] so it particularly struck me to come across this today. We had joked about how a recent grad with no work experience would try figuring out how to do accounting from first principles (the unspoken alternative being to hire an accountant). So perhaps I would interpret the above quotation in the context of EA as "employees with little experience outside of EA are more likely to have a myopic view of the non-EA world." In a very simplistic sense, if we imagine EA as one large organization with many independent divisions/departments, a lot of the hiring (although certainly not all) is internal hiring.[2]

And I'm wondering how much expertise, skill, or experience is not utilized within EA as a result of favoring "internal" hires. I think that I have learned a lot about EA over the past three years or so, but I suspect that I would perform better in most EA jobs if I had instead spent 10% of that time learning about EA and 90% of it learning about [project management, accounting, bookkeeping, EEO laws and immigration law]. Nonetheless, I also suspect that if I had spent less time delving into EA, I would be a less appealing job candidate for EA orgs, who heavily weigh EA-relevant experience.[3]

It does seem almost comical how we (people involved in EA) try to invent many things for ourselves rather than simply using the practices and tools that exist. We don't need to constantly re-invent the wheel. It is easy to joke about hiring for a position that doesn't require someone to be highly EA, and then using "be very EA" as a selection criterion (which eliminates qualified candidates). I'll return to my mainstay: make sure the criteria you are using for selection are actually related to ability to perform the job. If you are hiring a head of communications to manage public relations for EA, then I think it makes sense that this role needs to understand a lot of EA. If you are hiring an office manager or a data analyst, I think that it makes less sense (although I can certainly imagine exceptions).

I'm imagining a 0-10 scale for "how EA someone is," and I think right now most roles require candidates to be a 7 or 8 or 9 on the scale. I think there are some roles where someone being a 3 or a 4 on the scale would be fine, and would actually allow a more competitive candidate pool to be considered. This is all quite fuzzy, and I think there is a decent chance that I could be wrong.[4]

  1. ^

    "How EA someone is" is a very sloppy term for a variety of interconnected things: mission-alignment, demonstrated interaction with the EA community, reads lots of EA content, ability to use frequently used terms like "counterfactual" and "marginal," up-to-date with trends and happenings within EA, social connections with EAs... 

  2. ^

    Actually, I wonder if there are stats on this. I would be curious to get some actual estimates of what percent of hires are people who are already within EA. There would certainly be some subjective judgement calls, but I would view being "within EA" as having worked/interned/volunteered for an EA org, or having run or been heavily involved in an EA club/group.

  3. ^

    I have a vague feeling that heavily weighing EA-relevant experience over non-EA experience is fairly common. I did have one person in an influential position at a central EA org mention that a candidate with a graduate degree (or maybe the words spoken were "MBA"? I don't recall exactly) gets a bit less consideration. I don't know how often this actually happens, but I hope not often.

  4. ^

    Especially since "how EA someone is" conflates belief in a mission, communication styles, working preferences, and several other things that are actually independent/distinct. People have told me that non-EAs have had trouble understanding the context of meetings and trouble communicating with team members. Could we take a generic project manager with 10 years of work experience, have them do two virtual programs, and then toss them into an EA org?

I think that the worries about hiring non-EAs are slightly more subtle than this.

Sure, they may be perfectly good at fulfilling the job description, but how does hiring someone with different values affect your organisational culture? It seems like in some cases it may be net beneficial to have someone around with a different perspective, but it can also have subtle costs in terms of weakening team spirit.

Then you get into the issue where, if there are some roles you are fine hiring non-EAs for and some you want filled by value-aligned people, you may have an employee who you would not want to receive certain promotions or be elevated into certain positions, which isn't the best position to be in.

Not to mention, often a lot of time ends up being invested in skilling up an employee and if they are value-aligned then you don't necessarily lose all of this value when they leave.

Chris, would you be willing to talk more about this issue? I'd love to hear about some of the specific situations you've encountered, as well as to explore broad themes or general trends. Would it be okay if I messaged you to arrange a time to talk?

Sorry, I'm pretty busy. But feel free to chat if we ever run into each other at an EA event or to book a 1-on-1 at an EA Global.

I wish that people wouldn't use "rat" as shorthand for "rationalist."

For people who aren't already aware of the lingo/jargon it makes things a bit harder to read and understand. Unlike terms like "moral patienthood" or "mesa-optimizers" or "expected value," a person can't just search Google to easily find out what is meant by a "rat org" or a "rat house."[1] This is a rough idea, but I'll put it out there: the minimum a community needs to do in order to be welcoming to newcomers is to allow newcomers to figure out what you are saying.

Of course, I don't expect that reality will change to meet my desires, and even writing my thoughts here makes me feel a little silly, like a linguistic prescriptivist telling people to avoid dangling participles.

  1. ^

    Try searching Google for "what is rat in effective altruism" and see how far down you have to go before you find something explaining that rat means rationalist. If you didn't know it already and a writer didn't make it clear from context that "rat" means "rationalist", it would be really hard to figure out what "rat" means.

For what it’s worth, gpt4 knows what rat means in this context: https://chat.openai.com/share/bc612fec-eeb8-455e-8893-aa91cc317f7d

(I'm writing with a joking, playful, tongue-in-cheek intention) If we are setting the bar at "to join our community you need to be at least as well read as GPT-4," then I think we are setting the bar too high.

More seriously: I agree that it isn't impossible for someone to figure out what it means, it is just a bit harder than I would like. Like when someone told me to do a "bow tech" and I had no idea what she was talking about, but it turns out she was just using a different name for a Fermi estimate (a BOTEC).

I agree that we should tolerate people who are less well read than GPT-4 :P

I have the opposite stance: it is a cool and cute shorthand, so I'd like for it to be the widely accepted meaning of rat.

I want to provide an alternative to Ben West's post about the benefits of being rejected. This isn't related to CEA's online team specifically, but is just my general thoughts from my own experience doing hiring over the years.

While I agree that "the people grading applications will probably not remember people whose applications they reject," two scenarios[1] come to mind for job applicants that I remember[2]:

  • The application is much worse than I expected. This would happen if somebody had a nice resume, a well-put together cover letter, and then showed up to an interview looking slovenly. Or if they said they were good at something, and then were unable to demonstrate it when prompted.[3]
  • Something about the application is noticeably abnormal (usually bad). This could be the MBA with 20 years of work experience who applied for an entry level part-time role in a different city & country than where he lived[4]. This could be the French guy I interviewed years ago who claimed to speak unaccented American English, but clearly didn't.[5] It could be the intern who came in for an interview and requested a daily stipend that was higher than the salary of anyone on my team. If you are rude, I'll probably remember it. I remember the cover letter that actually had the wrong company name at the top (I assume he had recently applied to that company and just attached the wrong file). I also remember the guy I almost hired who had started a bibimbap delivery service for students at his college, so impressive/good things can also get you remembered.

A big caveat here is that memories are fuzzy. If John Doe applies to a job and I reject him and three months later we meet somehow and he says "Hi, I'm John Doe" I probably wouldn't remember that John Doe applied, nor that I rejected him (unless his name was abnormally memorable, or there was something otherwise notable to spark my memory). But if he says "Hi, I'm John Doe. I do THING, and I used to ACCOMPLISHMENT," then maybe I'd remember looking at his resume or that he mentioned ACCOMPLISHMENT in a cover letter. But I would expect more than 90% of the applications I look at to fade completely from my mind within a few days.

I think that being remembered is rare. I have memories of fewer than a dozen specific applications out of the 1000s I've looked at over the years, and if you are self-aware enough to be reading this type of content then you probably won't have an application bad enough for me to remember.

The other thing I would gently disagree with Ben West on is about how getting rejected can be substantially positive.[6] My rough perspective (not based on data, just based on impressions) is that it is very rare that getting rejected from a job application is a good thing. I imagine that there are some scenarios in which a strong candidate doesn't get hired, and then the hiring manager refers the candidate to another position. That would be great, but I also think that it doesn't happen very often. I don't have data on "of candidates that reach the 3rd stage or further of a hiring process but are not hired, what percent have some specific positive result from the hiring process," but my guess is that it is a low percentage.

Nonetheless, my impression is that the hiring rounds Ben runs are better than most, and the fact that he is willing to give feedback or make referrals for some rejected candidates already puts his hiring rounds in the top quartile or decile by my judgement.

To the extent that the general claim is "if you think you are a reasonable candidate, please apply," I agree. You miss 100% of the shots you don't take. If you are nervous about applying to EA organizations because you think a rejection could damage your reputation at that and other organizations: as long as your application is better than the bottom 5-10%, you have nothing to worry about. Have a few different people check your resume to make sure you haven't got any low-hanging-fruit mistakes, and go for it.

  1. ^

    Actually, it is just two variations of a single "application is bad" scenario.

  2. ^

    I'm thinking about real applications I've seen for each of these things that I mention. But they are all several years old, from before I became aware of EA.

  3. ^

    I remember interviewing somebody in 2017 or so who was talking about his machine learning project, but when I poked and prodded it turned out he had just cobbled together templates from a tutorial. And I've seen the language version of this a few times, when a resume/cover letter claims a high level of competence in a language (bilingual, fluent, "practically native," or something similarly high), yet the person struggles to converse in that language.

  4. ^

    I'm 100% open to people taking part-time jobs if they want them, and I don't mind someone "overqualified" doing a job. But if the job is in-person and requires you to speak the local language, you'll have to at least convince me why you are a good fit.

  5. ^

    His English was very good, far better than my French, and I assume that he spent many hours practicing and studying. But it was noticeably not American English, and that particular job required incumbents to be native English speakers.

  6. ^

    There is the general idea that getting rejected from MEDIOCRE_COMPANY enabled you to apply and get hired at GREAT_COMPANY. But that seems bland/obvious enough that I'll set it aside.

Anyone can call themselves a part of the EA movement.

I sort of don't agree with this idea, and I'm trying to figure out why. It is so different from a formal membership (like being a part of a professional association like PMI), in which you have a list of members and maybe a card or payment.

Here is my current perspective, which I'm not sure that I fully endorse: on the 'ladder' of being an EA (or of any other informal identity) you don't have to be on the very top rung to be considered part of the group. You probably don't even have to be on the top handful of rungs. Is halfway up the ladder enough? I'm not sure. But I do think that you need to be higher than the bottom rung or two. You can't just read Doing Good Better and claim to be an EA without any additional action. Maybe you aren't able to change your career due to family and life circumstances. Maybe you don't earn very much money, and thus aren't donating. I think I could still consider you an EA if you read a lot of the content and are somehow engaged/active. But there has to be something. You can't just take one step up the ladder, then claim the identity and wander off.

My brain tends to jump to analogies, so I'll use these to try and illustrate my examples:

  • If I visit your city and watch your local sports team for an hour, and then never watch them play again, I can't really claim that I'm a fan of your team, can I? The fans are people who watch the matches regularly, who know something about the team, who really feel a sense of connection.
  • If I started lifting weights twice per week, and I started this week, is it too early for me to identify as a weight lifter? Nobody is going to police the use of the term "weight lifter," but it feels premature. I'd feel better waiting until I have a regular habit of this activity before I publicly claim the identity.
  • If I go to yoga classes, which sometimes involve meditation, and I don't do any other meditation outside of ~5 minutes every now and then, can I call myself a meditator? Meh... If a person never intentionally or actively does meditation, and they just happen to do it when it is part of a yoga class, I would lean toward "no."

To give more colour to this: during the hype of the FTX Future Fund, a lot of people called themselves EAs to try to show value alignment and get funding, and it was painfully awkward and obvious. I think the feeling you're naming is something like a fair-weather EA effect that dilutes trust within the community and the self-commitment of the label.

That is a good point, and I like the phrasing of fair-weather EA.

I interpreted it in a more literal way, like it's just true that anyone can literally call themselves part of EA. That doesn't mean other people consider it accurate.

I get the sentiment, but what's the alternative? 

I don't think you can define who gets to identify as something, whether that's gender or religion or group membership.

I'm a Christian and I think anyone should be able to call themselves a Christian, no issue with that at all no matter what they believe or whatever their level of commitment or how good or bad they are as a person. 

Any alternative means that someone else has to make a judgement call based on objective or subjective criteria, which I'm not comfortable with.

TBH I doubt people will be clamouring for the EA title for status or popularity haha.

Yeah, I think you are right in implying there aren't really any good alternatives. We could try having a formal list of members who all pay dues to a central organization, but (having put almost no thought into it) I assume that would come with its own set of problems. And I also feel some discomfort with the implication that we should have someone else making a judgment based on externally visible criteria. I probably wouldn't make the cut! (I hardly donate at all, and my career hasn't been particularly impactful either)

Your example of Christianity makes me think about EA being a somewhat "action-based identity." This is what I mean: I can verbally claim a particular identity (Christianity, or EA, or something else), and that matters to an extent. But what I do matters a lot also, especially if it is not congruent with the identity I claim. If I claim to be Christian but I fail to treat my fellow man with love and instead I am cruel, other people might (rightly) question how Christian I am. If I claim to be an EA but I behave in anti-EA ways (maybe I eat lots of meat, I fail to donate discretionary funds, I don't work toward reducing suffering, etc.) I won't have a lot of credibility as an EA.

I'm not sure how to parse the difference between a claimed identity and a demonstrated identity, but I'd guess that I could find some good thoughts about it if I were willing to spend several hours diving into some sociology literature about identity. I am curious about it, but I am 20-minutes curious, not 8-hours curious. Haha.

 

EDIT: after mulling over this for a few more minutes, I've made this VERY simplistic framework that roughly illustrates my current thinking. There is a lot of interpretation to be made regarding what behavior counts as in accordance with an EA identity or incongruent with an EA identity (eating meat? donating only 2%? not changing your career?). I'm not certain that I fully endorse this, but it gives me a starting point for thinking about it.

100% I really like this. You can claim any identity, but how much credibility you have with that identity depends on your "demonstrated identity". There is risk, though, to the movement with this kind of all-takers approach. Before I would have thought that the odd regular person behaving badly while claiming to be EA wasn't a big threat.

Then there was SBF and the sexual abuse scandals. These however were not so much an issue of fringe, non-committed people claiming to EA and tarnishing the movement, but mostly high profile central figures tarnishing the movement.

Reflecting on this, perhaps the actions of high-profile or "core" people matter more than those of people on the edge, who might claim to be EA without serious commitment.

I mean I think it'll come in waves. As I said in my comment below, when the FTX Future Fund was up and regrants abounded, I had many people around me fake the EA label, with hilarious epistemic tripwires. Then when FTX collapsed those people were quiet. I think as AI Safety gets more prominent this will happen again in waves. I know a few humanities people pivoting to talking about AI Safety and AI bias people thinking of how to get grant money. 

In a recent post on the EA forum (Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley), I couldn't help but notice that comments from famous and/or well-known people got lots more upvotes than comments by less well-known people, even though the content of the comments was largely similar.

I'm wondering to what extent this serves as one small data point in support of the "too much hero worship/celebrity idolization in EA" hypothesis, and (if so) to what extent we should do something about it. I feel kind of conflicted, because in a very real sense reputation can be a result of hard work over time,[1] and it seems unreasonable to say that people shouldn't benefit from that. But it also seems antithetical to the pursuit of truth, philosophy, and doing good to weigh the messenger so heavily over the message.

I'm mulling this over, but it is a complex and interconnected enough issue that I doubt I will create any novel ideas with some casual thought.

Perhaps just changing the upvote buttons to something more like this would nurture a discussion space that lines up with the principles of EA? I'm not confident that would change much.

 

 

  1. ^

    Although not always. Sometimes a person is just in the right place at the right time. Big issues of genetic lottery and class matter. But in a very simplistic example, my highest-ranking post on the EA forum is not one of the posts that I spent hours and hours thinking about and writing, but instead is one where I simply linked to an article about EA in the popular press and basically said "hey guys, look how cool this is!"

I'm not convinced by this example; in addition to expressing the view, Toby's message is a speech act that serves to ostracize behaviour in a way that messages from random people do not. Since his comment achieves something the others do not it makes sense for people to treat it differently. This is similar to the way people get more excited when a judge agrees with them that they were wronged than when a random person does; it is not just because of the prestige of the judge, but because of the consequences of that agreement.

I'm glad that you mentioned this. This makes sense to me, and I think it weakens the idea of this particular circumstance as an example of "celebrity idolization."

If the EA forum had little emoji reactions for this made me change my mind or this made me update a bit, I would use them here. 😁

I agree as to the upvotes but don't find the explanation as convincing on the agreevotes. Maybe many people's internal business process is to only consider whether to agreevote after having decided to upvote?

Yeah, and in general there's an extremely high correlation between upvotes and agreevotes, perhaps higher than there should be. It's also possible that some people don't scroll to the bottom and read all the comments.

I definitely think you should expect a strong correlation between "number of agree-votes" and "number of approval-votes", since those are both dependent on someone choosing to engage with a comment in the first place; my guess is this explains most of the correlation.

And then yeah, I still expect a pretty substantial remaining correlation. 

I wish that it was possible for agree votes to be disabled on comments that aren't making any claim or proposal. When I write a comment saying "thank you" or "this has given me a lot to think about" and people agree-vote (or disagree-vote!), it feels a bit odd: there isn't even anything to agree or disagree with there!

In those cases I would interpret agree votes as "I'm also thankful" or "this has also given me a lot to think about"

If we interpret an up-vote as "I want to see more of this kind of thing", is it so surprising that people want to see more such supportive statements from high-status people?

I would feel more worried if we had examples of e.g. the same argument being made by different people and the higher-status person getting rewarded more. Even then - perhaps we do really want to see more of high-status people reasoning well in public.

Generally, insofar as karma is a lever for rewarding behaviour, we probably care more about the behaviour of high-status people and so we should expect to see them getting more karma when they behave well, and also losing more when they behave badly (which I think we do!). Of course, if we want karma to be something other than an expression of what people want to see more of then it's more problematic.

Toby's average karma-per-comment definitely seems higher than average, but it isn't so much higher than that of other (non-famous) quality posters I spot-checked as to suggest that there are a lot of people regularly upvoting his comments due to hero worship/celebrity idolization. I can't get the usual karma leaderboard to load to more easily point to actual numbers as opposed to impressionistic ones.

I have this concept I've been calling "kayfabe inversion", where an attempt to create a social reality that $P$ accidentally enforces $\not P$. The EA vibe of "minimize deference, always criticize your leaders" may just be, by inscrutable social pressures, increasing deference and hero worship and so on. This was spurred by my housemate's view of the DoD and its ecosystem of contractors (their dad has a long career in it): perhaps the military's explicit deference and hierarchies actually make it easier to do meaningful criticism of or disagreement with leaders, compared to the implicit hierarchies that emerge when you say that you want to minimize deference. 

Something along these lines.

Perhaps this hypothesis is made clear by a close reading of The Tyranny of Structurelessness, idk. 

Could I bother you to rephrase "$P$ accidentally enforces $\not P$"? I don't know what you mean by using these symbols.

Oh sorry I just meant a general form for "any arbitrary quality a community may wish to cultivate" 

This is in relation to the Keep EA high-trust idea, but it seemed tangential enough and butterfly idea-ish that it didn't make sense to share this as a comment on that post.

Rough thoughts: focus a bit less on people and a bit more on systems. Some failures are 'bad actors,' but my rough impression is that far more often bad things happen because either:

  • the system/structures/incentives nudge people toward bad behavior, or
  • the system/structures/incentives allow bad behavior

It very much reminds me of "Good engineering eliminates users being able to do the wrong thing as much as possible. . . . You don't design a feature that invites misuse and then use instructions to try to prevent that misuse." I've also just learned about the hierarchy of hazard controls, which seems like a nice framework for thinking about 'bad things.'

I think it is great to be able to trust people, but I also want institutions designed in such a way that it is okay if someone is in the 70th percentile of trustworthiness rather than the 95th percentile of trustworthiness.

Low confidence guess: small failures often occur not because people are malicious or selfish, but because they aren't aware of better ways to do things. An employee that isn't aware of EEO in the United States is more likely to make costly mistakes. A manager who has not received good training on how to be a manager is going to fumble more often.

I don't want to imply that designing systems well is easy, nor that I am somehow an expert in it. But my (very) rough impression is that in EA we trust individuals a lot, and we don't spend as much time thinking about organizational design.

Decoding the Gurus is a podcast in which an anthropologist and a psychologist critique popular guru-like figures (Jordan Peterson, Nassim N. Taleb, Brené Brown, Ibram X. Kendi, Sam Harris, etc.). I've listened to two or three previous episodes, and my general impression is that the hosts are too rambly/joking/jovial, and that their interpretations are harsh but fair. I find the description of their episode on Nassim N. Taleb to be fairly representative:

Taleb is a smart guy and quite fun to read and listen to. But he's also an infinite singularity of arrogance and hyperbole. Matt and Chris can't help but notice how convenient this pose is, when confronted with difficult-to-handle rebuttals.

Taleb is a fun mixed bag of solid and dubious claims. But it's worth thinking about the degree to which those solid ideas were already well... solid. Many seem to have been known for decades even by all the 'morons, frauds and assholes' that Taleb hates.

To what degree does Taleb's reputation rest on hyperbole and intuitive-sounding hot-takes?

A few weeks ago they released an episode about Eliezer Yudkowsky titled Eliezer Yudkowsky: AI is going to kill us all. I'm only partway through listening to it, but so far they have reasonable but not rock-solid critiques (such as noting how it is a red flag for someone to list off a variety of fields that they claim expertise in, or highlighting behavior that lines up with a Cassandra complex).

The difficulty I have in issues like this parallels the difficulty I perceive in evaluating any other "end of the world" claim: the fact that many other individuals have been wrong about each of their own "end of the world" claims doesn't really demonstrate that this one is wrong. It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn't prove falsehood.

You're right, but it does feel like some pretty strong induction, though not just toward not accepting the claim at face value, but toward demanding some extraordinary evidence. I'm speaking from the p.o.v. of a person ignorant of the topic, and just making the inference from the perennially recurring apocalyptic discourses.

It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn't prove falsehood.

True, but you only have a finite amount of time to spend investigating claims of apocalypses. If you do a deep dive into the arguments of one of the main proponents of a theory, and find that it relies on dubious reasoning and poor science (like the "mix proteins to make diamondoid bacteria" scenario), then dismissal is a fairly understandable response. 

If AI safety advocates want to prevent this sort of thing from happening, they should pick better arguments and better spokespeople, and be more willing to call out bad reasoning when it happens. 

I didn't learn about Stanislav Petrov until I saw announcements about Petrov Day a few years ago on the EA Forum. My initial thought was "what is so special about Stanislav Petrov? Why not celebrate Vasily Arkhipov?"

I had known about Vasily Arkhipov for years, but the reality is that I don't think one of them is more worthy of respect or idolization than the other. My point here is more about something like founder effects, path dependency, and cultural norms. You see, at some point someone in EA (I'm guessing) arbitrarily decided that Stanislav Petrov was more worth knowing and celebrating than Vasily Arkhipov, and now knowledge of Stanislav Petrov is widespread (within this very narrow community). But that seems pretty arbitrary. There are other things like this, right? Things that people hold dear or believe that are little more than cultural norms, passed on because "that is the way we do things here."

I think a lot about culture and norms, probably as a result of studying other cultures and then living in other countries (non-anglophone countries) for most of my adult life. I'm wondering what other things exist in EA that are like Stanislav Petrov: things that we do for no good reason other than that other people do them.

The origin of Petrov Day, as an idea for an actual holiday, is this post by Eliezer Yudkowsky. Arkhipov got a shout-out in the comments almost immediately, but "Petrov Day" was the post title, and it's one syllable shorter.

There are many other things like Petrov Day, in this and every culture — arbitrary decisions that became tradition. 

But of course, "started for no good reason" doesn't have to mean "continued for no good reason". Norms that survive tend to survive because people find them valuable. And there are plenty of things that used to be EA/rationalist norms that are now much less influential than they were, or even mostly forgotten. The first examples that come to mind for me:

  • Early EA groups sometimes did "live below the line" events where participants would try to live on a dollar a day (or some other small amount) for a time. This didn't last long, because there were a bunch of problems with the idea and its implementation, and the whole thing faded out of EA pretty quickly (though it still exists elsewhere).
  • The Giving What We Can pledge used to be a central focus of student EA groups; it was thought to be really important and valuable to get your members to sign up. Over time, people realized this led students to feel pressure to make a lifelong decision too early on, some of whom regretted the decision later. The pledge gradually attained an (IMO) healthier status — a cool part of EA that lots of people are happy to take part in, but not an "EA default" that people implicitly expect you to do.

I would be happy to celebrate an Arkhipov Day. Is there anything that could distinguish the rituals and themes of the day? Arkhipov was in a submarine and had to disagree with two other officers IIRC? (Also when is it?)

Haha, I don't think we need another holiday for Soviet military men who prevented what could have been WWIII. More so, I think we should ask ourselves (often) "Why do we do things the way we do, and should we do things that way?"

As Aaron notes, the "Petrov Day" tradition started with a post by Yudkowsky. It is indeed somewhat strange that Petrov was singled out like this, but I guess the thought was that we want to designate one day of the year as the "do not destroy the world day", and "Petrov Day" was as good a name for it as any.

Note that this doesn't seem representative of the degree of appreciation for Petrov vs. Arkhipov within the EA community. For example, the Future of Humanity Institute has both a Petrov Room and an Arkhipov Room (a fact that causes many people to mix them up), and the Future of Life Award was given both to Arkhipov (in 2017) and to Petrov (in 2018).

I think Arkhipov's actions are in a sense perhaps even more consequential than Petrov's, because it was truly by chance that he was present in that particular nuclear submarine, rather than in any of the other subs from the flotilla. This fact justifies the statement that, if history had repeated itself, the decision to launch a nuclear torpedo would likely not have been vetoed. The counterfactual for Petrov is not so clear.

I'm reading Brotopia: Breaking Up the Boys' Club of Silicon Valley, and this paragraph stuck in my head. I'm wondering about EA and "mission alignment" and similar things.

Which brings me to a point the PayPal Mafia member Keith Rabois raised early in this book: he told me that it’s important to hire people who agree with your “first principles”—for example, whether to focus on growth or profitability and, more broadly, the company’s mission and how to pursue it. I’d agree. If your mission is to encourage people to share more online, you shouldn’t hire someone who believes people don’t really want to make their private lives public, or you’ll spend a lot of time arguing, time you don’t have to waste when you’re trying to build a company. But those who believe in your mission and how to execute it aren’t limited to people who look and act like you. To combat this tendency, you must first be explicit about what your first principles are. And then, for all of the reasons we discussed, go out of your way to find people who agree with your first principles and who don’t look like you. Because if you don’t build a diverse team when you start, as you scale, it will be incomparably harder to do so.

The parallels seem pretty obvious to me, and here is my altered version:

If your mission is to improve the long-term future, you shouldn’t hire someone who believes that most of the value is in the next 0 to 50 years. If your mission is to reduce animal suffering, you shouldn’t hire someone who hates animals. But those who believe in your mission and how to execute it aren’t limited to people who look and act like you.

I think this leads me back to two ideas that I've been bouncing around. First, be clear about whether a particular role needs to be mission-aligned at all. Second, be clear about to what level/extent a particular role needs to be mission-aligned (3 out of 10? 8 out of 10?). Does the person you hire to handle physical security need to care about AI safety risk scenarios?

If your mission is to reduce animal suffering, should you hire someone that wants to do that but is simply less intense about it? A person who spends 5% of their free time thinking about this when you spend 60% of your free time thinking about this? I do think that mission alignment is important for some roles, but it is hard to specify without really understanding the work.[1] 

  1. ^

    As an example of "understanding the work," my superficial guess is that someone planning an EAG event probably doesn't need to know all about EA in order to book conference rooms, arrange catering, set up sound & lighting, etc. But I don't know, because I haven't done that job or managed that job or closely observed that job. Maybe a lot of EA context really is necessary in order to make lots of little decisions which would otherwise make the event a noticeably worse experience for the attendees. Indeed, pretty much the only thing that I am confident in, in relation to this, is that we can't make strong claims about a role unless we really understand the work.

Random musing from reading a reddit comment:

Some jobs are proactive: you have to be the one doing the calls and you have to make the work yourself and no matter how much you do you're always expected to carry on making more, you're never finished. Some jobs are reactive: The work comes in, you do it, then you wait for more work and repeat.

Proactive roles are things like business development, writing, marketing, research, many types of sales. You can always do more, and there isn't really an end point unless you want to impose an arbitrary end point: I'll stop when I finish this book, or I'll take a break after this research paper. I imagine[1] that a type of stress present in sales and business development is that you are always pushing for more, like the difference between someone who wants to accumulate X dollars for retirement as opposed to someone who simply wants lots of dollars for retirement.

Reactive roles are things like payroll, being the cook in a restaurant (or being the waiter in a restaurant), legal counsel, office manager, teacher. There is an 'inflow' of tasks or work or customers, and you respond to that inflow. But if there are times when there isn't any inflow, then you just wait for work to arrive[2]. Imagine being the cook in a restaurant, and there is a 30-minute period when no new orders are placed. Once everything is clean and you are ready for orders to come in, what can you do?

  1. ^

    I've never worked in sales and I don't think I've ever even had conversations about it, so I am really just guessing here.

  2. ^

    It isn't always so simplistic of course. Maybe the waiter has some other tasks on 'standby' for when there are no customers coming in. Maybe the payroll person has some lower-priority tasks that are now the highest-priority available tasks (back-burner tasks) to do when there isn't any payroll work to do. Often there are ways to do something other than sit around and twiddle your thumbs, and this is also a great way to get noticed and get attention from managers. But it seems to be a very slippery slope into busy work with a lot of low-prestige jobs: how often does that supply closet really need to be reorganized, and how often does this glass door need to be cleaned?

I've been reading about performance management, and a section of the textbook I'm reading focuses on The Nature of the Performance Distribution. It reminded me a little of Max Daniel's and Ben Todd's How much does performance differ between people?, so I thought I'd share it here for anyone who is interested.

The focus is less on true outputs and more on evaluated performance within an organization. It is a fairly short and light introduction, but I've put the content here if you are interested.

A theme that jumps out at me is situational specificity, as it seems some scenarios follow a normal distribution, some scenarios are heavy-tailed, and some probably have a strict upper limit. This echoes the emphasis that an anonymous commenter shared on Max's and Ben's post:

My point is more "context matters," even if you're talking about a specific skill like programming, and that the contexts that generated the examples in this post may be meaningfully different from the contexts that EA organizations are working in.

I'm roughly imagining an organization in which there is a floor to performance (maybe people beneath a certain performance level aren't hired), and there is some type of barrier that creates a ceiling to performance (maybe people who perform beyond a certain level would rather go start their own consultancy than work for this organization, or they get promoted to a different department/team). But the floor or the ceiling could be more naturally related to the nature of the work as well, as in the scenario of an assembly worker who can't go faster than the speed of the assembly line.
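As a rough illustration of that floor-and-ceiling idea, here is a small sketch (my own toy example, not from the textbook) of how truncating a heavy-tailed "true output" distribution at a hiring floor and a promotion/attrition ceiling can make the performance observed inside one organization look much tamer. All of the numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
latent = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)  # heavy-tailed "true" output

floor = np.quantile(latent, 0.30)    # the weakest candidates never get hired
ceiling = np.quantile(latent, 0.95)  # the strongest performers leave or get promoted out
observed = latent[(latent >= floor) & (latent <= ceiling)]

for name, x in [("latent population", latent), ("observed inside the org", observed)]:
    ratio = np.quantile(x, 0.99) / np.median(x)
    print(f"{name:>25}: mean={x.mean():.2f}, 99th percentile / median = {ratio:.1f}")
```

The same underlying distribution can look heavy-tailed or fairly tame depending on which slice of it a given organization ever gets to observe, which is one way of restating the situational-specificity point.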

This idea of situational specificity is paralleled in hiring/personnel selection, in which a particular assessment might be highly predictive of performance in one context, and much less so in a different context. This is the reason why we shouldn't simply use GMA and conscientiousness to evaluate every single employee at every single organization.

Very interesting. Another discussion of the performance distribution here

Thanks for sharing this. I found this to be quite interesting.

I remember being very confused by the idea of an unconference. I didn't understand what it was and why it had a special name distinct from a conference. Once I learned that it was a conference in which the talks/discussions were planned by participants, I was a little bit less confused, but I still didn't understand why it had a special name. To me, that was simply a conference. The conferences and conventions I had been to had involved participants putting on workshops. It was only when I realized that many conferences lack participative elements that I realized my primary experience of conferences was non-representative of conferences in this particular way.

I had a similar struggle understanding the idea of Software as a Service (SaaS). I had never had any interactions with old corporate software that required people to come and install it on your servers. The first time I heard the term SaaS as someone explained to me what it meant, I was puzzled. "Isn't that all software?" I thought. "Why call it SaaS instead of simply calling it software?" All of the software I had experienced and was aware of was in the category of SaaS.

I'm writing this mainly just to put my own thoughts down somewhere, but if anyone is reading this I'll try to put a "what you can take from this" spin on it:

  1. If your entire experience of X falls within X_type1, and you are barely even aware of the existence of X_type2, then you will simply think of X_type1 as X, and you will be perplexed when people call it X_type1.
  2. If you are speaking to someone who is confused by X_type1, don't automatically assume they don't know what X_type1 is. It might be that they simply don't know why you are using such an odd name for (what they view as X).

Silly example: Imagine growing up in the USA, never travelling outside of the USA, and telling people that you speak "American English." Most people in the USA don't think of their language as American English; they just think of it as English. (Side note: over the years I have had many people tell me that they don't have an accent)

I just finished reading Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. I think the book is worth reading for anyone interested in truth and figuring out what is real, but I especially liked the aspirational Mertonian norms, a concept I had never encountered before, which served as a theme throughout the book.

I'll quote directly from the book to explain, but I'll alter the formatting a bit to make it easier to read:

In 1942, Merton set out four scientific values, now known as the ‘Mertonian Norms’. None of them have snappy names, but all of them are good aspirations for scientists.

  1. First, universalism: scientific knowledge is scientific knowledge, no matter who comes up with it – so long as their methods for finding that knowledge are sound. The race, sex, age, gender, sexuality, income, social background, nationality, popularity, or any other status of a scientist should have no bearing on how their factual claims are assessed. You also can’t judge someone’s research based on what a pleasant or unpleasant person they are – which should come as a relief for some of my more disagreeable colleagues. 
  2. Second, and relatedly, disinterestedness: scientists aren’t in it for the money, for political or ideological reasons, or to enhance their own ego or reputation (or the reputation of their university, country, or anything else). They’re in it to advance our understanding of the universe by discovering things and making things – full stop. As Charles Darwin once wrote, a scientist ‘ought to have no wishes, no affections, – a mere heart of stone.’ The next two norms remind us of the social nature of science.
  3. The third is communality: scientists should share knowledge with each other. This principle underlies the whole idea of publishing your results in a journal for others to see – we’re all in this together; we have to know the details of other scientists’ work so that we can assess and build on it.
  4. Lastly, there’s organised scepticism: nothing is sacred, and a scientific claim should never be accepted at face value. We should suspend judgement on any given finding until we’ve properly checked all the data and methodology. The most obvious embodiment of the norm of organised scepticism is peer review itself.

Although there are lots of differences between the goals of EA and the goals of science, in the areas of similarity I think there might be benefit in more awareness of these norms and more establishment of these as standards. Much of it seems to line up with broad ideas of scout mindset and epistemic rationality.

My vague impressions are that the EA community generally  holds up fairly well when measured against these norms. I suspect there is some struggle with organized skepticism (ideas from high-status people often get accepted at face value) and there are a lot of difficulties with disinterestedness (people need resources to survive and to pursue their goals, and most of us have a desire for social desirability), but overall I think we are doing decently well.

I was recently reminded about BookMooch, and read a short interview with the creator, John Buckman.

I think that the interface looks a bit dated, but it works well: you send people books you have that you don't want, and other people send you books that you want but you don't have. I used to use BookMooch a lot from around 2006 to 2010, but when I moved outside of the USA in 2010 I stopped using it. One thing I like is that it feels very organic and non-corporate: it doesn't cost a monthly membership, there are no fees for sending and receiving books,[1] and it isn't full of superfluous functions. There is a pretty simple mechanism to prevent people from abusing the system, which is basically just transparency and having a "give:mooch ratio" visible. Although it is registered as a for-profit corporation, John Buckman runs it without trying to maximize profits. BookMooch earns a bit of money through Amazon affiliate fees if people want to buy a book immediately rather than mooch the book, but the site doesn't have advertisements or any other revenue.[2]

I love this, and it makes me think about creating value in the world. In my mind, this is kind of the ideal of a startup: you have an idea and you implement it, literally making value out of nothing. There really was an unrealized "market" for second-hand books, but there was no way to "liberate" it. And I also love that this is simply providing a service to the world. I wonder what similar yet-to-be-realized ventures there are that would create more impact than merely the joy of getting a book you want.

Now that I am in the USA again I think I'll start using BookMooch again. I probably won't use it as much as I used to, with how I've become more adapted to reading PDFs and EPUBs and listening to audiobooks, but I'll use it some for books that I haven't been able to get digital copies of.

  1. ^

    You need to pay the post office to send the book, but what I mean is that BookMooch doesn't charge any fees.

  2. ^

    I had an anarchist streak when I was younger, and the fact that this corporation lacks so many of the trappings of standard extractive capitalism is emotionally quite appealing. If a bunch of hippies had created Silicon Valley instead of venture capitalists, maybe big tech firms would look more like this.

I guess shortform is now quick takes. I feel a small amount of negative reaction, but my best guess is that this reaction is nothing more than a general human "change is bad" feeling.

Is quick takes a better name for this function than shortform? I'm not sure. I'm leaning toward yes.

I wonder if this will have an effect to nudge people to not write longer posts using the quick takes function.

This is just for my own purposes. I want to save this info somewhere so I don't lose it. This has practically nothing to do with effective altruism, and should be viewed as my own personal blog post/ramblings.

I read the blog post What Trait Affects Income the Most?, written by Blair Fix, a few years ago, and I really enjoyed seeing some data on it. At some point later I wanted to find it and couldn't, and today I stumbled upon it again. The very short and simplistic summary is that hierarchy (a fuzzy concept that I understand to be roughly "class," including how wealthy your parents were, where you were born, and other factors) is the biggest influence on lifetime earnings[1]. This isn't a huge surprise, but it is nice to see some references to research comparing class, education, occupation, race, and other factors.

 Opportunity, equity, justice/fairness... these are topics that I probably think about too much for my own good.[2]

  1. ^

    Of course, like most research, this isn't rock solid, and lacking the breadth of knowledge I'm not able to make a sound critique of the research. I also want to be wary of confirmation bias, since this is basically a blog post telling me that what I want to be true is true, so there is another grain of salt I should keep in mind.

  2. ^

    I would probably think about them less if I had been born into an upper-middle-class family, or if I suddenly inherited $500,000. Just like a well-fed person doesn't think about food, or a person with career stability isn't anxious about their job. However, I think that if I write about or talk about what leads to success in life then I will be perceived as angry/bitter/envious (especially since I don't have any solutions or actions, other than a vague "fortunate people should be more humble"), and that isn't how I want people to perceive me. Thus, I generally try to avoid bringing up these topics.

I vaguely remember reading something about buying property with a longtermism perspective, but I can't remember the justification against doing it. This is basically using people's inclination to choose immediate rewards over rewards that come later in the future. The scenario was (very roughly) something like this:

You want to buy a house, and I offer to help you buy it. I will pay for 75% of the house, you will pay for 25% of the house. You get to own/use the house for 50 years, and starting in year 51 ownership transfers to me. You get a huge discount to own the house for 50 years, and I get a big discount to own the house forever (starting in year 51).

This feels like a very naïve question, but if I had enough money to support myself and I also had excess funds outside of that, why not do something like this as a step toward building an enormous pool of resources for the future? Could anyone link me to the original post?

That's like what is known as a "life estate" except for a fixed term of years. It has similarities to offering a long-term lease for an upfront payment . . . and many of the same problems. The temporary possessor doesn't care about the value of the property in year 51, so has every incentive to defer maintenance and otherwise maximize their cost/benefit ratio. Just ask anyone in an old condo association about the tendency to defer major costs until someone else owns their unit . . .

If you handle the maintenance, then this isn't much different than a lease . . . better to get a bank loan and be an ordinary lessor, because the 50-year term and upfront cash requirement are going to depress how much you make. If you plan on enforcing maintenance requirements for the other person, that will be a headache and could be costly.

Would anyone find it interesting/useful for me to share a forum post about hiring, recruiting, and general personnel selection? I have some experience running hiring for small companies, and I have recently been reading a lot of academic papers from the Journal of Personnel Psychology regarding research on the most effective hiring practices. I'm thinking of creating a sequence about hiring, or maybe about HR and managing people more broadly.

Please do! I'd absolutely love to read that :)


I'm grappling with an idea of how to schedule tasks/projects, how to prioritize, and how to set deadlines. I'm looking for advice, recommending readings, thoughts, etc.

The core question here is "how should we schedule and prioritize tasks whose result becomes gradually less valuable over time?" The rest of this post is just exploring that idea, explaining context, and sharing examples.


Here is a simple model of the world: many tasks that we do at work (or maybe also in other parts of life?) fall into either sharp decrease to zero or sharp reduction in value.

  • The sharp decrease to zero category. These have a particular deadline beyond which they offer no value, so you should really do the task before that point.
    • If you want to put me in touch with a great landlord to rent from, you need to do that before I sign a 12-month lease for a different apartment; at that point the value of the connection is zero.
    • If you want to book a hotel room prior to a convention, you need to do it before the hotel is fully booked; if you wait until the hotel is fully booked, calling to make that reservation is useless.
    • If you want to share the meeting agenda to allow attendees to prepare for a meeting, you have to share it prior to the meeting starting.
  • The sharp reduction in value category. You should do these tasks before the sharp reduction in value. Thus, the deadline is when value is about to sharply decrease.
    • Giving me food falls into the sharp reduction category, because if you wait until I'm already satiated by eating a full meal, the additional food that you give me has far less value than if you had given it to me before my meal.

Setting deadlines for these kinds of tasks is, in a certain sense, simple: do it at some point before the decrease in value. But what about tasks that decrease gradually in value over time?

  • We can label these as the gradual reduction category.
    • Examples include an advertisement for a product that launched today and will be sold for the next 100 days. If I do this task today I will get 100% of its value, if I do it tomorrow I will get 99% of its value, and so on, all the way to the last day that will add any value.
    • I could start funding my retirement savings today or tomorrow, and the difference is negligible. In fact, the difference between any two days is tiny. But if I delay for years, then the difference will be massive. This is kind of a "drops of water in a bucket" issue: a single drop doesn't matter, but all together they add up to a lot.
    • Should you start exercising today or tomorrow? Doesn't really matter. Or start next week? No problem. Start 15 years from now? That is probably a lot worse.
    • If you want to stop smoking, what difference does a day make?

Which sort of leads us back to the core question. If the value decreases gradually rather than decreasing sharply, then when do you do the task?

I suppose one answer is to do the task immediately, before it has any reduction in value. But that also seems like it isn't what we actually do. In terms of prioritizing, instead of doing everything immediately, people seem to push tasks back to the point just before they would cause problems. If I am prioritizing, I will probably try hard to do the sharp reduction in value task (orange in the below graph) before it has the reduction in value, and then I'll prioritize the sharp decrease to zero task (blue in the graph), finally starting on my lowest-priority task once the other two are finished. But that doesn't seem optimal, right?
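To make the question a bit more concrete, here is a minimal sketch (my own toy model, not from any post I'm referencing) of the three decay shapes described above, assuming one task gets done per day. The task names, deadlines, and decay curves are all hypothetical.

```python
from itertools import permutations

def sharp_zero(day, deadline):
    """Full value before the deadline, zero after (e.g. booking the hotel room)."""
    return 1.0 if day <= deadline else 0.0

def sharp_drop(day, drop_day, late_value=0.2):
    """Full value before drop_day, much less afterward (e.g. giving me food)."""
    return 1.0 if day <= drop_day else late_value

def gradual(day, horizon):
    """Value declines a little each day until the horizon (e.g. the 100-day ad)."""
    return max(0.0, 1.0 - day / horizon)

# Hypothetical tasks: (name, value as a function of the day it gets done).
tasks = [
    ("book hotel", lambda d: sharp_zero(d, deadline=5)),
    ("give food",  lambda d: sharp_drop(d, drop_day=2)),
    ("run the ad", lambda d: gradual(d, horizon=100)),
]

def total_value(order):
    """Total value if the tasks are done one per day, in the given order."""
    return sum(value_fn(day) for day, (_, value_fn) in enumerate(order))

# With only a handful of tasks we can simply check every ordering.
best = max(permutations(tasks), key=total_value)
print([name for name, _ in best], round(total_value(best), 2))
```

The only point of the sketch is that "do everything immediately" and "do everything just before its drop" are both special cases; once you write down the decay curves and how many work slots you have, the ordering question becomes a small optimization problem rather than a matter of intuition.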

Evidence-Based Management

What? Isn't it all evidence-based? Who would take actions without evidence? Well, often people make decisions based on an idea they got from a pop-business book (I am guilty of this), off of gut feelings (I am guilty of this), or off of what worked in a different context (I am definitely guilty of this).

Rank-and-yank (I've also heard it called forced distribution and forced ranking, and Wikipedia describes it as vitality curve) is an easy example to pick on, but we could easily look at some other management practice in hiring, marketing, people management, etc. 

I like one-on-ones. I think that one-on-ones are a great way to build a relationship with the people on your team, and they also provide a venue for people to bring you issues. But where is the evidence? I've never seen any research or data to suggest that one-on-ones lead to particular outcomes. I've heard other people describe how they are good, and I've read blog posts about why they are a best practice, but I've never seen anything stronger than anecdote and people recommending them from their own experience.

I recently read an HBR article from 2006 (which I found as a result of a paper titled Evidence-Based I–O Psychology: Not There Yet), and it got me thinking about this more. I'm considering reading into the area more and writing a more in-depth post about it. It lines up nicely with two different areas of interest of mine: how we often make poor decisions even when we have plenty of opportunities to make better decisions, and learning how to run organizations well.

I'm curious if you have evidence-based answers to Ben West's question here.

I haven't read any research or evidence demonstrating one leadership style is better than another. My intuitions and other people's anecdotes that I've heard tell me that certain behaviors are more likely or less likely to lead to success, but I haven't got anything more solid to go on than that at the moment.

Similarly, I haven't read any research showing (in a fairly statistically rigorous way) that lean, or agile, or the Toyota Production System, or other similar concepts are effective. Anecdote tells me that they are, and the reasoning for why they work makes sense to me, but I haven't seen anything more rigorous.

Nicholas Bloom's research is great, and I am glad to see his study of consulting in India referenced on the EA forum. I would love to see more research measuring impacts of particular management practices, and if I was filthy rich that is probably one of the things that I would fund.

I'm assuming that there are studies about smaller-level actions/behaviors, but it is a lot easier to A-B test what color a button on a homepage should be than to A-B test having a cooperative work culture or a competitive work culture.

I think one of the tricky things is how much context matters. Just because practice A is more effective than practice B in a particular culture/industry/function doesn't mean it will apply to all situations. As a very simplistic example, rapid iteration is great for a website's design, but imagine how horrible it would be for payroll policy.

I've been reading a few academic papers on my "to-read" list, and The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else? has a section that made me think about epistemics, knowledge, and how we try to make the world a better place. I'll include the exact quote below, but my rough summary of it would be that multiple studies found no relationship between the presence or absence of highway shoulders and accidents/deaths, and thus they weren't built. Unfortunately, none of the studies had sufficient statistical power, and thus the conclusions drawn were inaccurate. I suppose that absence of evidence is not evidence of absence might be somewhat relevant here. Lo and behold, later on a meta-analysis was done, finding that having highway shoulders reduced accidents/deaths. So my understanding is that inaccurate knowledge (shoulders don't help) led to choices (don't build shoulders) that led to accidents/deaths that wouldn't otherwise have happened.

I'm wondering if there are other areas of life where we can find similar issues. These wouldn't necessarily be new cause areas, but the general idea of identifying an area that involves life/death decisions, and then either making sure the knowledge is accurate or bringing accurate knowledge to the decision-makers, would be incredibly helpful. Hard, though. Probably not very tractable.

For anyone curious, here is the relevant excerpt that prompted my musings:

A number of studies had been conducted to determine whether highway shoulders, which allow drivers to pull over to the side of the road and stop if they need to, reduce accidents and deaths. None of these inadequately powered studies found a statistically significant relationship between the presence or absence of shoulders and accidents or deaths. Traffic safety engineers concluded that shoulders have no effect, and as a result fewer shoulders were built in most states. Hauer’s (2004) meta-analysis of these studies showed clearly that shoulders reduced both accidents and deaths. In this case, people died as a result of failure to understand sampling error and statistical power.
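For what it's worth, the sampling-error point is easy to reproduce. Below is a small simulation (my own illustration, not from the paper) in which each individual study is too small to detect a modest real effect, yet pooling the raw data, a crude stand-in for a meta-analysis, detects it easily. The effect size and sample sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2    # a small but real standardized reduction in accidents
n_per_study = 50     # each individual study is underpowered at this size
n_studies = 10

significant = 0
treated_all, control_all = [], []
for _ in range(n_studies):
    treated = rng.normal(loc=-true_effect, scale=1.0, size=n_per_study)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_study)
    _, p = stats.ttest_ind(treated, control)
    significant += p < 0.05
    treated_all.append(treated)
    control_all.append(control)

print(f"{significant} of {n_studies} small studies reached p < 0.05")

# Pooling all of the raw data recovers the effect that each study missed.
_, p_pooled = stats.ttest_ind(np.concatenate(treated_all), np.concatenate(control_all))
print(f"pooled analysis: p = {p_pooled:.4f}")
```

With numbers like these, most of the individual studies come back "not significant" even though the effect is real, which is exactly the trap the traffic engineers fell into.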
