Application forms for EA jobs often give an estimate of how long you should expect them to take; in my experience, these estimates are often *wildly* too low (and others I know have said the same). This is bad because it makes the estimates unhelpful for planning, and because it probably makes people feel bad about themselves, or worry that they're unusually slow, when they take longer than the estimate.
Imo, if something involves any sort of writing from scratch, you should expect applicants to take at least an hour, and possibly more. (For context, I've seen application forms which say 'this application should take 10 minutes', and more commonly ones estimating 20 or 30 minutes.)
It doesn’t take long to type 300 words if you already know what you’re going to say and don’t particularly care about polish (I probably wrote this post in under an hour). But job application questions, even ‘basic’ ones like ‘why do you want this job?’ and ‘why would you be a good fit?’, take more time. You may feel intuitively that you’d be a good fit for the job, but take a while to articulate why. You have to think about how your skills might help with the job, perhaps cross-referencing with the job description. And you have to express everything in appropriately formal, clear language.
Job applications are also very high-stakes, and many people find them difficult or ‘ugh-y’, which means applicants are likely to take longer than they “should”, due to getting stuck or procrastinating.
Maybe hirers give these low estimates because they don’t want applicants to spend too long on the first-stage form (for most of them, it won’t pay off, after all!). This respect for people’s time is laudable. But if someone really wants the job, they *will* feel motivated to put effort into the application form.
There’s a kind of coordination problem here too. Let's imagine there's an application for a job that I really want, and on the form it says 'this application should take you appr
Reddit user blueshoesrcool discovered that Effective Ventures (the umbrella organization for the Centre for Effective Altruism, 80,000 Hours, GWWC, etc.) has missed its charity reporting deadline by 27 days.
Given that there's already a regulatory inquiry into Effective Ventures Foundation, maybe someone should look into this.
Forum team update: Shortform is now called “Quick takes”, has a section on the Frontpage, and has changed in some other, smaller ways.
Here’s what’s new:
* Shortform is now called “Quick takes” (Shortform was confusing to many people)
* There’s a section for Quick takes on the Frontpage to improve visibility (you can still post Quick takes that don’t show up on the Frontpage by deselecting the “Frontpage” tag; those will only appear in the separate Quick takes view)
* Some other design changes to make things clearer and easier to use:
  * There’s an input field in the section on the Frontpage, so you can add a Quick take there directly
  * Improvements to the page where all of an author’s Quick takes are shown
  * Other visual changes to Quick takes and the creation flow
Rationale/context:
In some cases, Forum users want to share (and read) less polished ideas or other content that doesn’t seem like a full Forum post. Shortform was designed years ago to fill this gap, but the feature was tucked away, hard to read, and had a name that most users didn’t understand.
Over the past few months, we’ve been exploring ways to encourage lower-barrier discussions, culminating in this latest version of Quick takes.
As always, we’d love feedback on these changes. You can comment on my Quick take (or email us if you prefer). We’ll also monitor how this feature gets used and improve it over time.
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB), and perhaps even update towards its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt rough at best and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing (more than perhaps any other org I have seen). This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or they get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (out of charity, I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass, especially when HLI's funding is less than that of many orgs which have not been scrutinised as much?
* (Low certainty) The degree of scrutiny and analysis applied to some development orgs like HLI seems to exceed that applied to AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well. Very few EA orgs (at least among those that have posted on the forum) produce full papers with publishable-level statistical analysis, as HLI have at least attempted to do. Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs got more scrutiny than have development orgs get less.
Other orgs might see threads like the HLI funding thread hammering and compare it with other threads where orgs are criticised and don't eng
Some post-EAG thoughts on journalists
For context, CEA accepted a journalist to EAG Bay Area 2023 who has at times written critically of EA and of individual EAs, and who is very much not a community member. I am deliberately not naming the journalist, because they haven't done anything wrong and I'm still trying to work out my own thoughts.
On one hand, "journalists who write nice things get to go to the events, journalists who write mean things get excluded" is at best ethically problematic. It's also very, very normal: political campaigns do it, industry events do it, individuals do it. "Access journalism" is the norm more than the exception. But that doesn't mean we should do it. One solution is to be very careful to keep the question "is this person a community member?" separate from "are they critical or not?". Dylan Matthews is straightforwardly an EA and has reported critically on a past EAG: if he were excluded for this, I would be deeply concerned.
On the other hand, I think that, when hosting an EA event, an EA organization has certain obligations to the people at that event. One of them is protecting their safety and privacy. EAs who are journalists can, I think, generally be relied upon to be fair and to respect the privacy of individuals. That is not a trust I extend to journalists who are not community members: the linked example is particularly egregious, but tabloid reporting happens.
EAG is a gathering of community members. People go to advance their goals: see friends, network, be networked at, give advice, get advice, learn interesting things, and more. In a healthy movement, I think an EAG should be a professional obligation, good for the individual, or fun for the individual. It doesn't have to be all three, but it shouldn't harm them on any axis.
Someone might be out about being bi at an after-party with friends, but not want to see that detail being confirmed by a fact-checker for a national paper. This doesn't seem particularly unusual. The
On Socioeconomic Diversity:
I want to describe how the discourse on sexual misconduct may be reducing the specific type of socioeconomic diversity I am personally familiar with.
I’m a white female American who worked as an HVAC technician, with co-workers mostly from racial minorities, before going to college. Most of the sexual misconduct incidents discussed in the Time article likely differed from standard workplace discussions in my former career only in that the higher-status person expressed romantic/sexual attraction, making their statement much more vulnerable than the trash-talk I’m familiar with. In the places where most of my workplace experience comes from, people of all genders and statuses make sexual jokes about coworkers of all genders and statuses, not only in their field but while on the clock. I had tremendous fun participating in these conversations. It didn’t feel sexist to me because I gave as good as I got. My experience generalizes well: even when Donald Trump made a joke about sexual assault that many upper-class Americans believed disqualified him, immediately before the election he won, Republican women were no more likely than Republican voters in general to think he should drop out of the race. Donald Trump has been able to maintain much of his popularity, despite denying the legitimacy of a legitimate election, in part because he identified the gatekeeping elements of upper-class American norms as classist. I am strongly against Trump, but I believe we should note that many female Americans from poorer backgrounds enjoy these conversations, and many more oppose the kind of punishments popular in upper-class American communities. This means strongly disliking these conversations is not an intrinsic virtue, but a decision EA culture has made that is about more than simple morality.
When I post about EA on social media, many of my co-workers from my blue-collar days think it sounds really cool. If any of them decided to engage further and mad
Proposing a change to how Karma is accrued:
I recently passed 1,000 Karma, meaning my upvotes now give 2 Karma and my strong upvotes give 6 Karma. I'm most proud of my contributions to the forum about economics, but almost all of my increased ability to influence discourse comes from participating in the discussions on sexual misconduct. An upvote from me on Global Health & Development (my primary cause area) now counts twice as much as an upvote from 12 of the 19 authors of posts with 200-300 Karma under the Global Health & Development tag. They are generally experts in their field working at major EA organizations, whereas I am an electrical engineering undergraduate.
I think these kinds of people should have far more ability than me to influence the discussion via the power of their upvotes. They will notice things about the merits of the cases people are making that I won't until I'm a lot smarter, wiser, and farther along in my career. I don't think the ability to say something popular about culture wars translates well into having insights about object-level content. It is very easy to get Karma by participating in community discussions, so a lot of people are probably now in my position after the increased activity in that area around the scandals. I really want the people with more expertise in their field to be the ones influencing how visible posts and comments about their field are.
I propose that Karma earned from comments on posts with the community tag accrues at a slower rate.
Edit: I just noticed a post by moderators that does a better job of explaining why karma is so easy to accumulate in community posts:
https://forum.effectivealtruism.org/posts/dDudLPHv7AgPLrzef/karma-overrates-some-topics-resulting-issues-and-potential
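To make the proposal concrete, here is a minimal sketch in Python of how a slower accrual rate could work. The 1,000-Karma vote weights (2 and 6) are the ones described above; the discount factor, the below-threshold weights, and the function names are hypothetical illustrations, not actual Forum code.

```python
# Hypothetical sketch of the proposal, not actual Forum code.
# Assumption: karma earned on community-tagged posts is multiplied
# by a discount factor; 0.5 is an arbitrary illustrative value.

COMMUNITY_DISCOUNT = 0.5

def vote_power(voter_karma: int, strong: bool = False) -> int:
    """Vote weight of the voter. The 1,000+ Karma weights (2 and 6)
    are the ones mentioned above; the below-threshold weights are
    assumed for illustration."""
    if voter_karma >= 1000:
        return 6 if strong else 2
    return 3 if strong else 1

def karma_earned(vote_value: int, on_community_post: bool) -> float:
    """Karma credited to the author for a single vote, accruing
    more slowly on posts with the community tag."""
    return vote_value * (COMMUNITY_DISCOUNT if on_community_post else 1.0)

# Example: a strong upvote from a 1,000+ Karma account
print(karma_earned(vote_power(1200, strong=True), on_community_post=True))   # 3.0
print(karma_earned(vote_power(1200, strong=True), on_community_post=False))  # 6.0
```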
I mostly haven't been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism might be approximated by its impact on steering the world towards better AGI futures. But I think even in worlds where AI risk wasn't a problem, the effective altruism movement seems lackluster in some ways.
I am thinking especially of the effect it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded, or at least doesn't contribute to making them as ambitious or as interested in exploring things outside "conventional EA" as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or to paths to impact suggested by existing EA institutions.
In an EA community that was more ambitiously impactful, a higher proportion of folks would be at least strongly considering doing things like:
* starting startups that could be really big
* traveling to various parts of the world to form a view about how poverty affects welfare
* keeping long Google Docs with their current best guesses for how to get rid of factory farming
* looking at non-"EA" sources to figure out which more effective interventions GiveWell might be missing, perhaps because they're somewhat controversial
* doing more effective science/medical research
* writing something on the topic of better thinking and decision-making that could be as influential as Eliezer's sequences
* expressing curiosity about whether charity is even the best way to improve human welfare
* trying to fix science.
And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board, or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people among their general peers.