Views expressed here do not represent the views of any organizations I am affiliated with, unless mentioned otherwise.
I think this excerpt from the 80,000 Hours podcast episode with Ben Todd on the core of effective altruism sort of answers your question:
Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.
Arden Koehler: If it turns out that we should consult a crystal ball in order to find out if that’s the best way, then we should do that?
Ben Todd: Yeah.
Arden Koehler: Okay. Yeah. So again, very abstract: whatever it is that turns out to be the best way of figuring out how to do the most good.
Ben Todd: Yeah. I mean, in general, you have this just big question of how narrow or broad to make the definition of effective altruism and it is a difficult thing to say.
I don't think this is an "official definition" (for example, one endorsed by CEA), but I think (or at least hope!) that CEA is working on a more complete definition of EA.
Task Y candidate: Fellowship facilitator for EA Virtual Programs
EA Virtual Programs runs intro fellowships, in-depth fellowships, and The Precipice reading groups (plus occasional other programs). The time commitment for facilitators is generally 2-5 hours per week (depending on the particular program).
EA intro fellowships (and similar programs) have been successful at minting engaged EAs. Since the application process does not predict future engagement well (see this and this), there are large diminishing returns to being selective, even against applicants with not-so-strong applications. Thus, if a fellowship/reading group has to reject people, significant value is lost. Rejected applicants generally re-apply at low rates (despite being encouraged to!).
Uncertainties:
I know of at least a few non-student working professionals who are facilitators for EA Virtual Programs, which I take as evidence that this can be a Task Y.
I think rationality should not be considered a separate cause area, but it perhaps deserves to be a sub-cause area of EA movement building and AI safety.
Also, the post title is misleading: one interpretation of it is that making people more rational is intrinsically valuable (or that increased rationality would lead them to live happier lives). While this is likely true, it would probably be an ineffective intervention.
Do you have a preference between contacting you and contacting JP Addison (the programmer of the EA Forum) about technical bugs?
Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P