Off-and-on projects in epistemic public goods, AI alignment (mostly interested in multipolar scenarios, cooperative AI, ARCHES, etc.), and community building. I've probably done my best work as a research engineer on a cryptography team; I'm pretty bad at product engineering.
Longtermism and politics both seem like "error bars so wide that expected value theory is probably super useless, or an excuse for motivated reasoning, or both". But I don't think this is damning toward EA, because the downsides of a misfire in a brittle theory of change don't seem super important for most longtermist interventions (your pandemic preparedness scheme might accidentally abolish endemic flu, so your miscalculation about the harm or likelihood of a way-worse-than-covid pandemic is sort of fine). Whereas in politics, the brittleness of the theory of change means you can be well-meaningly harmful, which is kind of the central point of anything involving "politics" at all.
Certainly this does not hold for all longtermist interventions, but I find it very convincing for the average case.
https://www.lesswrong.com/tag/complexity-of-value
I'm roughly comfortable leaving it here, though how different people actually get convinced of it is not obvious. They're right to question speciesism or whatever, and I hope it becomes salient to them that their mistakes aren't simply disloyalty.
Withholding the current score of a post until after a vote is cast (with the vote being committal) should be enough to prevent strategic behavior. But it comes with several downsides. I think feed ordering / recsys could still work with private information, so the scores may in principle be inferrable from patterns in your feed, though you probably won't actually do that. The worse problem is commitment: I like to edit my votes quite a bit after initial impressions.
I imagine there's a more subtle instrument; withholding the current score until a committal vote has been cast seems almost like a limit case.
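The limit-case mechanism can be sketched in a few lines. This is a minimal toy, not any actual forum codebase; all names (`Post`, `cast_vote`, `visible_score`) are hypothetical:

```python
# Toy sketch of "hidden score, committal vote": the viewer can't see a
# post's score until they vote, and the vote is locked once cast.

class Post:
    def __init__(self):
        self._score = 0
        self._votes = {}  # user_id -> +1 or -1, locked once cast

    def cast_vote(self, user_id, value):
        """Cast a committal vote: it cannot be edited or retracted later."""
        if user_id in self._votes:
            raise ValueError("vote already cast; votes are committal")
        if value not in (+1, -1):
            raise ValueError("vote must be +1 or -1")
        self._votes[user_id] = value
        self._score += value

    def visible_score(self, user_id):
        """Withhold the score until the viewer has voted themselves."""
        if user_id not in self._votes:
            return None  # hidden: the viewer can't anchor on the crowd
        return self._score
```

The two downsides above both show up directly: `visible_score` leaks nothing only if nothing else (e.g. feed ordering) is a function of `_score`, and the `ValueError` on a second vote is exactly the commitment problem for people who like to revise.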
I'm extremely upset about recent divergence from ForumMagnum / LessWrong.
I'm neutral on the QuickTakes rebrand: I'm a huge fan of shortform overall (if I were Dictator of Big EA I would ban twitter and facebook and move everybody to shortform/quicktakes!), and I trust y'all to do whatever you can to increase adoption.
I tend to think it's worth promoting common knowledge of the overall ambivalence or laziness in vetting writers, evidenced by the magazines behind Torres' clickbait career: https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty — though I don't know anything about this Mark Fuentes character or whether he's trustworthy.
I guess I can say what I've always said: that the value of sneer/dunk culture is publicity and antiselection. People who think sneer/dunk culture makes for bad writing become attracted to us, and people who think it makes for good writing don't.
I think a separate but plausibly better point is that the "memetic gradient" in politics is characterized in known awful ways, and many longtermist theories of change offer an opportunity for something better. If you pursue a political theory of change, you're consenting to a relentless onslaught of people begging you to make your epistemics worse on purpose. This is a perfectly good reason not to sign up for politics. The longtermist ecosystem is not immune to similar issues, but it certainly seems like there's a fighting chance, or that it's the least bad of all options.