The age of dankness in effective altruism is upon us. It's unstoppable. I don't mean this merely as the go-to administrator of the Dank EA Memes Facebook group, or as one of the more reputed online memelords/shitposters in the EA movement.
Nor do I mean it merely as someone other effective altruists turn to for answers as dank memes become a primary medium shaping public perception of EA, for better or worse, both within and outside of the effective altruism community itself. I emphasize it as the go-to person other effective altruists turn to in the hope that I might use my influence to rein in the excessive or adverse impact of dank memes on EA. I don't have that capability.
I can't rein in dank memes in EA, or do whatever else anyone might want me to do about them. They're beyond me, or any other mods/admins of the Dank EA Memes Facebook group.
The trend is unstoppable. It's inevitable. The future of EA is dank.
To emphasize just how final this reality is, here is a very incomplete list of leaders in EA who, to my knowledge, have embraced this uncanny reality:
Kudos to Rob and Eliezer as early adopters of this trend.
For years now, much ink has been spilled about the promise and peril that dank memes portend for effective altruism. Many have singled me out as the person best suited to speak to this controversy. I've heard, listened, and taken all such sentiments to heart. This is the year I've finally opted to complete a full-spectrum analysis of the role of dank memes as a primary form of outreach and community-building.
This won't be a set of shitposts on other social media websites. This will be a sober evaluation of dank EA memes, composed of at least one post, if not a series of posts, on the EA Forum. They may be so serious that few, if any, memes will be featured at all. It is time for the dank EA memes to come home.
I don't understand this post. The title suggests these are individuals who may have died of covid or related complications, but that's not clarified. Most of the people listed were ageing, so it's easy to imagine they may have incidentally died during the pandemic for other reasons.
Most of these people are figures effective altruists would find inspiring, though a few of them appear to just be your favourite musicians. I'm guessing that, overall, they're people you personally found inspiring who died during the pandemic, for reasons that may or may not be related to covid.
I don't feel like there's anything wrong with making a post like that on the EA Forum. It's just not clear to me which of those things this post is about. (I tend to miss social subtleties often obvious to most others, so please pardon me if this post seems strange.)
In hindsight, the number of weirdness points we have can be increased. This is especially true if some supporters of a cause once considered weird later become very wealthy and patronize that niche, unusual cause to the tune of tens of millions of dollars.
On the other hand, as much as the pool of weirdness points is theoretically unlimited, it's hard in practice to increase the number available at an arbitrary rate. It's still very possible to spend one's weirdness points too quickly, and hurt one's reputation in the process, so the reservoir of weirdness points should still be spent wisely.
Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).
Update, July 2023:
As of now, two other former executives at FTX and/or Alameda Research have pled guilty. The three who have pled guilty are the three former execs known to be the most complicit in SBF's alleged crimes. That all three of them have pled guilty, while SBF is still denying all charges, seems like it will be one of the most interesting parts of his upcoming trial.
In terms of Elon Musk specifically, I feel like it affirms what most of us already thought of his relationship with AI safety (AIS). Even among billionaire technologists who are conscious of AIS and achieved fame and fortune in Silicon Valley, Musk is an exceptionally ambitious and impactful personality. This of course extends to all his business ventures, philanthropy and political influence.
Regardless of whether it's ultimately positive or negative, I expect the impact of xAI, including on AIS, will be significant. The quality of that impact strikes me as mixed, i.e., it's not a simple question of whether it will be positive or negative.
I expect xAI, at least at this early stage, will be perceived as having a mildly positive influence on AIS. There are of course already some more pessimistic predictions. I expect that within a couple of years those pessimistic predictions may be vindicated, as apparent negative impacts on AIS come to outweigh whatever positive impacts xAI may have. The kind of position I'm summarizing here seems well reflected in the post Scott Alexander published on Astral Codex Ten last week about why he expects xAI's alignment plan to fail. I've not learned much about xAI yet, though my own model for having an ambivalent but somewhat pessimistic expectation for the company is based on:
I haven't watched a recording of the debate yet, though I intend to, and I feel like I'm familiar enough with the arguments on both sides that I may not learn much. This review was informative. It helped me notice that the greatest learning value of the debate may lie in what it reveals about the nature of public dialogue and understanding of AGI risk.
I agree with all the lessons you've drawn from this debate and laid out at the end of this post. I've got other reactions lengthy enough that I may make them into their own top-level posts, though here's some quicker feedback.
Bengio and Tegmark were always respectful and polite, even when Tegmark challenged Mitchell. This may have further increased LeCun’s credibility, since there were no attacks on him and he didn’t attack anyone himself. [...] It seems like a good idea to have a mix of differing opinions on your side, even somewhat extreme (though grounded in rationality) positions – these will strengthen the more moderate stances. In this specific case, a combination of Bengio and e.g. Yudkowsky may have been more effective.
I'd distinguish here between how extreme the real positions someone takes are, and how extreme their rhetoric is. For example, even if Yudkowsky were to take more moderate positions, I expect he'd still come across as an extremist based on the rhetoric he often invokes.
While my impression is that Yudkowsky is not as hyperbolic in direct conversations, like on podcasts, his reputation as among the more disrespectful and impolite proponents of AGI risk persists. I expect he'd conduct himself in a debate like this much the way Mitchell conducted herself, except in the opposite direction.
To be fair, there are probably some more bombastic than Yudkowsky. Yet I'd trust neither them nor him to do better in a debate like this.
I'm writing up some other responses in reaction to this post, though I've noticed a frustrating theme across my different takes. There are better and worse arguments against the legitimacy of AGI risk as a concern, and it's maddening that LeCun and Mitchell mostly stuck to making the worse ones.
Strongly downvoted.
This post quotes Scott Alexander on a tangent about as much as it quotes Richard Hanania, bolstering minor points in Hanania's post by appealing to effective altruists' bias in favour of Scott Alexander.
By linking to Hanania and quoting him so selectively and prominently, you're trying to create the impression that the post should be trustworthy to effective altruists in spite of its errors and falsehoods, about effective altruism in particular and in general. Assuming you made this post in service of a truth-seeking agenda, you've failed by propagating an abysmal perspective.
There are anti-woke viewpoints that have been well received on the EA Forum, but this isn't one of them. Some of their authors haven't even been anonymous, so worrying about your reputation more than about 'truth-seeking' isn't an excuse.
You would, could, and should have done better by sharing an original viewpoint from someone genuinely more familiar with effective altruism than Hanania is. May you take heed of this lesson the next time you try to resolve disputes.