(cross-posted on EA Forum and my website)
Hi!
I’m Rai and I’m a furry (specifically, a dragon). For the last couple of years, I’ve been running a Furry Rationalists Telegram group. We exist, but not everyone who should know about us does yet, so I wanted to write this to advertise that this furry + EA/rationality corner exists. If you’re furry-adjacent, rationality-adjacent, and nice, you’re invited to join us :)
Here’s the invite link for the Furry Rationalists group: https://adgn.link/furry-rationalists-telegram
There are ~50 of us and we’re chill - we have self-improvement, science, and cute animal GIFs. If you’d like a preview, here’s the guidelines + meta doc. We’re 18+, but we’re not adult-oriented - we’re 18+ just so we can talk about adult stuff if it does come up. If you happen to be <18 and wanna join, let me know; we might update this.
If you’re reading this a while later and the link has expired, contact me (via some method on my website, agentydragon.com), or look us up on https://www.furry-telegram-groups.net/ - a search for “rationality” should find us.
There’s also a smaller Effective Anthropomorphism Discord server, run by bird: https://adgn.link/effective-anthropomorphism-discord
Come say hi, and feel free to share if you know anyone who’d be interested!
Strong upvoted because I think it's important to preserve whatever embers of weirdness and anti-professionalism we have left in EA, and safeguard it as if it were our last bastion of hope against the forces of bureaucratic stagnation. (Though I'd be happy to discuss this.)
I'd be curious to know why people downvoted this. I don't think we can claim to be good at inclusive diversity unless we support the kind of diversity that doesn't immediately feel like our ingroup. If the only things you can tolerate are anything other than your outgroup, you aren't actually tolerating anything.[1]
Although if the group itself is pernicious in some important way, then I'd change my mind about upvoting. Right now, however, all I know is that they have a weird niche and a corner for EAs to keep in touch.
Strengthening the association between "rationalist" and "furry" decreases the probability that AI research organizations will adopt AI safety proposals proposed by "rationalists".
The poster is currently a resident at OpenAI on the reinforcement learning team.
And?
..
..
..
Just joking! I'm joking, sorry!
*pulls on rainbow dash costume*
Strengthening the association may enable a larger slice of the rationalists to think and communicate clearly without being bogged down by professional constraints. I suspect professionalism is much more lethal than most people think, so that might be a crux. If we lighten the pressure towards professionalism, people have more slack and are less likely to end up optimising for proxies such as impressiveness, technicality, relevancy-to-other-literature, "comprehensiveness", "hard work", etc.