I do think "EA is plagued with sexism, racism, and abuse" is a very coarse first approximation of what's actually going on.
A better, second approximation may look like this description of "the confluence":
"The broad community I speak of here are insular, interconnected subgroups that are involved most of these categories: tech, EA (Effective Altruists), rationalists, Burning Man camps, secret parties, and coliving houses. I’ve heard it referred to as a “clusterf**k” and “confluence”, I usually call it a community. The community is centered in the San Francisco bay area but branch out into Berlin, London/Oxford, Seattle, and New York. The group is purpose-driven, with strong allegiance to non-mainstream morals and ideas about shaping the future of society and humanity" (Source)
There is probably an even better third approximation out there.
I do think that these toxic dynamics largely got tied to EA because EA is the most coherent subculture that overlaps with "the confluence." Plus, EA was in the news cycle, which incentivized journalists to write articles about it, especially when "SBF" and "FTX" get picked up by search engines and recommender systems. EA is a convenient tag word for a different (but overlapping) community that is far more sinister.
As I wrote in my response above, I'm mainly sad that my experience of EA was through this distorted lens. It also seems clear to me that there are large swathes (perhaps the majority?) of EA that are healthy and well-meaning, and I am happy this has been your experience!
One of my motives for writing this post was to give people a better "second approximation" than "EA itself is the problem." I do believe people put too much blame on EA, and one could perhaps argue that more responsibility belongs with surrounding AI companies, such as OpenAI/Anthropic, some of whose employees may be involved in these dynamics through the hacker house scene.
This is a great list! I think this one is extremely valuable and something that men may be better equipped to do than I would:
Try to find a way to talk to and understand the men who have conflicted feelings about gender equality etc. (to anyone who might read this: please let me know if you would like to talk - I understand trust can be an issue but I think we can work through that)
I'd love to write another post about this too, targeted at men who have conflicted feelings about gender equality, sexual violence, etc. The problem with this current post is it may be preaching to the choir :) Someone (probably me) needs to shill AI Twitter with these ideas, but rebranded to the average mid-twenties male AI researcher. "Fighting bad actors in AI" has been one message I've been playing with.
There are exceptions, to be sure. For instance, some sorts of conduct implicate fitness to hold certain roles (e.g., a professional truck driver who drives drunk off the clock, someone with significant discretionary authority over personnel matters who engages in racist conduct).
When do these exceptions apply? They may here, if the same people who showed such poor judgement in other contexts also have decision-making power over high-leverage systems.
Yeah, this is interesting. I would invoke some of the content from Citadels of Pride here, where we can draw an analogy between Silicon Valley and Hollywood.
I would argue that hacker houses are being used as professional grounds. There is such an extent of AI-related networking, job-searching, brainstorming, ideating, startup founding, angel investing, and hackathoning that one could make an argument that hacker houses are an extension of an office. Sometimes, hacker houses literally are the offices of early stage startups. This also relates to Silicon Valley startup culture's lack of distinction between work and life.
This puts a vulnerable person trying to break into AI in a precarious position. Being in these environments becomes somewhat necessary to break in; however, one has none of the legal protections of "official" networking environments, including HR departments for sexual harassment. The upside for an aspirant could be a research position at her dream AI company through a connection she makes here. The downside could be getting drugged and raped if her new "acquaintance" decides to put LSD in her water.
Hacker houses would then give the AI company's employees informal networking grounds to conduct AI-related practices while the companies derisk themselves from liability, which makes this a very different situation from criminal activity at the local grocery store.
Thank you for this comment. You've made some things explicit that I've been thinking about for a long time. It feels analogous to saying the emperor has no clothes.
I am growing increasingly concerned that the people supposedly working to protect us from unaligned AI have such weak ethics. I wonder whether a case can be made that it is better to have a small group of high-integrity people work on AI safety than a group twice as large in which half the members have low integrity. I wouldn't want a bank robber safeguarding democracy, for example.
The idea of having fewer AI alignment researchers, but those researchers having more intensive ethical training, is compelling.
Actually, some of my best mentors around sexuality have been my female friends. I really recommend men foster deep, meaningful friendships with heterosexual women. When they tell you about their dating experiences, you will very quickly understand how to behave around women you are interested in sexually.
There is currently a huge vacuum in mentorship for men about how to interact with women (hence the previously burgeoning market of red pill, dating coaches, Jordan Peterson, etc). More thought leadership by men who have healthy relationships with women would be a service to civilization. Maybe you should write some blog posts :).
As for the rest of your comment, I responded below to Rebecca.
Thanks for this, this is interesting.
I am sure there are cleaner cases, like your "Bob works for BigAI" example, where taking legal action and amplifying it in the media could produce a Streisand effect that raises cultural awareness of the more ambiguous cases. Some comments:
Silicon Valley is one "big, borderless workplace"
Silicon Valley is unique in that it's one "big, borderless workplace" (quoting Nussbaum). As she puts it:
Therefore, policing along clean company lines becomes complicated really fast. Even if Bob isn't directly recruiting for BigAI (but works for BigAI), being in Bob's favor could improve your chances of working at SmallAI, which Bob has invested in.
The "borderless workplace" nature of Silicon Valley, where company lines are somewhat illusory and high-trust social networks are what really matter, is part of Silicon Valley's magic and how it functions. But when it comes to policing bad behavior, it is Silicon Valley's downfall.
An example that's close to scenarios that I've seen
Proposal
I propose that the high-status companies and VC firms in Silicon Valley (e.g. OpenAI, Anthropic, Sequoia, etc) could make more explicit that they are aware of Silicon Valley's "big, borderless workplace" nature. Sexual harassment at industry-related hacker houses, co-working spaces, and events, even when not on direct company grounds, reflects the company to some extent, and it is not acceptable.
While I don't believe these statements will deter the most severe offenders, pressure from institutions/companies could weaken the prevalent bystander culture, which currently allows these perpetrators to continue harassing/assaulting.