Ozzie Gooen

9821 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences

Ambitious Altruistic Software Efforts

Comments

Some ideas:
1. What are the biggest mistakes that EAs are making? Maybe have a few people give 30-minute talks on this.
2. A summary of the funding ecosystem and key strategic considerations around EA. Who are the most powerful actors, how competent are they, and what are our main bottlenecks at the moment?
3. I'd like frank discussions about how to grow funding in the EA ecosystem beyond the current donors. I think this is pretty key.
4. It would be neat to have a debate or similar on AI policy legislation. We're facing a lot of resistance here, and some of the underlying questions are genuinely uncertain.
5. Is there any decent 5-10 year plan for what EA itself should be? Right now most of the funding ultimately comes from OP, and there's very little non-OP community funding or power. Are there ideas/plans to change this?

I generally think that EA Globals have had far too little disagreeable content. They seem to have been very focused on making things feel positive for new people, rather than on candid, rawer disagreements and ideas for improvement.

Answer by Ozzie Gooen

I really would like to see more communication with the Global Catastrophic Risks Capacity Building Team at Open Philanthropy, given that they're the ones in charge of funding much of the EA space. Ideally there would be a lot of capacity for Q&A here. 

Quick point - I think the relationship between CEA and Leverage was pretty complicated during a lot of this period.

A large segment of EAs has been suspicious of Leverage ever since its founding. But Leverage did collaborate with EAs on some specific things early on (like the first EA Summit). It felt like an uncomfortable-alliance type of situation. If you go back through the Forum / LessWrong, you can find artifacts of this.

I think the period around 2018 was unusual. This was a period where a few powerful people at CEA (Kerry, Larissa) were unusually pro-Leverage and rose to power fairly quickly (Tara left, somewhat suddenly). There was a lot of tension around this, and when they left (I think this period lasted around a year), CEA became much less collaborative with Leverage.

One way to square this is that CEA was just not very powerful for a long time (arguably, its periods of "having real ability/agency to do new things" have been very limited). There were periods where Leverage had more employees than CEA (I'm pretty sure). The fact that CEA went through so many different leaders, each with different stances and strategies, makes it more confusing to look back on.

I would really love for a decent journalist to do a long story on this history; I think it's pretty interesting.

I think Garry Tan is more left-wing, but I'm not sure. A lot of the e/acc community fights with EA, and my impression is that many of them are leftists.

I think that the right-wing techies are often the loudest, but there are lefties in this camp too.

(Honestly though, the right-wing techies and left-wing techies often share many of the same policy ideas. But they seem to disagree on Trump and a few other narrow things. Many of the recent Trump-aligned techies used to be more left-coded.)

Random Tweet from today: https://x.com/garrytan/status/1820997176136495167

Garry Tan is the head of YCombinator, which is basically the most important/influential tech incubator out there. Around 8 years back, relations were much better, and 80k and CEA actually went through YCombinator.

I'd flag that Garry specifically is kind of wacky on Twitter, compared to previous heads of YC. So I definitely am not saying it's "EA's fault" - I'm just flagging that there is a stigma here. 

I personally would be much more hesitant to apply to YC knowing this, and I'd expect YC would be less inclined to bring in AI safety folks and, likely, EAs.

My personal take is that there are a bunch of better trade-offs between the two that we could be making. I think that a narrow subset of the risks is where most of the value is, so from that standpoint, focusing there could be a good trade-off.

Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative.

My impression is that the current EA AI policy arm isn't having much active dialogue with the VC community and the like. I see Twitter spats that look pretty ugly; I suspect that this relationship could be improved with more work.

At a higher level, I suspect that there could be a fair bit of policy work that both EAs and many of these VCs and others would be more okay with than what is currently being pushed. My impression is that we should focus on narrow subsets of risks that matter a lot to EAs but don't matter much to others, so we can essentially trade and come out better than we are now.

I think that certain EA actions in AI policy are getting a lot of flak.

On Twitter, a lot of VCs and techies have ranted heavily about how much they dislike EAs. 


See this segment from Marc Andreessen, where he talks about the dangers of Eliezer and EA. Marc seems incredibly paranoid about the EA crowd now.
 
(Go to 1 hour, 11 minutes in for the key part. I tried linking to the timestamp, but couldn't get it to work in this editor after a few minutes of attempts.)


I also recently came across this transcript of Amjad Masad, CEO of Replit, on Tucker Carlson:
https://www.happyscribe.com/public/the-tucker-carlson-show/amjad-masad-the-cults-of-silicon-valley-woke-ai-and-tech-billionaires-turning-to-trump
 



[00:24:49]

Organized, yes. And so this starts with a mailing list. In the nineties there was a transhumanist mailing list called the Extropians. And these Extropians (I might have gotten the name wrong, Extropia or something like that) believe in the singularity. So the singularity is a moment in time where AI is progressing so fast, or technology in general is progressing so fast, that you can't predict what happens. It's self-evolving and it just. All bets are off. We're entering a new world where you.

[00:25:27]

Just can't predict it, where technology can't.

[00:25:29]

Be controlled, technology can't be controlled. It's going to remake everything. And those people believe that's a good thing, because the world now sucks so much and we are imperfect and unethical and all sorts of irrational, whatever. And so they really wanted the singularity to happen. And there's this young guy on this list, his name's Eliezer Yudkowsky, and he claims he can write this AI, and he would write really long essays about how to build this AI. Suspiciously, he never really publishes code; it's all just prose about how he's going to be able to build AI. Anyways, he's able to fundraise. They started this thing called the Singularity Institute. A lot of people were excited about the future and kind of invested in him. Peter Thiel, most famously. And he spent a few years trying to build an AI. Again, never published code, never published any real progress. And then came out of it saying that not only can you not build AI, but if you build it, it will kill everyone. So he switched from being this optimist, "the singularity is great," to "actually, AI will for sure kill everyone." And then he was like, okay, the reason I made this mistake is because I was irrational.

[00:26:49]

And the way to get people to understand that AI is going to kill everyone is to make them rational. So he started this blog called LessWrong, and LessWrong walks you through steps to becoming more rational. Look at your biases, examine yourself, sit down, meditate on all the irrational decisions you've made and try to correct them. And then they start this thing called the Center for Advanced Rationality or something like that. CFAR. And they're giving seminars about rationality, but.

[00:27:18]

The intention seminar about rationality, what's that like?

[00:27:22]

I've never been to one, but my guess would be they talk about the biases, whatever. But they also have weird things, where they have this almost struggle-session-like thing called debugging. A lot of people wrote blog posts about how that was demeaning and it caused psychosis in some people. In 2017, in that community, there was collective psychosis. A lot of people were kind of going crazy. And this is all written about on the Internet. Debugging.

[00:27:48]

So that would be kind of your classic cult technique, where you have to strip yourself bare, like auditing in Scientology. It's very common, yes.

[00:27:57]

Yeah.

[00:27:59]

It's a constant in cults.

[00:28:00]

Yes.

[00:28:01]

Is that what you're describing?

[00:28:02]

Yeah, I mean, that's what I read in these accounts. They will sit down and they will, like, audit your mind and tell you where you're wrong and all of that. And it caused people huge distress; young guys all the time talk about how going into that community caused them huge distress. And there were, like, offshoots of this community where there were suicides, there were murders, there was a lot of really dark and deep shit. And the other thing is, they kind of teach you about rationality, and they recruit you to AI risk. Because if you're rational, you're a group. We're all rational now. We learned the art of rationality, and we agree that AI is going to kill everyone. Therefore, everyone outside of this group is wrong, and we have to protect them. AI is going to kill everyone. But also they believe other things. Like, they believe that polyamory is rational and everyone that.

[00:28:57]

Polyamory?

[00:28:57]

Yeah, you can have sex with multiple partners, essentially, but they think that's.

[00:29:03]

I mean, I think it's certainly a natural desire, if you're a man, to sleep with more and different women, for sure. But it's rational in what sense? Like, I've never met a happy, polyamorous, long-term couple, and I've known a lot of them. Not a single one.

[00:29:21]

So it might be self-serving, you think, to recruit more impressionable.

[00:29:27]

People into it. And their hot girlfriends?

[00:29:29]

Yes.

[00:29:30]

Right. So that's rational.

[00:29:34]

Yeah, supposedly. And so they, you know, they convince each other of all these cult-like behaviors. And the crazy thing is this group ends up being super influential, because they recruit a lot of people that are interested in AI, and the AI labs and the people who are starting these companies were reading all this stuff. So Elon famously read a lot of Nick Bostrom, who's kind of an adjacent figure to the rationalist community. He was part of the original mailing list. I think he would call himself part of the rationalist community. But he wrote a book about AI and how AI is going to kill everyone, essentially. I think he moderated his views more recently, but originally he was one of the people kind of sounding the alarm. And the founding of OpenAI was based on a lot of these fears. Elon had fears of AI killing everyone. He was afraid that Google was going to do that. And so that group of people. I don't think everyone at OpenAI really believed that, but some of the original founding story was that, and they were recruiting from that community so much.

[00:30:46]

So when Sam Altman got fired recently, he was fired by someone from that community, someone who started with effective altruism, which is another offshoot from that community, really. And so the AI labs are intermarried in a lot of ways with this community, and they kind of borrowed a lot of their talking points. By the way, a lot of these companies are great companies now, and I think they're cleaning house.

[00:31:17]

But there is, I mean, I'll just use the term. It sounds like a cult to me. Yeah, I mean, it has the hallmarks of it in your description. And can we just push a little deeper on what they believe? You say they are transhumanists.

[00:31:31]

Yes.

[00:31:31]

What is that?

[00:31:32]

Well, I think they're just unsatisfied with human nature, unsatisfied with the current ways we're constructed, and that we're irrational, we're unethical. And so they long for the world where we can become more rational, more ethical, by transforming ourselves, either by merging with AI via chips or what have you, changing our bodies and fixing fundamental issues that they perceive with humans via modifications and merging with machines.

[00:32:11]

It's just so interesting because. And so shallow and silly. Like, a lot of those people I have known are not that smart, actually. Because the best things, I mean, reason is important, and it was, in my view, given to us by God. And it's really important. And being irrational is bad. On the other hand, the best things about people, their best impulses, are not rational.

[00:32:35]

I believe so, too.

[00:32:36]

There is no rational justification for giving something you need to another person.

[00:32:41]

Yes.

[00:32:42]

For spending an inordinate amount of time helping someone, for loving someone. Those are all irrational. Now, banging someone's hot girlfriend, I guess that's rational. But that's kind of the lowest impulse that we have, actually.

[00:32:53]

Well, wait till you hear about effective altruism. So they think our natural impulses that you just talked about are indeed irrational. And there's a guy, his name is Peter Singer, a philosopher from Australia.

[00:33:05]

The infanticide guy.

[00:33:07]

Yes.

[00:33:07]

He's so ethical. He's for killing children.

[00:33:09]

Yeah. I mean, so their philosophy is utilitarian. Utilitarianism is that you can calculate ethics and you can start to apply it, and you get into really weird territory. Like, you know, there's all these problems, all these thought experiments. Like, you know, you have two people at the hospital requiring some organs from another, third person who came in for a regular checkup, or they will die. Ethically, you're supposed to kill that guy, take his organs, and put them into the other two. And so it gets. I don't think people believe that, per se. I mean, but there's so many problems with that. There's another belief that they have.

[00:33:57]

But can I say that belief or that conclusion grows out of the core belief, which is that you're God. Like, a normal person realizes, sure, it would help more people if I killed that person and gave his organs to a number of people. Like, that's just a math question. True, but I'm not allowed to do that because I didn't create life. I don't have the power. I'm not allowed to make decisions like that because I'm just a silly human being who can't see the future and is not omnipotent because I'm not God. I feel like all of these conclusions stem from the misconception that people are gods.

[00:34:33]

Yes.

[00:34:34]

Does that sound right?

[00:34:34]

No, I agree. I mean, a lot of the. I think, you know, at root, they're just fundamentally unsatisfied with humans and maybe perhaps hate humans.

[00:34:50]

Well, they're deeply disappointed.

[00:34:52]

Yes.

[00:34:53]

I think that's such a. I've never heard anyone say that as well, that they're disappointed with human nature, they're disappointed with human condition, they're disappointed with people's flaws. And I feel like that's the. I mean, on one level, of course. I mean, you know, we should be better, but that, we used to call that judgment, which we're not allowed to do, by the way. That's just super judgy. Actually, what they're saying is, you know, you suck, and it's just a short hop from there to, you should be killed, I think. I mean, that's a total lack of love. Whereas a normal person, a loving person, says, you kind of suck. I kind of suck, too. But I love you anyway, and you love me anyway, and I'm grateful for your love. Right? That's right.

[00:35:35]

That's right. Well, they'll say, you suck. Join our rationality community. Have sex with us. So.

[00:35:43]

But can I just clarify? These aren't just like, you know, support staff at these companies? Like, are there?

[00:35:50]

So, you know, you've heard about SBF and FTX, of course.

[00:35:52]

Yeah.

[00:35:52]

They had what's called a polycule.

[00:35:54]

Yeah.

[00:35:55]

Right. They were all having sex with each other.

[00:35:58]

Given. Now, I just want to be super catty and shallow, but given some of the people they were having sex with, that was not rational. No rational person would do that. Come on now.

[00:36:08]

Yeah, that's true. Yeah. Well, so, you know. Yeah. What's even more disturbing, there's another ethical component to their philosophy called longtermism, and this comes from the effective altruist branch of rationality. Longtermism. What they think is, in the future, if we take the right steps, there's going to be a trillion humans, a trillion minds. They might not be humans, they might be AI, but there are going to be a trillion minds who can experience utility, who can experience good things, fun things, whatever. If you're a utilitarian, you have to put a lot of weight on that, and maybe you discount it, sort of like discounted cash flows. But you still, you know, have to posit that, if there are trillions, perhaps many more, people in the future, you need to value that very highly. Even if you discount it a lot, it ends up being valued very highly. So a lot of these communities end up all focusing on AI safety, because they think that AI, because they're rational. They arrived, and we can talk about their arguments in a second. They arrived at the conclusion that AI is going to kill everyone.

[00:37:24]

Therefore, effective altruists and the rationalist community, all these branches, they're all kind of focused on AI safety, because that's the most important thing, because we want a trillion people in the future to be great. But when you're assigning value that high, it's sort of a form of Pascal's wager. You can justify anything, including terrorism, including doing really bad things, if you're really convinced that AI is going to kill everyone and the future holds so much value, more value than any living human today has. You might justify really doing anything. And so built into that, it's a.

[00:38:15]

Dangerous framework, but it's the same framework as every genocidal movement from at least the French Revolution to the present: a glorious future justifies a bloody present.

[00:38:28]

Yes.

[00:38:30]

And look, I'm not accusing them of genocidal intent, by the way. I don't know them, but those ideas lead very quickly to the camps.

[00:38:37]

I feel kind of weird just talking about people, because generally I like to talk about ideas and things. But if they were just, like, a silly Berkeley cult or whatever, and they didn't have any real impact on the world, I wouldn't care about them. But what's happening is that they were able to convince a lot of billionaires of these ideas. I think Elon maybe changed his mind, but at some point he was convinced of these ideas. I don't know if he gave them money; I think there was a story at some point, Wall Street Journal, that he was thinking about it. But a lot of other billionaires gave them money, and now they're organized, and they're in DC lobbying for AI regulation. They're behind the AI regulation in California and actually profiting from it. There was a story in Pirate Wires where the main sponsor behind SB 1047, Dan Hendrycks, started a company at the same time that certifies the safety of AI. And as part of the bill, it says that you have to get certified by a third party. So there's aspects of it that are kind of, let's profit from it.

[00:39:45]

By the way, this is all allegedly, based on this article. I don't know for sure. I think Senator Scott Wiener was trying to do the right thing with the bill, but he was listening to a lot of these cult members, let's call them, and they're very well organized. Also, a lot of them still have connections to the big AI labs, and some of them work there, and they would want to create a situation where there's no competition in AI, regulatory capture, per se. I'm not saying that these are the direct motivations. All of them are true believers. But you might infiltrate this group and direct it in a way that benefits these corporations.

[00:40:32]

Yeah, well, I'm from DC, so I've seen a lot of instances where my bank account aligns with my beliefs. Thank heaven. Just kind of happens. It winds up that way. It's funny. Climate is the perfect example. There's never one climate solution that makes the person who proposes it poorer or less powerful.

I'm thinking of around 5 cases. I think in around 2-3 of them they were told directly; in the others it was strongly implied.

Thanks, good to hear! Looking forward to seeing progress here.
