
Jan_Kulveit

4674 karma

Bio

Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.

Research fellow at the Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality, Human-aligned AI Summer School, Epistea Lab.

Sequences (1)

Learning from crisis

Comments (219)

Seems plausible the impact of that single individual act is so negative that the aggregate impact of EA is negative.

I think people should reflect seriously upon this possibility and not fall prey to wishful thinking (let's hope speeding up the AI race and making it superpower-powered is the best intervention! it's better if everyone warning about this was wrong and Leopold is right!).

The broader story here is that the EA prioritization methodology is really good at finding highly leveraged spots in the world, but there isn't a good methodology for figuring out what to do in such places, and there also isn't a robust pipeline for promoting virtues and virtuous actors into such places.

I don't think so. I think in practice:

1. Some people don't like the big-R Rationality community very much.

AND

2a. Some people don't think improving the EA community's small-r rationality/epistemics should be one of the top ~3-5 EA priorities.
OR
2b. Some people do agree this is important, but don't clearly see the extent to which the EA community imported healthy epistemic vigilance and norms from Rationalist or Rationality-adjacent circles.

=>

- As a consequence, they are at risk of distancing from small-r rationality as collateral damage / by neglect.


Also, I think many people in the EA community don't think it's important to try hard at being small-r rational at the level of aliefs. Whatever the actual situation revealed by actual decisions, I would expect the EA community to at least pay lip service to epistemics and reason, so I don't think stated preferences are strong evidence.

"Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand." 
Yes, I do agree almost no one thinks about themselves that way. I think it is maybe somewhat similar to "Being against effective charity" - I would be surprised if people thought about themselves that way.

Reducing rationality to "understanding most of Kahneman and Tversky's work" and cognitive psychology would be extremely narrow and would miss most of the topic.

To quickly get some independent perspective, I recommend reading the "Overview of the Handbook" part of "The Handbook of Rationality" (2021, MIT Press, open access). For an extremely crude calibration: the Handbook has 65 chapters. I'm happy to argue at least half of them cover topics relevant to the EA project. About ~3 are directly about Kahneman and Tversky's work - roughly 3 out of ~32 relevant chapters, i.e. under 10%. So, by this proxy, you would miss about 90% of what's relevant.


 

Sorry for the sarcasm, but what about returning to the same level of non-involvement and non-interaction between EA and Rationality that you describe happening in Sydney? I.e., EA events are just co-hosted with LW Rationality and Transhumanism, and the level of non-influence of Rationality ideas is kept on par with Transhumanism?

It would indeed be very strange if people made the distinction, thought about the problem carefully, and advocated for distancing from 'small r' rationality in particular.

I would expect real cases to look like:
- someone is deciding on an EAGx conference program; a talk on prediction markets sounds subtly Rationality-coded, and is not put on the schedule
- someone applies to OP for funding to create a rationality training website; this is not funded because making the distinction between Rationality and rationality would require too much nuance
- someone is deciding what intro-level materials to link to; some links to LessWrong are not included

The crux is really what's at the end of my text - if people take steps like the above, and nothing else, they are also distancing themselves from the 'small r' thing.

Obviously, part of the problem for the separation plan is that the Rationality and Rationality-adjacent community actually made meaningful progress on rationality and rationality education; a funny example is here in the comments ... Radical Empath Ismam advocates for the split and suggests EAs should draw from the "scientific skepticism" tradition instead of Bay Rationality. Well, if I take that suggestion seriously and start looking for what could be good intro materials relevant to the EA project (which "debunking claims about telekinesis" advocacy content probably isn't) ... I'll find the New York City Skeptics and their podcast, Rationally Speaking, run by Julia Galef, who also later wrote The Scout Mindset. Excellent. And who also co-founded CFAR.

(crossposted from Twitter) Main thoughts:
1. Maps pull the territory 
2. Beware what maps you summon 

Leopold Aschenbrenner's series of essays is a fascinating read: there are a ton of locally valid observations and arguments. A lot of the content is the type of stuff mostly discussed in private. Many of the high-level observations are correct.

At the same time, my overall impression is that the set of maps sketched pulls toward existential catastrophe, and this is true not only for the 'this is how things can go wrong' part, but also for the 'this is how we solve things' part.

Leopold is likely aware of this angle of criticism, and deflects it with 'this is just realism' and 'I don't wish things were like this, but they most likely are'. I basically don't buy that claim.

FWIW ... in my opinion, retaining the property might have been a more beneficial decision. 

Also, I think some people working in the space should not update against plans like "have a permanent venue", but plausibly should make some updates about the "major donors". My guess is this almost certainly means Open Philanthropy, and they also likely had most of the actual power in this decision.

Before delving further, it's important to outline some potential conflicts of interest and biases:
- I co-organized or participated in multiple events at Wytham. For example, in 2023, ACS organized a private research retreat aimed at increasing the surface area between the Active Inference and AI Alignment communities. The event succeeded in attracting some of the best people from both sides and was pretty valuable for the direction of alignment research I care about, and the Oxford location was very useful for that. I regret that running events like that will be more difficult in the future.
- I have friends in all the orgs and sides involved - the Wytham project, Open Phil, EV, EAs who disapproved of the purchase, ...
- I lead an org funded by Open Philanthropy 
- I also lead an org which was fiscally sponsoring a different venue-purchase project, funded by an FTX regrant (I won't comment on that for legal reasons)

Also, without more details published, my current opinion is personal speculation, partially based on my reading of the vibes. 

My impression from a distance is that part of the decision was driven by a factor which I think should not be given undue weight, and by a factor where I likely disagree.

The factor where I possibly disagree is aesthetics. As far as I can tell, the currently preferred EA aesthetic is something closer to how the recent EAG Bay looked. At EAG Bay, my impression of the venue's vibes was... quite dystopian - the main space was a giant hall with unpleasant artificial lighting, no natural light, no colours, and endless rows of identical black tables occupied by people having endless rows of 1:1s. In some vague aesthetic space, the nearby vibe vectors are faceless bureaucracies, borgs, and sci-fi portrayals of heartless technocratic baddies. Also something about naive utilitarianism and the army.

Wytham seemed to stand in stark contrast to this aesthetic: the building was old and full of quirks. The vibes were more like an old Oxford college.

The factor which I would guess was part of the decision, and which I suspect had weight, was PR concerns. Wytham definitely got some negative coverage in traditional media, on social media, and on this forum.

What I dislike about this is that these concerns often seemed to be mostly at Simulacra levels 3 and 4, detached from the reality of running events in Oxford or from actual concern about costs. (Why do I think so? Because of the approximately zero negative PR, forum criticism, etc. that anyone or anything in the ecosystem gets for renting properties, even when they are more expensive per day or per person.)

To be clear:
- I don't think these were the only or main(?) factors.
- I would expect there also exists somewhere a spreadsheet with some estimates of the "value" of events at Wytham. If this is the case, I probably also disagree with some of the generative opinions about what's valuable.

Still, given the amount of speculative criticism the purchase of Wytham generated on the forum, it seems good for transparency to also express a critical view of the sale.

In my view, the basic problem with this analysis is that you probably can't lump all the camps together and evaluate them as one entity. Format, structure, leadership, and participants seem to have been very different.

Based on public criticisms of their work, and also on reading some documents about a case where we were deciding whether to admit someone to some event (and they forwarded their communication with CH). It's limited evidence, but still some evidence.

 

This is a bit tangential/meta, but looking at the comment counter makes me want to express gratitude to the Community Health Team at CEA. 

I think here we see a 'practical demonstration' of the counterfactuals of their work:
- an insane amount of attention sucked up by this
- the court of public opinion on the forums seems basically strictly worse on all relevant dimensions, like fairness, respect for privacy, or compassion for the people involved

As 'something like this' would quite often be the counterfactual to CH trying to deal with stuff ... it makes clear how much value they are creating by dealing with these problems, even if their process is imperfect.
