Quick takes
Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel I understand nearly all the implications, so I could end up being quite wrong. Indeed, I've already updated towards a better and more nuanced understanding of Aschenbrenner's points, in ways that have made me less concerned than I was to begin with. But I want to say publicly that the hedge fund idea makes me nervous.

Before I give my reasons, I want to say that it seems likely most of the relevant impact comes not from the hedge fund but from the influence the ideas from Situational Awareness have on policymakers and various governments, as well as the influence and power Aschenbrenner and any cohort he builds wield. This influence may come from this hedge fund or be entirely incidental to it. I mostly do not address this here, but it does make all of the below less important.

I also believe that some (though not all) of my concerns about the hedge fund are based on specific disagreements with Aschenbrenner's views. I discuss some of those below, but this is not a full rebuttal (and on many of the points of disagreement I don't yet feel confident in my view). There is still plenty to do to hash out the actual empirical questions at hand.

Why I am nervous

A hedge fund making AI-related investments means Aschenbrenner and his investors will gain financially from more, and accelerated, AGI progress. This seems to me to be one of the most important dynamics (excluding the points about influence above). It creates an incentive to push for more AGI progress, even at the cost of safety, which seems quite concerning. I will say that Leopold has a good track record here of turning down money: he declined to sign an NDA at OpenAI despite losing equity.

Aschenbrenner expresses strong support for the liberal democratic world maintaining a lead on AI advancement, and for ensuring that China does not reach an AI-based decisive military advantage over the United States[1]. The hedge fund, then, presumably aims both to support the goal of maintaining an AI lead over China and to profit from it. In my current view, this approach increases race dynamics and increases the risks of the worst outcomes (though my view on this has softened somewhat since my first draft, for reasons similar to what Zvi clarifies here[2]). I especially think that it risks unnecessary competition when cooperation, the best outcome, could still be possible. It seems notable, for example, that no Chinese version of the Situational Awareness piece has come to my attention; going first in such a game both ensures you are first and that the game is played at all.

It's also important that the investors (e.g. Patrick Collison) appear to be more focused on economic and technological development, and less concerned about risks from AI. The incentives of this hedge fund are therefore likely to point towards progress and away from slowing down for safety reasons.

There are other potential lines of thought here I have not yet fleshed out, including:

* The value of aiming to orient US government and military attention to AGI (seems like a huge move with unclear sign)
* The degree to which this move is unilateralist on Aschenbrenner's part
* How much money could be made, and how much power the relevant people (e.g. Aschenbrenner and his investors) will have through investment and being connected to important decisions.
* If a lot of money and/or power could be acquired, especially over AGI development, then there's a healthy default skepticism I think should be applied to their actions and decision-making.
* Specifics about Aschenbrenner himself. Different people in the same role would take very different actions, so specifics about his views, ways of thinking, and profile of strengths and weaknesses may be relevant.

Ways that the hedge fund could in fact be a good idea:

EA and AI causes could really use funder diversification. If Aschenbrenner intends to use the money he makes to support these issues, that could be very valuable (though I've certainly become somewhat more concerned about moonshot "become a billionaire to save the world" plans than I used to be).

The hedge fund could position Aschenbrenner to have a deep understanding of, and connections within, the AI landscape, making the think tank's outputs very good and causing important future decisions to be made better.

Aschenbrenner could of course be right about the value of the US government's involvement, maintaining a US lead, and the importance of avoiding Chinese military supremacy over the US. In that case, his achieving his goals would of course be good. Cruxes include the likelihood of international cooperation, the possibility of international bans, the probability of catastrophic outcomes from AI, and the likelihood of "muddling through" on alignment.

I'm interested in hearing takes, ways I could be wrong, fleshing out of my arguments, or any other thoughts people have relevant to this. Happy to have private chats in DMs to discuss as well.

1. ^ To be clear, Aschenbrenner wants that lead to exist in order to avoid a tight race in which safety and caution are thrown to the winds. If we can achieve that lead primarily through infosecurity (something he emphasizes), then the added risks are low; but I think the views expressed in Situational Awareness also imply the importance of staying technologically ahead of China as their AI research improves. This comes with precisely the risks of creating and accelerating a race of this nature. Additionally, his description of the importance of even a two-month lead implied to me that if the longer, more comfortable lead is lost, there will be strong reasons for the US to advance quickly so as to avoid China reaching superintelligence, and subsequent military dominance, first (which doesn't mean he thinks we should actually do this if the time came). This seems to fairly explicitly describe the tight race scenario. I don't think Aschenbrenner believes this would be a good situation to be in, but he nonetheless thinks that's the true picture.

2. ^ From Zvi's post: "He confirms he very much is NOT saying this: The race to ASI is all that matters. The race is inevitable. We might lose. We have to win. Trying to win won't mean all of humanity loses. Therefore, we should do everything in our power to win. I strongly disagree with this first argument. But so does Leopold. Instead, he is saying something more like this: ASI, how it is built and what we do with it, will be all that matters. ASI is inevitable. A close race to ASI between nations or labs almost certainly ends badly. Our rivals getting to ASI first would also be very bad. Along the way we by default face proliferation and WMDs, potential descent into chaos. The only way to avoid a race is (at least soft) nationalization of the ASI effort. With proper USG-level cybersecurity we can then maintain our lead. We can then use that lead to ensure a margin of safety during the super risky and scary transition to superintelligence, and to negotiate from a position of strength."
Hey everyone, my name is Jacques. I'm an independent technical alignment researcher (primarily focused on evaluations, interpretability, and scalable oversight). I'm now focusing more of my attention on building an Alignment Research Assistant, and I'm looking for people who would like to contribute to the project. This project will be private unless I say otherwise.

Side note: I helped build the Alignment Research Dataset ~2 years ago. It has been used at OpenAI (by someone on the alignment team), (as far as I know) at Anthropic for evals, and is now used as the backend for Stampy.ai.

If you are interested in potentially helping out (or know someone who might be!), send me a DM with a bit of your background and why you'd like to help out. To keep things focused, I may or may not accept.

I have written up the vision and core features for the project here. I expect the features to evolve, but the vision will likely remain the same. I'm currently working on some of the features and have delegated some tasks to others (tasks are in a private GitHub project board). I'm also collaborating with different groups. For now, the focus is to build core features that can be used individually but will eventually work together in the core product. In 2-3 months, I want to get it to a place where I know whether this is useful for other researchers and whether we should apply for additional funding to turn it into a serious project.
AI Safety Needs To Get Serious About Chinese Political Culture

I worry that Leopold Aschenbrenner's "China will use AI to install a global dystopia" take is based on crudely analogising the CCP to the USSR, or perhaps even to American cultural imperialism / expansionism, and isn't based on an even superficially informed analysis of either how China is currently actually thinking about AI, or what China's long-term political goals or values are.

I'm no more of an expert myself, but my impression is that China is much more interested in its own national security interests and its own ideological notions of the ethnic Chinese people and Chinese territory, so that beyond e.g. Taiwan there isn't an interest in global domination except to the extent that it prevents them being threatened by other expansionist powers. This, or a number of other heuristics / judgements / perspectives, could substantially change how we think about whether China would race for AGI, and/or be receptive to an argument that AGI development is dangerous and should be suppressed. China clearly has a lot to gain from harnessing AGI, but they have a lot to lose too, just like the West.

Currently, this is a pretty superficial impression of mine, so I don't think it would be fair to write an article yet. I need to do my homework first:

* I need to actually read Leopold's own writing about this, instead of forming impressions based on summaries of it.
* I've been recommended to look into what CSET and Brian Tse have written about China.
* Perhaps there are other things I should hear about this; feel free to make recommendations.

Alternatively, as always, I'd be really happy for someone who's already done the homework to write about this, particularly anyone with expertise in Chinese political culture or international relations. Even if I write the article, all it will really be able to be is an appeal to listen to experts in the field, or for one or more of those experts to step forward and give us some principles to spread for how to think clearly and accurately about this topic. I think having even undergrad-level textbook mainstream summaries of China's political mission and beliefs posted on the Forum could end up being really valuable if it puts those ideas more in the cultural and intellectual background of AI safety people in general.

This seems like a really crucial question that inevitably takes a central role in our overall strategy, and Leopold's take isn't the only one I'm worried about. I think people are already pushing national security concerns about China to the US Government in an effort to push e.g. stronger cybersecurity controls or export controls on AI. I think that's a noble end, but if the China angle becomes inappropriately charged we're really risking causing more harm than good.

(For the avoidance of doubt, I think the Chinese government is inhumane, and that all undemocratic governments are fundamentally illegitimate. I think exporting democracy and freedom to the world is a good thing, so I'm not against cultural expansionism per se. Nevertheless, assuming China wants to do it when they don't could be a really serious mistake.)
On the recent post on Manifest, there's been another instance of a large voting group (30-40ish [edit to clarify: 30-40ish karma, not 30-40ish individuals]) arriving and downvoting any progressive-valenced comments (there were upvotes and downvotes prior to this, but in a more stochastic pattern). This is similar to what occurred with the eugenics-related posts last year. Wanted to flag it to give a picture to later readers on the dynamics at play.
If you believe that:

* ASI might come fairly soon
* ASI will either fix most of the easy problems quickly, or wipe us out
* You have no plausible way of robustly shaping the outcome of the arrival of ASI for the better

does it follow that you should spend a lot more on near-term cause areas now? Are people doing this? I see some people argue for increasing consumption now, but surely this would apply even more so to donations to near-term cause areas?


Recent discussion

Jason commented on Ben Stewart's quick take

On the recent post on Manifest, there's been another instance of a large voting group (30-40ish [edit to clarify: 30-40ish karma, not 30-40ish individuals]) arriving and downvoting any progressive-valenced comments (there were upvotes and downvotes prior to this, but in ...


There really should be a limit on the quantity of strong upvotes/downvotes one can deploy on comments to a particular post -- perhaps both "within a specific amount of time" and "in total." A voting group of ~half a dozen users should not be able to exert that much control over the karma distribution on a post. To be clear, I view (at least strong) targeted "downvoting [of] any progressive-valenced comments" as inconsistent with Forum voting norms.

At present, the only semi-practical fix would be for users on the other side of the debate to go back through the comments, guess which ones had been the targets of the voting group, and apply strong upvotes hoping to roughly neutralize the norm-breaking voting behavior of the voting group. Both the universe in which karma counts are corrupted by small voting groups and the universe in which karma counts are significantly determined by a clash between voting groups and self-appointed defenders seem really undesirable.
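Purely as an illustration of what such a cap might look like, here is a minimal hypothetical sketch (my own addition, not anything the Forum actually implements; every name and threshold is made up), combining a total per-post limit with a rolling time window, as suggested above:

```python
# Hypothetical sketch of a per-user, per-post cap on strong votes.
# All names and limits are illustrative assumptions, not Forum code.
from collections import defaultdict
from datetime import datetime, timedelta

MAX_STRONG_VOTES_PER_POST = 5     # total cap per user on one post's comments (illustrative)
MAX_STRONG_VOTES_PER_WINDOW = 3   # cap within the rolling window (illustrative)
WINDOW = timedelta(hours=1)

# (user_id, post_id) -> timestamps of that user's strong votes on that post's comments
_strong_votes: dict[tuple[str, str], list[datetime]] = defaultdict(list)

def can_strong_vote(user_id: str, post_id: str, now: datetime | None = None) -> bool:
    """Return True if the user may cast another strong vote on this post's comments."""
    now = now or datetime.utcnow()
    votes = _strong_votes[(user_id, post_id)]
    recent = [t for t in votes if now - t <= WINDOW]
    return len(votes) < MAX_STRONG_VOTES_PER_POST and len(recent) < MAX_STRONG_VOTES_PER_WINDOW

def record_strong_vote(user_id: str, post_id: str, now: datetime | None = None) -> None:
    """Record a strong vote so later checks count it against both limits."""
    _strong_votes[(user_id, post_id)].append(now or datetime.utcnow())
```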

Just noting for anyone else reading the parent comment but not the screenshot, that said discussion was about Hacker News, not the EA Forum.

Also it was clearly not about Manifest. (Though it is nonetheless very cringe).

I wanted to share this update from Good Ventures (Cari and Dustin’s philanthropy), which seems relevant to the EA community.

Tl;dr: “while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have...


As the author of the comment linked for "criticizing OP for departing grantees too quickly," I'd note that (in my pre-Forum days) I expressed concern that GiveWell was ending the Standout Charity designation too abruptly. So I don't see my post here expressing potential concern about OP transitioning out of these subareas as evidence of singling out OP for criticism.

Habryka
There is a huge amount of work I am deeply grateful for that, as far as I can tell, is not "associated with Lightcone and its affiliates". Some examples:

* The historical impact of the Future of Humanity Institute
* The historical impact of MIRI
* Gwern's writing and work
* Joe Carlsmith's writing
* Basically all of Holden's intellectual output (even if I disagree with his leadership of OP and EA in a bunch of very important ways)
* Basically all of Owen Cotton-Barratt's contributions (bar the somewhat obvious fuckups that came out last year, though I think they don't outshine his positive contributions)
* John Wentworth's contributions
* Ryan Greenblatt's and Buck Shlegeris's contributions
* Paul Christiano's and Mark Xu's research (I have disagreements with Paul on EA leadership and governance things, but I think his research overall has been great)
* Rohin Shah's many great contributions over the years
* More broadly, the DeepMind safety team

There are also many others that I am surely forgetting. There is an enormous number of extremely talented, moral, and smart people involved in the extended rationality/EA/AI-x-risk ecosystem, and I am deeply grateful to many of them. It is rare that my relationship to someone is purely positive and completely devoid of grievance, as I think is normal for relationships, but there are many people for whom my assessment of their good vastly outshines the grievances I have.
Jacob Eliosoff
I'd say from a grantee pov, which is I guess a large fraction of the highly-engaged EA community (eg commenters on a post like this), Dustin/GV/OP have mostly appeared as an aggregate blob - "where the money comes from".  And I've heard so much frustration & disappointment about OP over the years!  (Along with a lot of praise of course.)  That said, I get the spirit of your comment, I wouldn't want to overstate how negative people are about Dustin or OP. And for the record I've spent considerable energy criticizing OP myself, though not quite so far as "frustration" or "disappointment".

If you knew about a potential large-scale risk that, although unlikely, could kill millions, would you warn society about it? You might say yes, but many people are reluctant to warn. 

In ten studies, Matt Coleman, Joshua Lewis, Christoph Winter, and I explored a psychological...


Thanks for this! You might also be interested in the results of FORESIGHT, and other work within the intelligence communities, about how to warn (H/T @Alex D). 

My understanding is that warning correctly is surprisingly hard; there are many times where (correct) warnings are misunderstood, not heeded, or not even really heard.

There's a temptation ex post to blame the warned for not paying sufficient attention, of course, but it'd also be good for people interested in raising alarms ("warners") to make their warnings as clear, loud, and unambiguous as p... (read more)

bruce commented on Kaya Guides Pilot Results

Kaya Guides runs a self-help course on WhatsApp to reduce depression at scale in low and middle-income countries. We help young adults with moderate to severe depression. Kaya currently operates in India. We are the world's first nonprofit implementer of Step-by-Step, the...


Congratulations on the pilot!

I just thought I'd flag some initial skepticism around the claim:

Our estimates indicate that next year, we will become 20 times as cost-effective as cash transfers.

Overall I expect it may be difficult for the uninformed reader to know how much they should update based on this post (if at all), but given you have acknowledged many of these (fairly glaring) design/study limitations in the text itself, I am somewhat surprised the team is still willing to make the extrapolation from 7x to 20x GD within a year. It also requires that... (read more)

huw
The best meta-analysis of deterioration (i.e. negative effects) rates in guided self-help (k = 18, N = 2,079) found that deterioration was lower in the intervention condition, although it did find a moderating effect whereby participants with low education didn't see this decrease in deterioration rates (but nor did they see an increase)[1]. So, on balance, I think it's very unlikely that any of the dropped-out participants were worse off for having tried the programme, especially since the counterfactual in low-income countries is almost always no treatment. Given that your interest is top-line cost-effectiveness, only counting completed participants for effect-size estimates likely underestimates cost-effectiveness if anything, since churned participants would be estimated at 0.

1. Ebert, D. D. et al. (2016). Does Internet-based guided self-help for depression cause harm? An individual participant data meta-analysis on deterioration rates and its moderators in randomized controlled trials. Psychological Medicine, 46, 2679–2693. ↩︎
Håkon Harnes
Very interesting, thanks for highlighting this!

Clearly, by definition, they are more capable than humans of efficiently using their resources for purposes, including at least the purpose of maximizing their own utility. Moreover, they are individually more capable of achieving full-scale cosmic colonization than they...


I think this is an interesting post. I don't agree with the conclusion, but I think it's a discussion worth having. In fact, I suspect that this might be a crux for quite a few people in the AI safety community. To contribute to the discussion, here are two other perspectives. These are rough thoughts and I could have added a lot more nuance.

Edit: I just noticed that your title includes the word "sentient". Hence, my second perspective is not as applicable anymore. My own take that I offer at the end seems to hold up nonetheless.
 

  1. If we develop an
... (read more)
harfe
I strongly disagree. I think human extinction would be bad. Not every utility function is equally desirable. For example, an ASI that maximizes the number of paperclips in the universe would be a bad outcome. Most people here do adopt anthropocentric values, in that they think human flourishing would be more desirable than a vast amount of paperclips.

Say that we have a set of options, such as wild animal welfare interventions.

Say also that you have two axes along which you can score those interventions: popularity (how much people will like your intervention) and effectiveness (how much the intervention...

Stefan_Schubert
@Lucius Caviola and I discuss such issues in Chapter 9 of our recent book. If I understand your argument correctly I think our suggested solution (splitting donations between a highly effective charity and the originally preferred "favourite" charity) amounts to what you call a barbell strategy.

Huh, the convergent lines of thought are pretty cool!

Your suggested solution is indeed what I'm also gesturing towards. A "barbell strategy" works best if we only have a few dimensions we don't want to make comparable, I think.

(AFAIU it grows only linearly, but we still want to perform some sampling of the top options to avoid the winner's curse?)

niplav
I think this link is informative: Charitable interventions appear to be (weakly) lognormally distributed in cost-effectiveness. In general, my intuition is that "charities are lognormal, markets are normal", but I don't have a lot of evidence for the second part of the sentence.
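To make the difference between those two regimes concrete, here is a minimal, purely illustrative sketch (my own addition, with arbitrary parameters, not anything from the linked analysis): under a lognormal distribution of cost-effectiveness the top few percent of options carry most of the total value, while under a normal distribution they carry roughly their proportional share.

```python
# Illustrative comparison of how concentrated total effectiveness is in the top 1%
# of options under a heavy-tailed (lognormal) vs. thin-tailed (normal) distribution.
# Parameters are arbitrary and chosen only to show the qualitative contrast.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

lognormal = rng.lognormal(mean=0.0, sigma=2.0, size=n)              # "charities are lognormal"
normal = np.clip(rng.normal(loc=10.0, scale=2.0, size=n), 0, None)  # "markets are normal"

def top_share(values: np.ndarray, frac: float = 0.01) -> float:
    """Fraction of total value contributed by the top `frac` of options."""
    k = max(1, int(len(values) * frac))
    top = np.sort(values)[-k:]
    return float(top.sum() / values.sum())

print(f"Lognormal: top 1% of options hold {top_share(lognormal):.0%} of total effectiveness")
print(f"Normal:    top 1% of options hold {top_share(normal):.0%} of total effectiveness")
```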

(Cross-posted from my website. Audio version here, or search for "Joe Carlsmith Audio" on your podcast app.)

This is the final essay in a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their...


(very minor) as a native Chinese speaker, associating "yang" 阳 (literally, sun) with black feels really discordant/unnatural to me. 

This post is for EAs at the start of their careers who are considering which organisations to apply to, and their next steps in general.

Conclusion up front: It can be really hard to get that first job out of university. If you don’t get your top picks, your less exciting...


One additional reason:

If you get your (initial) training from a neutral-ish impact organisation, like some management consulting or tech companies, and then move on to a high-impact job, you can add value right away with lower 'training costs' for the high-impact org = more impact.

All else equal, an EA org whose staff have 1-3 years of (non-EA) job experience can achieve more impact more quickly than one with partly inexperienced staff.

That said, some things such as good epistemics or high moral integrity may be easier to learn at EA orgs (though they can definitely also be learned elsewhere).

Manuel Allgaier
I've supported >100 people in their career plans, and this seems like pretty solid but underappreciated advice. Thanks for writing it up!

I think I made that mistake too. I went for EA jobs early in my career (running EA Berlin and then EA Germany 2019-22, funded by CEA grants). There were some good reasons: this work seemed particularly neglected in 2019-21, it seemed a good fit, and three senior people I had in-depth career 1-1s with all recommended it. I learned a lot, met many inspiring people, and I think I did have some significant positive impact as well, on the community overall (it grew and professionalized) and on some individual members' careers.

However, I made a lot of mistakes too, had slow feedback loops (no manager, little mentorship), and I'm pretty sure I would have learned many (soft) skills faster and built overall better career capital (both in- and outside of EA) if I had first spent 1-2 years in management consulting or in a fast-growing (non-EA) tech company with good management, and then gone on to direct EA work.

July 1-7 will be AI Welfare Debate Week on the EA Forum. We will be discussing the debate statement: “AI welfare[1] should be an EA priority[2].” The Forum team will be contacting authors who are well-versed in this topic to post, but we also welcome...
