Fired from OpenAI's Superalignment team, Aschenbrenner now runs a fund dedicated to funding AGI-focused startups, according to The Information. 

"Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collision and Stripe president John Collision, according to his personal website.

In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”

“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said."

What happened to his concerns over safety, I wonder? 

He lays out the relevant part of his perspective in "The Free World Must Prevail" and "Superalignment" in his recent manifesto.

Buck, do you have any takes on how good this seems to you / how good the arguments in the manifesto for doing this work seem to you? (No worries if not, or if you don't want to discuss this publicly.)

I don’t think he says anything in the manifesto about why AI is going to go better if he starts a “hedge fund/think tank”.

I haven’t heard a strong case for him doing this project but it seems plausibly reasonable. My guess is I’d think it was a suboptimal choice if I heard his arguments and thought about it, but idk.

My current understanding is that he believes extinction or similar from AI is possible, at 5% probability, but that this is low enough that concerns about stable totalitarianism are slightly more important. Furthermore, he believes that AI alignment is a technical but solvable problem. More here.

I am far more pessimistic than him about extinction from misaligned AI systems, but I think it's quite sensible to try to make money from AI even in worlds with a high probability of extinction, since the counterfactual market signal your investment provides moves the market far less than the realizable benefit of being richer at such a crucial time.

I am sympathetic to this position when it comes to your own money. Like, if regular AI safety people put a large fraction of their savings into NVIDIA stock, that is understandable to me.

But the situation with Aschenbrenner starting an AGI investment firm is different. He is directing not (just) his own money but his investors' much larger capital into AGI companies. So the majority of the wealth gain will not end up in Aschenbrenner's hands but will belong to the investors. This is different from a small-scale shareholder who keeps all the gains (minus some tax) of his stock ownership.

But even if Aschenbrenner's plan is to invest in the world-destroying technology in order to become richer later when it matters, it would be nice for him to say so and to explain how he intends to use the money later. My guess, however, is that this is not what Aschenbrenner actually believes. He might just be in favour of accelerating these technologies.

If you are concerned about extinction and stable totalitarianism, 'we should continue to develop AI but the good guys will have it' sounds like a very unimaginative and naïve solution.

+1. 

(I feel slightly bad for pointing this out.) It's also, perhaps not too coincidentally, the sort of general belief that's associated with giving Leopold more power, compared to many other possible beliefs one could have in this area.

What would the imaginative solution be? 

Agreed. Getting a larger share of the pie (without breaking rules during peacetime) might be 'unimaginative' but it's hardly naïve. It's straightforward and has a good track record of allowing groups to shape the world disproportionately.

I'm a bit confused. I was just calling Aschenbrenner unimaginative, because I think trying to avoid stable totalitarianism while bringing about the conditions he identified for stable totalitarianism lacked imagination. I think the onus is on him to be imaginative if he is taking what he identifies as extremely significant risks, in order to reduce those risks. It is intellectually lazy to claim that your very risky project is inevitable (in many cases by literally extrapolating straight lines on charts and saying 'this will happen') and then work to bring it about as quickly and as urgently as possible.

Just to make this clear: by corollary, I would support an unimaginative solution that doesn't involve taking these risks, such as not building AGI. I think the burden of imagination is higher if you are taking more risks, because you could use that imagination to come up with a win-win solution.

In today's Bulletin of the Atomic Scientists is this headline: "Trump has a strategic plan for the country: Gearing up for nuclear war"

https://thebulletin.org/2024/07/trump-has-a-strategic-plan-for-the-country-gearing-up-for-nuclear-war/

Does EA have a plan to address this? If not, now would be a good time.  

I published a short piece on Yann LeCun's post about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low Probability-High Impact events and Zero Probability-High Impact events.

https://www.insideaiwarfare.com/yann-versus/

This is an interesting #OpenPhil grant. $230K for a cyber threat intelligence researcher to create a database that tracks instances of users attempting to misuse large language models.

https://www.openphilanthropy.org/grants/lee-foster-llm-misuse-database/

Will user data be shared with the user's permission? How will an LLM determine the user's intent when differentiating between purposefully harmful entries and user error, safety testing, independent red-teaming, playful entries, etc.? If a user is placed in the database, is she notified? How long do you stay in LLM prison?

I did send an email to OpenPhil asking about this grant, but so far I haven't heard anything back.
