Caruso

Author and researcher - Cyber Warfare
16 karma · Working (15+ years)
insideaiwarfare.com

Bio

I'm Jeff Caruso, an author and researcher focusing on cyber warfare and AI. The third edition of my book "Inside Cyber Warfare" (O'Reilly, 2009, 2011, 2024) will be out this fall. I was a Russia subject-matter expert contracted to the CIA's Open Source Center, have provided numerous cyber briefings to U.S. government agencies, and have been a frequent lecturer at the U.S. Air Force Institute of Technology and the U.S. Army War College.

How others can help me

I'm looking for funding for an AI consciousness lab.

How I can help others

Anything, although I probably won't be able to provide any satisfying answers. There's very little that I'm certain about. 


Comments

Fired from OpenAI's Superalignment team, Aschenbrenner now runs an investment fund dedicated to backing AGI-focused startups, according to The Information.

"Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collision and Stripe president John Collision, according to his personal website.

In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”

“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said."

What happened to his concerns over safety, I wonder? 

I published a short piece on Yann LeCun's post about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low Probability - High Impact events and Zero Probability - High Impact events.

https://www.insideaiwarfare.com/yann-versus/
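A rough way to formalize the distinction (my framing here, not necessarily how the piece puts it): an event with probability p and impact X carries an expected loss of

E[loss] = p · X

which is exactly zero only when p = 0. For any p > 0, however small, a large enough X keeps the expected loss non-negligible, which is why low-probability, high-impact risks can't be waved away the way genuinely zero-probability ones can.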

Thanks for the link to Open Asteroid Impact. That's some really funny satire. :-D

This is an interesting #OpenPhil grant. $230K for a cyber threat intelligence researcher to create a database that tracks instances of users attempting to misuse large language models.

https://www.openphilanthropy.org/grants/lee-foster-llm-misuse-database/

Will user data be shared with the user's permission? How will an LLM determine a user's intent when differentiating purposeful harmful entries from user error, safety testing, independent red-teaming, playful entries, etc.? If a user is placed in the database, is she notified? How long do you stay in LLM prison?

I did send an email to OpenPhil asking about this grant, but so far I haven't heard anything back.
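To make those questions concrete, here's a purely hypothetical sketch of what a single record in such a database might need to capture. Every field and label below is my invention for illustration, not anything from the actual grant:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Intent(Enum):
    """Hypothetical intent labels -- exactly the distinctions an
    automated classifier would struggle to draw reliably."""
    MALICIOUS = "malicious"
    USER_ERROR = "user_error"
    SAFETY_TESTING = "safety_testing"
    RED_TEAMING = "red_teaming"
    PLAYFUL = "playful"

@dataclass
class MisuseRecord:
    """One illustrative entry in a hypothetical LLM-misuse database."""
    user_id: str                # raises the consent question: shared with permission?
    prompt_excerpt: str         # the flagged input itself
    classified_intent: Intent   # who, or what, assigns this label, and how?
    recorded_at: datetime
    user_notified: bool         # is the user told they've been listed?
    retention_days: int         # i.e., how long do you stay in LLM prison?
```

Even in this toy version, most of the hard problems live in `classified_intent`, `user_notified`, and `retention_days`, which is what I was hoping the grant description would address.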

I love this series and I'm sorry to see that you haven't continued it. The rapid growth of AI Safety organizations and the amount of insider information and conflicts of interest are kind of mind-boggling. There should be more of this type of informed reporting, not less.