
Habryka

CEO @ Lightcone Infrastructure
21716 karma · Joined · Working (6-15 years)

Bio

Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. Often helping the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1401

Topic contributions: 1

I donate more to Lightcone than my salary, so it doesn't really make any sense for me to receive a salary, since that just means I pay more in taxes. 

I of course donate to Lightcone because Lightcone doesn't have enough money. 

Lightspeed Grants and the S-Process paid $20k honorariums to 5 evaluators. In addition, running the round probably cost around 8 months of Lightcone staff time, a substantial chunk of that being my own time, which as CEO is generally at a premium (I would value it organizationally at ~$700k/yr on the margin, with increasing marginal costs, though to be clear, my actual salary is currently $0). It also had some large diffuse effects on organizational attention.

This makes me think it would be unsustainable for us to pick up running Lightspeed Grants rounds without something like ~$500k/yr of funding for it. We distributed ~$10MM in the round we ran.
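To make the rough arithmetic concrete, here's a back-of-the-envelope version of where a number like that comes from (the blended staff-time rate is an illustrative assumption for this sketch, not an actual accounting figure):

```python
# Back-of-the-envelope version of the cost estimate above.
# The blended staff-time rate is an illustrative assumption.
evaluator_honoraria = 5 * 20_000  # $20k honorarium to each of 5 evaluators

staff_months = 8               # rough Lightcone staff time for the round
blended_annual_rate = 500_000  # assumed blended $/yr for that time (CEO time
                               # valued much higher, other time lower)
staff_time_cost = staff_months / 12 * blended_annual_rate

total = evaluator_honoraria + staff_time_cost
print(f"Honoraria:  ${evaluator_honoraria:,.0f}")
print(f"Staff time: ${staff_time_cost:,.0f}")
print(f"Total:      ${total:,.0f} (vs ~$10MM distributed, excluding diffuse attention costs)")
```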

Some of my thoughts on Lightspeed Grants from what I remember: I don't think it's ever a good idea to name something after the key feature everyone else in the market is failing at. It leads to particularly high expectations and is really hard to get away from (e.g. OpenAI). The S-process seemed like a strange thing to include for something intended to be fast. As far as I know the S-process has never been done quickly.

You seem to be misunderstanding both Lightspeed Grants and the S-Process. The S-Process and Lightspeed Grants both feature speculation/venture grants, which enable a large group of people to make fast unilateral grants. They are by far the fastest grant-decision mechanism that I know of out there, and they have been going strong for multiple years now. If you need funding quickly, an SFF speculation grant is by far the best bet, I think.

It ended up taking much longer than expected for decisions but still pretty quick overall.

I think we generally stayed within our communicated timelines, or only mildly extended them. We did also end up getting more money, which caused us to reverse some rejections afterwards, but we did get back to everyone within 2 weeks on whether they would get a speculation grant, and communicated the round decisions at the deadline (or maybe a week or two later; I remember there was a small hiccup).

I’m interested to know why we haven’t seen Lightspeed Grants again?

Ironically, one of the big bottlenecks was funding. OpenPhil was one funder who told us they wouldn't fund us for anything but our LW work (and that funding also soon disappeared), and funding coordination work doesn't seem to pay well. Distributing millions of dollars also didn't combine very well with being sued by FTX.

I am interested in picking it back up again, but it is also not clear to me how sustainable working on that is.

A non-trivial fraction of our most valuable grants require very short turn-around times, and more broadly, there is a huge amount of variance in how much time it takes to evaluate different kinds of applications. This makes a round model hard, since you both end up getting back much later than necessary to applications that were easy to evaluate, and have to reject applications that could be good but are difficult to evaluate. 

I do actually have trouble finding a good place to link to. I'll try to dig one up in the next few days.

You cannot spend the money you obtain from a loan without losing the means to pay it back. You can borrow a little against your future labor income, but the normal recourse when that fails is to declare personal bankruptcy, so lenders have little assurance of being repaid.

(This has been discussed many dozens of times on both the EA Forum and LessWrong. There exist no loan structures as far as I know that allow you to substantially benefit from predicting doom.)
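As a toy illustration of why the bankruptcy point bites (all numbers made up): a lender who expects to be repaid only in non-doom worlds, and to recover essentially nothing otherwise, has to charge a rate that prices the default risk back in, which mostly cancels out whatever edge the borrower hoped to get from predicting doom:

```python
# Toy numbers, purely illustrative: a lender with little recourse (personal
# bankruptcy) has to price the doom/default risk back into the loan.
p_no_doom = 0.5        # assumed probability the lender is ever repaid
risk_free_rate = 0.05  # assumed alternative return available to the lender

# Lender's break-even condition: p_no_doom * (1 + r) = 1 + risk_free_rate
required_rate = (1 + risk_free_rate) / p_no_doom - 1
print(f"Loan rate the lender needs just to break even: {required_rate:.0%}")  # ~110%
```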

Most concrete progress on worst-case AI risks — e.g. arguably the AISIs network, the draft GPAI code of practice for the EU AI Act, company RSPs, the chip and SME export controls, or some lines of technical safety work

My best guess (though very much not a confident guess) is that the aggregate of these efforts is net-negative, and I think that is correlated with that work having happened in backrooms, often in contexts where people were unable to talk about their honest motivations. It sure is really hard to tell, but I really want people to consider the hypothesis that a bunch of these behind-the-scenes policy efforts have been backfiring, especially ex post with a more Republican administration.

The chip and SME export controls seem to currently be one of the drivers of the escalating U.S.–China arms race, the RSPs are, I think, largely ineffectual and have slowed the speed at which we could get regulation that is not reliant on lab supervision, and the overall EU AI Act seems very bad, though I think the effect of the marginal help with drafting is of course much harder to estimate.

Missing from this list: the executive order, which I think has retrospectively revealed itself as a major driver of polarization of AI-risk concerns, by strongly conflating near-term risks with extinction risks. It did also do a lot of great stuff, though my best guess is we'll overall regret it (but on this I feel the least confident).

I agree that a ton of concrete political implementation work needs to be done, but I think the people working in the space who have chosen to do that work in a way that doesn't actually engage in public discourse have made mistakes, and this has had large negative externalities. 

See also: https://www.commerce.senate.gov/services/files/55267EFF-11A8-4BD6-BE1E-61452A3C48E3

Again, really not confident here, and I agree that there is a lot of implementation work to be done that is not glorious and flashy, but I think the way a bunch of it has been done, in a kind of conspiratorial and secretive fashion, has been counterproductive.[1]

Ultimately as you say the bottleneck for things happening is political will and buy-in that AI systems pose a serious existential risk, and I think that means a lot of implementation and backroom work is blocked and bottlenecked on that public argumentation happening. And when people try to push forward anyways, they often end up forced to conflate existential risk with highly politicized short-term issues that aren't very correlated with the actual risks, and backfire when the political winds change and people update.

  1. ^

That's their... headline result?  "We do not find, however, any evidence for a systematic link between the scale of refugee immigration (and neither the type of refugee accommodation or refugee sex ratios) and the risk of Germans to become victims of a crime in which refugees are suspects" (pg. 3), "refugee inflows do not exert a statistically significant effect on the crime rate" (pg. 21), "we found no impact on the overall likelihood of Germans to be victimized in a crime" (pg. 31), "our results hence do not support the view that Germans were victimized in greater numbers by refugees" (pg. 34).

I haven't read their paper, but the chart sure seems like it establishes a clear correlation. Also, the quotes you cite seem to be saying something else, claiming that "greater inflow was not correlated with greater crime", which is different from "refugees were not particularly likely to commit crimes against Germans". Indeed, at least on a quick skim of the data that Larks linked, that statement seems clearly false (though it might still be true that for some reason greater immigration inflow is not necessarily correlated with greater crime, since it might lower crime in other ways, though my best guess is that claim was chosen as a result of a garden-of-forking-paths methodology).

One reasonable compromise model between these two perspectives is to tie the discount rate to the predicted amount of change that will happen at a given point in time. This could lead to a continuously increasing discounting rate for years that lead up to and include AGI, but then eventually a falling discounting rate for later years as technological progress becomes relatively saturated.

Yeah, this is roughly the kind of thing I would suggest if one wants to stay within the discount rate framework.
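A minimal sketch of what that could look like, with the discount rate tied to a bell-shaped "expected change" curve that peaks around an assumed AGI year (the curve shape, the 2035 peak, and all other parameters are illustrative assumptions, not forecasts):

```python
# Minimal sketch of the compromise model above: a time-varying discount rate
# tied to how much change is expected in each year.
import math

def expected_change(year, agi_year=2035, width=10.0):
    # Expected amount of change peaks around an assumed AGI year and
    # falls off afterwards as progress saturates.
    return math.exp(-((year - agi_year) ** 2) / (2 * width ** 2))

def discount_rate(year, base=0.02, scale=0.10):
    # Small baseline rate plus a term proportional to expected change,
    # so the rate rises into AGI and falls in later years.
    return base + scale * expected_change(year)

def discount_factor(target_year, start_year=2025):
    # Compound the year-by-year rates to discount a payoff in target_year.
    factor = 1.0
    for y in range(start_year, target_year):
        factor /= 1 + discount_rate(y)
    return factor

for y in (2030, 2035, 2045, 2060):
    print(y, f"rate={discount_rate(y):.3f}", f"factor={discount_factor(y):.3f}")
```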
