
AdamGleave


Comments (55)

This is an important point. There's a huge demand for research leads in general, but the people hiring & funding often have pretty narrow interests. If your agenda is legibly exciting to them, then you're in a great position. Otherwise, there can be very little support for more exploratory work. And I want to emphasize the legible part here: you can do something that's great & would be exciting to people if they understood it, but novel research is often time-consuming to understand, and these are time-constrained people who will not want to invest that time unless they have a strong signal it's promising.

A lot of this problem is downstream of very limited grantmaker time in AI safety. I expect this to improve in the near future, but not enough to fully solve the problem.

I do like the idea of a more research-agenda-agnostic research organization. I'm striving to make FAR more open-minded, but we can't support everything, so we're still pretty opinionated: we prioritize agendas that we're most excited by & which are a good fit for our research style (engineering-intensive empirical work). I'd like to see another org in this space set up to support a broader range of agendas, and am happy to advise people who'd like to start something like that.

As someone who recently set up an AI safety lab, I've certainly had success rates on my mind. It's a challenging space, but I think the reference class we're in might be better than it seems at first.

I think a big part of what makes succeeding as a for-profit tech start-up challenging is that so many other talented people are chasing the same good ideas. For every Amazon there are thousands of failed e-commerce start-ups. Clearly, Amazon did something much better than the competition. But what if Amazon didn't exist? What if the best remaining option was a company that was a little more expensive and had longer shipping times? I'd wager that company would still be highly successful.

Far fewer people are working on AI safety. That's a bad thing, but it does at least mean there's more low-hanging fruit to be picked. I agree with [Adam Binks](https://forum.effectivealtruism.org/posts/PJLx7CwB4mtaDgmFc/critiques-of-non-existent-ai-safety-labs-yours?commentId=eLarcd8no5iKqFaNQ) that academic labs might be a better reference class. But even there, AI safety has received far less attention than e.g. developing treatments for cancer or unifying quantum mechanics and general relativity.

So overall it's far from clear to me that it's harder to make progress on AI safety than to solve outstanding challenge problems in academia, or to build a $1 bn+ company.

Thanks Lucretia for sharing your experience. This cannot have been an easy topic to write about, and I'm deeply sorry these events happened to you. I really appreciated the clarity of the post and found the additional context beyond the TIME article extremely valuable.

I liked your suggestions for actions people can take on an individual level. Building on the idea of back-channel references, I wonder if there's value in having a centralised place to collect and aggregate potential red flags? Personal networks only go so far, and it's often useful to distinguish between isolated incidents and repeated patterns of behaviour. The CEA Community Health team partially serves this role within EA, but there's no equivalent in the broader Silicon Valley or AI communities.

For people not familiar with the UK: the London metropolitan area houses around 20% of the UK's population, and a disproportionate share of its economic and research activity. The London-Cambridge-Oxford triangle in particular is by far the research powerhouse of the country, although there are of course some good universities elsewhere (e.g. Durham, St Andrews in the north). Unfortunately, anywhere within an hour's travel of London is going to be expensive. Although I'm sure you can find somewhat cheaper options than Oxford, I expect the cost savings would be modest (noting Oxford is already cheaper than central London), and you'd likely lose something else (e.g. the location is harder to get to, or is a grungy commuter town).

I would be interested to hear whether CEA considered non-Oxford locations (there's an obvious natural bias given CEA is headquartered in Oxford), but it wouldn't surprise me if the benefit of CEA staff (who will often be running the events) having easy access to the venue genuinely outweighed the likely modest cost savings from locating elsewhere.

A 30-person office could not house the people attending, so you'd need to add the cost of a hotel/AirBnB/renting nearby houses if going down that route. Even taking into account that commercial real estate is usually more expensive than residential, I'd expect the attendee accommodation cost to exceed the office rental simply because people need more living space than they do conference space.

Additionally, in my experience retreats tend to go much better if everyone is on-site in one location: it encourages more spontaneous interaction outside of the scheduled sessions. There are also benefits to being outside a city center (otherwise it's too easy for people to get distracted and wander off).

Was Wytham a wise investment? I'm not sure; I'd love to see a calculation on it, and it probably comes down to things like the eventual utilization rate. But I think a fairer reference class would be "renting a conference center plus hotel" rather than "renting a 30-person office".
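To illustrate the kind of calculation I'd love to see, here's a rough back-of-envelope sketch. All the figures below are hypothetical placeholders, not actual costs for Wytham or any particular venue; the point is just that the comparison hinges on the utilization rate and per-head accommodation costs.

```python
# Hypothetical back-of-envelope: owning a retreat venue vs. renting a
# conference center plus hotel rooms for each event.
# All numbers below are illustrative placeholders, not real figures.

def annual_cost_of_owning(purchase_price, upkeep_per_year, opportunity_cost_rate=0.05):
    """Annualized cost of owning: upkeep plus the foregone return on the capital tied up."""
    return purchase_price * opportunity_cost_rate + upkeep_per_year

def annual_cost_of_renting(events_per_year, attendees, nights_per_event,
                           venue_hire_per_day, hotel_per_person_night):
    """Annualized cost of hiring a venue and booking accommodation for each event."""
    per_event = (venue_hire_per_day * nights_per_event
                 + hotel_per_person_night * attendees * nights_per_event)
    return per_event * events_per_year

own = annual_cost_of_owning(purchase_price=10_000_000, upkeep_per_year=300_000)
rent = annual_cost_of_renting(events_per_year=20, attendees=40, nights_per_event=3,
                              venue_hire_per_day=5_000, hotel_per_person_night=150)
print(f"Owning:  ~£{own:,.0f} per year")   # ~£800,000 with these placeholder inputs
print(f"Renting: ~£{rent:,.0f} per year")  # ~£660,000 with these placeholder inputs
```

With these made-up inputs the two options land in the same ballpark, and the answer flips depending on how many events per year the venue actually hosts, i.e. the utilization rate.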

Note I don't see any results for FTX Foundation or FTX Philanthropy at https://apps.irs.gov/app/eos/determinationLettersSearch, so it's possible it's not a 501(c)(3) (although it could still be a non-profit corporation).

Disclaimer: I do not work for FTX, and am basing this answer off publicly available information, which I have not vetted in detail.

Nick Beckstead, in the Future Fund launch post, described several entities (FTX Foundation Inc, DAFs) that funds will be disbursed out of: https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1?commentId=qtJ7KviYxWiZPubtY. I would expect these entities to be sufficiently capitalized to provide continuity of operations, although presumably recent events will have a major impact on their long-run scale.

IANAL, but I'd expect the funds in the foundation/DAFs to be fairly secure against bankruptcy or court proceedings. Bankruptcy courts can't just claw back money arbitrarily from other creditors, and limited liability corporations provide significant protection for directors. However, I'd expect assets donated to FTX Foundation or associated DAFs to be held largely in-kind (again, this is speculation, but it's standard practice for large philanthropic foundations) rather than liquidated for cash. These assets' mark-to-market value is likely a lot less than it was a week ago.

Hi Aaron, thanks for highlighting this. We inadvertently published an older version of the write-up that predates your feedback -- this has now been corrected. However, there are a number of areas in the revised version which I expect you'll still take issue with, so I wanted to share a bit of perspective. I think it's excellent you brought up this disagreement in a comment, and would encourage people to form their own opinion.

First, for a bit of context, my grant write-ups are meant to accurately reflect my thought process, including any reservations I have about a grant. They're not meant to present all possible perspectives -- I certainly hope that donors use other data points when making their decisions, including of course CES's own fundraising materials.

My understanding is that you have two main disagreements with the write-up: that I understate CES's ability to have an impact at the federal level, and that my cost-effectiveness estimate is lower than what you believe to be accurate.

On the federal level, my updated write-up acknowledges that "CES may be able to have influence at the federal level by changing state-level voting rules on how senators and representatives are elected. This is not something they have accomplished yet, but would be a fairly natural extension of the work they have done so far." However, I remain skeptical regarding the Presidential general election for the reasons stated: it'll remain effectively a two-candidate race until a majority of electoral college votes can be won by approval voting. I do not believe you ever addressed that concern.

Regarding the cost effectiveness, I believe your core concern was that we included your total budget as a cost, whereas much of your spending is allocated towards longer-term initiatives that do not directly win a present-day approval voting campaign. This was intended as a rough metric -- a more careful analysis would be needed to pin down the cost effectiveness. However, I'm not sure such an analysis would necessarily give a more favorable figure. You presumably went after jurisdictions where winning approval voting reform is unusually easy, so we might well expect your cost per vote to increase in future. If you have any internal analysis to share on that, I'm sure I and others would be interested to see it.
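To make the disagreement concrete: the rough metric in question is just total spending divided by votes won, whereas the version you prefer would subtract spending on longer-term initiatives. In symbols (where $B$ and $V$ are illustrative placeholders, not CES's actual accounts):

$$\text{cost per vote (rough)} = \frac{B_{\text{total}}}{V_{\text{won}}}, \qquad \text{cost per vote (narrow)} = \frac{B_{\text{total}} - B_{\text{long-term}}}{V_{\text{won}}}$$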

You could argue from "flash of insight" dynamics: scientific paradigm shifts do sometimes give rise to sudden progress. We certainly know contemporary techniques are vastly less sample- and compute-efficient than the human brain -- so there does exist some learning algorithm much better than what we have today. Moreover, there probably exists some learning algorithm that would give rise to AGI on contemporary (albeit expensive) hardware. For example, ACX notes there's a supercomputer that can do $10^{17}$ FLOPS vs the estimated $10^{16}$ FLOPS needed for a human brain. These kinds of comparisons are always a bit apples-to-oranges, but it does seem like compute is probably not the bottleneck (or won't be in 10 years) for a maximally-efficient algorithm.
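Spelling that arithmetic out explicitly (both figures are the rough order-of-magnitude estimates cited above, not precise numbers):

$$\frac{10^{17}\ \text{FLOPS (top supercomputer)}}{10^{16}\ \text{FLOPS (human brain estimate)}} \approx 10\times \text{ headroom}$$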

The nub of course is whether such an algorithm is plausibly reachable by a human flash of insight (rather than via e.g. detailed empirical study and refinement of a less efficient but working AGI). It's hard to rule out. How simple/universal we think the algorithm implemented by the human brain is provides one piece of evidence here -- the more complex and laden with inductive bias (e.g. innate behavior) it is, the less likely we are to come up with it ourselves. But even if the human brain is a Rube Goldberg machine, perhaps there exists some more straightforward algorithm that evolution did not happen upon.

Personally I'd put little weight on this. I have <10% probability on AGI in the next 10 years, and put no more than 15% on AGI ever being developed via something that looks like a sudden insight rather than more continuous progress. Notably, even if such an insight does happen soon, I'd expect it to take at least 3-5 years to gain recognition and be scaled up sufficiently to work. I do think it's probable enough that we should actively keep an eye out for promising new ideas that could lead to AGI, so we can be ahead of the game. For example, I think it's good that a lot of people working on AI safety were working on language models "before it was cool" (I was not one of them), although we've maybe now piled too much into that area.
