In this report, alongside information about our latest grants, we have further news to share about the Long-Term Future Fund.
Changes to Fund management
We welcome two new Fund managers to our team: Adam Gleave and Asya Bergal. Adam is a PhD candidate at UC Berkeley, working on technical AI safety with the Center for Human-Compatible AI (CHAI). Asya is a researcher at AI Impacts and writes for the AI Alignment Newsletter.
We decided to seek new Fund managers with strong backgrounds in AI safety and strategy research to increase our capacity to carefully evaluate grants in these areas, especially given that Alex Zhu left the Long-Term Future Fund (LTFF) early this year. We expect the number of high-quality grant applications in these areas to increase over time.
The new Fund managers were appointed by Matt Wage, the Fund's chairperson, after a search process with consultation from existing Fund managers and advisors. They both trialed for one grant round (Adam in March, Asya in July) before being confirmed as permanent members. We are excited about their broad-ranging expertise in longtermism, AI safety, and AI strategy.
Adam is a PhD candidate advised by Stuart Russell and has previously interned at DeepMind. He has published several safety-relevant machine learning papers. Over the past six years, he has been deeply involved with the effective altruism community (running an EA group at the University of Cambridge, earning to give as a quantitative trader) and has demonstrated careful judgment on a broad range of longtermist prioritization questions (see, e.g., his donor lottery report).
Asya brings a broad-ranging AI background to the Fund. At AI Impacts, she has worked on a variety of empirical projects and developed novel perspectives on far-ranging strategy questions (see, e.g., this presentation on AI timelines). She has demonstrated technical proficiency in AI, writing summaries and opinions of papers for the AI Alignment Newsletter. More recently, she has been working on hardware forecasting questions at the Centre for the Governance of AI. Asya has also researched a broad range of longtermist prioritization questions, including at the Open Philanthropy Project (where she looked into whole brain emulation, animal welfare, and biosecurity).
Adam and Asya both had very positive external references, and both appear to be esteemed and trustworthy community members. During the trial, they demonstrated careful judgment and deep engagement with the grant applications. We are excited to have them on board and believe their contributions will further improve the quality of our grants.
In other news, Jonas Vollmer recently joined CEA as Head of EA Funds. He previously served as an advisor to the LTFF. In his new role, he will make decisions on behalf of EA Funds and explore longer-term strategy for the entire EA Funds project, including the LTFF. EA Funds may be spun out of CEA's core team within 6–12 months.
Other updates
Long-Term Future Fund
- We plan to continue to focus on grants to small projects and individuals rather than large organizations. We think the Fund has a comparative advantage in this area: Individual donors cannot easily grant to researchers and small projects, and large grantmakers such as the Open Philanthropy Project are less active in the area of small grants.
- We would like to take some concrete actions to increase transparency around our grantmaking process, partly in response to feedback from donors and grantseekers. Over the next few months, we plan to publish a document outlining our process and run an “Ask Me Anything” session with our new Fund team on the Effective Altruism Forum.
- We are tentatively considering expanding into more active grantmaking, which would entail publicly advertising the types of work we would be excited to fund (details TBD).
EA Funds
Earlier this year, EA Funds ran a donor survey eliciting feedback on the Long-Term Future Fund.
- Overall, the LTFF received a relatively low Net Promoter score: when asked “How likely is it that you would recommend the Long-Term Future Fund to a friend or colleague?”, donors responded with an average of 6.5 on a scale from 1 to 10. However, some donors gave a low score despite being satisfied with the Fund because their friends and colleagues are generally uninterested in longtermism. In future surveys, EA Funds intends to ask questions that more directly address how donors themselves feel about the LTFF.
- Some donors were interested in how the Fund addresses conflicts of interest, so EA Funds has been developing a conflict of interest policy and intends to have stricter rules around grants to the personal acquaintances of Fund managers.
- Some donors were surprised by the Fund’s large number of AI risk-focused grants. While the Fund managers are in favor of these grants, we want to make sure that donors are aware of the work they are supporting. As a result, we changed the EA Funds donation interface such that donors have to opt into supporting their chosen Funds. (Previously, the website suggested a default allocation for each Fund.) EA Funds also plans to offer a donation option focused on climate change for interested donors.
- Some donors expressed a preference for more legible grants (e.g., to established, reputable institutions). EA Funds will consider offering a separate donation option for those donors; while we are still developing our plans, this might take the form of a separate Fund that primarily supports Open Philanthropy’s longtermist grant recipients.
Grant recipients
Each grant recipient is followed by the size of the grant and a one-sentence description of their project. All of these grants have been paid out.
Grants made during our standard cycle:
- Robert Miles ($60,000): Creating quality videos on AI safety, and offering communication and media support to AI safety orgs.
- Center for Human-Compatible AI ($75,000): Hiring a research engineer to support CHAI’s technical research projects.
- Joe Collman ($25,000): Developing algorithms, environments and tests for AI safety via debate.
- AI Impacts ($75,000): Answering decision-relevant questions about the future of artificial intelligence.
- Alexis Carlier ($5,000): Surveying experts on AI risk scenarios and working on other projects related to AI safety.
- Gavin Taylor ($30,000): Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral.
- Center for Election Science ($50,000): Supporting the use of better voting methods in U.S. elections.
- Charlie Rogers-Smith ($7,900): Supporting research and job applications related to AI alignment.
Off-cycle grants:
- Claudia Shi ($5,000): Organizing a “Human-Aligned AI” event at NeurIPS.
- Gopal Sarma ($5,000): Organizing a workshop aimed at highlighting recent successes in the development of verified software.
- Alex Turner ($30,000): Understanding when and why proposed AI designs seek power over their environment.
- Cambridge Summer Programme in Applied Reasoning (CaSPAR) ($26,300): Organizing immersive workshops on meta skills and x-risk for STEM students at top universities.
Grant reports
Oliver Habryka
Robert Miles ($60,000)
Creating quality videos on AI safety, and offering communication and media support to AI safety orgs.
We’ve funded Rob Miles in the past, and since Rob’s work has continued to find traction and maintain a high quality bar, I am viewing this mostly as a grant renewal. Back then, I gave the following rationale for the grant:
The videos on [Rob's] YouTube channel pick up an average of ~20k views. His videos on the official Computerphile channel often pick up more than 100k views, including for topics like logical uncertainty and corrigibility (incidentally, a term Rob came up with).
More things that make me optimistic about Rob’s broad approach:
- He explains that AI alignment is a technical problem. AI safety is not primarily a moral or political position; the biggest chunk of the problem is a matter of computer science. Reaching out to a technical audience to explain that AI safety is a technical problem, and thus directly related to their profession, is a type of ‘outreach’ that I’m very happy to endorse.
- He does not make AI safety a politicized matter. I am very happy that Rob is not needlessly tribalising his content, e.g. by talking about something like “good vs bad ML researchers”. He seems to simply portray it as a set of interesting and important technical problems in the development of AGI.
- His goal is to create interest in these problems from future researchers, and not to simply get as large of an audience as possible. As such, Rob’s explanations don’t optimize for views at the expense of quality explanation. His videos are clearly designed to be engaging, but his explanations are simple and accurate. Rob often interacts with researchers in the community (at places like DeepMind and MIRI) to discuss which concepts are in need of better explanations. I don’t expect Rob to take unilateral action in this domain.
Rob is the first skilled person in the X-risk community working full-time on producing video content. Being the very best we have in this skill area, he is able to help the community in a number of novel ways (for example, he’s already helping existing organizations produce videos about their ideas).
Since then, the average views on his videos appear to have quintupled, usually eclipsing 100k views on YouTube. While I have a lot of uncertainty about what level of engagement those views represent, it would not surprise me if more than 15% of people introduced to the topic of AI alignment in the last year discovered it through Rob’s YouTube channel. This would be a substantial figure, and I also consider Rob’s material one of the best ways to be introduced to the topic (in terms of accurately conveying what the field is about).
In most worlds where this grant turns out to be bad, it is because it is currently harmful for the field of AI alignment to grow rapidly: rapid growth might make the field harder to coordinate, cause more bad ideas to become popular, or lead too many people to join who don't have sufficient background or talent to make strong contributions. I think it is relatively unlikely that we are in that world, and I continue to think that the type of outreach Rob is doing is quite valuable, but I still think there is at least a 5% chance that it is bad for the AI alignment field to grow right now.
I trust Rob to think about these considerations and to be careful about how he introduces people to the field; thus, I expect that if we were to end up in a world where this kind of outreach is more harmful than useful, Rob would take appropriate action.
Center for Human-Compatible AI ($75,000)
Hiring a research engineer to support CHAI’s technical research projects.
Over the last few years, CHAI has hosted a number of people who I think have made very high-quality contributions to the AI alignment problem, most prominently Rohin Shah, who has been writing and updating the AI Alignment Newsletter and has also produced a substantial number of other high-quality articles, like this summary of AI alignment progress in 2018-2019.
Rohin is leaving CHAI soon, and I'm unsure about CHAI's future impact, since Rohin made up a large fraction of the impact of CHAI in my mind.
I have read a number of papers and articles from other CHAI grad students, and I think that the overall approach I see most of them taking has substantial value, but I also maintain a relatively high level of skepticism about research that tries to embed itself too closely within the existing ML research paradigm. That paradigm, at least in the past, hasn't really provided any space for what I consider the most valuable safety work (though I think most other members of the Fund don't share my skepticism). I don't think I have the space in this report to fully explain where that skepticism is coming from, so the below should only be seen as a very cursory exploration of my thoughts here.
A concrete example of the problems I have seen (chosen for its simplicity more than its importance) is that, on several occasions, I've spoken to authors who, during the publication and peer-review process, wound up having to remove some of their papers' most important contributions to AI alignment. Often, they also had to add material that seemed likely to confuse readers about the paper's purpose. One concrete class of examples: adding empirical simulations of scenarios whose outcome is trivially predictable, where the specification of the scenario adds a substantial volume of unnecessary complexity to the paper, while distracting from the generality of the overall arguments.
Another concern: Most of the impact that Rohin contributed seemed to be driven more by distillation and field-building work than by novel research. As I have expressed in the past (and elsewhere in this report), I believe distillation and field-building to be particularly neglected and valuable at the margin. I don't currently see the rest of CHAI engaging in that work in the same way.
On the other hand, since CHAI appears to have been quite impactful on Rohin's ability to produce work, I am somewhat optimistic that there are more people whose work is amplified by the existence of CHAI, even if I am less familiar with their work, and I am also reasonably optimistic that CHAI will be able to find other contributors as good as Rohin. I've also found engaging with Andrew Critch's thinking on AI alignment quite valuable, and I am hopeful about more work from Stuart Russell, who obviously has a very strong track record in terms of general research output, though my sense is that marginal funding to CHAI is unlikely to increase Stuart's output in particular (and might in fact decrease it, since managing an organization takes time away from research).
While I evaluated this funding request primarily as unrestricted funding to CHAI, the specific project that CHAI is requesting money for also seems quite reasonable to me. Given the prosaic nature of a lot of CHAI's AI alignment work, it seems quite important for them to be able to run engineering-heavy machine learning projects, for which it makes sense to hire research engineers to assist with the associated programming tasks. The reports we've received from students at CHAI also suggest that past engineer hiring has been valuable and has enabled students at CHAI to do substantially better work.
Having thought more recently about CHAI as an organization and its place in the ecosystem of AI alignment, I am currently uncertain about its long-term impact and where it is going, and I eventually plan to spend more time thinking about the future of CHAI. So I think it's not that unlikely (~20%) that I will change my mind on the level of positive impact I'd expect from future grants like this. However, I think this holds less for the other Fund members who were also in favor of this grant, so I don't think my uncertainty is much evidence about how the LTFF will think about future grants to CHAI.
(Recusal note: Due to being a grad student at CHAI, Adam Gleave recused himself from the discussion and voting surrounding this grant.)
Adam Gleave
Joe Collman ($25,000)
Developing algorithms, environments and tests for AI safety via debate.
Joe was previously awarded $10,000 for independent research into extensions to AI safety via debate. We have received positive feedback regarding his work and are pleased to see he has formed a collaboration with Beth Barnes at OpenAI. In this round, we have awarded $25,000 to support Joe's continued work and collaboration in this area.
Joe intends to continue collaborating with Beth to facilitate her work in testing debate in human subject studies. He also intends to develop simplified environments for debate, and to develop and evaluate ML algorithms in these environments.
In general, I apply a fairly high bar to funding independent research, as I believe most people are more productive working for a research organization. In this case, however, Joe has demonstrated an ability to make progress independently and forge collaborations with established researchers. I hope this grant will enable Joe to further develop his skills in the area, and to produce research output that can demonstrate his abilities to potential employers and/or funders.
AI Impacts ($75,000)
Answering decision-relevant questions about the future of artificial intelligence.
AI Impacts is a nonprofit organization (fiscally sponsored by MIRI) investigating decision-relevant questions about the future of artificial intelligence. Their work has influenced, and continues to influence, my outlook on how and when advanced AI will develop, and I often see researchers I collaborate with cite their work in conversations. Notable recent output includes an interview series around reasons why beneficial AI may be developed "by default" and continued work on examples of discontinuous progress.
I would characterize much of AI Impacts' research as investigating questions that are fairly obvious to look into but which, surprisingly, no one else has looked into. In part this is because their research is often secondary (summarizing relevant existing sources) and interdisciplinary, both of which are under-incentivized in academia. Choosing the right questions to investigate also requires considerable skill and familiarity with AI research.
Overall, I would be excited to see more research into better understanding how AI will develop in the future. This research can help funders to decide which projects to support (and when), and researchers to select an impactful research agenda. We are pleased to support AI Impacts' work in this space, and hope this research field will continue to grow.
We awarded a grant of $75,000, approximately one fifth of the AI Impacts budget. We do not expect sharply diminishing returns, so it is likely that at the margin, additional funding to AI Impacts would continue to be valuable. When funding established organizations, we often try to contribute a "fair share" of organizations' budgets based on the Fund's overall share of the funding landscape. This aids coordination with other donors and encourages organizations to obtain funding from diverse sources (which reduces the risk of financial issues if one source becomes unavailable).
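To make the heuristic concrete, here is a minimal sketch of the fair-share calculation. The budget and landscape-share figures below are hypothetical placeholders, not estimates for AI Impacts or for the Fund:

```python
# Hypothetical figures, for illustration only: neither number is an actual
# estimate for AI Impacts or for the Long-Term Future Fund.
org_annual_budget = 400_000        # an organization's total yearly budget
fund_share_of_landscape = 0.25     # the Fund's rough share of relevant funding

# "Fair share" heuristic: contribute roughly the same fraction of the
# organization's budget as the Fund's share of the overall funding landscape.
fair_share_grant = org_annual_budget * fund_share_of_landscape
print(f"Fair-share grant: ${fair_share_grant:,.0f}")  # Fair-share grant: $100,000
```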
(Recusal note: Due to working as a contractor for AI Impacts, Asya Bergal recused herself from the discussion and voting surrounding this grant.)
Asya Bergal
Alexis Carlier ($5,000)
Surveying experts on AI risk scenarios and working on other projects related to AI safety.
We awarded Alexis $5,000, primarily to support his work on a survey aimed at identifying the arguments and related beliefs motivating top AI safety and governance researchers to work on reducing existential risk from AI.
I think the views of top researchers in the AI risk space have a strong effect on the views and research directions of other effective altruists. But as of now, only a small and potentially unrepresentative set of views exist in written form, and many are stated in imprecise ways. I am hopeful that a widely-taken survey will fill this gap and have a strong positive effect on future research directions.
I thought Alexis's previous work on the principal-agent literature and AI risk was useful and thoughtfully done, and showed that he was able to collaborate with prominent researchers in the space. This collaboration, as well as details of the application, suggested to me that the survey questions would be written with lots of input from existing researchers, and that Alexis was likely to be able to get widespread survey engagement.
Since recommending this grant, I have seen the survey circulated and taken it myself. I thought it was a good survey and am excited to see the results.
Gavin Taylor ($30,000)
Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral.
We awarded Gavin $30,000 to work on a computational study assessing the feasibility of using a light-to-vibrations (L2V) mechanism as a targeted antiviral. Light-to-vibrations is an emerging technique that could destroy viruses by vibrating them at their resonant frequency using tuned pulses of light. In an optimistic scenario, this study would identify a set of viruses that are theoretically susceptible to L2V inactivation. Results would be published in academic journals and would pave the way for further experimental work, prototypes, and eventual commercial production of L2V antiviral equipment. L2V techniques could be generalizable and rapidly adaptable to new pathogens, which would provide an advantage over other techniques used for large-scale control of future viral pandemics.
On this grant, I largely deferred to the expertise of colleagues working in physics and biorisk. My ultimate take after talking to them was that the described approach was plausible and could meaningfully affect the course of future pandemics, although others have also recently started working on L2V approaches.
My impression is that Gavin's academic background is well-suited to doing this work, and I received positive personal feedback on his competence from other EAs working in biorisk.
My main uncertainty in recommending this grant was how the LTFF should compare relatively narrow biorisk interventions with the other things we might fund. I ultimately decided that this project was worth funding, but I still don't have a good way of thinking about this question.
Matt Wage
Center for Election Science ($50,000)
Supporting the use of better voting methods in U.S. elections.
This is an unrestricted grant to the Center for Election Science (CES). CES works to improve US elections by promoting approval voting, a voting method where voters can select as many candidates as they like (as opposed to the traditional voting method where you can only select one candidate).
Academic experts on voting theory widely consider approval voting to be a significant improvement over our current voting method (plurality voting), and our understanding is that approval voting on average produces outcomes that better reflect what voters actually want by preventing issues like vote splitting. I think that promoting approval voting is a potentially promising form of improving institutional decision making within government.
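To make the vote-splitting point concrete, here is a minimal sketch in Python comparing plurality and approval tallies on a made-up electorate (the ballot counts are hypothetical, not data from CES):

```python
from collections import Counter

# Hypothetical electorate: A and B are similar candidates who together are
# preferred by a 55-voter majority; C is preferred by the remaining 45 voters.
# Each ballot records one first choice plus the full set of approved candidates.
ballots = (
    [("A", {"A"})] * 30 +        # prefer A, approve only A
    [("B", {"A", "B"})] * 25 +   # prefer B, but also find A acceptable
    [("C", {"C"})] * 45          # prefer C, approve only C
)

# Plurality: only the first choice counts, so A and B split their shared support.
plurality = Counter(first for first, _ in ballots)

# Approval: every approved candidate on a ballot receives a vote.
approval = Counter(c for _, approved in ballots for c in approved)

print("Plurality:", dict(plurality))  # {'A': 30, 'B': 25, 'C': 45} -> C wins
print("Approval: ", dict(approval))   # {'A': 55, 'B': 25, 'C': 45} -> A wins
```

Under plurality, the two similar candidates split the majority's support and C wins; under approval, the candidate acceptable to the most voters wins.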
CES is a relatively young organization, but so far they have a reasonable track record. Previously, they passed a ballot initiative to adopt approval voting in the 120,000-person city of Fargo, ND, and are now repeating this effort in St. Louis. Their next goal is to get approval voting adopted in bigger cities and then eventually states.
Charlie Rogers-Smith ($7,900)
Supporting research and job applications related to AI alignment.
Charlie applied for funding to spend a year doing research with Jan Brauner, Sören Mindermann, and their supervisor Yarin Gal (all at Oxford University), while applying to PhD programs to eventually work on AI alignment. Charlie is currently finishing a master’s in statistics at Oxford and is also participating in the Future of Humanity Institute’s Summer Research Fellowship.
We think Professor Gal is in a better position to evaluate this proposal (and our understanding is that his group is capable of providing funding for this themselves), but it will take some time for this to happen. Therefore, we decided to award Charlie a small “bridge funding” grant to give him time to try to finalize the proposal with Professor Gal or find an alternative position.
Off-cycle grants
The following grants were made outside of our regular schedule, and weren’t included in previous payout reports, so we’re including them here.
Helen Toner
Claudia Shi ($5,000)
Organizing a “Human-Aligned AI” event at NeurIPS.
Grant date: November 2019
Claudia Shi and Victor Veitch applied for funding to run a social event themed around “Human-aligned AI” at the machine learning conference NeurIPS in December 2019. The aim of the event was to provide a space for NeurIPS attendees who care about doing high-impact projects and/or about long-term AI safety to gather and discuss these topics.
I believe that holding events like this is an easy way to do a very basic form of “field-building,” by making it easier for machine learning researchers who are interested in longtermism and related topics to find each other, discuss their work, and perhaps work together in the future or change their research plans. Our funding was mainly used to cover catering for the 100-person event, which we hoped would make the event more enjoyable for participants and therefore more effective in facilitating discussions and connections. After the event, the organizers had $1,863 left over, which they returned to the Fund.
Matt Wage
Gopal Sarma ($5,000)
Organizing a workshop aimed at highlighting recent successes in the development of verified software.
Grant date: January 2020
Gopal applied for a grant to run a workshop called "Formal Methods for the Informal Engineer" (FMIE) at the Broad Institute of MIT and Harvard, on the topic of formal methods in software engineering. More information on the workshop is here.
We made this grant because we know a small set of AI safety researchers are optimistic about formal verification techniques being useful for AI safety, and we thought this grant was a relatively inexpensive way to support progress in that area.
Unfortunately, the workshop has now been postponed because of COVID-19.
Oliver Habryka
Alex Turner ($30,000)
Understanding when and why proposed AI designs seek power over their environment.
Grant date: January 2020
We previously made a grant to Alex Turner at the beginning of 2019. Here is what I wrote at the time:
My thoughts and reasoning
I'm excited about this because:
Alex's approach to finding personal traction in the domain of AI Alignment is one that I would want many other people to follow. On LessWrong, he read and reviewed a large number of math textbooks that are useful for thinking about the alignment problem, and sought public input and feedback on what things to study and read early on in the process.
He wasn't intimidated by the complexity of the problem, but started thinking independently about potential solutions to important sub-problems long before he had "comprehensively" studied the mathematical background that is commonly cited as being the foundation of AI Alignment.
He wrote up his thoughts and hypotheses in a clear way, sought feedback on them early, and ended up making a set of novel contributions to an interesting sub-field of AI Alignment quite quickly (in the form of his work on impact measures, on which he recently collaborated with the DeepMind AI Safety team).
Potential concerns
These intuitions, however, are a bit in conflict with some of the concrete research that Alex has actually produced. My inside views on AI alignment make me think that work on impact measures is very unlikely to result in much concrete progress on what I perceive to be core AI alignment problems, and I have talked to a variety of other researchers in the field who share that assessment. I think it's important that this grant not be viewed as an endorsement of the concrete research direction that Alex is pursuing, but only as an endorsement of the higher-level process that he has been using while doing that research.
As such, I think it was a necessary component of this grant that I have talked to other people in AI alignment whose judgment I trust, who do seem excited about Alex's work on impact measures. I think I would not have recommended this grant, or at least this large of a grant amount, without their endorsement. I think in that case I would have been worried about a risk of diverting attention from what I think are more promising approaches to AI Alignment, and a potential dilution of the field by introducing a set of (to me) somewhat dubious philosophical assumptions.
Overall, while I try my best to form concrete and detailed models of the AI alignment research space, I don't currently devote enough time to it to build detailed models that I trust enough to put very large weight on my own perspective in this particular case. Instead, I am mostly deferring to other researchers in this space that I do trust, a number of whom have given positive reviews of Alex's work.
In aggregate, I have a sense that the way Alex went about working on AI alignment is a great example for others to follow, I'd like to see him continue, and I am excited about the LTF Fund giving out more grants to others who try to follow a similar path.
I've been following Alex's work closely since then, and overall have been quite happy with its quality. I still have high-level concerns about his approach, but have over time become more convinced that Alex is aware of some of the philosophical problems that work on impact measures seems to run into, and so am more confident that he will navigate the difficulties of this space correctly. His work also updated me on the tractability of impact-measure approaches, and though I am still skeptical, I am substantially more open to interesting insights coming out of an analysis of that space than I was before. (I think it is generally more valuable to pursue a promising approach that many people are skeptical about, rather than one already known to be good, because the former is much less likely to be replaceable).
I've also continued to get positive feedback from others in the field of AI alignment about Alex's work, and have had multiple conversations with people who thought it made a difference to their thinking on AI alignment.
One other thing that has excited me about Alex's work is his pedagogical approach to his insights. Researchers frequently produce ideas without paying attention to how understandable those ideas are to other people, and enshrine formulations that end up being clunky, unintuitive or unwieldy, as well as explanations that aren't actually very good at explaining. Over time, this poor communication often results in substantial research debt. Alex, on the other hand, has put large amounts of effort into explaining his ideas clearly and in an approachable way, with his "Reframing Impact" sequence on the AI Alignment Forum.
This grant would fund living expenses and tuition, helping Alex to continue his current line of research during his graduate program at Oregon State.
Cambridge Summer Programme in Applied Reasoning (CaSPAR) ($26,300)
Organizing immersive workshops for STEM students at top universities.
Grant date: January 2020
From the application:
We want to build on our momentum from CaSPAR 2019 by running another intensive week-long summer camp and alumni retreat for mathematically talented Cambridge students in 2020, and increase the cohort size by 1/3 from 12 to 16.
At CaSPAR, we attract young people who are talented, altruistically motivated and think transversally to show us what we might be missing. We find them at Cambridge University, in mathematics and adjacent subjects, and funnel them via our selection process to our week-long intensive summer camp. After the camp, we welcome them to the CaSPAR Alumni. In the alumni we further support their plan changes/ideas with them as peers, and send them opportunities at a decision-relevant time of their lives.
CaSPAR is a summer camp for Cambridge students that tries to cover a variety of material related to rationality and effective altruism. This grant was originally intended for CaSPAR 2020, but since COVID has made most in-person events like this infeasible, this grant is instead intended for CaSPAR 2021.
I consider CaSPAR to be in a similar reference class as SPARC or ESPR, two programs with somewhat similar goals that have been supported by other funders in the long-term future space. I currently think interventions in this space are quite valuable, and have been impressed with the impact of SPARC; multiple very promising people in the long-term future space cite it as the key reason they became involved.
The primary two variables I looked at while evaluating CaSPAR were its staff composition and the references we received from a number of people who worked with the CaSPAR team or attended their 2019 event. Both of those seemed quite solid to me. The team consists of people I think are pretty competent and have the right skills for a project like this, and the references we received were positive.
The biggest hesitation I have about this grant is mostly the size of the program and the number of participants. Compared to SPARC or ESPR, the program is shorter and has substantially fewer attendees. From my experience with those programs, their size and length both seemed integral to their impact (I think there's a sweet spot around 30 participants: enough people to take advantage of network effects and form lots of connections, while still maintaining a high-trust atmosphere).
This is going to sound controversial here (people are probably going to dislike this, but I'm genuinely raising it as a concern), but is the Robert Miles $60,000 grant attached to any requirements? I like his content, but it seems to me you could find someone with a similar talent level (explaining fairly basic concepts) who could produce many more videos. I'm not well versed in YouTube, but four or five videos in the last year doesn't seem substantial. If the $60,000 were instead offered as a one-year job, I think you could find many talented individuals who could produce much more content.
I understand that he's doing other things that aren't directly YouTube-related, but if you include support in other forms (Patreon), the output seems pretty low relative to the investment.
Again, I should emphasise that I'm uncertain about my criticism here, and I have personally enjoyed watching his videos on occasion.
I think one of the things Rob has that is very hard to replace is his audience. Overall, I continue to be shocked by the level of engagement Rob Miles' YouTube videos get: averaging over 100k views per video! I mostly disbelieve that it would be possible to hire someone who can (a) understand technical AI alignment well, and (b) reliably create YouTube videos that get over 100k views, for less than something like an order of magnitude higher cost.
I am mostly confused about how Rob gets 100k+ views on each video. My mainline hypothesis is that Rob has successfully built his own audience through his years of making videos, including on channels like Computerphile, and that this audience has followed him to his own channel.
Building an audience like this takes many years and often does not pay off. Once you have a massive audience that cares about the kind of content you produce, that audience is not quickly replaceable. I expect that finding someone other than Rob to do this would either take that person 3-10 years to build an audience of this size, or require paying a successful YouTube content creator to substantially change the videos they are making, in a way that risks losing their audience and thus requires a lot of money to cover the risk (I'm imagining $300k–$1mil per year for the first few years).
Another person to think of here is Tim Urban, who writes Wait But Why. That blog has, I think, produced zero major writeups in the last year, but he has a massive audience who know him and are very excited to read his content in detail, which is valuable and not easily replaceable. If it were possible to pay Tim Urban to write a piece on a technical topic of your choice, it would be exceedingly widely read in detail, and would be worth a lot of money even if he didn't publish anything else for a whole year.
All good points, Jonas, Ben W, Ben P, and Stefan. I was uncertain at the beginning but am pretty convinced now. Also, as a side note, I'm very happy about the nature of all of the comments, in that they understood my POV and engaged with it in a polite manner.
By the way, I also was surprised by Rob only making 4 videos in the last year. But I actually now think Rob is producing a fairly standard number of high-quality videos annually.
The first reason is that (as Jonas points out upthread) he also did three for Computerphile, which brings his total to 7.
The second reason is that I looked into a bunch of top individual YouTube explainers, and I found that they produce a similar number of highly-produced videos annually. Here are a few:
I believe CGP Grey, 3Blue1Brown, and Veritasium are all working on their videos full-time, so I think around 10 main videos plus assorted extra pieces is within the standard range for highly successful explainers on YouTube. I think this suggests Rob could potentially make more videos to fill out the space between the main videos on his channel, like Q&A livestreams and other small curiosities that he notices, and could plausibly be somewhat more productive each year in terms of making a couple more of the main, highly-produced videos.
But I know he does a fair bit of other work outside of his main channel. He is also, in some respects, doing a harder task than some of the above: explaining ideas from a new research field, and one with a lot of ethical concerns around the work, not just questions of how to explain things well, which I expect increases the amount of work that goes into each video.
I think it's possible that last year was just unusually slow for people (possibly pandemic-related?).
I looked at 3B1B (the only YouTube explainer series I'm familiar with), and since 2015 Grant has produced ~100 high-quality videos, which is closer to ~20 videos/year than ~10/year.
I'm not familiar with the others.
I feel like this is low-balling the potential year-to-year variation in productivity. My inside view is that 50-100% increases in productivity are plausible.
Yeah, I agree about how much variance in productivity is available, your numbers seem more reasonable. I'd actually edited it by the time you wrote your comment.
Also agree last year was probably unusually slow all round. I expect the comparison is still comparing like-with-like.
:) Appreciated the conversation! It also gave me an opportunity to clarify my own thoughts about success on YouTube and related things.
Thanks for the critique!
In addition to four videos on his own channel, Robert Miles also published three videos on Computerphile during the last 12 months. He also publishes the Alignment Newsletter podcast. So there's at least some additional output. There's probably more I don't know of.
I personally think this would be very difficult. Robert Miles' content seems to have been received positively by the AI safety community, but science communication in general is notoriously difficult, and I'd expect most YouTubers to routinely distort and oversimplify important concepts, such that I'd worry that such content would do more harm than good. In contrast, Robert Miles seems sufficiently nuanced.
(Disclosure: I work at EA Funds.)
Yes. Also, regarding this issue:
It seems that the Long-Term Future Fund isn't actively searching for people to do specific tasks, if I understand the post correctly. Instead, it's reviewing applications that come to it. (It's more labour-intensive to do an active search.) That means that it can be warranted to fund an applicant even if there could be better candidates for the same task somewhere out there. (Minor edits.)
Thanks for the understanding responses, Jonas and Linch. Again, I should clarify that I don't know where I stand here, but I'm still not entirely convinced.
So, we have four videos in the last year on his channel, plus three videos on Computerphile, giving seven videos. If I remember correctly, the Alignment Newsletter podcast is just Shah's newsletter read aloud, which may be useful but which I don't think requires a lot of effort.
I should reiterate that I think what Miles does is not easy. I may also be severely underestimating the time it takes to make a YouTube video!
It might be more relevant to consider the output: 500,000 views (or ~80,000 hours of watch time). Given that the median video gets 89 views, it might be hard for other creators to match the output, even if they could produce more videos per se.
Meta: Small nitpick, but I would prefer if we reduce framings like
See Scott Alexander on Against Bravery Debates.
Thanks for pointing that out. I will refrain from doing so in the future. What I was trying to make clear was that I didn't want my comment to be seen as a personal attack on an individual. I was uneasy about making the comment on a public platform, since I don't know all the details, nor much about the subject matter.
FWIW, I think that the qualification was very appropriate and I didn't see the author as intending to start a "bravery debate". Instead, the purpose appears to have been to emphasize that the concerns were raised in good faith and with limited information. Clarifications of this sort seem very relevant and useful, and quite unlike the phenomenon described in Scott's post.
I want to add that Scott isn't describing a disingenuous argumentative tactic; he's saying that the topic causes dialogue to get derailed very quickly. Analogous to the rule that bringing in a comparison to Nazis always derails internet discussion, making claims about whether the position one is advocating is the underdog or the mainstream also derails internet discussion.
Thanks, you are right. I have amended the last sentence of my comment.
Thx!
Following up, and sorry for continuing to critique after you already politely made an edit, but doesn't that change your opinion of the object-level thing, which is indeed the phenomenon Scott's talking about? It's great to send signals of cooperativeness and genuineness, and I appreciate So-Low Growth's effort to do so, but adding in talk of how the concern is controversial is the standard example of opening a bravery debate.
The application of Scott's post here would be to separate clarification of intent from bravery talk: in this situation, separating "I don't intend any personal attack on this individual" from "My position is unpopular". Again, the intention is not in question; it's the topic, and that's the phenomenon Scott's discussing in his post.
I agree that the sentence Linch quoted sounds like a "bravery debate" opening, but that's not how I perceive it in the broader context. I don't think the author is presenting himself/herself as an underdog, intentionally or otherwise. Rather, they are making that remark as part of their overall attempt to indicate that they are aware that they are raising a sensitive issue and that they are doing so in a collaborative spirit and with admittedly limited information. This strikes me as importantly different from the prototypical bravery debate, where the primary effect is not to foster an atmosphere of open dialogue but to gain sympathy for a position.
I am tentatively in agreement with you that "clarification of intent" can be done without "bravery talk", by which I understand any mention that the view one is advancing is unpopular. But I also think that such talk doesn't always communicate that one is the underdog, and is therefore not inherently problematic. So, yes, the OP could have avoided that kind of language altogether, but given the broader context, I don't think the use of that language did any harm.
(I'm maybe 80% confident in what I say above, so if you disagree, feel free to push me.)
I read the top comment again after reading this comment by you, and I think I understand the original intent better now. I was mostly confused on initial reading, and while I thought SLG's comment was otherwise good and I had a high prior on the intent being very cooperative, I couldn't figure out what the first line meant other than "I expect I'm the underdog here". I now read it as saying "I really don't want to cause conflict needlessly, but I do care about discussing this topic," which seems pretty positive to me. I am pretty pro SLG writing more comments like this in future when it seems to them like an important mistake is likely being made :)
This makes a lot of sense to me, Pablo. You highlighted what I was trying to explain when I made the comment: 1) I was uncertain, and 2) I didn't want to attack someone. I must admit that my choice of words was rather poor and could come across as "bravery talk", although that was not what I intended.
To be clear, I think your overall comment added more to the discussion than it detracted, and I really appreciate you making it. I definitely did not interpret your claims as an attack, nor did I think it was a particularly egregious example of a bravery framing. One reason I chose to comment here is that I interpreted you (correctly, it appears!) as someone who'd be receptive to such feedback, whereas if somebody started a bravery debate with a clearer "me against the immoral idiots in EA" framing, I'd probably be much more inclined to just ignore it and move on.
It's possible my bar for criticism is too low. In particular, I don't think I've fully modeled meta-level considerations like:
1) That by only choosing to criticize mild rather than egregious cases, I'm creating bad incentives.
2) You appear to be a new commenter, and by criticizing newcomers to the EA Forum I risk making the EA Forum less appealing.
3) That my comment may spawn a long discussion.
Nonetheless I think I mostly stand by my original comment.
Yeah that makes a lot of sense. I think the rest of your comment is fine without that initial disclaimer, especially with your caveat in the last sentence! :)
I also notice myself being confused about the output here. I suspect that being good at YouTube outreach while fully understanding technical AI safety concepts is a higher bar than you're claiming, but I would also intuitively be surprised if it takes an average of 2+ months to produce a video (though perhaps he spends a lot of time on other activities?).
This quote
alludes to this.
To state a point in the neighborhood of what Stefan, Ben P, and Ben W have said, I think it's important for the LTFF to evaluate the counterfactual where they don't fund something, rather than the counterfactual where the project has more reasonable characteristics.
That is, we might prefer that a project be more productive, more legible, or more organized, but unless those shortcomings make it worse than the marginal funding opportunity, it should still be funded (where one way a project could be bad is by displacing more reasonable projects that would otherwise fill a gap).
As always, thanks very much for writing up this detailed report. I really appreciate the transparency and insight into your thought processes, especially as I realise doing this is not necessarily easy! Great job.
(I might have some more detailed comments later, but in case I don't, I didn't want to miss the chance to give you some positive feedback!)
"Some donors were surprised by the Fund’s large number of AI risk-focused grants. While the Fund managers are in favor of these grants, we want to make sure that donors are aware of the work they are supporting. As a result, we changed the EA Funds donation interface such that donors have to opt into supporting their chosen Funds. (Previously, the website suggested a default allocation for each Fund.) EA Funds also plans to offer a donation option focused on climate change for interested donors."
This is an extremely positive change and corrects what I have long considered to be a dark pattern on the EA Funds website. Thanks for implementing it.
Glad to hear you like it!