1. At the time, EA was looking for bold ideas to use a lot of money productively. There was talk of a funding overhang. EA was looking to get aggressive at spending money on useful things. I think you'd have heard very little pushback at the time on this.
2. I'm still not against this. I think this is a very good piece of the puzzle in mitigating bio risk. I know it is expensive. Whether we do it or not depends a bit on our other options and the amount of money EA has.
3. Maybe not this island specifically but it seems like a decent start.
4. I don't think the island should be "specific to EAs," but a selected group that would be able to restart civilization? Sure.
Hi Gideon, I'm a regranter and I live in Latin America. This has basically no effect on what I choose to fund.
I also care a lot more about animals than most (maybe all) the regranters.
The thing you want to select for in a group of regranters is diversity of knowledge of opportunities and some intellectual diversity of approaches to longtermism. It seems like Manifund did a decent job here? I think I was selected for my forecasting on Manifold, but there are others who work in pandemic preparedness, and a lot of people work in different ways on AI safety (which is sort of what the donor wanted).
Great response J.T. This was me saying "I want these things thought about in a CEA that I suspect might not have been." I actually think this is quite promising.
For 1, divestment in this case means nothing IMO. Divestment is mainly a signalling thing. It doesn't really do anything to reduce stock price for example. https://forum.effectivealtruism.org/topics/divestment
For 2, I don't think this is something to just dismiss without calculations
For 3, yes. If someone died of coronary heart disease, there was a decent chance they would have gotten it without smoking. Or if they had lung cancer and diabetes, there needs to be some fraction of the death attributed to each factor, not just "they died of lung cancer from smoking," which is what happens in these organizations.
For 5, I'm concerned generally that LMICs have corrupt governments for which getting more money is actually net negative. I'm concerned about this in developed countries too, if they use it to start wars, etc.
On the whole, I think this is still likely to come out as a strong intervention. I'm still more skeptical of policy changes, and I think we likely overestimate their odds of success, but this still seems well worth trying.
Great post J.T.! A few things I would like addressed for me to buy this more.
Thanks for writing this. After reading this, I want EA to be even more "cause first". One of the things I worry about for EA is that it becomes a fairly diffuse "member-first" movement, not unlike a religious group that comes together, supports each other, and believes in some common doctrines but, at the end of the day, doesn't accomplish much.
I look at EA now and am nothing short of stunned at how much it is accomplishing. Not in dollars spent, but in stuff done. The EA community was at the forefront of pushing AI safety into the mainstream. It has started several new charities, and creates more every year. It's responsible for a lot of wins for animals. It's responsible for saving hundreds of thousands of lives. It's about the only place out there that measures charities, and does so with a lot of rigor. It's produced countless reports that actually change what gets worked on. And it adapts what it works on pretty well.
I think caring about its members is an instrumental goal in service of caring about causes. The members do, after all, work on the causes. EA does recognize this, though, and, with notable exceptions, I think it does a very good job of it.
Just found this post, but strong agree. I previously suggested (about 3 years ago) that EAs should not hold Facebook stock (now META) and that we should probably have something like rolling 1-year put options. That's easier to do in public markets, and we can just ask some wealthy EAs or another organization to put this position on. What I mean here is that we can hedge out our specific Facebook risk (while staying long the market and other tech/social media companies, and figuring out whatever other hedges we want).
Hedging a private company is much, much harder. We could have shorted $FTT futures, but that is insanely risky and could blow up before you get the benefit of your hedge (better for us to have just gotten whatever amount of FTT we could and liquidated it), and it runs into the problems you spoke about.
Hedging against a market downturn (lots of EA net worth was lost here) is just very difficult as well.
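The rolling-put idea above can be sketched with a toy payoff calculation. All the numbers below (share counts, strike, premium, prices) are hypothetical illustrations, not real META quotes; the point is just that a protective put floors the position's value at roughly the strike minus the premium paid:

```python
# Toy sketch of a protective put hedge. All figures are made up for
# illustration; they are not real option quotes.

def put_payoff(spot_at_expiry: float, strike: float, premium: float) -> float:
    """Net payoff per share of one put held to expiry."""
    return max(strike - spot_at_expiry, 0.0) - premium

def hedged_value(shares: int, spot_at_expiry: float,
                 strike: float, premium: float) -> float:
    """Value at expiry of a stock position plus one put per share."""
    return shares * (spot_at_expiry + put_payoff(spot_at_expiry, strike, premium))

# Hypothetical: hold 1,000 shares, buy 1-year $250-strike puts for a
# $20 premium, and the stock then falls to $100.
unhedged = 1000 * 100.0                           # 100_000.0
hedged = hedged_value(1000, 100.0, 250.0, 20.0)   # 230_000.0
```

In the crash scenario the puts pay $150 per share, so after the $20 premium the hedged position keeps $230k versus $100k unhedged; in a bull scenario the hedge just costs the premium. Rolling means buying a fresh 1-year put each year, so the premium is a recurring cost.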
Good post.
I think it's worth noting that fraud can be a grey area. Which of the following are fraud?
A) Two founders raise $2M at a $20M valuation. They pay themselves $500k each for two years (a bit higher than their previous FAANG salaries), don't work that hard, write a bit of code but not enough for the product. They ultimately try to raise more but fail to do so and dissolve the company giving investors nothing back (all the money is spent).
B) A company collects credit card details for a 7-day free trial of its product, where only shipping needs to be paid. Immediately after 7 days, it charges the cards $1,000 and ships the product, saying the fine print required cancelling within 7 days or the product would be sent. The customers would never willingly pay for the product, comparable products to what was shipped cost $100, and the company refuses to provide any refund. It spent a lot on advertising and got 5,000 people to take the 7-day free trial, 4,500 of whom did not cancel.
C) A medical startup administers more tests than necessary, charging health insurance providers (Medicare and private insurers). Rarely, a test finds something that would otherwise have been missed, but the tests were definitely not required given the symptoms.
D) Company X raises $500k at a $5M valuation. They spend $250k reasonably on product development. They then need $500k for a mold for manufacturing. They look for financing in the form of bridge loans, raising additional capital, going back to investors and telling them they need another $250k, etc., and don't manage to do so. The CEO of Company X, who happens to be a good poker player, decides that this mold is pivotal for the company.
i) He takes the $250k to the poker table and doubles the money.
ii) He loses $150k and then decides to save the other $100k.
iii) He loses the $250k
E) Company X raises a seed round and builds an MVP. They then go on to raise a Series A, showing investors a beautiful exponential curve of new users. The company allows the first use of an account to be free, and subsequent use costs $5. The founders purposely make signing up for new accounts so easy that they know many people are creating multiple accounts to use the product for free.
F) Company X gets funded by YC, taking the $125k cheque for 7% and the $375k cheque on the MFN safe. They never intend to raise another round, since they are already profitable, but they need to in order to satisfy the MFN clause. They get a family friend to invest $100k at a $1B valuation, forcing YC to take those terms. Maybe they buy out the friend for $105k a few months later.
I don't know which of the above legally qualify as fraud, but I sure think all of them are at the very least wrong. I'd probably call all of them fraud, since they deliberately deceive people for financial gain by claiming false accomplishments or qualities. I also think reasonable people can disagree with me.
It does depend on the cost, to be clear. And I fully agree with you that animal welfare charities are starved for cash and that we can be deploying far more to Global Health and Poverty.