
Thomas Kwa

Researcher @ MATS/Independent
3147 karma · Joined · Working (0-5 years) · Berkeley, CA, USA

Bio

Mechinterp researcher under Adrià Garriga-Alonso.

Participation: 4

Comments (253)

It is not clear to me whether EA branding is net positive for the movement overall, or whether it has been tarnished beyond repair by various scandals. Like, it might be that people should make a small personal sacrifice to be publicly EA, but it might also be that the pragmatic collective action is to completely rebrand and/or hope that EA provides a positive radical flank effect.

The reputation of EA, at least in the news and on Twitter, is pretty bad; something like 90% of the news articles mentioning EA are negative. I do not think it inherently compromises integrity to not publicly associate with EA even if you agree with most EA beliefs, because people who read opinion pieces will assume you agree with everything FTX did, or are a Luddite, or hold some other strawman belief. I don't know whether EAF readers calling themselves EAs would make others' beliefs about their moral stances more or less accurate.

I don't think this is currently true, but if the rate of scandals continues, anyone holding on to the EA label will be suffering from the toxoplasma of rage, where the EA meme survives by sounding slightly good to the ingroup but extremely negative to everyone else. Therefore, as someone who is disillusioned with the EA community but not with various of its principles, I need to see some data before owning any sort of EA affiliation, to know I'm not making some anti-useful sacrifice.

Given the Guardian piece, inviting Hanania to Manifest seems like an unforced error on the part of Manifold and possibly Lightcone. This does not change just because the article was a hit piece with many inaccuracies. I might have more to say later.

I want to slightly push back against this post in two ways:

  • I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism, and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy -- I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than 99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
  • Longtermists make tradeoffs between helping vast future populations and other common values, tradeoffs that most people disagree with; without idiosyncratic EA values there is no reason a caring person should make the same tradeoffs as longtermists. I think the EA value of "doing a lot more good matters a lot more" is really important, but it is still trading off against other values:
    • Helping people closer to you / in your community: many people think this has inherent value
    • Direct involvement (as opposed to pure beneficentrism): most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly not think it is better to make more of an overall difference by e.g. subsidizing eyeglasses in Bangladesh.
    • Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they place inherent value on justice. Both longtermists and GiveWell think they're similarly good modulo secondary consequences and decision theory.
    • Discount rate, risk aversion, etc.: there is no reason that a 10% chance of saving 100 lives in 6,000 years is better than a 40% chance of saving 5 lives tomorrow, unless you already believe in zero-discount expected value as the metric to optimize (a quick numerical comparison follows this list). The reason to believe in zero-discount expected value is a thought experiment involving the veil of ignorance, or maybe the VNM theorem. It is not caring that is doing the work here, because both can be very caring acts; it is your belief in the thought experiment that connects your caring to the expected value.
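
To make that last point concrete, here is the comparison under zero-discount expected value (a minimal sketch using only the numbers above):

\[
0.10 \times 100 = 10 \ \text{expected lives} \qquad \text{vs.} \qquad 0.40 \times 5 = 2 \ \text{expected lives}
\]

The zero-discount expected-value maximizer prefers the 10% chance of saving 100 lives in 6,000 years; anyone with a nonzero discount rate or sufficient risk aversion can reasonably prefer the 40% chance of saving 5 lives tomorrow. The disagreement is about the metric, not about how much either person cares.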

In conclusion, I think that while care and empathy can be an important motivator for longtermists, and it is valid for us to think of longtermist actions as the ultimate act of care, we are motivated by a conjunction of empathy/care and other attributes, and it is the other attributes that are by far the more important. For someone who has empathy/care and values beneficentrism and scope-sensitivity, preventing an extinction-level pandemic is an important act of care; for someone like me or a utilitarian, pandemic prevention is also an important act. But for someone who values justice more, applying more care does not make them prioritize pandemic prevention over helping a sex trafficking victim, and in the larger altruistically-inclined population, I think a greater focus on care and empathy conflicts with longtermist values more than it contributes to them.

[1] More important for me are: feeling moral obligation to make others' lives better rather than worse, wanting to do my best when it matters, wanting future glory and social status for producing so much utility.


Not sure how to post these two thoughts so I might as well combine them.

In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partly because of the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons: a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.
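
A toy deterrence calculation (all numbers hypothetical, just to illustrate the risk-neutral logic): suppose the fraud had a 5% chance of yielding the equivalent of $1,000B for one's goals, and the would-be fraudster values 25 years in prison as badly as losing $0.1B. Then

\[
0.05 \times \$1{,}000\text{B} \;-\; 0.95 \times \$0.1\text{B} \;\approx\; +\$49.9\text{B},
\]

so a 25-year sentence barely dents the expected value; only a vastly harsher expected penalty changes the decision for a risk-neutral actor.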

However, I also think many lessons from SBF's personal statements, e.g. his interview on 80k, are still as valid as ever. Just off the top of my head:

  • Startup-to-give as a high EV career path. Entrepreneurship is why we have OP and SFF! Perhaps also the importance of keeping as much equity as possible, although in the process one should not lie to investors or employees more than is standard.
  • Ambition and working really hard as success multipliers in entrepreneurship.
  • A career decision algorithm that includes doing a BOTEC and rejecting options that are 10x worse than others.
  • It is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating. [1] (But fraud is still bad, of course.)

Just because SBF stole billions of dollars does not mean he has fewer virtuous personality traits than the average person. He hits at least as many multipliers as the average reader of this forum. But importantly, maximization is perilous: some particular qualities like integrity and good decision-making are absolutely essential, and if you lack them your impact could be multiplied by minus 20.
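
A toy version of the multiplier picture (numbers purely illustrative): if ambition and hard work each multiply a baseline impact of 1 by 10, but a lack of integrity multiplies it by -20, then

\[
1 \times 10 \times 10 \times (-20) = -2{,}000 \quad \text{rather than} \quad 1 \times 10 \times 10 = +100,
\]

i.e. the same traits that scale positive impact scale the harm when an essential quality is missing.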


[1] The unregulated nature of crypto may have allowed the FTX fraud, but things like the zero-sum, zero-NPV nature of many cryptoassets, or crypto's negative climate impacts, seem unrelated. Many industries are about this bad for the world, like HFT or some kinds of social media. I do not think people who criticized FTX on these grounds score many points. However, perhaps it was (weak) evidence that FTX was willing to do harm in general for a perceived greater good, which is maybe plausible, especially if Ben Delo also did market manipulation or otherwise acted immorally.

Also note that in the interview, SBF didn't claim his donations offset a negative direct impact; he said the impact was likely positive, which seems dubious.

This seems right, thanks. I don't think we have positive evidence that Trabucco was not EA, though.

[Warning: long comment] Thanks for the pushback. I think converting to lives is good in other cases, especially if it's (a) useful for judging effectiveness, and (b) not used as a misleading rhetorical device [1].

The basic point I want to make is that all interventions have to pencil out. When donating, we are trying to maximize the good we create, not decide which of two strategies sounds better on the surface: "empower beneficiaries to invest in their communities' infrastructure" or "use RCTs to choose lifesaving interventions" [2]. Lives are at stake, and I don't think those lives are less important simply because it's harder to put names and faces to ~60 lives saved when each malaria net only reduces someone's chance of dying of malaria by something like 0.04%. Of course this applies equally to the Wytham Abbey purchase or anything else. But on point (a), we actually can compare the welfare gain from 61 lives saved to the economic security produced by this project. GiveWell has weights for doubling of consumption, partly based on interviews with Africans [3]. With other projects, this might be intractable due to entirely different cause areas or different moral preferences, e.g. longtermism.

Imagine that we have a cost-effectiveness analysis made by a person with knowledge of local conditions and local moral preferences, domain expertise in East African agricultural markets, and the quantitative skill of GiveWell analysts. If it comes out that one intervention is 5 or 10 times better than the other, as is very common, we would need a very compelling reason why some consideration was missed to justify funding the other one. Compare this to our current, almost complete state of ignorance as to the value of building this plant, and you see the value of numbers. We might not get a CEA this good, but we should get close, as we have all the pieces.

As to point (b), I am largely in favor of making these comparisons in most cases, just to remind people of the value of our resources. But I feel like the Wytham and HPMOR cases, depending on phrasing, could exploit people's tendency to think of projects that save lives in emotionally salient ways as better than projects that save lives via less direct methods. It will always sound bad to say that intervention A is funded rather than saving X lives, and we should generally not shut down discussion of A by creating indignation. This kind of misleading rhetoric is not at all my intention; we all understand that giving a large enough number of farmers access to sorghum markets can produce more welfare than preventing 61 deaths from malaria. We have the choice between saving 61 of someone's sons and daughters, and allowing X extremely poor people to perhaps buy metal roofs, send their children to school, and generally have some chance of escaping a millennia-long poverty trap. We should think: "I really want to know how large X is."

[1] and maybe (c) not bad for your mental health?

[2] Unless you believe empowering people is inherently better regardless of the relative cost, which I strongly disagree with.

[3] This is important: Westerners may be biased here because we may weigh saving a life against doubling consumption differently than the beneficiaries do. But these interviews were from Kenya and Ghana, so Uganda's weights may differ slightly.

Just to remind everyone, 339,000 GBP in malaria nets is estimated by GiveWell to save around 61 lives, mostly young children. Therefore a 25% difference in effectiveness either way is about 15 lives. A cost-effectiveness analysis is definitely required given what is at stake, even if the complexities of this project mean it is not taken as final.
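
Spelling out that arithmetic (a rough back-of-envelope using only the figures above):

\[
\frac{339{,}000\ \text{GBP}}{61\ \text{lives}} \approx 5{,}600\ \text{GBP per life saved}, \qquad 0.25 \times 61 \approx 15\ \text{lives}.
\]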

Thanks. In addition to lots of general information, this helps answer some of my questions about FTX: it seems likely that FTX/Alameda were never massively profitable except for large bets on unsellable assets (does anyone have better information on this?), and even though they had large revenues, much of that money may have been spent dubiously by SBF. The various actions needed to maintain a web of lies indicate that Caroline Ellison and Nishad Singh, and very likely Gary Wang and Sam Trabucco (who dropped off the face of the earth at the time of the bankruptcy [1]), were complicit in fraud severe and obvious enough that any moral person (possibly even a hardcore utilitarian, if it was true that FTX was consistently losing money) should have quit or leaked evidence of it.

Four or five people is very different from a single bad actor, and this almost confirms for me that FTX belongs on the list of ways EA and rationalist organizations can basically go insane in harmful ways, alongside Leverage, the Zizians, and possibly others. It is not clear that FTX experienced a specifically EA failure mode, rather than the very common one in which power corrupts.

I think someone should do an investigation much wider in scope than what happened at FTX, covering the entire causal chain from SBF first talking to EAs at MIT to the damage done to EA. Here are some questions I'm particularly curious about:

  • Did SBF show signs of dishonesty early on at MIT? If so, why did he not have a negative reputation among the EAs there?
  • To what extent did EA "create SBF", i.e. influence the values of SBF and others at FTX? Could a version of EA that placed more emphasis on integrity, diminishing returns to altruistic donations, or something else have prevented FTX?
  • Alameda was started by various traders from Jane Street, especially EAs. Did they do this despite concerns about how the company would be run, and were they correct to leave Jane Street at the time?
  • [edited to add] I have heard that Tara Mac Aulay and others left Alameda in 2018. Mac Aulay claims this was "in part due to concerns over risk management and business ethics". Do they get a bunch of points for this? Why did this warning not spread, and can we even spread such warnings without overloading the community with gossip even more than it is?
  • Were Alameda/FTX ever highly profitable controlling for the price of crypto? (edit: ruling out that FTX's market share came from artificially tight spreads produced by money-losing trades from Alameda). How should we update on the overall competence of companies with lots of EAs?
  • SBF believed in linear returns to altruistic donations (I think he said this on the 80k podcast), unlike most EAs. Did this cause him to take on undue risk, or would fraud have happened if FTX had a view on altruistic returns similar to that of OP or SFF but linear moral views?
  • What is the cause of the exceptionally poor media perception of EA after FTX? When I search for "effective altruism news", around 90% of the articles I can find are negative and none are positive, including many with extremely negative opinions unrelated to FTX. One would expect at least some article saying "Here's why donating to effective causes is still good". (In no way do I want to diminish the harms done to customers whose money was gambled away, but it seems prudent to investigate the harms to EA per se.)

My guess is that this hasn't been done simply because it's a lot of work (perhaps 100 interviews and one person-year of work), no one thinks it's their job, and conducting such an investigation would somewhat entail someone both speaking for the entire EA movement and criticizing powerful people and organizations.

See also: Ryan Carey's comment

2-year update on infant outreach

To our knowledge, there have been no significant infant outreach efforts in the past two years. We are deeply saddened by this development, because by now there could have been two full generations of babies, including community builders who would go on to attract even more talent. However, one silver lining is that no large-scale financial fraud has been committed by EA infants.

We think the importance of infant outreach is higher than ever, and still largely endorse this post. However, given FTX events, there are a few changes we would make, including a decreased focus on galactic-scale ambition and especially some way to select against sociopathic and risk-seeking infants. We tentatively propose that future programs favor infants who share their toys, are wary of infants who take others' toys without giving them back, and never support infants who, when playing with blocks, try to construct tall towers that have high risk of collapse.
