Jason

16577 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA so far has been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts
2


Comments
1937

Topic contributions
2

(I didn't vote on any of the comments.)

Some voters are known to downvote when they think the net karma on a post is too high. If asked what counts as too high, they would probably point to the purposes of the karma system, like deciding which comments to emphasize to Forum visitors, signaling what kind of content the community finds valuable, providing adequate incentives to produce valuable content, etc.

I think that can be a valid voting philosophy (although I sometimes have concerns about it in practice, especially where strong downvotes are used). While the karma counts for the comments here do not seem inflated to me by current karma-inflation standards, I do not think an inference of bad faith or even "suspicious voting behavior" is plausibly justified here.

Last night, I quickly thumbed through the websites (and Ambitious Impact listings) of some smaller charities as I started thinking about my end-of-year giving plans.

I'd encourage smaller orgs to include something on their website dated within the past few months -- an event, a dated blog post, etc. -- that the median website visitor will see fairly quickly. This is particularly important if the website refers to more distant events, has older-dated blog posts, etc. Without some indicator of freshness, the visitor may be left uncertain whether the org is still meaningfully active and thus potentially worth funding. After all, the closure rate for small, new orgs is not low.

[Caveats: the thumbing was done on mobile, while taking a bath, and at the end of a difficult day, so there could easily have been misses on my part.]

Is information about the total spend on Uni Groups in this period available? [1] I only saw figures for Group Support Funding, which I take to be only a small fraction of the whole.

There are a few reasons that information would likely be helpful:

(1) It allows smaller would-be donors to evaluate part of CEA's cost-effectiveness;

(2) It provides information about the general location of the funding bar for meta work, which may help those considering pitching a meta project judge whether their ideas are cost-effective enough to have a good chance of attracting funding; and

(3) It would inform community discussion of tradeoffs made by the Uni Groups team (e.g., prioritization of super-elite universities).

  1. ^

    I think it would be okay for the stated figure to not include Uni Groups' fair share of CEA & EVF overhead if that information is not readily available, although the existence of that overhead should be noted if that is the case.

Risk factors for psychological harm that I believe to be predictable, neglected, and tractable to address on a shorter-term scale without drastic systemic change.

My general reaction is that some of the issues you identify may implicate moderately deep structural problems and/or involve fairly significant tradeoffs. If so, that wouldn't establish that nothing should be done about them, but it would make proposed solutions that don't sufficiently grapple with those structural problems and tradeoffs unlikely to gain traction.

For example, on the issue of Forum drama (which I've chosen because the discussion and proposals feel a bit more concrete to me):

The case of Lightcone et al. v. Nonlinear et al. arose from an attempt to protect community members from a perceived bad actor. Without litigating the merits of that dispute -- especially since I tried to stay away from it as much as possible! -- there still has to be a means of protecting community members from perceived bad actors. It's not clear to me that there existed a place to try this matter other than the Court of Public Opinion (EA Forum Division). A fair amount of ink has been spilled on the bad-actor problem more generally, but EA is decentralized enough that the non-messy solutions wouldn't work well either. Likewise, the recent case of In re Manifest was -- to at least some of the disputants -- about the bounds of what was and wasn't acceptable in the community. If there's nothing like the President or Congress of EA (and there likely shouldn't be, in my view), only the community can make those decisions.

Dealing with alleged bad actors and defending core norms are important tasks, so if they aren't going to be handled on the Forum, they will need to be handled somewhere else. I think you're right that people tend to behave better in person than online, but it's not clear how these kinds of issues could be adequately hashed out in person. For starters, that gives a lot of power to whoever hands out the invites to in-person events (and controls the limited opportunities for one-to-many communication there). Spending more time on community drama could also derail the stated purposes of those events.

More broadly, taking the community out of adjudicating these kinds of disputes would mean setting up some centralized authority to do so -- maybe an elected representative assembly or something. That's possible, and maybe desirable -- but it would be a major structural change.

Maybe you could make the online discourse better -- but in this case it would have to be through the slow, time-consuming work of building consensus, not by moderator fiat. Finding people with the independence, skill set, community buy-in, and time/flexibility to manage big on-Forum disputes much more tightly than the mods have would be tough. It's an open Internet, and if enough of the community thinks topics that need to be discussed are being suppressed, there's always Reddit.

Thanks. The self-evaluation explains that:

We are not assuming perfect Pledge retention. Instead, we are aiming to extrapolate from what we have seen so far (which is a decline in the proportion of Pledgers who give, but an increase in the average amount given when they do).

So I believe that chart is showing the combined effect of (a) imperfect retention and (b) increased average donations by retained donors (probably because "the average GWWC Pledger is fairly young, and people’s incomes tend to increase as they get older"). With a sufficiently large number of pledgers, considering the combined effect makes sense, because the two effects will partially cancel each other out (e.g., if a quarter of pledgers stop giving but the remaining donors' average gift grows by a third, total giving stays roughly flat). When considering a single megadonor, I think one has to consider the retention issue separately, because any increased-wealth effect is useless if there is 0% retention of that donor.

Also, to the extent that people were using the value of SBF's holdings in FTX & Alameda (rather than a rolling average of his past donations) in their analysis, they were already baking in the expectation that he would have more money to donate in years to come than he had available to donate at the time.

The norm around donating 10% is one of the places where EA has constructed a sort of "safe harbour," sending a message at least somewhat like: as long as you give 10% (and under certain circumstances less), you should feel good about yourself as an EA, feel supported, etc. In other words, the community ethos implicitly discourages feeling guilty about "only" donating 10 percent.

I'm not as convinced that we have established, and effectively communicated, that kind of safe harbour around certain other personal decisions, like career decisions. Thus, I don't know whether the soft 10 percent norm is representative of the community's norms and pressures around demandingness.

To be fair, it's easier to construct a safe harbour around money than around something like career decisions, because money can be split ten ways while we don't have ten careers to allocate.

Also, the base rate of people following through on their publicly professed charitable pledges is not 1. I'm not sure what base rate I'd use, but IIRC there is considerable dropout among those who take the 10 Percent Pledge.

If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. 

I doubt that Elizabeth -- or a meaningful number of her potential readers -- is considering whether to be associated with anti-vegan advocates on Facebook or any movement related to them. I read the discussion as mainly about epistemics and integrity (these words collectively appear ~30 times in the transcript) rather than object-level harms.

  • I think it's generally appropriate to be more concerned about policing epistemics and integrity in your own social movement than in others. This is in part about tractability -- do we have any reason to think any anti-vegan activist movement on Facebook cares about its epistemics? And if it does, do any of us have a solid reason to believe we would be effective in improving those epistemics?
  • It's OK to not want to affiliate with a movement whose epistemics and integrity you judge to be inadequate. The fact that there are other movements with worse epistemics and integrity out there isn't particularly relevant to that judgment call.
  • It's unclear whether anti-vegan activists on Facebook are even part of a broader epistemic community. EAs are, so an erosion of EA epistemic norms and integrity is reasonably likely to cause broader problems.
    • In particular, the stuff Elizabeth is concerned about gives off the aroma of ends-justify-the-means thinking to me at points. False or misleading presentations, especially ones that pose a risk of meaningful harm to the listener, are not an appropriate means of promoting dietary change. [1] Moreover, ends-justify-the-means rationalization is a particular risk for EAs, as we painfully found out ~2 years ago.
  1. ^

    I recognize there may be object-level disagreement here as to whether a given presentation is false, misleading, or poses a risk of meaningful harm.

One possible concern with this idea is that the project would probably take a lot of funding to launch. With Open Phil's financial distancing from EA Funds, my guess is that EAIF may often not be in an ideal position to serve as an early funder -- by which I mean one that comes on board before individual major funders -- of a seven-figure-a-year project.

I can envision some cases in which EAIF might be a better fit for seed funding, such as where funding would allow further development or preliminary testing of a big-project proposal, to the point that it could be better evaluated by funders who can consistently offer mid-six figures or more per year. It's unclear how well that would describe something like the FHI/West proposal, though.

I could easily be wrong (or there could already be enough major-funder interest to alleviate the concern in my first paragraph), and a broader discussion of EAIF's comparative advantages and disadvantages for projects with various characteristics might be helpful in any event.

I'd be worried that -- even assuming the funding did not actually influence the content of the speech -- the author being perceived as on the EA payroll would seriously diminish the effectiveness of this work. Maybe that is less true in the context of a professional journal where the author's reputation is well-known to the reader than it would be somewhere like Wired, though?
