I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can be helpful.
Last night, I quickly thumbed through the websites (and Ambitious Impact listings) of some smaller charities as I started thinking about my end-of-year giving plans.
I'd encourage smaller orgs to feature something on their website dated within the past few months -- an event, a dated blog post, etc. -- that the median website visitor will ~quickly see. This is particularly important if the website refers to more distant events, has older-dated blog posts, and so on. Without some indicator of freshness, the visitor may be left uncertain whether the org is still meaningfully active and thus potentially worth funding. After all, the closure rate for small, new orgs is not low.
[Caveats: the thumbing was on mobile, while taking a bath, and at the end of a difficult day, so there could easily have been misses on my part]
Is information about the total spend on Uni Groups in this period available? [1] I only saw figures for Group Support Funding, which I take to be a small fraction of the whole.
There are a few reasons that information would likely be helpful:
(1) It allows smaller would-be donors to evaluate part of CEA's cost-effectiveness;
(2) It provides information about the general location of the funding bar for meta work, which may help those considering pitching a meta project gauge whether their ideas are cost-effective enough to have a good chance of attracting funding; and
(3) It would inform community discussion of tradeoffs made by the Uni Groups team (e.g., prioritization of super-elite universities).
I think it would be okay for the stated figure to exclude Uni Groups' fair share of CEA & EVF overhead if that information is not readily available, although the existence of that overhead should be noted in that case.
Risk factors for psychological harm that I believe to be predictable, neglected, and tractable to address on a shorter-term scale without drastic systemic change.
My general reaction is that some of the issues you identify may implicate moderately deep structural issues and/or involve some fairly significant tradeoffs. If so, that wouldn't establish that nothing should be done about them, but it would make proposed solutions that don't sufficiently grapple with those structural issues and tradeoffs unlikely to gain traction.
For example, on the issue of Forum drama (which I've chosen because the discussion and proposals feel a bit more concrete to me):
The case of Lightcone et al. v. Nonlinear et al. related to an attempt to protect community members from a perceived bad actor. Without litigating the merits of that dispute -- especially since I tried to stay away from it as much as possible! -- there still has to be a means of protecting community members from perceived bad actors. It's not clear to me that there existed a place to try this matter other than the Court of Public Opinion (EA Forum Division). A fair amount of ink has been spilled on the bad-actor problem more generally, but EA is decentralized enough that the non-messy solutions generally wouldn't work well either. Likewise, the recent case of In re Manifest was -- to at least some of the disputants -- about the bounds of what was and wasn't acceptable in the community. If there's nothing like a President or Congress of EA (and there likely shouldn't be one, in my view), only the community can make those decisions.
Dealing with alleged bad actors and defending core norms are important, so if that isn't going to be done on the Forum, it will need to be done somewhere else. I think you're right that people tend to behave better in person than online, but it's not clear how these kinds of issues could be adequately hashed out in person. For starters, that would give a lot of power to whoever hands out the invites to the in-person events (and controls the limited opportunities for one-to-many communication). Spending more time on community drama could also derail the stated purposes of those events.
More broadly, taking the community out of adjudicating these kinds of disputes would mean setting up some centralized authority to do so -- maybe an elected representative assembly or something. That's possible, and maybe desirable -- but it would be a major structural change.
Maybe you could make the online discourse better -- but in this case, it would have to be through the slow, time-consuming work of building consensus, not by moderator fiat. Finding people with the independence, skill set, community buy-in, and time/flexibility to control big on-Forum disputes much more tightly than the mods have would be tough. It's an open Internet, and if enough of the community thinks topics that need to be discussed are being suppressed, there's always Reddit.
Thanks. The self-evaluation explains that:
We are not assuming perfect Pledge retention. Instead, we are aiming to extrapolate from what we have seen so far (which is a decline in the proportion of Pledgers who give, but an increase in the average amount given when they do).
So I believe that chart is showing the combined effect of (a) imperfect retention; and (b) increased average donations by retained donors (probably because "the average GWWC Pledger is fairly young, and people’s incomes tend to increase as they get older"). With a sufficiently large number of pledgers, considering the combined effect makes sense because the two effects will likely offset each other to some extent. When considering a single megadonor, though, I think one has to consider the retention issue separately, because any increased-wealth effect is useless if there is 0% retention of that donor.
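To make that interaction concrete, here is a minimal sketch with entirely made-up numbers (the function and every figure below are hypothetical illustrations, not GWWC's actual data or model):

```python
# Hypothetical illustration of retention vs. gift growth; all numbers are invented.

def expected_giving(n_donors: int, retention: float, gift_growth: float,
                    baseline_gift: float) -> float:
    """Expected total giving next year from a cohort of pledgers."""
    return n_donors * retention * baseline_gift * (1 + gift_growth)

# A pool of 1,000 pledgers: lower retention is partly offset by larger gifts.
pool = expected_giving(1000, retention=0.7, gift_growth=0.3, baseline_gift=5_000)
print(f"Pool expectation: ${pool:,.0f}")       # $4,550,000 vs. a $5,000,000 baseline

# A single megadonor: the same arithmetic, but all-or-nothing in practice.
# If that one donor is not retained, the increased-wealth effect contributes nothing.
mega = expected_giving(1, retention=0.0, gift_growth=0.3, baseline_gift=5_000_000)
print(f"Megadonor expectation: ${mega:,.0f}")  # $0
```

The pool's expectation degrades gracefully as retention falls; the single donor's does not, which is why the retention assumption has to be examined on its own in the megadonor case.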
Also, to the extent that people were using the value of SBF's holdings in FTX & Alameda (rather than a rolling average of his past donations) in the analysis, they were already baking in the expectation that he would have more money available to donate in future years than he did at the time.
The norm around donating 10% is one of the places where EA has constructed a sort of "safe harbour," sending a message at least somewhat like: "as long as you give 10% (and, under certain circumstances, less), you should feel good about yourself as an EA, feel supported, etc." In other words, the community ethos implicitly discourages feeling guilty about "only" donating 10%.
I'm not as convinced we have established, and effectively communicated, that kind of safe harbour around certain other personal decisions, like career decisions. Thus, I don't know if the soft 10% norm is representative of norms and pressures relating to demandingness.
To be fair, it's easier to construct a safe harbour around money than around something like career decisions, because money is divisible in a way careers are not -- we can give away 10% of an income, but we don't have ten careers to allocate.
If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs.
I doubt that Elizabeth -- or a meaningful number of her potential readers -- are considering whether to be associated with anti-vegan advocates on Facebook or any movement related to them. I read the discussion as mainly about epistemics and integrity (these words collectively appear ~30 times in the transcript) rather than object-level harms.
I recognize there may be object-level disagreement here as to whether a given presentation is false, misleading, or poses a risk of meaningful harm.
One possible concern with this idea is that the project would probably take a lot of funding to launch. With Open Phil's financial distancing from EA Funds, my guess is that EAIF may often not be in the ideal position to be an early funder of a seven-figure-a-year project -- by which I mean a funder that comes on board earlier than individual major funders do.
I can envision some cases in which EAIF might be a better fit for seed funding, such as cases where funding would allow further development or preliminary testing of a big-project proposal to the point where it could be better evaluated by funders who can consistently offer mid-six figures or more per year. It's unclear how well that would describe something like the FHI/West proposal, though.
I could easily be wrong (or there could already be enough major-funder interest to alleviate the concern in the first paragraph), and a broader discussion about EAIF's comparative advantages and disadvantages for various project characteristics might be helpful in any event.
I'd be worried that -- even assuming the funding did not actually influence the content of the speech -- the author being perceived as on the EA payroll would seriously diminish the effectiveness of this work. Maybe that is less true in the context of a professional journal where the author's reputation is well-known to the reader than it would be somewhere like Wired, though?
(I didn't vote on any of the comments.)
Some voters are known to downvote when they think the net karma on a post is too high. If asked what "too high" means, they would probably point to the purposes of the karma system: deciding which comments to emphasize to Forum visitors, signaling what kind of content the community finds valuable, providing adequate incentives to produce valuable content, etc.
I think that can be a valid voting philosophy (although I sometimes have various concerns about it in practice, especially where strong downvotes are used). While the karma counts for the comments here do not seem inflated to me by current karma-inflation standards, I do not think they plausibly justify an inference of bad faith or even "suspicious voting behavior."