This is a special post for quick takes by Aaron Bergman. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

There's a question on the forum user survey:

How much do you trust other EA Forum users to be genuinely interested in making the world better using EA principles?

This is one thing I've updated down quite a bit over the last year. 

It seems to me that relatively few self-identified EA donors mostly or entirely give to the organization/whatever that they would explicitly endorse as being the single best recipient of a marginal dollar (do others disagree?)

Of course the more important question is whether most EA-inspired dollars are given in such a way (rather than most donors). Unfortunately, I think the answer to this is "no" as well, seeing as OpenPhil continues to donate a majority of dollars to human global health and development[1] (I threw together a Claude artifact that lets you get a decent picture of how OpenPhil has funded cause areas over time and in aggregate)[2]

Edit: to clarify, it could be the case that others have object-level disagreements about what the best use of a marginal dollar is. Clearly this is sometimes the case, but it's not what I am getting at here. I am trying to get at the phenomenon where people implicitly say/reason "yes, EA principles imply th... (read more)

In your original post you talk about explicit reasoning; in your later edit, you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.

The phenomenon you're looking at, for instance, is:

"I am trying to get at the phenomenon where people implicitly say/reason "yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead."

And I think this might just be an ~empty set, compared to people having different object-level beliefs about what EA principles are or imply they should do, and who also disagree with you on what the best thing to do would be.[1] I really don't think there are many people saying "the best thing to do is donate to X, but I will donate to Y". (References please if so - clarification in footnotes[2]) Even on OpenPhil, I think Dustin just genuinely believes worldview diversification is the best thing, so there's no contradiction there where he implies the best thing would be to do X but in practice does Y.

I think causing this to 'update downwa... (read more)

4
Aaron Bergman
Thanks and I think your second footnote makes an excellent distinction that I failed to get across well in my post.

I do think it’s at least directionally an “EA principle” that “best” and “right” should go together, although of course there’s plenty of room for naive first-order calculation critiques, heuristics/intuitions/norms that might push against some less nuanced understanding of “best”. I still think there’s a useful conceptual distinction to be made between these terms, but maybe those ancillary (for lack of a better word) considerations relevant to what one thinks is the “best” use of money blur the line enough to make it too difficult to distinguish these in practice.

Re: your last paragraph, I want to emphasize that my dispute is with the terms “using EA principles”. I have no doubt whatsoever about the first part, “genuinely interested in making the world better”
4
JWS 🔸
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing of it is a bit off though:

* I accept that you didn't intend your framing to be insulting to others, but using "updating down" about the "genuine interest" of others read as hurtful on my first read. As a (relative to EA) high contextualiser it's the thing that stood out for me, so I'm glad you endorse that the 'genuine interest' part isn't what you're focusing on, and you could probably reframe your critique without it.
* My current understanding of your position is that it is actually: "I've come to realise over the last year that many people in EA aren't directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective."[1] To me, this claim is about the object-level disagreement on what EA principles imply.
* However, in your response to Jason you say “it’s possible I’m mistaken over the degree to which ‘direct resources to the place you think needs them most’ is a consensus-EA principle”, which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]

1. ^ Secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year"
2. ^ For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing 'the most good' (I think this is separable from OP's commitment to worldview diversification).
9
Jason
It seems a bit harsh to treat other user-donors' disagreement with your views on concentrating funding on their top-choice org (or even cause area) as significant evidence against the proposition that they are "genuinely interested in making the world better using EA principles."

I think a world in which everyone did this would have some significant drawbacks. While I understand how that approach would make sense through an individual lens, and am open to the idea that people should concentrate their giving more, I'd submit that we are trying to do the most good collectively.

For instance: org funding is already too concentrated on a too-small number of donors. If (say) each EA is donating to an average of 5 orgs, then a norm of giving 100% to a single org would decrease the average number of donors per org by 80%. That would impose significant risks on orgs even if their total funding level was not changed.

It's also plausible that the number of first-place votes an org (or even a cause area) would get isn't a super-strong reflection of overall community sentiment. If a wide range of people identified Org X as in their top 10%, then that likely points to some collective wisdom about Org X's cost-effectiveness even if no one has them at number 1. Moreover, spreading the wealth can be seen as deferring to broader community views to some extent -- which could be beneficial insofar as one found little reason to believe that wealthier community members are better at deciding where donation dollars should go than the community's collective wisdom.

Thus, there are reasons -- other than a lack of genuine interest in EA principles by donors -- that donors might reasonably choose to act in accordance with a practice of donation spreading.
6
Aaron Bergman
Thanks, it’s possible I’m mistaken about the degree to which “direct resources to the place you think needs them most” is a consensus-EA principle. Also, I recognize that “genuinely interested in making the world better using EA principles” is implicitly value-laden, and to be clear I do wish it were more the case. But I genuinely intend my claim to be an observation that might have pessimistic implications depending on people’s other beliefs, rather than an insult or anything like it, if that makes any sense.

A couple takes from Twitter on the value of merch and signaling that I think are worth sharing here:

1) [embedded tweet]

2) [embedded tweet]

2
yanni kyriacos
Media is often bought on a CPM basis (cost per thousand views). A display ad on LinkedIn, for example, might cost $30 CPM. So yeah, I think merch is probably underrated.
2
NickLaing
Love this quick take, and would appreciate more similar short fun/funny takes to lift the mood :D.
7
Aaron Bergman
Boy do I have a website for you (twitter.com)! (I unironically like twitter for the lower stakes and less insanely high implicit standards) On mobile now so can’t add image but https://x.com/aaronbergman18/status/1782164275731001368?s=46

I made a custom GPT that is just normal, fully functional ChatGPT-4, but I will donate any revenue this generates[1] to effective charities. 

Presenting: Donation Printer 

  1. ^

    OpenAI is rolling out monetization for custom GPTs:

    Builders can earn based on GPT usage

    In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

Interesting that the Animal Welfare Fund gives out so few small grants relative to the Infrastructure and Long Term Future funds (Global Health and Development has only given out 20 grants, all very large, so seems to be a more fundamentally different type of thing(?)). Data here.

A few stats:

  • The 25th percentile AWF grant was $24,250, compared to $5,802 for Infrastructure and $7,700 for LTFF (and the comparison looks basically the same for medians).
  • AWF has made just nine grants of less than $10k, compared to 163 (Infrastructure) and 132 (LTFF).

Proportions under $threshold 

Fund | prop_under_1k | prop_under_2500 | prop_under_5k | prop_under_10k
Animal Welfare Fund | 0.000 | 0.004 | 0.012 | 0.036
EA Infrastructure Fund | 0.020 | 0.086 | 0.194 | 0.359
Global Health and Development Fund | 0.000 | 0.000 | 0.000 | 0.000
Long-Term Future Fund | 0.007 | 0.068 | 0.163 | 0.308

Grants under $threshold 

Fund | n | under_2500 | under_5k | under_10k | under_250k | under_500k
Animal Welfare Fund | 250 | 1 | 3 | 9 | 243 | 248
EA Infrastructure Fund | 454 | 39 | 88 | 163 | 440 | 453
Global Health and Development Fund | 20 | 0 | 0 | 0 | 5 | 7
Long-Term Future Fund | 429 | 29 | 70 | 132 | 419 | 429

Summary stats (rounded)

Fund | n | median | mean | q1 | q3 | total
Animal Welfare Fund | 250 | $50,000 | $62,188 | $24,250 | $76,000 | $15,546,957
EA Infrastructure Fund | 454 | $15,319 | $41,331 | $5,802 | $45,000 | $18,764,097
... (read more)
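(For anyone who wants to poke at this themselves: a minimal sketch of how stats like the above could be reproduced from the linked grants data. This is my own illustration rather than the code actually used, and it assumes a CSV export with "fund" and "amount" columns.)

```python
import pandas as pd

# Hypothetical filename; the real export is linked above ("Data here").
grants = pd.read_csv("ea_funds_grants.csv")

# Per-fund summary stats (n, median, mean, quartiles, total).
summary = grants.groupby("fund")["amount"].agg(
    n="count",
    median="median",
    mean="mean",
    q1=lambda s: s.quantile(0.25),
    q3=lambda s: s.quantile(0.75),
    total="sum",
)

# Proportion of each fund's grants under various thresholds.
for threshold in (1_000, 2_500, 5_000, 10_000):
    summary[f"prop_under_{threshold}"] = grants.groupby("fund")["amount"].apply(
        lambda s, t=threshold: (s < t).mean()
    )

print(summary.round(3))
```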
7
Jason
This is not surprising to me given the different historical funding situations in the relevant cause areas, the sense that animal welfare and global health are not talent-constrained as much as funding-constrained, and the clearer presence of strong orgs in those areas with funding gaps. For instance:

* There are 15 references to "upskill" (or variants) in the list of microgrants, and it's often hard to justify an upskilling grant in animal welfare given the funding gaps in good, shovel-ready animal-welfare projects.
* Likewise, 10 references to "study," 12 to "development," 87 to "research" (although this can have many meanings), 17 to variants of "fellow," etc.
* There are 21 references to "part-time," and relatively small, short blocks of time may align better with community building or small research projects than with (e.g.) running a corporate campaign.
5
Charles Dillon
Seems pretty unsurprising - the animal welfare fund is mostly giving to orgs, while the others give to small groups or individuals for upskilling/outreach frequently.
8
MichaelStJules
I think the differences between the LTFF and AWF are largely explained by differences in salary expectations/standards between the cause areas. There are small groups and individuals getting money from the AWF, and they tend to get much less for similar duration projects. Salaries in effective animal advocacy are pretty consistently substantially lower than in AI safety (and software/ML, which AI safety employers and grantmakers might try to compete with somewhat), with some exceptions. This is true even for work in high-income countries like the US and the UK. And, of course, salary expectations are even lower in low- and middle-income countries, which are an area of focus of the AWF (within neglected regions). Plus, many AI safety folks are in the Bay Area specifically, which is pretty expensive (although animal advocates in London also aren't paid as much).
8
Aaron Bergman
Yeah, but my (implicit, should have made explicit lol) question is “why is this the case?” Like, at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (e.g. older field -> organizations are better developed) but they’d all be post hoc.
6
Charles Dillon
I think getting enough people interested in working on animal welfare has not usually been the bottleneck, relative to money to directly deploy on projects, which tend to be larger.
2
Aaron Bergman
This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.
4
Charles Dillon
I don't understand why you think this is the case. If you think of the "distribution of grants given" as a sum of multiple different distributions (e.g. upskilling, events, and funding programmes) of significantly varying importance across cause areas, then more or less dropping the first two would give your overall distribution a very different shape.
5
Aaron Bergman
Yeah you're right, not sure what I missed on the first read
4
MHR
Very interesting, thanks for pulling this data!

FYI talks from EA Global (or at least those that are public on YouTube) are on a podcast feed for your listening convenience!

This was mentioned a few weeks ago but thought it was worth advertising once more with the links and such in a top-level post. 

Recently updated the feed with a few dozen from the recent EA Global London and EAGxAustralia 2023. Comments and suggestions welcome of course

According to Kevin Esvelt on the recent 80,000 Hours podcast (excellent btw, mostly on biosecurity), eliminating the New World screwworm could be an important farmed animal welfare (it infects livestock), global health (it infects humans), development (it hurts economies), and science/innovation intervention, and most notably a quasi-longtermist wild animal suffering intervention.

What's more, if you think there’s a non-trivial chance of human disempowerment, societal collapse, or human extinction in the next 10 years, this would be important to do ASAP because we may not be able to later.

From the episode:

Kevin Esvelt: 

...

But from an animal wellbeing perspective, in addition to the human development, the typical lifetime of an insect species is several million years. So 10^6 years times 10^9 hosts per year means an expected 10^15 mammals and birds devoured alive by flesh-eating maggots. For comparison, if we continue factory farming for another 100 years, that would be 10^13 broiler hens and pigs. So unless it’s 100 times worse to be a factory-farmed broiler hen than it is to be devoured alive by flesh-eating maggots, then when you integrate over the future, it is more important for animal we

... (read more)
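(To make the arithmetic in that quote explicit, here's a tiny back-of-envelope sketch; this is just my restatement of the numbers Esvelt gives, not anything additional from the episode.)

```python
# Esvelt's rough numbers, as quoted above.
insect_species_lifetime_years = 10**6   # typical lifetime of an insect species
hosts_per_year = 10**9                  # mammals and birds infested per year

expected_hosts = insect_species_lifetime_years * hosts_per_year
print(f"{expected_hosts:.0e}")          # 1e+15 animals devoured alive over the species' remaining lifetime

factory_farmed_century = 10**13         # broiler hens and pigs over another 100 years of factory farming
print(expected_hosts / factory_farmed_century)  # 100.0 -> the "100 times worse" comparison in the quote
```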

Random sorta gimmicky AI safety community building idea: tabling at universities, but with a couple of laptops signed into Claude Pro with different accounts. Encourage students (and profs) to try giving it some hard question from e.g. a problem set and see how it performs. Ideally have a big monitor for onlookers to easily see.

Most college students are probably still using ChatGPT-3.5, if they use LLMs at all. There’s a big delta now between that and the frontier.

I have a vague fear that this doesn't do well on the 'try not to have the main net effect be AI hypebuilding' heuristic.

I'm pretty happy with how this "Where should I donate, under my values?" Manifold market has been turning out. Of course all the usual caveats pertaining to basically-fake "prediction" markets apply, but given the selection effects of who spends mana on an esoteric market like this, I put non-trivial weight on the (live) outcomes.

I guess I'd encourage people with a bit more money to donate to do something similar (or I guess defer, if you think I'm right about ethics!), if just as one addition to your portfolio of donation-informing considerations.

4
Eevee🔹
This is a really interesting idea! What are your values, so I can make an informed decision?

Thanks! Let me write them as a loss function in python (ha)

For real though:

  • Some flavor of hedonic utilitarianism
    • I guess I should say I have moral uncertainty (which I endorse as a thing) but eh I'm pretty convinced
  • Longtermism as explicitly defined is true
    • Don't necessarily endorse the cluster of beliefs that tend to come along for the ride though
  • "Suffering focused total utilitarian" is the annoying phrase I made up for myself
    • I think many (most?) self-described total utilitarians give too little consideration/weight to suffering, and I don't think it really matters (if there's a fact of the matter) whether this is because of empirical or moral beliefs
    • Maybe my most substantive deviation from the default TU package is the following (defended here):
      • "Under a form of utilitarianism that places happiness and suffering on the same moral axis and allows that the former can be traded off against the latter, one might nevertheless conclude that some instantiations of suffering cannot be offset or justified by even an arbitrarily large amount of wellbeing."
  • Moral realism for basically all the reasons described by Rawlette on 80k but I don't think this really matters after conditioning on normati
... (read more)
4
Eevee🔹
I was inspired to create this market! I would appreciate it if you weighed in. :)
4
Aaron Bergman
Some shrinsight (shrimpsight?) from the comments:

I went ahead and made an "Evergreen" tag as proposed in my quick take from a while back: 

Meant to highlight that a relatively old post (perhaps 1 year or older?) still provides object-level value to read, i.e., above and beyond:

  1. Its value as a cultural or historical artifact
  2. The value of more recent work it influenced or inspired
6
quinn
cool, but I don't think a year is right. I would have said 3 years. 
4
Aaron Bergman
I think the proxy question is “after what period of time is it reasonable to assume that any work building on or expanding the post would have been published?” and my intuition here is about 1 year, but I would be interested in hearing others' thoughts
3
trevor1
I think that the number of years depends on how fast EA is growing.
4
Larks
Hopefully people will be sparing in applying it to their own recent posts!
2
Aaron Bergman
Eh I'm not actually sure how bad this would be. Of course it could be overdone, but a post's author is its obvious best advocate, and a simple "I think this deserves more attention" vote doesn't seem necessarily illegitimate to me

This post is half object level, half an experiment with audiopen.ai, a “semicoherent audio monologue ramble → prose” AI program (presumably GPT-3.5/4 based)

In the interest of the latter objective, I’m including 3 mostly-redundant subsections: 

  1. A ’final’ mostly AI-written text, edited and slightly expanded just enough so that I endorse it in full (though I recognize it’s not amazing or close to optimal) 
  2. The raw AI output
  3. The raw transcript


1) Dubious asymmetry argument in WWOTF

In Chapter 9 of his book, What We Owe the Future, Will MacAskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want - either for themselves or for others - and thus good outcomes are easily explained as the natural consequence of agents deploying resources for their goals. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.

MacAskill argues that in a future with continued economic growth and no existential risk, we will likely direct more resources towards doing good things due to self-interest and increase... (read more)

1
Puggy Knudson
I think there’s a case to be made for exploring the wide range of mediocre outcomes the world could become. Recent history would indicate that things are getting better faster though. I think MacAskill’s bias towards a range of positive future outcomes is justified, but I think you agree too. Maybe you could turn this into a call for more research into the causes of mediocre value lock-in. Like why have we had periods of growth and collapse, why do some regions regress, what tools can society use to protect against sinusoidal growth rates.

A few Forum meta things you might find useful or interesting:

  1.  Two super basic interactive data viz apps 
    1. How often (in absolute and relative terms) a given forum topic appears with another given topic
    2. Visualizing the popularity of various tags
  2. An updated Forum scrape including the full text and attributes of 10k-ish posts as of Christmas, '22
    1. See the data without full text in Google Sheets here
    2. Post explaining version 1.0 from a few months back
  3. From the data in no. 2, a few effortposts that never garnered an accordant amount of attention (qualitatively filtered from posts with (1) long read times, (2) modest positive karma, and (3) not a ton of comments).
    1.  Column labels should be (left to right):
      1. Title/link
      2. Author(s)
      3. Date posted
      4. Karma (as of a week ago)
      5. Comments (as of a week ago)
 
Open Philanthropy: Our Approach to Recruiting a Strong Team | pmk | 10/23/2021 | 11 | 0
Histories of Value Lock-in and Ideology Critique | clem | 9/2/2022 | 11 | 1
Why I think strong general AI is coming soon | porby | 9/28/2022 | 13 | 1
Anthropics and the Universal Distribution | Joe_Carlsmith | 11/28/2021 | 18 | 0
Range and Forecasting Accuracy | niplav | 5/27/2022 | 12 | 2
A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming | tu
... (read more)

Made a podcast feed with EAG talks. Now has both the recent Bay Area and London ones:

Full vids on the CEA Youtube page

So the EA Forum has, like, an ancestor? Is this common knowledge? Lol

Felicifia: not functional anymore but still available to view. Learned about it thanks to a tweet from Jacy

From Felicifia Is No Longer Accepting New Users:

Update: threw together

  • some data with authors, post title names, date, and number of replies (and messed one section up so some rows are missing links)
  • A rather long PDF with the posts and replies together (for quick keyword searching), with decent but not great formatting 
2
Peter Wildeford
Wow, blast from the past!

LessWrong has a new feature/type of post called "Dialogues". I'm pretty excited to use it, and hope that if it seems usable, reader friendly, and generally good the EA Forum will eventually adopt it as well.

A (potential) issue with MacAskill's presentation of moral uncertainty

Not able to write a real post about this atm, though I think it deserves one. 

MacAskill makes a similar point in WWOTF, but IMO the best and most decision-relevant quote comes from his second appearance on the 80k podcast:

There are possible views in which you should give more weight to suffering...I think we should take that into account too, but then what happens? You end up with kind of a mix between the two, supposing you were 50/50 between classical utilitarian view and just strict negative utilitarian view. Then I think on the natural way of making the comparison between the two views, you give suffering twice as much weight as you otherwise would. 

I don't think the second bolded sentence follows in any objective or natural manner from the first. Rather, this reasoning takes a distinctly total utilitarian meta-level perspective, summing the various signs of utility and then implicitly considering them under total utilitarianism. 
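For what it's worth, here is one way to reconstruct the implicit calculation (my sketch, assuming a simple maximize-expected-choiceworthiness aggregation that treats the two views' units as comparable, with H total happiness and S total suffering):

$$E[V] = 0.5\,(H - S) + 0.5\,(-S) = 0.5\,H - S,$$

which ranks outcomes the same way as H - 2S, i.e., suffering counted twice as heavily as on the classical view alone. The summation itself is the characteristically total utilitarian move I'm pointing at.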

Even granting that the moral arithmetic is appropriate and correct, it's not at all clear what to do once the 2:1 accounting is complete. MacAskill's suffering-focused ... (read more)

Idea/suggestion: an "Evergreen" tag, for old (6 months? 1 year? 3 years?) posts (comments?), to indicate that they're still worth reading (to me, ideally for their intended value/arguments rather than as instructive historical/cultural artifacts)

As an example, I'd highlight Log Scales of Pleasure and Pain, which is just about 4 years old now.

I know I could just create a tag, and maybe I will, but want to hear reactions and maybe generate common knowledge.

6
Nathan Young
I think we want someone to push them back into the discussion.  Or you know, have editable wiki versions of them.
2
quinn
yeah some posts are sufficiently 1. good/useful, and 2. generic/not overly invested in one particular author's voice or particularities that they make more sense as a wiki entry than a "blog"-adjacent post. 

Hypothesis: from the perspective of currently living humans and those who will be born in the current <4% growth regime only (i.e. pre-AGI takeoff or, I guess, stagnation), donations currently earmarked for large-scale GHW, GiveWell-type interventions should be invested (maybe in tech/AI-correlated securities) instead, with the intent of being deployed for the same general category of beneficiaries in <25 (maybe even <1) years.

The arguments are similar to those for old school "patient philanthropy" except now in particular seems exceptionally uncerta... (read more)

I'm skeptical of this take. If you think sufficiently transformative + aligned AI is likely in the next <25 years, then from the perspective of currently living humans and those who will be born in the current <4% growth regime, surviving until transformative AI arrives would be a huge priority. Under that view, you should aim to deploy resources as fast as possible to lifesaving interventions rather than sitting on them.

The recent 80k podcast on the contingency of abolition got me wondering what, if anything, the fact of slavery's abolition says about the ex ante probability of abolition - or more generally, what one observation of a binary random variable says about its parameter p, as in a Bernoulli/Binomial model.

[image: Bernoulli vs Binomial Distribution: What's the Difference?]

Turns out there is an answer (!), and it's found starting in paragraph 3 of subsection 1 of section 3 of the Binomial distribution Wikipedia page:

A closed form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a genera

... (read more)
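(A minimal sketch of that closed-form estimator, as I understand it; this uses the standard Beta-Binomial posterior mean rather than quoting the rest of the Wikipedia passage.)

```python
def beta_posterior_mean(k: int, n: int, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of a Bernoulli/Binomial parameter p under a Beta(alpha, beta) prior,
    after observing k successes in n trials: (k + alpha) / (n + alpha + beta)."""
    return (k + alpha) / (n + alpha + beta)

# One observed "success" (abolition happened) in one trial, with a uniform Beta(1, 1) prior:
print(beta_posterior_mean(k=1, n=1))  # 0.666..., i.e. Laplace's rule of succession, (k + 1) / (n + 2)
```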
4
Peter Wildeford
The uniform prior case just generalizes to Laplace's Law of Succession, right?
2
Aaron Bergman
In terms of result, yeah it does, but I sorta half-intentionally left that out because I don't actually think LLS is true as it seems to often be stated. Why the strikethrough: after  writing the shortform, I get that e.g., "if we know nothing more about them" and "in the absence of additional information" mean "conditional on a uniform prior," but I didn't get that before. And Wikipedia's explanation of the rule,  seems both unconvincing as stated and, if assumed to be true, doesn't depend on that crucial assumption
3
Robi Rahman
The last line contains a typo, right?
2
Aaron Bergman
Fixed, thanks!

I tried making a shortform -> Twitter bot (ie tweet each new top level ~quick take~) and long story short it stopped working and wasn't great to begin with.

I feel like this is the kind of thing someone else might be able to do relatively easily. If so, I and I think much of EA Twitter would appreciate it very much! In case it's helpful for this, a quick takes RSS feed is at https://ea.greaterwrong.com/shortform?format=rss
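For anyone tempted to take a crack at it, here's a rough sketch of what the ingestion half might look like (my illustration, not the code from my broken bot; actually posting to Twitter/X would still need an API client, credentials, and some persistence for already-seen items):

```python
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://ea.greaterwrong.com/shortform?format=rss"

seen_ids: set[str] = set()  # a real bot would persist this between runs

def new_quick_takes():
    """Yield tweet-sized strings for quick takes not seen before."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        entry_id = getattr(entry, "id", entry.link)
        if entry_id not in seen_ids:
            seen_ids.add(entry_id)
            yield f"{entry.title}\n{entry.link}"

for tweet_text in new_quick_takes():
    print(tweet_text)  # replace with an actual "post tweet" call
```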

2
rime
I would be interested in following this bot if it were made. Thanks for trying!

New fish data with estimated individuals killed per country/year/species  (super unreliable, read below if you're gonna use!) 

That^ is too big for Google Sheets, so here's the same thing just without a breakdown by country that you should be able to open easily if you want to take a look.

Basically the UN data generally used for tracking/analyzing the amount of fish and other marine life captured/farmed and killed only tracks the total weight captured for a given country-year-species (or group of species). 

I had chatGPT-4 provide estimated lo... (read more)
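(For context, a hedged sketch of the general kind of conversion involved; this is my reconstruction of the approach rather than the actual spreadsheet logic, and the per-species mean weights are exactly the unreliable, ChatGPT-estimated part flagged above.)

```python
def estimate_individuals(total_capture_kg: float, mean_individual_weight_kg: float) -> float:
    """Rough individual count for one country-year-species row: reported weight / typical individual weight."""
    return total_capture_kg / mean_individual_weight_kg

# e.g. 1,000 tonnes of a species averaging ~0.5 kg per fish -> ~2 million individuals
print(f"{estimate_individuals(1_000_000, 0.5):,.0f}")
```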

Note: this sounds like it was written by chatGPT because it basically was (from a recorded ramble)🤷‍
 

I believe the Forum could benefit from a Shorterform page, as the current Shortform page, intended to be a more casual and relaxed alternative to main posts, still seems to maintain high standards. This is likely due to the impressive competence of contributors who often submit detailed and well-thought-out content. While some entries are just a few well-written sentences, others resemble blog posts in length and depth.

As such, I find myself hesitant... (read more)

WWOTF: what did the publisher cut? [answer: nothing]

Contextual note: this post is essentially a null result. It seemed inappropriate both as a top-level post and as an abandoned Google Doc, so I’ve decided to put out the key bits (i.e., everything below) as Shortform. Feel free to comment/message me if you think that was the wrong call! 

Actual post

On his recent appearance on the 80,000 Hours Podcast, Will MacAskill noted that Doing Good Better was significantly influenced by the book’s publisher:[1] 

Rob Wiblin: ...But in 2014 you wrote

... (read more)

A resource that might be useful: https://tinyapps.org/ 

 

There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other more reasonable uses) made me a full-quality, fully intra-linked >3600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (works best with old-timey simpler sites like that)

New Thing

Last week I complained about not being able to see all the top shortform posts in one list. Thanks to Lorenzo for pointing me to the next best option: 

...the closest I found is https://forum.effectivealtruism.org/allPosts?sortedBy=topAdjusted&timeframe=yearly&filter=all, you can see the inflation-adjusted top posts and shortforms by year.

It wasn't too hard to put together a text doc with (at least some of each of) all 1470ish shortform posts, which you can view or download here.

  • Pros: (practically) infinite scroll of insight porn 
... (read more)

Infinitely easier said than done, of course, but some Shortform feedback/requests

  1. The link to get here from the main page is awfully small and inconspicuous (1 of 145 individual links on the page according to a Chrome extension)
    1. I can imagine it being placed near, or styled similarly to:
      1. "All Posts" (top of sidebar)
      2. "Recommendations" in the center
      3. "Frontpage Posts", but to the main section's side or maybe as a replacement for it you can easily toggle back and forth from
  2. Would be cool to be able to sort and aggregate like with the main posts (nothing to filter by afaik
... (read more)
3
Lorenzo Buonanno🔸
For 2.a the closest I found is https://forum.effectivealtruism.org/allPosts?sortedBy=topAdjusted&timeframe=yearly&filter=all, you can see the inflation-adjusted top posts and shortforms by year. For 1 it's probably best to post in the EA Forum feature suggestion thread
3
Aaron Bergman
Late but thanks on both, and commented there! 

Events as evidence vs. spotlights

Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts cautions humility)

Event as evidence

  • The default: normal old Bayesian evidence
    • The realm of "updates," "priors," and "credences" 
  • Pseudo-definition: Induces [1] a change to or within a model (of whatever the model's user is trying to understand)
  • Corresponds to models that are (as is often assumed):
    1. Well-de
... (read more)

EAG(x)s should have a lower acceptance bar. I find it very hard to believe that accepting the marginal rejectee would be bad on net.

Are you factoring in that CEA pays a few hundred bucks per attendee? I'd have a high-ish bar to pay that much for someone to go to a conference myself. Altho I don't have a good sense of what the marginal attendee/rejectee looks like.

3
Chris Leong
What is the acceptance bar?

Ok so things that get posted in the Shortform tab also appear in your (my) shortform post, which can be edited to not have the title "___'s shortform" and also has a real post body that is empty by default but you can just put stuff in.

There's also the usual "frontpage" checkbox, so I assume an individual's own shortform page can appear alongside normal posts(?).

The link is: [Draft] Used to be called "Aaron Bergman's shortform" (or smth)

I assume only I can see this but gonna log out and check

[This comment is no longer endorsed by its author]

Effective Altruism Georgetown will be interviewing Rob Wiblin for our inaugural podcast episode this Friday! What should we ask him? 

[comment deleted]