This is a special post for quick takes by Ozzie Gooen. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I really don't like the trend of posts saying that "EA needs to do X" or "EAs should do Y".

EA is about cost-benefit analysis. The phrases "need" and "should" imply binaries/absolutes and very high confidence.

I'm sure there are thousands of interventions/measures that would be positive-EV for EA to engage with. I don't want to see thousands of posts loudly declaring "EA MUST ENACT MEASURE X" and "EAs SHOULD ALL DO THING Y," in cases where these mostly seem like un-vetted interesting ideas. 

In almost all cases where I see the phrase, I think it would be much better replaced with things like:
"Doing X would be high-EV"
"X could be very good for EA"
"Y: Cost and Benefits" (With information in the post arguing the benefits are worth it)
"Benefits|Upsides of X" (If you think the upsides are particularly underrepresented)

I think it's probably fine to use the word "need" either when it's paired with an outcome (EA needs to do more outreach to become more popular) or when the issue is fairly clearly existential (the US needs to ensure that nuclear risk is low). It's also fine to use should in the right context, but it's not a word to over-use. 

Related (and classic) post in case others aren't aware: EA should taboo "EA should".

Lizka makes a slightly different argument, but reaches a similar conclusion.

Strong disagree. If the proponent of an intervention/cause area believes its advancement is extremely high EV, such that they believe it would be very imprudent for EA resources not to advance it, they should use strong language.

I think EAs are too eager to hedge their language and use weak language regarding promising ideas.

For example, I have no compunction saying that Profit for Good (companies with charities in a vast-majority shareholder position) needs to be advanced by EA, in that I believe not advancing it results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.

https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the

I haven't noticed this trend, could you list a couple of articles like this? Or even DM me if you're not comfortable listing them here.

Some musicians have multiple alter-egos that they use to communicate information from different perspectives. MF Doom released albums under several alter-egos; he even used these aliases to criticize his previous aliases.

Some musicians, like Madonna, just continued to "re-invent" themselves every few years.

Youtube personalities often feature themselves dressed as different personalities to represent different viewpoints. 

It's really difficult to keep a single understood identity, while also conveying different kinds of information.

Narrow identities are important for a lot of reasons. I think the main one is predictability, similar to a company brand. If your identity seemed to dramatically change hour to hour, people wouldn't be able to predict your behavior, so fewer people could interact or engage with you in ways they'd feel comfortable with.

However, narrow identities can also be suffocating. They restrict what you can say and how people will interpret that. You can simply say more things in more ways if you can change identities. So having multiple identities can be a really useful tool.

Sadly, most academics and intellectuals can only really have one public identity.

---

EA researchers currently act this way.

In EA, it's generally really important to be seen as calibrated and reasonable, so people correspondingly prioritize that in their public (and then private) identities. I've done this. But it comes with a cost.

One obvious (though unorthodox) way around this is to allow researchers to post content under aliases. It could be fine if the identity of the author is known, as long as readers can keep these aliases distinct.

I've been considering how to best do this myself. My regular EA Forum name is just "Ozzie Gooen". Possible aliases would likely be adjustments to this name.

- "Angry Ozzie Gooen" (or "Disagreeable Ozzie Gooen")

- "Tech Bro Ozzie Gooen"

- "Utility-bot 352d3"

These would be used to communicate in very different styles, with me attempting to match what I'd expect readers to expect of those styles.

(Normally this is done to represent viewpoints other than what they have, but sometimes it's to represent viewpoints they have, but wouldn't normally share)

Facebook Discussion

[anonymous]:

As someone coming from the crypto space, I think carefully about which identity has what kind of content attached, and whether they can be cross-linked. Both for privacy and engagement purposes. Usernames instead of real names work well for this.

I don't see why researchers or EAs can't do that.

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it could be worth noting that this could have significant repercussions for areas outside of EA: the ones we may be diverting them from. We may be diverting a significant fraction of the future "best and brightest" away from non-EA fields.

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

A few junior/summer effective altruism related research fellowships are ending, and I’m getting to see some of the research pitches.

Lots of confident-looking pictures of people with fancy and impressive sounding projects.

I want to flag that many of the most senior people I know around longtermism are really confused about stuff. And I’m personally often pretty skeptical of those who don’t seem confused.

So I think a good proposal isn’t something like, “What should the EU do about X-risks?” It’s much more like, “A light summary of what a few people so far think about this, and a few considerations that they haven’t yet flagged, but note that I’m really unsure about all of this.”

Many of these problems seem way harder than we'd like them to be, and much harder than many seem to assume at first. (Perhaps this is due to unreasonable demands for rigor, but finding an alternative would itself be a research effort.)

I imagine a lot of researchers assume they won’t stand out unless they seem to make bold claims. I think this isn’t true for many EA key orgs, though it might be the case that it’s good for some other programs (University roles, perhaps?).

Not sure how to finish this post here. I think part of me wants to encourage junior researchers to lean on humility, but at the same time, I don’t want to shame those who don’t feel like they can do so for reasons of not-being-homeless (or simply having to leave research). I think the easier thing is to slowly spread common knowledge and encourage a culture where proper calibration is just naturally incentivized.

Facebook Thread

Relevant post by Nuño: https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers

Could/should altruistic activist investors buy lots of Twitter stock, then pressure them to do altruistic things?

---

So, Jack Dorsey just resigned from Twitter.

Some people on Hacker News are pointing out that Twitter has had recent issues with activist investors, and that this move might make those investors happy.

https://pxlnv.com/linklog/twitter-fleets-elliott-management/

From a quick look... Twitter stock really hasn't been doing very well. It's almost back at its price in 2014.

Square, Jack Dorsey's other company (he was CEO of both), has done much better: a market cap over 2x Twitter's (~$100B), with huge gains in the last 4 years.

I'm imagining that if I were Jack... leaving would have been really tempting. On one hand, I'd have Twitter, which isn't really improving, is facing activist investor attacks, and worse, apparently is responsible for global chaos (which I barely know how to stop). And on the other hand, there's this really tame payments company with little controversy.

Being CEO of Twitter seems like one of the most thankless big-tech CEO positions around.

That sucks, because it would be really valuable if some great CEO could improve Twitter, for the sake of humanity.

One small silver lining is that the valuation of Twitter is relatively small. It has a market cap of $38B. In comparison, Facebook/Meta is $945B and Netflix is $294B.

So if altruistic interests really wanted to... I imagine they could become activist investors, but like, in a good way? I would naively expect that even with just 30% of the company you could push them to do positive things. $12B to improve global epistemics in a major way.

The US could have even bought Twitter for 4% of the recent $1T infrastructure bill. (though it's probably better that more altruistic ventures do it).
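
A quick check of the arithmetic behind those two figures (the dollar amounts are the post's own rough 2021 approximations, not precise data):

```python
# Back-of-the-envelope figures from the post (rough 2021 approximations).
twitter_market_cap = 38e9    # ~$38B market cap
infrastructure_bill = 1e12   # ~$1T infrastructure bill

# Cost of a 30% activist stake (ignoring the premium a large buyer
# would realistically have to pay for that many shares).
activist_stake = 0.30 * twitter_market_cap
print(round(activist_stake / 1e9, 1))  # 11.4 (i.e., roughly $12B)

# Twitter's market cap as a share of the $1T bill.
print(round(twitter_market_cap / infrastructure_bill, 3))  # 0.038 (~4%)
```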

If middle-class intellectuals really wanted it enough, theoretically they could crowdsource the cash.

I think intuitively, this seems like clearly a tempting deal.

I'd be curious if this would be a crazy proposition, or if this is just not happening due to coordination failures.

Admittedly, it might seem pretty weird to use charitable/foundation dollars on "buying lots of Twitter" instead of direct aid, but the path to impact is pretty clear.


Facebook Thread

[anonymous]:

+1

Coordination is a pain, though; you may be better off appealing to specific HNWI investors to rally the cause. If anyone else is interested, they can buy stock and delegate votes.

In general I think there's a case to be made for making delegating voting rights easier.

One futarchy/prediction market/coordination idea I have is to find some local governments and see if we could help them out by incorporating some of the relevant techniques.

This could be neat if it could be done as a side project. Right now effective altruists/rationalists don't actually have many great examples of side projects, and historically, "the spare time of particularly enthusiastic members of a jurisdiction" has been a major factor in improving governments.

Berkeley and London seem like natural choices given the communities there. I imagine it could even be better if there were some government somewhere in the world that was just unusually amenable to both innovative techniques, and to external help with them.

Given that EAs/rationalists care so much about global coordination, getting concrete experience improving government systems could be interesting practice.

There's so much theoretical discussion of coordination and government mistakes on LessWrong, but very little discussion of practical experience implementing these ideas into action.

(This clearly falls into the Institutional Decision Making camp)

Facebook Thread

On AGI (Artificial General Intelligence):

I have a bunch of friends/colleagues who are either trying to slow AGI down (by stopping arms races) or align it before it's made (and would much prefer it be slowed down).

Then I have several friends who are actively working to *speed up* AGI development. (Normally just regular AI, but often specifically AGI)[1]

Then there are several people who are apparently trying to align AGI but are also effectively speeding it up; they claim that the trade-off is probably worth it (to highly varying degrees of plausibility, in my rough opinion).

In general, people seem surprisingly chill about this mixture? My impression is that people are highly incentivized to not upset people, and this has led to this strange situation where people are clearly pushing in opposite directions on arguably the most crucial problem today, but it's all really nonchalant.

[1] To be clear, I don't think I have any EA friends in this bucket. But some are clearly EA-adjacent.

More discussion here: https://www.facebook.com/ozzie.gooen/posts/10165732991305363

There seem to be several longtermist academics who plan to spend the next few years (at least) investigating the psychology of getting the public to care about existential risks.

This is nice, but I feel like what we really could use are marketers, not academics. Those are the people companies use for this sort of work. It's somewhat unusual that marketing isn't much of a respected academic field, but it's definitely a highly respected organizational one.

There are at least a few people in the community with marketing experience and an expressed desire to help out. The most recent example that comes to mind is this post.

If anyone reading this comment knows people who are interested in the intersection of longtermism and marketing, consider telling them about EA Funds! I can imagine the LTFF or EAIF being very interested in projects like this.

(That said, maybe one of the longtermist foundations should consider hiring a marketing consultant?)

Yep, agreed. Right now I think there are very few people doing active marketing work in longtermism (outside of a few orgs that have in-house people for it), but this seems very valuable to improve upon.

If you're happy to share, who are the longtermist academics you are thinking of? (Their work could be somewhat related to my work)

No prominent ones come to mind. There are some very junior folks I've recently seen discussing this, but I feel uncomfortable calling them out.

When discussing forecasting systems, sometimes I get asked,

“If we were to have much more powerful forecasting systems, what, specifically, would we use them for?”

The obvious answer is,

“We’d first use them to help us figure out what to use them for”

Or,

“Powerful forecasting systems would be used, at first, to figure out what to use powerful forecasting systems on”

For example,

  1. We make a list of 10,000 potential government forecasting projects.
  2. For each, we will have a later evaluation for “how valuable/successful was this project?”.
  3. We then open forecasting questions for each potential project. Like, “If we were to run forecasting project #8374, how successful would it be?”
  4. We take the top results and enact them.
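
A toy simulation of those four steps; every project name and success score here is invented purely for illustration:

```python
import random

random.seed(0)  # reproducible toy example

# Step 1: a list of candidate forecasting projects (names invented).
candidate_projects = [f"project_{i}" for i in range(10_000)]

# Step 3: open a question per project, e.g. "If we were to run
# forecasting project #8374, how successful would it be?"
# Here the crowd's answer is faked with a random score in [0, 1].
def forecast_success(project: str) -> float:
    return random.random()

scored = [(forecast_success(p), p) for p in candidate_projects]

# Step 4: enact only the top-forecasted projects.
top_projects = [p for score, p in sorted(scored, reverse=True)[:100]]
print(len(top_projects))  # 100
```

(Step 2, the retrospective evaluation of "how valuable/successful was this project?", is what the step-3 questions would eventually resolve against; it has no analogue in this toy version.)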

Stated differently,

  1.  Forecasting is part of general-purpose collective reasoning.
  2. Prioritization of forecasting requires collective reasoning.
  3. So, forecasting can be used to prioritize forecasting.

I think a lot of people find this meta and counterintuitive at first, but it seems pretty obvious to me.

All that said, I can’t be sure things will play out like this. In practice, the “best thing to use forecasting on” might be obvious enough such that we don’t need to do costly prioritization work first. For example, the community isn’t currently doing much of this meta stuff around Metaculus. I think this is a bit mistaken, but not incredibly so.

Facebook Thread

I’m sort of hoping that 15 years from now, a whole lot of common debates quickly get reduced to debates about prediction setups.

“So, I think that this plan will create a boom for the United States manufacturing sector.”

“But the prediction markets say it will actually lead to a net decrease. How do you square that?”

“Oh, well, I think that those specific questions don’t have enough predictions to be considered highly accurate.”

“Really? They have a robustness score of 2.5. Do you think there’s a mistake in the general robustness algorithm?”

—-

Perhaps 10 years later, people won’t make any grand statements that disagree with prediction setups.

(Note that this would require dramatically improved prediction setups! On that note, we could use more smart people working on this!)

Facebook Thread

[anonymous]:

Random thought: You could use prediction setups to resolve specific cruxes on why prediction setups outputted certain values.

P.S. I'd be keen on working on this, how do I get involved?

You could use prediction setups to resolve specific cruxes on why prediction setups outputted certain values.

My guess is that this could be neat, but also pretty tricky. There are lots of "debate/argument" platforms out there, and they seem to have worked out a lot worse than people were hoping. But I'd love to be proven wrong.

P.S. I'd be keen on working on this, how do I get involved?

If "this" means the specific thing you're referring to, I don't think there's really a project for that yet, you'd have to do it yourself. If you're referring more to forecasting projects more generally, there are different forecasting jobs and stuff popping up. Metaculus has been doing some hiring. You could also do academic research in the space. Another option is getting an EA Funds grant and pursuing a specific project (though I realize this is tricky!)

[anonymous]:

If "this" means the specific thing you're referring to, I don't think there's really a project for that yet, you'd have to do it yourself. If you're referring more to forecasting projects more generally, there are different forecasting jobs and stuff popping up. Metaculus has been doing some hiring. You could also do academic research in the space. Another option is getting an EA Funds grant and pursuing a specific project (though I realize this is tricky!)

Thanks, this helps.

[anonymous]:

Debate platforms seem very different from a prediction market with liquidity. As long as you pay sufficient incentives to market makers, they will spend time figuring out the best prices to quote; their primary motivation is profit (rather than fun or intellectual stimulation). Whoever is paying out these incentives can figure out which cruxes they want resolved and pay on those markets accordingly.

Epistemic status: I feel positive about this, but note I'm kinda biased (I know a few of the people involved, work directly with Nuno, who was funded)

ACX Grants were just announced: ~$1.5 million, from a few donors including Vitalik.

https://astralcodexten.substack.com/p/acx-grants-results

Quick thoughts:

  • In comparison to the LTFF, I think the average grant is more generically exciting, but less effective altruist focused. (As expected)
  • Lots of tiny grants (<$10k), $150k is the largest one.
  • These rapid grant programs really seem great and I look forward to them being scaled up.
  • That said, the next big bottleneck (which is already a bottleneck) is funding for established groups. These rapid grants get things off the ground, but many will need long-standing support and scale.
  • Scott seems to have done a pretty strong job researching these groups, and also has had access to a good network of advisors. I guess it's no surprise; he seems really good at "doing a lot of reading and writing", and he has an established peer group now.
  • I'm really curious how/if these projects will be monitored. At some point, I think more personnel would be valuable.
  • This grant program is kind of a way to "scale up" Astral Codex Ten. Like, instead of hiring people directly, he can fund them this way.
  • I'm curious if he can scale up 10x or 1000x; we could really use more strong/trusted grantmakers. It's especially promising if he gets non-EA money. :)

On specific grants:

  • A few forecasters got grants, including $10k for Nuño Sempere Lopez Hidalgo for work on Metaforecast. $5k for Nathan Young to write forecasting questions.
  • $17.5k for 1DaySooner/Rethink Priorities to do surveys to advance human challenge trials.
  • $40k seed money to Spencer Greenberg to "produce rapid replications of high-impact social science papers". Seems neat, I'm curious how far $40k alone could go though.
  • A bunch of biosafety grants. I like this topic, seems tractable.
  • $40k for land value tax work.
  • $20k for a "Chaotic Evil" prediction market. This will be interesting to watch, hopefully won't cause net harm.
  • $50k for the Good Science Project, to "improve science funding in the US". I think science funding globally is really broken, so this warms my heart.
  • Lots of other neat things; I suggest just reading the post directly.

The following things could both be true:

1) Humanity has a >80% chance of completely perishing in the next ~300 years.

2) The expected value of the future is incredibly, ridiculously, high!

The trick is that the expected value of a positive outcome could be just insanely great. Like, dramatically, incredibly, totally, better than basically anyone discusses or talks about.

Expanding to a great deal of the universe, dramatically improving our abilities to convert matter+energy to net well-being, researching strategies to expand out of the universe.

A 20%, or even a 0.002%, chance at a 10^20 outcome, is still really good.
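
As a toy version of that arithmetic (the "value units" are arbitrary, with 1e20 standing in for an astronomically good outcome):

```python
# Toy expected-value arithmetic; units and the 1e20 payoff are arbitrary.
great_outcome_value = 1e20

for p_success in (0.20, 0.00002):  # a 20% chance, and a 0.002% chance
    expected_value = p_success * great_outcome_value
    print(f"{p_success:.3%} chance -> EV {expected_value:.1e}")
```

Even at the pessimistic 0.002%, the expected value (~2e15 units) dwarfs any near-term quantity, which is the point.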

One key question is the expectation of long-term negative[1] vs. long-term positive outcomes. I think most people are pretty sure that in expectation things are positive, but this is less clear.

So, remember:

Just because the picture of X-risks might look grim in terms of percentages, you can still be really optimistic about the future. In fact, many of the people most concerned with X-risks are those *most* optimistic about the future.

I wrote about this a while ago, here:

https://www.lesswrong.com/.../critique-my-model-the-ev-of...

[1] Humanity lasts, but creates vast worlds of suffering. "S-risks"


https://www.facebook.com/ozzie.gooen/posts/10165734005520363

Opinions on charging for professional time?

(Particularly in the nonprofit/EA sector)

I've been getting more requests recently to have calls/conversations to give advice, review documents, or be part of extended sessions on things. Most of these have been from EAs.

I find a lot of this work fairly draining. There can be surprisingly high fixed costs to having a meeting. It often takes some preparation, some arrangement (and occasional re-arrangement), and a fair bit of mix-up and change throughout the day.

My main work requires a lot of focus, so the context shifts make other tasks particularly costly.

Most professional coaches and similar charge at least $100-200 per hour for meetings. I used to find this high, but I think I'm understanding the cost more now. A 1-hour meeting at a planned time costs probably 2-3x as much time as a 1-hour task that can be done "whenever", for example, and even this latter work is significant.

Another big challenge is that I have no idea how to prioritize some of these requests. I'm sure I'm providing vastly different amounts of value in different cases, and I often can't tell.

The regular market solution is to charge for time. But in EA/nonprofits, it's often expected that a lot of this is done for free. My guess is that this is a big mistake. One issue is that people are "friends", but they are also exactly professional colleagues. It's a tricky line.

One minor downside of charging is that it can be annoying administratively. Sometimes it's tricky to get permission to make payments, so a $100 expense takes $400 of effort.

Note that I do expect that me helping the right people, in the right situations, can be very valuable and definitely worth my time. But I think on the margin, I really should scale back my work here, and I'm not sure exactly how to draw the line.

[All this isn't to say that you shouldn't still reach out! I think that often, the ones who are the most reluctant to ask for help/advice, represent the cases of the highest potential value. (The people who quickly/boldly ask for help are often overconfident). Please do feel free to ask, though it's appreciated if you give me an easy way out, and it's especially appreciated if you offer a donation in exchange, especially if you're working in an organization that can afford it.]

https://www.facebook.com/ozzie.gooen/posts/10165732727415363
