Jeff Kaufman

Software Engineer @ Nucleic Acid Observatory
14830 karma · Joined · Working (15+ years) · Somerville, MA, USA
www.jefftk.com

Bio

Participation
4

Software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise.

Full list of EA posts: jefftk.com/news/ea

Comments
936

I don't think 'responsible' is the right word, but the consequences to the effective altruism project of not catching on to the fraud earlier were enormous, far larger than for other economic actors exposed to FTX. And I do think we ought to have realized how unusual our situation was with respect to FTX.

I think it depends on what sort of risks we're talking about. The more likely Dustin is to turn out to be perpetrating a fraud (which I think is very unlikely!), the more the marginal person should be earning to give. And the more projects should be taking approaches that conserve runway at the cost of making slower progress toward their goals.

Are the high numbers of deaths in the 1500s from Old World diseases spreading in the New World? If so, that seems to overestimate natural risk: the world's current population isn't separated from a larger population that has lots of highly human-adapted diseases.

In the other direction, this kind of analysis doesn't capture what I personally see as a larger worry: human-created pandemics. I know you're extrapolating from the past, and it's only very recently that these would even have been possible, but this seems at least worth noting.

other cities across the U.S. (like Boston) ... regularly build subway lines for less than $360 million per kilometer

Huh? Boston hasn't built a subway line in decades, let alone regularly builds them.

It did recently finish a light rail extension in an existing right of way, expanding a trench with retaining walls, but (a) that's naturally much cheaper than digging a subway and (b) it took 12y longer than planned.

The NAO ran a pilot where we worked with the CDC and Ginkgo to collect and sequence pooled airplane toilet waste. We haven't sequenced these samples as deeply as we would like to yet, but initial results look very promising.

Militaries are generally interested in this kind of thing, but primarily as biodefense: protecting the population and service members.

As I tried to communicate in my previous comment, I'm not convinced there is anyone who "will have their plans changed for the better by seeing OpenAI safety positions on 80k's board", and am not arguing for including them on the board.

EDIT: after a bit of offline messaging I realize I misunderstood Elizabeth; I thought the parent comment was pushing me to answer the question posed in the great-grandcomment, but actually it was accepting my request to bring this up a level of generality and not be specific to OpenAI. Sorry!

I think the board should generally list jobs that, under some combination of values and world models the job board runners consider plausible, are plausibly among the highest-impact opportunities for the right person. In cases like OpenAI's safety roles, where anyone who is the "right person" almost certainly already knows about the role, I think there's not much value in listing the role, but also not much harm.

I think this mostly comes down to a disagreement over how sophisticated we think job board participants are, and I'd change my view on this if it turned out that a lot of people reading the board are new-to-EA folks who don't pay much attention to disclaimers and interpret listing a role as saying "someone who takes this role will have a large positive impact in expectation".

If there did turn out to be a lot of people in that category, I'd recommend splitting the board into a visible-by-default section, with jobs where, conditional on getting the role, you'll have a high positive impact in expectation (I'd biasedly put the NAO's current openings in this category), and a you-need-to-click-show-more section, with jobs where you need to think carefully about whether the combination of you and the role is a good one.

Possibly! That would certainly be a convenient finding (from my perspective) if it did end up working out that way.

[I] am slightly confused about what this post is trying to get at. I think your question is: will NYC hit 1% cumulative incidence after global 1% cumulative incidence?

That's one of the main questions, yes.

The core idea is that our efficacy simulations are in terms of cumulative incidence in a monitored population, but what people generally care about is cumulative incidence in the global (or a specific country's) population.
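To make that distinction concrete, here's a rough toy sketch (my own illustration, not our actual simulation code; the populations, growth rate, and seed days are all invented): it seeds exponential growth in a hypothetical monitored city and in the rest of the world at different times, then compares when each population, and the combined one, crosses 1% cumulative incidence.

```python
# Toy sketch, not the NAO's actual model: seed exponentially growing outbreaks
# at different times in a monitored city and in the rest of the world, then
# compare when each population, and the combined one, crosses 1% cumulative
# incidence. All numbers (populations, growth rate, seed days) are made up.
import numpy as np

populations = {"NYC": 8.5e6, "rest_of_world": 8e9}  # hypothetical split
seed_day = {"NYC": 30, "rest_of_world": 0}          # NYC seeded later
growth_rate = 0.1                                   # per-day exponential growth

def cumulative_infections(t, seed_t, pop):
    """Cumulative infections by day t for an outbreak seeded on day seed_t,
    capped crudely at the population size."""
    if t < seed_t:
        return 0.0
    return min(np.exp(growth_rate * (t - seed_t)), pop)

days = np.arange(250)
total_pop = sum(populations.values())

for name, pop in populations.items():
    incidence = np.array(
        [cumulative_infections(t, seed_day[name], pop) / pop for t in days])
    print(f"{name} crosses 1% cumulative incidence on day {days[incidence >= 0.01][0]}")

global_incidence = np.array([
    sum(cumulative_infections(t, seed_day[n], p) for n, p in populations.items()) / total_pop
    for t in days])
print(f"Global crosses 1% cumulative incidence on day {days[global_incidence >= 0.01][0]}")
```

The interesting question is then how often, under realistic seeding and growth assumptions, the monitored city crosses its own 1% threshold before the combined population does.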

online tool

Thanks! The tool is neat, and it's close to the approach I'd want to see.

I think this is almost never ... would surprise me

I don't see how you can say both that it will "almost never" be the case that NYC will "hit 1% cumulative incidence after global 1% cumulative incidence" and also that it would surprise you if you could get to the point where your monitored cities lead global prevalence.

I haven't done or seen any modeling on this, but intuitively I would expect the variance due to superspreading to have most of its impact in the very early days, when single superspreading events can meaningfully accelerate the progress of the pandemic in a specific location, and to be minimal by the time you get to ~1% cumulative incidence?
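As a very rough illustration of that intuition (my own toy sketch with invented parameters, e.g. R = 1.5 and dispersion k = 0.1, not anything we've published or validated), one could simulate a branching process with negative-binomial offspring counts and compare how variable the time to a small vs. a large cumulative count is across surviving runs:

```python
# Toy branching process with a negative binomial offspring distribution:
# mean R per case, dispersion k (small k = heavy superspreading). Compare how
# variable the time to reach a small vs. a large cumulative count is across
# runs that don't die out. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
R, k = 1.5, 0.1   # assumed reproduction number and dispersion
n_runs = 2000

def generations_to_reach(threshold):
    """Generations until cumulative infections pass `threshold` in one run,
    or None if the outbreak dies out first."""
    current, cumulative = 1, 1
    for gen in range(1, 200):
        # Sum of `current` iid NB(k, p) draws is NB(k * current, p),
        # with p chosen so the mean offspring per case is R.
        current = rng.negative_binomial(k * current, k / (k + R))
        cumulative += current
        if cumulative >= threshold:
            return gen
        if current == 0:
            return None
    return None

for threshold in (100, 100_000):
    times = [t for t in (generations_to_reach(threshold) for _ in range(n_runs))
             if t is not None]
    mean, sd = np.mean(times), np.std(times)
    print(f"threshold {threshold:>7}: mean {mean:.1f} generations, "
          f"sd {sd:.1f}, coefficient of variation {sd/mean:.2f}")
```

In a sketch like this I'd expect most of the run-to-run spread to come from the first few generations, so the relative variation in timing should be noticeably smaller at the larger threshold, but I'd want a real model before leaning on that.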

I think this is probably far along you're fine

I'm not sure what you mean by this?

(Yes, 1% cumulative incidence is high -- I wish the NAO were funded to the point that we could be talking about whether 0.01% or 0.001% was achievable.)
