Will Howard🔹

Software Engineer @ Centre for Effective Altruism
1382 karma · Joined · Working (0-5 years) · Oxford, UK

Bio

I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit.

You can contact me at will.howard@centreforeffectivealtruism.org

Posts: 14

Comments: 120

Topic contributions: 45

Out of interest, did there happen to be a tender offer running since this post came out or is there some other way you can sell the shares?

+1, I do all my budgeting in Monzo and I find it to be really good. In addition to these features, I find the budgeting by category very useful, and the fact that it doesn't require copying the data out to somewhere else makes it much easier to stick to.

I personally find it simpler to do it this way because:

  1. It's easier to change the amount and where you're donating to
  2. Not all employers offer Payroll Giving; you could probably ask them to set it up, but that would be a hassle. I'm currently employed via Deel, which doesn't offer it as far as I'm aware

Also, I found ringing them up really quite straightforward; it took me maybe 15 mins each time.

Appendix: Example of how the arithmetic works

I'm borrowing this example from the Payroll Giving topic page:

You earn £60,000 per year (making your marginal tax band 40%). You donate £1,200 to a charity after you've been paid. The charity can claim 25% from the government, giving them £1,500. And you can claim a tax rebate of £300 (£1,500 × 0.2), meaning you are only out of pocket £900.

The way they actually get the £300 into your bank account is by adjusting your tax code, which means you pay less tax for the rest of the year.

In a tax code like 1257L, the 1257 means you pay zero tax on the first £12,570. Changing this to e.g. 1258L effectively shifts all the tax bands up by £10, so if the highest band you were in was 40% this gets you an extra £4.

In the above case, they would add £750 to the tax-free allowance, to save you £300 (£750 × 0.4). So this would change your tax code from 1257L to 1332L (equal to 1257 + 750/10).

You can use this to check they've done the adjustment correctly when you talk to them over the phone. It should be the case that: (new tax code − old tax code) × £10 × your marginal rate = your rebate. Here: (1332 − 1257) × £10 × 0.4 = £300.
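For concreteness, the arithmetic above can be sketched in a few lines of Python. The figures and the 40% marginal rate come from the example; the function names are my own, purely for illustration:

```python
# Sketch of the Gift Aid / tax-code arithmetic from the example above.
# Assumes a higher-rate (40%) taxpayer and the standard 25% Gift Aid uplift.

def gift_aid_arithmetic(net_donation, marginal_rate):
    """Return (amount the charity receives, your rebate, your out-of-pocket cost)."""
    gross = net_donation * 1.25              # charity claims 25% on top from the government
    rebate = gross * (marginal_rate - 0.20)  # you reclaim relief above the basic 20% rate
    out_of_pocket = net_donation - rebate
    return gross, rebate, out_of_pocket

def new_tax_code(old_code, rebate, marginal_rate):
    """Tax-code adjustment: each point of the code is £10 of tax-free allowance."""
    allowance_increase = rebate / marginal_rate  # e.g. 300 / 0.4 = 750
    return old_code + round(allowance_increase / 10)

gross, rebate, cost = gift_aid_arithmetic(1200, 0.40)
print(gross, rebate, cost)               # 1500.0 300.0 900.0
print(new_tax_code(1257, rebate, 0.40))  # 1332
```

This just restates the worked example; the check at the end is the same one you can do over the phone with HMRC.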

I realize that maybe the other people here in the thread have so little trust in the survey designers that they're worried that, if they answer with the low-probability, higher-EV option, the survey designers will write takeaways like "more EAs are in favor of donating to speculative AI risk."

I'm one of the people who agreed with @titotal's comment, and it was because of something like this.

It's not that I'm worried per se that the survey designers will write a takeaway that puts a spin on this question (last time they just reported it neutrally). It's more that I expect this question[1] to be taken by other orgs/people as a proxy metric for the EA community's support for hits-based interventions. And because of the practicalities of how information is acted on, the subtlety of the wording of the question might be lost in the process (e.g. in an organisation someone might raise the issue at some point, but it would eventually end up as a number in a spreadsheet or BOTEC, and there is no principled way to adjust for the issue that titotal describes).

  1. ^

    And one other about supporting low-probability/high-impact interventions

I think this table from the paper gives a good idea of the exact methodology:

Like others I'm not convinced this is a meaningful "red line crossing", because non-AI computer viruses have been able to replicate themselves for a long time, and the AI had pre-written scripts it could run to replicate itself.

The reason (made up by me) non-AI computer viruses aren't a major threat to humanity is that:

  1. They are fragile: they can't get around serious attempts to patch the system they are exploiting
  2. They lack the ability to escalate their capabilities once they replicate themselves (a ransomware virus can't also take control of your car)

I don't think this paper shows these AI models making a significant advance on these two things. I.e. if you found this model self-replicating you could still shut it down easily, and this experiment doesn't in itself show the ability of the models to self-improve.

I just sent out the Forum digest and I thought there was a higher number of underrated (and slightly unusual) posts this week, so I'm re-sharing some of them here:

I don't work on the EAG team, but I believe applications haven't opened yet because the exact date and location haven't been decided (cc @RobertHarling)

I'm curating this post. I really enjoyed the argument-rebuttal format used here, and it does a great job of tiling out the common flavours of PAV arguments.

I think it's a shame the Nucleic Acid Observatory are getting so few votes.

They are relatively cheap (~$2M/year) and are working on a unique intervention that on the face of it seems like it would be very important if successful. At least as far as I'm aware there is no other (EA) org that explicitly has the goal of creating a global early warning system for pandemics.

By the logic of it being valuable to put the first few dollars into something unique/neglected I think it looks very good (although I would want to do more research if it got close to winning).

Load more