finnhambly

Probabilistic modelling, forecasting @ University of Bath / Swift Centre
163 karma · Joined · Working (0-5 years)
www.admonymous.co/finnhambly

Bio

DM me if you want to talk about probabilistic modelling (of policies/tech progress/etc)

Got involved with EA in 2017

Comments (14)

Topic contributions (2)

Have you got a link to where this excerpt came from?

I don't think this disclosure shows that much awareness, as the notes seem to dismiss it as a problem, unless I'm misunderstanding what Holden means by "don’t assume things about my takes on specific AI labs due to this". It sounds like he's claiming he's able to assess these things neutrally, which is quite a big claim!

Why is this getting downvoted? This comment seems plainly helpful; it's an important thing to highlight.

I can see why some people think the publicity effects of the letter might be valuable, but — when it comes to the 6-month pause proposal itself — I think Matthew's reasoning is right.

I've been surprised by how many EA folk are in favour of the actual proposal, especially given that the AI governance literature often focuses on the risks of fuelling races. I'd be keen to read people's counterpoints to Matthew's thread(s); I don't think many people expect GPT-5 to pose an existential threat, and I'm not yet convinced that 'practice' is a good enough reason to pursue a bad policy.

I don't think gossip ought to be that public or legible. 

Firstly, I don't think it would work for achieving your goals; I would still hesitate to have my opinions uploaded unless I felt very confident in them (rumours are powerful weapons, and I wouldn't want to start one if I were uncertain).

Secondly, I don't think it's worth the costs of destroying trust. A whole bunch more people will distance themselves from EA if they know their public reputation is on the line with every interaction. (I also agree with Lawrence on the Slack leaks, FWIW).

I see why you might want public info (akin to scandal markets) when people are more high-profile, but I don't think Sam Bankman-Fried would have passed that bar in 2018.

I think the main problem being faced again and again is that internal reporting lacks teeth. 

I think public reporting is an inadequate alternative. It's a big ask for people to become public whistleblowers, especially since the things worth reporting often aren't black and white. It's hard to speak out publicly about something you're not certain of (eg because of self-doubt, wondering whether it's even worth bothering, the reputation you'll create for yourself, etc).

Additionally, the subsequent discourse seems to place a further burden on those speaking out. If I spoke up about something only to see a bunch of people doubt that what I'd said was true (or, as in previous cases, had to engage with the wrongdoer and proofread their account of events), I'd probably regret my choice.

Okay great, that makes sense to me. Thank you very much for the clarification!

I am unsure what you mean by AGI. You say:

For purposes of our definitions, we’ll count it as AGI being developed if there are AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in such a world [where cheap AI systems are fully substitutable for human labor].

and:

causing human extinction or drastically limiting humanity’s future potential may not show up as rapid GDP growth, but automatically counts for the purposes of this definition.

If someone used AI capabilities to create a synthetic virus (which they wouldn't have been able to do in a counterfactual world without that AI-generated capability) and this caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?

My instinct is that this should not count as AGI, since it is the result of just narrow AI plus a human. However, the caveat implies that it would count, because an AI system would have powered human extinction.

I get the impression you want to count 'comprehensive AI systems' as AGI if the system is able to act ~autonomously from humans[1].  Is that correct?

  1.

    Putting it another way: 
    If a company that employs both humans and lots of AI technologies brings about a "profound transformation (in economic terms or otherwise)", I assume the combined capability of the AI elements of the company would need to be as general as a single AGI for it to count.

    If it does not add up to that level of generality, but is still used to bring about a transformation, I think it should not resolve 'AGI developed' positively. However, as currently worded, it looks like it would resolve positively.
