Ben Millwood

I think this isn't relevant to the person in the UK you're thinking of, but just as an interesting related thing, members of the UK parliament are protected from civil or criminal liability for e.g. things they say in parliament: see parliamentary privilege.

A potential downside is that markets about markets are generally easier to manipulate than markets about ground truth. You may even find that second-order markets create an incentive to distort first-order markets to an extent that makes the existing markets less reliable.

Today I got a dose of Novavax for free, largely by luck that's probably not reproducible.

It turns out that vials of Novavax contain 5 doses and only last a short time, I think 24 hours. Pharmacies therefore need to batch bookings together, and I guess someone got tired of waiting and opted to just buy the entire vial for themselves, letting whoever wanted them pick up the other doses. I then found out about this via Rochelle Harris, who in turn found out about it via a Facebook group (UK Novavax Vaccine info) for coordinating these things.

By that I meant it's an org doing AI safety which also takes VC capital / has profit-making goals / produces AI products.

For me, the idea was "creating a stronger public / free case that these contracts are unenforceable will make former OpenAI employees more willing to speak out / seek legal assurance that they can speak out, which in general is good for us learning more about what OpenAI is doing and more appropriately reacting to it".

I guess lots of people out there have a non-disparagement agreement that they're not going to question because they want a quiet life. I'd like to liberate those people (so that we can use their information) if I can.

I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.

  • How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organisations. If instead the tension between investor concerns and safety concerns is the root of the problem, we should be worried about whether peer organisations are going to be pushed down the same path sooner or later.
  • Are there any concerns we have with OpenAI that we should be taking this opportunity to put to its peers as well? For example, have peers been publicly asked whether they use non-disparagement agreements? I can imagine a situation where another org has simply never thought to use them, and we can use this occasion to encourage them to turn that into a public commitment.

Do you know if Marisa exclusively used they/them pronouns, or she/they, or what? I remember hearing something about this, but I'm not certain and can't find any online profiles anymore :(

Does anyone have guesses about how much it would cost to pay a California-licensed employment lawyer to form an opinion on this?

(edit: I don't plan to do this, because I'm on the wrong continent and don't really know any of the relevant people. I want this to be a prompt for someone closer to OpenAI to think about if they could make this happen.)

I cried when I read this. What an absolutely miserable thing to have happened.

[you] can most optimistically assume normal distribution of these traits in people in power

This is not maximally optimistic! We can hope to come up with a system that (a) empowers unselfish people over selfish people and (b) protects itself against interference from the powerful. This is a difficult thing to achieve, and many attempts have arguably failed, but that doesn't mean it isn't possible to do.

Centralized systems inherently offer more affordances of seizing power to selfish ends.

I think this is kind of unclear. If you do not deliberately engineer a government to manage the distribution of power, you will instead get an unmanaged distribution of power, which in particular will not obviously be well-placed to prevent an individual accumulating and then seizing power for themselves.

But even if that's true, I think I would still be in favour of central government, because centralized systems inherently offer so many other benefits, which together are IMO worth it.
