Posts tagged community

Quick takes

27
Linch
3h
0
Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else.

I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.

To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly sane and reasonable in at least a limited form as a way to prevent leaking trade secrets, and non-disparagement agreements, which prevent you from saying bad things about past employers. The latter seems clearly bad to have for anybody in a position to affect policy. Doubly so if the existence of the non-disparagement agreement itself is secretive.
126
Cullen
3d
0
I am not under any non-disparagement obligations to OpenAI. It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer. I have no further comments at this time.
This is a cold take that's probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it's flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like "well, you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It's a very good rebuttal to the "cold and heartless utilitarianism / Pascal's mugging" critique.

But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this.

I wish there were more people responding to the "longtermism is cold and heartless" critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we'll never meet, is an extraordinary act of empathy and compassion — one that's way harder to access than the empathy and warmth we might feel for our neighbors by default. It's the ultimate act of care. And it's definitely concerned with justice.

(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That's not out of the question. I just don't think it's the only way to reach that conclusion.)
I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22. Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.
Most possible goals for AI systems are concerned with process as well as outcomes.

People talking about possible AI goals sometimes seem to assume something like "most goals are basically about outcomes, not how you get there". I'm not entirely sure where this idea comes from, and I think it's wrong. The space of goals which are allowed to be concerned with process is much higher-dimensional than the space of goals which are just about outcomes, so I'd expect that on most reasonable senses of "most", process can have a look-in.

What's the interaction with instrumental convergence? (I'm asking because, vibe-wise, it seems like instrumental convergence is associated with an assumption that goals won't be concerned with process.)

* Process-concerned goals could undermine instrumental convergence (since some process-concerned goals could be fundamentally opposed to some of the things that would otherwise get converged-to), but many process-concerned goals won't.
* Since instrumental convergence is basically about power-seeking, there's an evolutionary argument that you should expect the systems which end up with the most power to have power-seeking behaviours.
  * I actually think there are a couple of ways for this argument to fail:
    1. If at some point you get a singleton, there's now no evolutionary pressure on its goals (beyond some minimum required to stay a singleton).
    2. A social environment can punish power-seeking, so that power-seeking behaviour is not the most effective way to arrive at power.
       * (There are some complications to this I won't get into here.)
  * But even if it doesn't fail, it pushes towards things which have Omohundro's basic AI drives (and so pushes away from process-concerned goals which could preclude those), but it doesn't push all the way to purely outcome-concerned goals.

In general I strongly expect humans to try to instil goals that are concerned with process as well as outcomes. Even if that goes wrong, I mostly expect them to end up with something which has incorrect preferences about process, not something that doesn't care about process.

How could you get to purely outcome-concerned goals? I basically think this should be expected only if someone makes a deliberate choice to aim for that (though it might also be possible to get there via self-modification; the set of goals that would choose to self-modify to be purely outcome-concerned may be significantly bigger than the set of purely outcome-concerned goals).

Overall I think purely outcome-concerned goals (or almost purely outcome-concerned goals) are a concern, and worth further consideration, but I really don't think they should be treated as a default.


Recent discussion

Scarlett Johansson makes a statement about the "Sky" voice, a voice for GPT-4o that OpenAI recently pulled after less than a week of prime time.

tl;dr: OpenAI made an offer last September to Johansson; she refused. They offered again 2 days before the public demo. Scarlett...


Seems like bad behaviour from Altman (though not terribly surprising). 

I doubt I'll comment much on this publicly because I doubt I have much to add. I think there is a risk of overextension here: this seems like dumb/bad behaviour, but it isn't as harmful as the NDA stuff. I think it would be easy to shift from being focused on "are OpenAI being good stewards of AI?" to "we sneer whenever Altman makes a mistake". I think that would be a bad transition.

1
huw
1h
If this is true, it indicates that Sam believes he can act with impunity now that he has a favourable board behind him. Their actions over the next few days will be very telling of how many more chances people should give OpenAI.

A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them.


Some quotes perhaps worth highlighting...


Scarlett Johansson issues statement about similarities between "Sky" and her voice[1]
 

Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, m

... (read more)
2
BrownHairedEevee
6h
It seems like these terms would constitute theft if the equity awards in question were actual shares of OpenAI rather than profit participation units (PPUs). When an employee is terminated, their unvested RSUs or options may be cancelled, but the company would have no right to claw back shares that are already vested as those are the employee's property. Similarly, don't PPUs belong to the employee, meaning that the company cannot "cancel" them without consideration in return?

We’re excited to announce a new volunteer-run organisation, UK Voters For Animals, dedicated to mobilising UK voters to win key legislative changes for farmed animals. Our goal is to recruit and train voters to meet with MPs and prospective MPs to build political support...


Are there particular "key legislative changes" that this could help achieve, or are they hypothetical at present?

1
Holly Baines
8h
Hi Charlotte, we have, and have already posted in quite a few. But please feel free to post in some groups yourself, too! Thank you, Holly

I am a lawyer. I am not licensed in California, or Delaware, or any of the states that likely govern OpenAI's employment contracts. So take what I am about to say with a grain of salt, as commentary rather than legal advice. But I am begging any California-licensed attorneys...

4
Jonas V
3h
If someone has a good plan for how to make good/useful things happen here, but requires funding for it, please contact me.
23
rossaokod
15h

For those who don't know, Matt Bruenig is a large and well-known Twitter account. It would be easy to fact-check this, and it would very likely be found out if it was false, so it is probably true.

Linch commented on Ramiro's quick take 2h ago

Time to cancel my Asterisk subscription?

 

 

So Asterisk dedicates a whole self-aggrandizing issue to California, leaves EV for Obelus (what is Obelus?), starts charging readers, and, worst of all, celebrates low prices for eggs and milk?


You should cancel if you think it's not worth the money. The other reasons seem worse.

DC
2h
0
0

The eggs and milk quip might be offensive on animal welfare grounds. Eggs, at least, are one of the worst commonly consumed animal products according to various ameliatarian Fermi estimates.

If you previously liked the magazine these seem like relatively weak reasons to cancel it. 

I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each...


Hi Joseph :) Based on what you've written, I'm going to guess you have probably donated more to effective charities than 99% of the world's population. So you're probably crushing it!

I am two weeks into the strategy development phase of my movement building and almost ready to start ideating some programs for the year.

But I want these programs to solve the biggest pain points people experience when trying to have a positive impact in AI Safety.

Has anyone seen any research that looks at this in depth? For example, through an interview process and then survey to quantify how painful the pain points are?

Some examples of pain points I've observed so far through my interviews with Technical folk:

  • I often felt overwhelmed by the vast amount of material to learn.
  • I felt there wasn't a clear way to navigate learning the required information.
  • I lacked an understanding of my strengths and weaknesses in relation to different AI Safety areas (i.e. personal fit / comparative advantage).
  • I lacked an understanding of my progress after I got started (e.g. am I doing well? Poorly
...

In 1976, our founder Tim Black established MSI Reproductive Choices[1] to bring contraception and abortion care to women in underserved communities that no one else would go to. As a doctor, he witnessed firsthand the hardship caused by the lack of reproductive choice and...

1
SummaryBot
5h
Executive summary: MSI Reproductive Choices has served over 200 million clients since 1976, delivering highly cost-effective sexual and reproductive healthcare services focused on underserved communities in Africa, Asia, and Latin America.

Key points:
1. MSI's services have averted an estimated 316,000 maternal deaths and 158.6 million unintended pregnancies since 2000, at an average cost of $4.70 per DALY and $3,353 per maternal death averted globally.
2. MSI reaches last-mile communities through mobile outreach teams, partnerships with public sector facilities, and a network of local midwives and nurses called MSI Ladies.
3. In 2023, MSI served 23.3 million clients, with 57% having no other service options, 31% living in multi-dimensional poverty, and 19% being adolescents.
4. MSI's services in 2023 are estimated to save 37,500 lives, prevent 16.5 million unintended pregnancies, avert 9 million unsafe abortions, and save $11.2 million in direct healthcare costs for low- and middle-income countries.
5. MSI Nigeria, the most cost-effective program, has a cost per DALY of $1.63 and per maternal death averted of $685, with significant potential for expansion to meet the country's high unmet need for contraception.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
3
Karthik Tadepalli
5h
This note says:

These cost effectiveness numbers are based on the total costs of our Outreach and Public Sector Strengthening programs in Nigeria and our impact based on services delivered through these channels and calculated using our Impact 2 tool. Costs include all actual expenses to MSI involved in delivering services through these two channels, for example, for our outreach teams, these costs encompass service and non-service providers, consumables, travel, demand generation, quality assurance, and other overhead expenses associated with providing services on-si... (read more)

I suppose the other numbers are extrapolations from this figure, though it's hard to say.
