
I want to get a sense for what kinds of things EAs — who don't spend most of their time thinking about AI stuff — find most confusing/uncertain/weird/suspect/etc. about it.

By "AI stuff", I mean anything to do with how AI relates to EA.

For example, this includes:

  • What's the best argument for prioritising AI stuff?
  • How, if at all, should I factor AI stuff into my career plans?

but doesn't include:

  • How do neural networks work? (except inasmuch as it's relevant for your understanding of how AI relates to EA).

Example topics: AI alignment/safety, AI governance, AI as cause area, AI progress, the AI alignment/safety/governance communities, ...

I encourage you to have a low bar for writing an answer! Short, off-the-cuff thoughts very welcome.

10 Answers

I've since gotten a bit more context, but I remember feeling super confused about these things when first wondering how much to focus on this stuff:

  1. Before we get to "what's the best argument for this," just what are the arguments for (and against) (strongly) prioritizing AI stuff (of the kind that people in the community are currently working on)?
    1. People keep saying heuristic-y things about self-improving AI and paperclips--just what arguments are they making? (What are the end-to-end / logically thorough / precise arguments here?)
    2. A bunch of people seem to argue for "AI stuff is important" but believe / act as if "AI stuff is overwhelmingly important"--what are arguments for the latter view?
    3. Even if AI is overwhelmingly important, why does this imply we should be focusing on the things the AI safety/governance fields are currently doing?
    4. Some of the arguments for prioritizing AI seem to route through "(emerging) technologies are very important"--what about other emerging technologies?
    5. If there's such a lack of strategic clarity / robustly good things to do in AI governance, why not focus on broadly improving institutions?
    6. Why should we expect advanced AI anytime soon?
  2. What are AI governance people up to? (I.e. what are they working on / what's their theory of change?)
  3. What has the AI safety field accomplished (in terms of research, not just field-building)? (Is there evidence that AI safety is tractable right now?)
  4. A lot of the additional things I found suspect were outside-view-y considerations / "common sense" heuristics--to put it in a very one-sided way, it was something like, "So you're telling me some internet forum is roughly the first and only community to identify the most important problem in history, despite this community's vibes of overconfidence and hero-worship and non-legible qualifications and getting nerd-sniped, and this supposedly critical problem just happens to be some flashy thing that lines up with their academic interests and sounds crazy and isn't a worry for the most clearly relevant experts?")

(If people are curious, the resources I found most helpful on these were: this, this, and this for 1.1, the former things + longtermism arguments + The Precipice on non-AI existential risks for 1.2, 1.1 stuff & stuff in this syllabus for 1.3 and 3, ch. 2 of Superintelligence for 1.4, this for 1.6, the earlier stuff (1.1 and 3) for 4, and various more scattered things for 1.5 and 2.)

(I have no expertise in AI at all, but this is what I always felt personally confused about.)
How are we going to know/measure/judge whether our efforts to prevent AI risks are actually helping? Or how much they are helping? 

Firstly, thank you for this! For such a big priority (within the EA community), I feel like there's a lot of confusion about AI.

I help organize UChicago EA & have asked members to send me questions, so I'll update this comment as they come in:

  • AI Alignment: Do we need to decide on a moral principle(s) first? How would it be possible to develop beneficial AI without first 'solving' ethics/morality?
  • AI is neglected by the world generally but doesn’t seem neglected within EA. Does this have any implications for career planning?

Do we need to decide on a moral principle(s) first? How would it be possible to develop beneficial AI without first 'solving' ethics/morality?

Good question! The answer is no: we probably do eventually need to 'solve' ethics/morality, but we don't need to do it first; we could first solve a narrower, simpler form of AI alignment, and use those aligned systems to help us solve ethics/morality and the other trickier problems (like the control problem for more general, capable systems). This is more or less what is discussed in ambitious vs narrow value learning. Narrow value learning is one narrower, simpler form of AI alignment. There are others, discussed here under the heading "Alternative solutions".

How worried are people actually about suffering in neural networks/artificial minds? 

(My impression is that this is a fun thing to talk about, but won't be that useful for a long time)

Here's a great post about this, which I would summarise as "not worried yet, but it's really hard to tell when we should worry".

Hi Sam. I'm curious to what extent people in the field think risk communication could be beneficial for reducing AI risk. In other words, are there any aspects of AI risk that could be mitigated by large numbers of people having accurate knowledge about them? Or is AI risk communication largely irrelevant to the problem? Or is it more likely to increase rather than decrease AI risk (perhaps by means of some type of infohazard)?

Here are a few that came to mind just now.

  1. How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League university? A potential famous professor at an Ivy League university? A potential Fields Medalist?

  2. Also, how hard should we expect alignment to be? Are we trying to throw resources at a problem we expect to be able to at least partially solve in most worlds (which is, e.g., the superficial impression I get from biorisk), or are we attempting a Hail Mary because it might just work and it's important enough to be worth a try (not saying that would be bad)?

  3. Big labs in the West that kind of target AGI are OpenAI and DeepMind. Others target AGI less explicitly, but include e.g. Google Brain. Are there equivalents elsewhere? China? Do we know whether these exist? Am I missing labs that target AGI in the West?

  4. Finally, this one's kind of obvious, but how large is the risk? What's the probability of catastrophe? I'm aware of many estimates, but this is still definitely something I'm confused about.


I think on all these questions except (3), there's substantial disagreement among AI safety researchers, though I don't have a good feeling for the distributions of views either.

Thank you for posting this question and encouraging people to talk openly about this topic!

Here are some of the AI-related questions that I've thought about from time to time:

  • On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions? Open Phil gives tens of millions of dollars to AI safety and biosecurity every year, but other x-risks and longtermist areas seem rather unexplored and neglected, like s-risks.
  • What would make an artificial entity (like a computer program) sentient? What would count as a painful experience for said entity? Can we learn about this by studying the neuroscience of animal sentience?
  • In expectation, will there be more sentient artificial beings than sentient biological beings (including animals) over the long-term future? (brought up as an objection to this)
  • Is "intelligence" (commonly defined as the cognitive ability to make and execute plans to achieve goals) really enough to make an AI system more powerful than humans (individuals, groups, or all of humanity combined)?
  • Should we expect AI development to move toward AGI, narrowly superhuman AIs, CAIS, or something else?
  • What benefits and risks should we expect in a CAIS scenario?

On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions?

To the extent that this question overlaps with Mauricio's question 1.2 (i.e. A bunch of people seem to argue for "AI stuff is important" but believe / act as if "AI stuff is overwhelmingly important"--what are arguments for the latter view?), you might find his answer helpful.

other x-risks and longtermist areas seem rather unexplored and neglected, like s-risks

Only a partial answer, but worth noting that I think the most plausible...

Is "intelligence" ... really enough to make an AI system more powerful than humans (individuals, groups, or all of humanity combined)?

Some discussion of this question here: https://www.alignmentforum.org/posts/eGihD5jnD6LFzgDZA/agi-safety-from-first-principles-control

Here are some big and common questions I've received from early-stage AI Safety focused people, with at least some knowledge of EA.

They probably don't spend most of their time thinking about AIS, but it is their cause area of focus. Unsure if that meets the criteria you're looking for, exactly.

  1. What evidence would be needed for EA to deprioritise AI Safety as a cause area, at least relative to other x-risks?
  2. What is the most impactful direction of research within AIS? (This is common amongst people looking for their first project/opportunity; I usually point them at this LessWrong series as a starting point.)

I think that as a software developer I can't really help with this problem, but I'm not sure and I'd like input from people in the field.

A timely post: https://forum.effectivealtruism.org/posts/DDDyTvuZxoKStm92M/ai-safety-needs-great-engineers

(The focus is software engineering not development, but should still be informative.)

I'd like help vetting this role.

Here's my attempt

(Is this on-topic enough?)
