RomanHauksson

Computer Science student @ University of Texas at Dallas
142 karma · Joined · Pursuing an undergraduate degree · Dallas, TX, USA
roman.computer

Bio

Participation
5

Organizing my university's EA student group and self-studying to become an AI alignment researcher. I want to maximally reduce the risk of superintelligent agents killing everyone and/or sentient digital systems experiencing astronomical amounts of suffering. Also interested in entrepreneurship and changing institutions to use better-designed incentive systems (see mechanism design).

Comments
21

80,000 Hours had an article with advice for new college students, and a section towards the end touches on your question.

Make sure to check out OpenPhil's undergraduate scholarship if you haven't yet.

Here are a couple of excerpts from relevant comments on the Astral Codex Ten post about the tournament. From the anecdotes, it seems as though this tournament had some flaws in execution, namely that the "superforecasters" weren't all that. But I want to see more context if anyone has it.

From Jacob:

I signed up for this tournament (I think? My emails related to a Hybrid Forecasting-Persuasion tournament that at the very least shares many authors), was selected, and partially participated. I found this tournament from it being referenced on ACX and am not an academic, superforecaster, or in any way involved or qualified whatsoever. I got the Stage 1 email on June 15.

From magic9mushroom:

I participated and AIUI got counted as a superforecaster, but I'm really not. There was one guy in my group (I don't know what happened in other groups) who said X-risk can't happen unless God decides to end the world. And in general the discourse was barely above "normal Internet person" level, and only about a third of us even participated in said discourse. Like I said, haven't read the full paper so there might have been some technique to fix this, but overall I wasn't impressed.

Same reason we haven't been destroyed by a nuclear apocalypse yet: if we had, we wouldn't be here talking about it.

As for the question "why haven't we encountered a power-seeking AGI from elsewhere in the universe who didn't destroy us", I don't know.

I can look into how to set up a torrent link tomorrow and let you know how it goes!

Can we set up a torrent link for this?

Rational Animations is probably the YouTube channel the report is referring to, in case anyone's curious.

Where did you copy the quote from?

I plan to do some self-studying in my free time over the summer, on topics I would describe as "most useful to know in the pursuit of making the technological singularity go well". Obviously, this includes technical topics within AI alignment, but I've been itching to learn a broad range of subjects to make better decisions about, for example, what position I should work in to have the most counterfactual impact or which research agendas are most promising. I believe this is important because I aim to eventually attempt something really ambitious, like founding an organization, which would require especially good judgement and generalist knowledge. What advice do you have on prioritizing topics to self-study, and in how much depth? Any other thoughts or resources about my endeavor? I would be super grateful to have a call with you if this is something you've thought a lot about (Calendly link). More context: I'm an undergraduate sophomore studying Computer Science.

So far, my ordered list includes:

  1. Productivity
  2. Learning itself
  3. Rationality and decision making
  4. Epistemology
  5. Philosophy of science
  6. Political theory, game theory, mechanism design, artificial intelligence, philosophy of mind, analytic philosophy, forecasting, economics, neuroscience, history, psychology...
  7. ...and it's at this point that I realize I've set my sights too high and I need to reach out for advice on how to prioritize subjects to learn!

I think it's important to give the audience some sort of analogy that they're already familiar with, such as evolution producing humans, humans introducing invasive species in new environments, and viruses. These are all examples of "agents in complex environments which aren't malicious or Machiavellian, but disrupt the original group of agents anyway".

I believe these analogies are not object-level enough to be arguments for AI X-risk in themselves, but I think they're a good way to help people quickly understand the danger of a superintelligent, goal-directed agent.
