
Hi all, I'm a high school senior trying to make some college-related decisions, and I'd like to ask for some advice.

My current situation is:

  • I want to work on technical alignment. For exogenous reasons, not going to college (e.g., taking a year off, just being an autodidact/independent researcher) is not an available option, so I'll have to leverage my undergraduate experience as much as possible to upskill on technical alignment.
    • I'll probably double major somewhere along CS/Math and maybe CompBio.
  • Accepted to Harvard (non-binding REA). Was planning to apply to Stanford, MIT, Harvey Mudd for RD, but ... 
  • ... I truly despise the application writing process, every single second of it, and it has taken a significant toll on my mental health. I'd prefer not to go through that again, although I can if necessary.

My considerations are:

  • Flexibility - Is it possible to take advanced (under)graduate courses while skipping prerequisites? I've been (and currently am) self-studying a bunch of (under)graduate subjects that I think would be helpful (mainly from the Study Guide) and it'd really suck to have to take them all again just for meeting prereqs for advanced classes.
    • I don't really care much about getting class credits as long as (1) I don't get kicked out of school for low credit and (2) the low credits or lack of prereqs won't prevent me from taking advanced subjects later on.
  • Are there any alignment research community/group/event nearby?
  • No need for financial aid right now.

The impression I got about Harvard (probably not well justified, just from anecdotes on Reddit etc.) is that it's much less flexible in terms of class choices or prereqs compared to more traditionally "engineering" colleges like, e.g., MIT. I also think the alignment community is mostly centered around the Bay Area and hasn't really developed much around Harvard yet (I know about HAIST, though!)

Would Harvard be a good option to just go with, or is there enough additional value from Stanford/MIT/Harvey Mudd that it would be worth applying to any one of those colleges? Thanks!

(Apologies in advance if I broke any posting norms.)



4 Answers

The information about Harvard you received is very much untrue in the fields I am familiar with, and I suspect untrue generally. The Harvard Math and CS departments never(!) enforce prerequisites (which is not true for MIT CS). Compared to MIT, the "global" required courses are fewer in number (even restricting to non-technical requirements) and easier/less demanding of your time. For instance, the single humanities-division class you will be required to take can be satisfied by a logic course taught by a math professor, and the single social-science course by a graduate-level course on game theory.

My impression is that this is also broadly true of economics at Harvard compared to economics at MIT. The Harvard econ department seems much more open to undergrads taking grad-level classes, and I have the sense that many prerequisites are not enforced. Harvard, in general, seems to do a better job than the peer schools I'm most familiar with (which, admittedly, are not among the schools you listed) of recognizing that some of its undergraduates are prepared to pursue very advanced coursework very early on in college. I think there are a lo... (read more)

Are there any alignment research community/group/event nearby?

HAIST is probably the best AI safety group in the country; they have office space quite near campus and several full-time organizers.

Yup, I'd say that from the perspective of someone who wants a good AI safety (/EA/X-risk) student community, Harvard is the best place to be right now (I say this as an organizer, so grain of salt). Not many professional researchers in the area though which is sad :(

As for the actual college side of Harvard, here's my experience (as a sophomore planning to do alignment):

  • Harvard doesn't seem to have up-to-date CS classes for ML. If you want to learn modern ML, you're on your own
    • (or with your friends! many HAIST people have self-studied ML together or taken
... (read more)

Minor nitpick, but I don't think any of the organizers were running it full-time. I know of three who were close to that level, but the full-time ops people do ops for multiple orgs, and the full-time alignment people spend some of their time doing alignment research, not just running HAIST.

But you are right that HAIST has lots of organizers and tons of programs, and I'd go as far as to say it's probably the best place in the world to be a first-year college student interested in learning about alignment right now. The only downside is that there aren't a lot of professional alignment researchers, but that problem exists everywhere. Perhaps Berkeley (specifically CHAI) is better in that regard.

(I am writing this before reading other responses to avoid anchoring)

My experience (as a current joint CS/Math concentrator) has been that Harvard is much more flexible with requirements than, say, MIT (I don't know about the other colleges you named). There are only a handful of "core" courses (called "Gen Eds", which are typically very easy), a language requirement, and three distributional classes (one Science, one Social Science, one Humanities); other than that, basically nothing besides your major's requirements. I have felt very free to take what I want, and it is no problem to skip ahead to grad-level courses if you feel ready.

Compare this to MIT, where (going off of what my friends there say) there are a bunch of core STEM requirements (chemistry, biology, physics, etc.) in addition to even more humanities requirements than at Harvard (apparently you have to take one every semester).

Overall, I have felt very academically unconstrained at Harvard, both in terms of not having to sit through boring requirements, and having great advanced classes available.

In my experience, Harvard has a considerably more active AI safety community than MIT's MAIA.

Many of my friends went to Harvard/Yale/Princeton/MIT/Stanford.* I also went to one of those schools. Occasionally, for fun, we compare our college experiences. Our conclusion is that there are differences between them that are meaningful but not deal-breaking. 

Given your academic interests, I would recommend Yale least. Other than that, any of those schools will give you fine preparation for a math/physics/CS degree. A fact you can weight lightly: Princeton's (non-theory) CS program is relatively weak, but its math program is extremely strong. The MIT graduates I know tend to be weaker at understanding foreign policy/economics/the arts, perhaps because no undergraduates study those subjects at MIT. I expect you will be able to take graduate classes and feel academically challenged at any of these schools.

The thing I've heard the most variance on is how happy people were in college, and how many smart, fun, and insightful friends they made while there. The main reason I'd advise applying to more colleges is so that you can visit each of them in April and pick the one you feel you'd enjoy the most, because the social experience varies a lot (MIT tends to hyper-sort people into groups of very similar people, Princeton has eating clubs, Stanford used to have lots of interesting living groups but might be growing more boring, and Harvard, as I understand it, randomizes living groups, which makes them feel a little undifferentiated).

*I don't know anything about Harvey Mudd, unfortunately. There are other schools that might be a better fit for you which you should obviously also apply to, but these are the ones that I would consider to be meaningful competitors to Harvard.
