Bio

Running an AI safety group at the University of Michigan. https://maisi.club/

Email: jakraus@umich.edu

Anonymous feedback: https://www.admonymous.co/jakraus

Comments (130)

Probably worth adding a section linking to similar collections / related lists. For instance, see Séb Krier's post and https://aisafety.video/.

Apart Research has a newsletter that might be on hiatus.

Second, live talks induce a level of collective emotional engagement, a kind of mass hypnosis, or a tribal ritualistic mind-set, that heightens the affective impact of the talk. This might be as 'efficient' at a strictly cognitive level as watching the talk later at 1.75x speed. But it can help the ideas sink deeper into one's heart and brain, as it were.

I agree. Another option -- and probably the more relevant control group for studying this particular phenomenon -- is to watch the talk later at 1x speed.

However, the drop in engagement time which we could attribute to this change was larger than we’d expected.

How did you measure a "drop in engagement time which we could attribute to this change"? Some relevant metrics are page view counts, time spent on the website, number of clicks, number of applications to 80k advising, etc.

Current scaling "laws" are not laws of nature. And there are already worrying signs that things like dataset optimization/pruning, curriculum learning and synthetic data might well break them

Interesting -- can you provide some citations?

Can you highlight some specific AGI safety concepts that make less sense without secular atheism, reductive materialism, and/or computational theory of mind?

The AI Does Not Hate You is the same book as The Rationalist's Guide to the Galaxy? I didn't realize that. Why do they have different titles?
