peterhartree

3446 karma · Joined · Working (6-15 years) · Reykjavik, Iceland
pjh.is

Bio

Now: TYPE III AUDIO

Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, etc.

Before that: My CV.

Side-projects: Inbox When Ready; Radio Bostrom; The Valmy; Comment Helper for Google Docs.

Comments

I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".

Have you visited the 80,000 Hours website recently?

I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.

A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).

(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)

As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.

I’m glad you shared the J.S. Mill quote.

…the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better

EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).

To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.

In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.

My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top "influencers", and many of the "second tier", are not.

(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)

We made this change a few weeks ago. I'm sorry for the delay—I didn't see your message until now. I've tweaked my notification setup so that I'll see messages on this thread sooner.

Thanks. We're now filtering the diamond emojis out of the narrations. I've left the others in for now.

1. My current process

I check a couple of sources most days, at random times during the afternoon or evening. I usually do this on my phone, during breaks or when I'm otherwise AFK. My phone and laptop are configured to block most of these sources during the morning (LeechBlock and AppBlock).

When I find something I want to engage with at length, I usually put it into my "Reading inbox" note in Obsidian, or into my weekly todo list if it's above the bar.

I check my reading inbox on evenings and weekends, and also during "open" blocks that I sometimes schedule as part of my work week. 

I read about 1/5 of the items that get into my reading inbox, either on my laptop or iPad. I read and annotate using PDF Expert, take notes in Obsidian, and use Mochi for flashcards. My reading inbox—and all my articles, highlights and notes—are synced between my laptop and my iPad.
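To illustrate the capture step: since an Obsidian note is just a markdown file, a small script can append items to the inbox note. The sketch below is illustrative only; the vault path, note name and example URL are placeholders rather than my actual setup.

```python
# Minimal sketch: append a captured link to an Obsidian "Reading inbox" note.
# The vault path, note name and example URL are placeholders, not a real configuration.
from datetime import date
from pathlib import Path

VAULT = Path.home() / "Obsidian" / "Vault"   # placeholder vault location
INBOX = VAULT / "Reading inbox.md"           # placeholder note name


def capture(url: str, title: str) -> None:
    """Append a dated, unchecked todo item for the article to the inbox note."""
    INBOX.parent.mkdir(parents=True, exist_ok=True)
    line = f"- [ ] [{title}]({url}) (added {date.today().isoformat()})\n"
    with INBOX.open("a", encoding="utf-8") as f:
        f.write(line)


if __name__ == "__main__":
    capture("https://example.com/some-long-read", "An article worth engaging with at length")
```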


2. Most useful sources

(~Daily)

  • AI News (usually just to the end of the "Twitter recap" section). 
  • Private Slack and Signal groups.
  • Twitter (usually just the home screen, sometimes my lists).
  • Marginal Revolution.
  • LessWrong and EA Forum (via the 30+ karma podcast feeds; I rarely check the homepages).

(~Weekly)

  • Newsletters: Zvi, CAIS.
  • Podcasts: The Cognitive Revolution, AXRP, Machine Learning Street Talk, Dwarkesh.

3. Problems

I've not given the top of the funnel—the checking sources bit—much thought. In particular, I've never sat down for an afternoon to ask questions like "why, exactly, do I follow AI news?", "what are the main ways this is valuable (and disvaluable)?" and "how could I make it easy to do this better?". There's probably a bunch of low-hanging fruit here.

Twitter is... twitter. I currently check the "For you" home screen every day (via web browser, not the app). At least once a week I'm very glad that I checked Twitter—because I found something useful that I plausibly wouldn't have found otherwise. But—I wish I had an easy way to see just the best AI stuff. In the past I tried to figure something out with Twitter lists and Tweetdeck (now "X Pro"), but I've not found anything that sticks. So I spend most of my time with the "For you" screen, training the algorithm with "not interested" reports, an aggressive follow/unfollow/block policy, and liberal use of the "mute words" function. I'm sure I can do better...

My newsletter inbox is a mess. I filter newsletters into a separate folder, so that they don't distract me when I process my regular email. But I'm subscribed to way too many newsletters, many of which aren't focussed on AI, so when I do open the "Newsletters" folder, it's overwhelming. I don't reliably read the sources I flagged above, even though I consider them fairly essential reading (and would prefer them to many of the things I do, in fact, read).
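The folder-filtering step is the easy part to automate. Here is a minimal sketch of how it could be done over IMAP; the server details, credentials and the "has a List-Unsubscribe header" heuristic are assumptions for illustration, not a description of my real filters.

```python
# Minimal sketch: file messages that look like newsletters into a separate IMAP folder.
# All server details are placeholders; the "has a List-Unsubscribe header" test is a
# crude heuristic chosen for illustration.
import imaplib

HOST = "imap.example.com"     # placeholder mail server
USER = "me@example.com"       # placeholder account
PASSWORD = "app-password"     # placeholder credential
TARGET = "Newsletters"        # destination folder


def file_newsletters() -> None:
    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        # Unread messages carrying a List-Unsubscribe header are treated as newsletters.
        _, data = imap.search(None, '(UNSEEN HEADER "List-Unsubscribe" "")')
        for num in data[0].split():
            imap.copy(num, TARGET)
            imap.store(num, "+FLAGS", "\\Deleted")  # mark the inbox copy for deletion
        imap.expunge()


if __name__ == "__main__":
    file_newsletters()
```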

I addictively over-consume podcasts, at the cost of "shower time" (diffuse/daydream mode) or higher-quality rest. 

I don't make the most of LLMs. I have various ideas for how LLMs could improve my information discovery and engagement, but on my current setup—especially on mobile—the affordances for using LLMs are poor.
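To give a flavour of the kind of idea I mean: a script that asks an LLM to triage reading-inbox items against stated interests. This is a sketch only; it uses the OpenAI Python client as an arbitrary example provider, and the model name, prompt and interest list are placeholders.

```python
# Sketch: ask an LLM to triage a reading-inbox item against my stated interests.
# The OpenAI client is an arbitrary example provider; model name, prompt and
# interests are placeholders, not a setup I actually run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INTERESTS = "AI safety, AI policy, forecasting, productivity tooling"


def triage(title: str, summary: str) -> str:
    """Return a one-line read/skim/skip recommendation with a short reason."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"You triage articles for a reader interested in: {INTERESTS}. "
                    "Reply with 'read', 'skim' or 'skip', plus one sentence of reasoning."
                ),
            },
            {"role": "user", "content": f"Title: {title}\nSummary: {summary}"},
        ],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(triage("New scaling laws paper", "The authors revisit compute-optimal training."))
```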

I miss things that I'd really like to know about. I very rarely miss a "big story", but I'd guess I miss several things that I'd really like to know about each week, given my particular interests.

I find out about many things I don't need to know about.

I could go on...

Thanks for your feedback.

For now, we think our current voice model (provided by Azure) is the best available option all things considered. There are important considerations in addition to human-like delivery (e.g. cost, speed, reliability, fine-grained control).

I'm quite surprised that an overall-much-better option hasn't emerged before now. My guess is that something will show up later in 2024. When it does, we will migrate.

There are good email newsletters that aren't reliably read.

Readit.bot turns any newsletter into a personal podcast feed.

TYPE III AUDIO works with authors and orgs to make podcast feeds of their newsletters—currently Zvi, CAIS, ChinAI and FLI EU AI Act, but we may do a bunch more soon.

I think that "awareness of important simple facts" is a surprisingly big problem.

Over the years, I've had many experiences of "wow, I would have expected person X to know about important fact Y, but they didn't".

The issue came to mind again last week.

My sense is that many people, including very influential folks, could systematically—and efficiently—improve their awareness of "simple important facts".

There may be quick wins here. For example, there are existing tools that aren't widely used (e.g. Twitter lists; Tweetdeck). There are good email newsletters that aren't reliably read. Just encouraging people to make this an explicit priority and treat it seriously (e.g. have a plan) could go a long way.

I may explore this challenge further sometime soon.

I'd like to get a better sense of things like:

a. What particular things would particular influential figures in AI safety ideally do?
b. How can I make those things happen?

As a very small step, I encouraged Peter Wildeford to re-share his AI tech and AI policy Twitter lists yesterday. Recommended.

Happy to hear from anyone with thoughts on this stuff (p@pjh.is). I'm especially interested to speak with people working on AI safety who'd like to improve their own awareness of "important simple facts".
