Jamie is Managing Director at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history.
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
Give Jamie anonymous advice or feedback here.
Yeah, seems fair; asking LLMs to model specific orgs or people might achieve a similar effect without needing the contextual info, provided there's a lot of info about those orgs or people in the training data and you don't need it to represent specific ideas or info highlighted in a course's core materials.
Thanks for posting! I'll consider whether it'd be helpful for me to put replications of these questions to https://www.leaf.courses/ participants for comparison. Let me know if that'd be helpful to you somehow!
Just wanted to thank you and NickLaing for this exchange. I'm planning to use an adapted version of the thoughts/considerations as an example of estimating expected value in some resources I'm creating!
Working on a new, more effective TB vaccine: Cost per life saved?
I’ve started to worry that it might be important to get digital sentience work (e.g. legal protection for digital beings) done before we get transformative AI, and EAs seem like approximately the only people who could realistically do this in the next ~5 years.
I was interested to see you mention this, as this is something I think is very important.
The phrasing here got me thinking a bit about what that would look like if we were to try to make meaningful changes within 5 years specifically.
But I was wondering why you used the "~5 years" phrase here?
(Do you think transformative AI is likely within 5 years?)
Hey Joel! Cool list you already have.
Is the 300 USD prize for "(2) Cause areas" and/or "(3) Causes"? You distinguish them at the start of your post but then refer to "potential cause areas", "causes", and "cause ideas" in describing the contest.
Also, it's just one 300 USD prize and one 700 USD prize, right?
Thanks!
To add in some 'empirical' evidence: Over the past few months, I've read 153 answers to the question "What is your strongest objection to the argument(s) and claim(s) in the video?" in response to "Can we make the future a million years from now go better?" by Rational Animations, and 181 in response to MacAskill's TED talk, “What are the most important moral problems of our time?”.
I don't remember the concern that you highlight coming up very much, if at all. I did note "Please focus on the core argument of the video — either 'We can make future lives go better', or the framework for prioritising pressing problems (from ~2mins onwards in either video)", but I still would have expected this objection to come up a bunch if it was a particularly prevalent concern. For example, I got quite a lot of answers saying that it isn't fair/good/right/effective/etc to prioritise issues that affect the future when there are people alive today who are suffering, even though this isn't a particularly relevant critique of the core argument of either video.
If someone wanted to read through the dataset and categorise responses or some such, I'd be happy to provide the anonymised responses. I did that with my answers from last year, which were just on the MacAskill video and didn't have the additional prompt about focusing on the core argument, but probably won't do it this year.
(This was as part of the application process to Leaf's Changemakers Fellowship, so the answers were all from smart UK-based teenagers.)
I don't think we need to worry too much about 'crying wolf'. The effects of media coverage and persuasive messaging on (1) attitudes, and (2) perceived issue importance both substantially (though not necessarily entirely) wash out in a matter of weeks to months.
So I think we should be somewhat worried about wasted efforts -- not having a sufficiently concrete action plan to capitalise on the attention gained -- but not so much about lasting negative effects.
(More speculatively: I expect that there are useful professional field-building effects that will last longer than the public opinion effects, though, e.g. certain researchers deciding the issue now merits their attention, which make these efforts worthwhile anyway.)
This seems really cool. I was really excited just by reading the title of this forum piece. My initial reaction was something like, 'Yeah, I would be willing to sign up immediately and pay a subscription fee to access that if it was an app on my phone.' I could use it like a news app; that way, I could read it during breakfast or whenever else I have a spare moment. It could be a replacement for, or supplement to, the BBC News app I currently read.
I took a very quick look at the site on my phone, so these are just initial reactions; take my comments with a pinch of salt. That said, I think this is probably comparable to how most people would engage with the site if they're not dedicated forecasters and don't identify as effective altruists, rationalists, or something similar.
My main point is similar to another commenter's on this forum post: I'd love to be able to click on individual stories and read more about them. Even if the headline is the main takeaway, it feels like it doesn't really sink in until you've read some surrounding words, thoughts, comments, analysis, etc. Another forum commenter suggested that you could get forecasters to write explanations, but that sounds a bit technical and dry. My suggestion would be something like an interesting journalistic piece that uses the forecast as the main hook and story. For instance, you could interview some superforecasters and get quotes from them to help clarify the topic, but also include some surrounding discussion and analysis of the context and the story itself.
Another gut reaction was like, 'Oh, okay, so they have stories on a couple of specific topics, but not other stuff.' I think I was expecting to see stories on a wide variety of topics, gathered from different prediction markets and forecasting platforms. Some of these stories might be on random or frankly unimportant topics. My guess is that this would make for a much more engaging and interesting site. My guess is also that creating a truly engaging platform is more important to your mission or theory of change than focusing solely on important topics. By attracting more traffic, you'll get more people engaging with prediction markets and forecasts, and then maybe they'll start reading about the other topics too.
To reiterate, this seems like a really cool project. Let me know if you'd be interested in having guest writers. I'd be interested in trying to write one myself, and I imagine that a bunch of smart students would love to get some experience like that too. I could probably connect you with some. But I imagine that you could find lots more yourself pretty easily.
(I also realise that my suggestions/impressions might be time-consuming to implement and you're just in the MVP phase, but I thought they were worth sharing anyway.)
In the vein of "another good point" made in public reactions to the statement, here's an excerpt from an article I read in The Telegraph:
"Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer."
This seems obvious with hindsight as one factor at play, but I hadn't considered it before reading it here. This doesn't address Daniel / Haydn's point though, of course.
https://www.telegraph.co.uk/business/2023/06/04/worry-climate-change-not-artificial-intelligence/
Yeah, not sure. I expect this won't be a major bottleneck for most participants if they're just using it to bounce a few ideas around.