tobycrisford

382 karma · Joined

Comments (83)

I'm not sure if I agree with this.

I think your characterization of EAs is spot on, but I don't think it's a bad thing.

I've been loosely involved in a few different social movements (student activism, vegan activism, volunteering for a political party), and what makes EA unique is exactly the attitude you're describing here. Whenever I went to an EA meetup or discussion group, people spent most of their time discussing things that EA could be getting fundamentally wrong. In my admittedly limited experience, that is really weird! And it's also brilliant! Criticisms of EA, and of its currently popular ideas, are taken extremely seriously, and in good faith.

I think a necessary consequence of this attitude is that the EA label becomes something people adopt only with a healthy degree of embarrassment and apology. It is not a badge of pride. Because as soon as it becomes an identity to be proud of, it becomes much harder, and more emotionally draining, to carefully consider the important criticisms of it.

I think you are right that there are probably downsides to this attitude as well. But I worry about what it would mean for the EA movement if it ever lost it.

I feel like I should also acknowledge the irony in the fact that in this particular context, it is you who are criticizing an aspect of the EA movement, and me who is jumping to defend it and sing its virtues! I'm not sure what this means but it's a bit too meta for my liking so I'll end my comment there!

Thanks for your interesting thoughts on this!

On the timelines question, I know Chollet argues AGI is further off than a lot of people think, and maybe his views do imply that in expectation. But it also seems to me that his views introduce higher variance into the prediction, and so would also allow for the possibility of much more rapid progress towards AGI than the conventional narrative does.

If you think we just need to scale LLMs to get to AGI, then you expect things to happen fast, but probably not that fast. Progress is limited by compute and by data availability.

But if there is some crucial set of ideas yet to be discovered, then that's something that could change extremely quickly. We're potentially just waiting for someone to have a eureka moment. And once that moment happened, we'd be much less certain what exactly was possible with current hardware and data. Maybe we could have superhuman AGI almost overnight?

This is a really interesting way of looking at the issue!

But is PASTA really equivalent to "a system that can automate the majority of economically valuable work"? If it is specifically supposed to mean the automation of innovation, then that sounds closer to Chollet's definition of AGI to me: "a system that can efficiently acquire new skills and solve open-ended problems".

Thanks for this interesting summary! These are clearly really powerful arguments for biting the bullet and accepting fanaticism. But does this mean that Hayden Wilkinson would literally hand over their wallet to a Pascal's mugger, if someone attempted to mug them in this way? Because Pascal's mugging doesn't have to be a thought experiment. It's a script you could literally read to someone in real life, and I'm assuming that if I tried it on a philosopher advocating for fanaticism, I wouldn't actually get their wallet. Why is that? What's the argument that lets you avoid following through in practice?

I 'disagreed' with this, because I don't think you drew enough of a distinction between purchasing animals raised on factory farms and purchasing meat in general.

While there might be an argument that the occasional cheeseburger isn't that "big of a deal", I think purchasing a single chicken raised on a factory farm is quite a big deal. And if you do that occasionally, stopping will probably rank pretty high on the list of effective actions you can take, in terms of its impact-to-effort ratio.

Thanks for this write-up!

You might already be aware of these, but I think there are some strong objections to the Doomsday argument that you didn't touch on in your post.

One is the Adam & Eve paradox, which seems to follow from the same logic as the Doomsday argument, but also seems completely absurd.

Another is reference class dependence. You say it is reasonable for me to conclude I am in the middle of 'humanity', but what is humanity? Why should I consider myself a sample from all Homo sapiens, and not, say, from all apes, or all mammals, or all Earth-originating life? What even is a 'human'?

Makes sense, thank you for the reply, I appreciate it!

And it's good to know you still want to hear from people who don't meet that threshold of involvement; I wasn't sure whether that was the case from the wording of the post and survey questions. I will fill it in now!

Do you want non-"actively involved" EAs to complete the survey?

The definition of "active involvement" is given as working >5 hours per week in at least one EA cause area, and it reads like the $40 is only donated for people in that category, suggesting that maybe these are the only people you want to hear from?

This seems quite strict! I've taken the GWWC pledge, and I give all my income above a cap to EA causes. I also volunteer with the Humane League, probably spending a few hours a month on average doing stuff with them. And I check the EA Forum pretty regularly, even commenting sometimes! But I definitely don't spend 5 hours of my time a week working on the causes from that list, so I can't honestly describe myself as an "actively involved" EA in answer to that question.

If the survey's not for me, then that's obviously fine: it's your survey! I can't help feeling a bit disappointed that I don't qualify as "actively involved", though! Maybe a survey for "EAs involved in direct work" would be a kinder way of phrasing it?

You've made some interesting points here, but I don't think you ever discussed the possibility that someone is actually voting altruistically, for the benefit of some group or cause they care about (helping people in their local area, people in the rest of the country, everyone in the world, future generations, etc.).

Is it really true that most voters' behavior can be explained by either (i) self-interest, or (ii) an 'emotionally rewarding cheer for their team'? I find that a depressing thought. Is no one sincerely trying to do the right thing?

If you are voting altruistically, then the number of people affected by the outcome of an election is big enough to start outweighing the tiny chance that your vote will change the result, in expected value terms.
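To make the arithmetic concrete, here's a minimal sketch of that expected-value calculation. All of the numbers are illustrative assumptions I've made up for this example, not figures from the post:

```python
# Toy expected-value calculation for altruistic voting.
# Every number below is an illustrative assumption, not a real estimate.
p_decisive = 1e-7           # assumed chance that one vote flips the election
people_affected = 3e8       # assumed number of people affected by the outcome
benefit_per_person = 100    # assumed value (in $) to each person of the better outcome

expected_value = p_decisive * people_affected * benefit_per_person
print(f"Expected altruistic value of one vote: ${expected_value:,.0f}")
# With these made-up numbers: 1e-7 * 3e8 * 100 = $3,000, far more than the
# cost of casting a vote, even though p_decisive is tiny.
```

The point is just that, unlike in the self-interested case, the altruistic payoff scales with the number of people affected, so it can survive multiplication by a very small probability.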
