quinn

1584 karma · Joined · Working (0-5 years) · Philadelphia, PA, USA
quinnd.net

Bio

Participation
6

Off-and-on projects in epistemic public goods, AI alignment (mostly interested in multipolar scenarios, cooperative AI, ARCHES, etc.), and community building. I've probably done my best work as a research engineer on a cryptography team; I'm pretty bad at product engineering.

Comments
238

I think a separate but plausibly better point is that the "memetic gradient" in politics is characterized in known awful ways, and many longtermist theories of change offer an opportunity for something better. If you pursue a political theory of change, you're consenting to a relentless onslaught of people begging you to make your epistemics worse on purpose. That's a perfectly good reason not to sign up for politics. The longtermist ecosystem is not immune to similar issues, but it certainly seems like there's a fighting chance, or that it's the least bad of all options.

Longtermism and politics both seem to have error bars so wide that expected value theory is probably useless, or an excuse for motivated reasoning, or both. But I don't think this is damning toward EA, because the downsides of a misfire in a brittle theory of change don't seem very important for most longtermist interventions (your pandemic preparedness scheme might accidentally abolish endemic flu, so your miscalculation about the harm or likelihood of something way worse than covid is sort of fine). Whereas in politics, the brittleness of the theory of change means you can be well-meaningly harmful, which is kind of the central point of anything involving "politics" at all.

Certainly this is not robust to all longtermist interventions, but I find it very convincing for the average case.
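
A minimal sketch of the error-bars worry above, with entirely hypothetical numbers: when a probability estimate is only known to within several orders of magnitude, the expected value spans the same range, so any point estimate can be steered by motivated reasoning.

```python
# Hypothetical numbers only: harm of a catastrophe in arbitrary units,
# and a plausibility range for its probability spanning four orders
# of magnitude.
harm = 1e9

for p in (1e-6, 1e-4, 1e-2):
    # Expected harm = probability * magnitude.
    print(f"p = {p:.0e} -> expected harm = {p * harm:,.0f}")

# Output runs from 1,000 to 10,000,000: the EV inherits the full width
# of the error bars, so the point estimate carries almost no information.
```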

Yeah, but I think it relies too much on a given applicant's estimate of how well CEA knows the connection, or how much they trust it.

Answer by quinn

https://www.lesswrong.com/tag/complexity-of-value 

I'm roughly comfortable leaving it here, though it's not obvious how different people actually get convinced of it. They're right to question speciesism or whatever, and I hope it becomes salient to them that their mistakes aren't simply disloyalty.

I wouldn't expect a lot of scarcity mindset, because there's a lot of generically in-demand talent and experience among AI x-risk orgs. Status may be a more reasonable question, but job security doesn't really make sense.

Withholding the current score of a post until after a vote is cast (where casting is committal) should be enough to prevent strategic behavior. But it comes with many downsides. I think feed ordering / recsys could still work with private information, so the scores may in principle be inferrable from patterns in your feed, though you probably won't actually do that. The worse problem is commitment: I like to edit my votes quite a bit after initial impressions.

I imagine there's a more subtle instrument; withholding the current score until committal votes have been cast seems almost like a limit case.
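
A minimal sketch of the limit case, with hypothetical names and not the Forum's actual implementation: the client only learns the current score after submitting a vote, and the vote cannot be edited afterward, which removes both the information and the opportunity needed for bandwagon voting.

```python
class Post:
    """Toy model of a post whose score is hidden until you commit a vote."""

    def __init__(self) -> None:
        self._score = 0
        self._voted: set[str] = set()  # user ids that have already voted

    def cast_vote(self, user_id: str, delta: int) -> int:
        """Commit a +1/-1 vote; the score is revealed only afterward."""
        if user_id in self._voted:
            raise ValueError("votes are committal: no edits after casting")
        if delta not in (-1, 1):
            raise ValueError("vote must be +1 or -1")
        self._voted.add(user_id)
        self._score += delta
        return self._score  # first time this voter sees the score

post = Post()
print(post.cast_vote("alice", +1))  # 1 -- revealed only after committing
```

The commitment requirement is exactly the downside named above: a voter who revises their impression after reading more carefully has no way to edit the vote.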

I'm extremely upset about the recent divergence from ForumMagnum / LessWrong.

  • 1 click to go from any page to my profile became 2 clicks. (Is the argument that you looked at the clickstream dashboard and found that Quinn was the only person navigating to his profile noticeably more than he was navigating to, say, DMs or a new post? I go to my profile a lot to look up prior comments so I don't repeat myself across Discord servers or threads!)
  • Permalink moved to the top right corner of the comment, instead of clicking the timestamp (David Mears suggests we're now in violation of industry standard).
  • Moving upvote/downvote to the left, and removing it from the bottom! This seems backwards to me: we want more people upvoting/downvoting at the bottom of posts (presumably to decrease voting on things without actually reading them) and fewer people voting at the top!

I'm neutral on the Quick Takes rebrand: I'm a huge fan of shortform overall (if I were Dictator of Big EA I would ban Twitter and Facebook and move everybody to shortform/quick takes!), and I trust y'all to do whatever you can to increase adoption.

I tend to think common knowledge of overall ambivalence or laziness in vetting writers, evidenced by the magazines behind Torres' clickbait career, is worth promoting: https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty (though I don't know anything about this Mark Fuentes character or whether he's trustworthy).

I guess I can say what I've always said: the value of sneer/dunk culture is publicity and anti-selection. People who think sneer/dunk culture is bad writing become attracted to us, and people who think it's good writing don't.

The top-right placement is a divergence from LessWrong, right? It used to be that clicking the timestamp gave the permalink, and I think LessWrong is still that way.

Is viewpoints.xyz on GitHub?
