Habryka

18959 karma · Joined Sep 2014

Bio

Project lead of LessWrong 2.0; I often help the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1200

Topic contributions: 1

Yeah, I don't think this is a crazy take. Having thought about it for many years, I disagree with it, but I agree that it could make things better (though I expect it would instead make things worse).

Yes: if we send people to Anthropic with the aim of "winning an AI arms race," that will make it more likely that Anthropic starts to cut corners. Indeed, that is very close to the reasoning that led to OpenAI's existence, and that reasoning seems to have caused OpenAI to cut lots of corners.

That sounds like the way OpenAI got started.

Oh, I quite like the idea of having the AI score the writing on different rubrics. I've been thinking about how to better use LLMs on LW and the AI Alignment Forum, and I hadn't considered rubric scoring; I might give it a shot as a feature to integrate.
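As a minimal sketch of what rubric scoring could look like: prompt the model for one numeric score per rubric and validate the reply. The rubric names, prompt, and score range here are all illustrative assumptions, not anything LW actually uses, and the canned reply stands in for a real LLM API call.

```python
import json

# Hypothetical rubrics for scoring a post (illustrative, not LW's actual list).
RUBRICS = ["clarity", "novelty", "rigor", "civility"]

def build_prompt(text: str) -> str:
    """Ask the model for one 1-10 score per rubric, as JSON."""
    return (
        "Score the following post from 1 to 10 on each rubric: "
        + ", ".join(RUBRICS)
        + '. Reply with JSON only, e.g. {"clarity": 7, ...}.\n\n'
        + text
    )

def parse_scores(reply: str) -> dict:
    """Parse and validate the model's JSON reply."""
    raw = json.loads(reply)
    scores = {}
    for rubric in RUBRICS:
        value = int(raw[rubric])
        if not 1 <= value <= 10:
            raise ValueError(f"{rubric} score {value} out of range")
        scores[rubric] = value
    return scores

# A real integration would send build_prompt(post) to an LLM here;
# this canned reply just exercises the parsing path.
reply = '{"clarity": 8, "novelty": 6, "rigor": 7, "civility": 9}'
print(parse_scores(reply))
```

The validation step matters because model output is untrusted: a malformed or out-of-range reply should fail loudly rather than silently feed bad scores into the site.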

That's an interesting idea, I hadn't considered that!

Yeah, I've considered this a bunch (especially after my upvote strength on LW went up to 10, which really limits the number of people in my reference class). 

I think a full multi-selection UI would be hard, but a user setting on your profile that lets you set your upvote strength to any number between 1 and your current maximum would be less convenient yet much easier UI-wise. It would still require somewhat involved changes to how votes are stored: we currently maintain an invariant that any user's karma can be recalculated from nothing but the vote table, and this would introduce a new dependency into that calculation, with some reasonably big performance implications.
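To illustrate the invariant described above: if each row in the vote table carries its own power, any user's karma is just a sum over that table. The schema and numbers are hypothetical, not LW's actual data model; the point is that a per-vote adjustable strength either has to be stored on each vote at cast time, or recovered by joining against a history of the voter's setting, which is where the performance cost comes in.

```python
# Hypothetical vote table: (voter_id, target_author_id, vote_power).
# Storing the power on each row keeps karma recomputable from votes alone.
votes = [
    ("alice", "bob", 2),
    ("carol", "bob", 1),
    ("alice", "carol", -2),
]

def recalculate_karma(vote_table):
    """Rebuild every author's karma from nothing but the vote table."""
    karma = {}
    for _voter, author, power in vote_table:
        karma[author] = karma.get(author, 0) + power
    return karma

print(recalculate_karma(votes))  # {'bob': 3, 'carol': -2}
```

If power instead depended on a mutable per-user setting, recalculation would need the setting's value at the time each vote was cast, breaking the "vote table alone suffices" property.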

(I care quite a bit about votes being anonymous, so will generally glomarize in basically all situations where someone asks me about my voting behavior or the voting behavior of others, sorry about that)

My guess is LW both bans and rate-limits more. 

Academia before the mid-20th century was a for-profit enterprise. It did not receive substantial government grants, and it was often very tightly intertwined with the development of industry (much more so than today).

Indeed, the degree to which modern academia operates on a grant basis and has adopted the trappings of the nonprofit space is one of the primary factors in my model of its modern dysfunctions.

Separately, I think the contribution of militaries to industrial and scientific development is overrated, though that also would require a whole essay to go into.

I take a very longtermist, technology-development-focused view on things, so global health and development (GHD) achievements weigh a lot less in my calculus.

The vast majority of world-changing technology was developed or distributed through for-profit companies. My sense is that nonprofits are also more likely to cause harm than for-profits (for reasons that would require their own essay, but which relate to their lack of feedback loops).
