Pablo

Director @ Tlön
10989 karma · Joined · Working (6-15 years) · Buenos Aires, Argentina
www.stafforini.com/

Bio

I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into various languages.

After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee.


Every post, comment, or wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.

Sequences: 1 (Future Matters)

Comments: 1198

Topic contributions: 4123

@RobBensinger had a useful chart depicting how EA was influenced by various communities, including the rationalist community.

I think it is undeniable that the rationality community played a significant part in the development of EA in the early days. I’m surprised to see people denying this.

What seems more debatable is whether this influence is better characterized as “rationalism influenced EA” or as “both rationalism and EA emerged, to a significant degree, from an earlier and broader community that included sizeable numbers of both proto-EAs and proto-rationalists”.

Hi Mo. I'm unsure if you've seen it, but Gwern’s article was discussed here.

Thanks for sharing this. FYI, the links to the ‘Nuclear Safety Standards’ and ‘Basel III’ case studies are not publicly accessible.

Beware safety washing:

An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.

Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”

I think if you think there's a major difference between the candidates, you might put a value on the election in the billions -- let's say $10B for the sake of calculation.

You don't need to think there's a major difference between the candidates to conclude that the election of one candidate adds billions in value. The size of the US discretionary budget over the next four years is roughly three orders of magnitude larger than your $10B figure, and a president can have an impact of the sort EAs care about in ways that go beyond influencing the budget, such as regulating AI, setting immigration policy, eroding government institutions, and waging war.
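
For a rough sense of scale, here is a back-of-the-envelope check of that orders-of-magnitude comparison. The annual discretionary figure is an assumption used for illustration; it is not given in the comment itself.

```python
# Back-of-the-envelope check of the "three orders of magnitude" comparison.
# The annual figure below is an assumption, not taken from the comment.
ANNUAL_DISCRETIONARY = 1.7e12                 # ~$1.7 trillion per year (assumed)
four_year_total = 4 * ANNUAL_DISCRETIONARY    # ~$6.8 trillion over a term
ratio = four_year_total / 10e9                # ~680, i.e. roughly 10**2.8
print(f"~{ratio:.0f}x the $10B figure")
```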

Couldn't secretive agreements be largely circumvented simply by asking the person directly whether they signed such an agreement? If they decline to answer, the answer is very likely ‘Yes’, especially if one expects that they would have answered ‘Yes’ to the parallel question in a scenario where the agreement they had signed was not secretive.

Alternatively, you could make the downvote button reduce votes by one if the vote count is positive, and vice versa. For example, after casting a +9 on a comment by strongly upvoting it, the user can reduce the vote strength to +7 by pressing the downvote button twice.

Another option is to let people with a voting power of n cast a vote of any strength between 1 and n. This may be somewhat challenging from a UI perspective, though.
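
A minimal sketch of the two voting schemes described above, with hypothetical names and behaviour (this is not the Forum's actual code):

```python
def nudge_vote(current: int, max_power: int, downvote_pressed: bool) -> int:
    """First scheme: a downvote press subtracts one point and an upvote press
    adds one, clamped to [-max_power, +max_power], so pressing the button
    opposite to an existing vote moves it one step toward zero."""
    step = -1 if downvote_pressed else 1
    return max(-max_power, min(max_power, current + step))


def cast_vote(strength: int, max_power: int) -> int:
    """Second scheme: a voter with power n picks any strength from 1 to n."""
    if not 1 <= strength <= max_power:
        raise ValueError(f"strength must be between 1 and {max_power}")
    return strength


# Example: a user with voting power 9 strong-upvotes (+9), then presses
# the downvote button twice, leaving the vote at +7.
vote = cast_vote(9, max_power=9)
vote = nudge_vote(vote, max_power=9, downvote_pressed=True)
vote = nudge_vote(vote, max_power=9, downvote_pressed=True)
assert vote == 7
```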

I think many people have a voting power of 9. I do, and I know many people with more karma than me.

That seems like a fully general counterargument against relying on medical diagnoses for anything. There are always facts that confirm a diagnosis, and then the diagnosis itself. Presumably, it is often helpful to argue that the facts confirm the diagnosis instead of simply listing the facts alone. I don’t see any principled reason for eschewing diagnoses when they are being used to support the conclusion that someone's testimony or arguments should be distrusted.
