Director @ Tlön
10958 karma · Working (6–15 years) · Buenos Aires, Argentina


I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into various languages.

After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee.

Every post, comment, or wiki edit I have authored is hereby licensed under a Creative Commons Attribution 4.0 International License.


Future Matters


Topic contributions

Thanks for sharing this. FYI, the links to the ‘Nuclear Safety Standards’ and ‘Basel III’ case studies are not publicly accessible.

Beware safety washing:

An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.

Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”

I think if you think there's a major difference between the candidates, you might put a value on the election in the billions -- let's say $10B for the sake of calculation.

You don't need to think there's a major difference between the candidates to conclude that the election of one candidate adds billions in value. The size of the US discretionary budget over the next four years is roughly three orders of magnitude larger than your $10B figure, and a president can have an impact of the sort EAs care about in ways that go beyond influencing the budget, such as regulating AI, setting immigration policy, eroding government institutions, and waging war.
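To make the comparison concrete, here is a quick back-of-the-envelope check (a sketch only: the ~$1.7T/year discretionary figure is an approximate assumption, and actual amounts vary by fiscal year):

```python
import math

# Assumption for illustration: ~$1.7T/year in US discretionary spending.
annual_discretionary = 1.7e12   # dollars per year
years = 4
election_value = 10e9           # the $10B figure from the parent comment

budget_over_term = annual_discretionary * years   # ~$6.8T over one term
ratio = budget_over_term / election_value         # ~680x
orders_of_magnitude = math.log10(ratio)           # ~2.8

print(f"{ratio:.0f}x, about {orders_of_magnitude:.1f} orders of magnitude")
```

On these rough numbers the four-year discretionary budget comes out around 680 times the $10B figure, i.e. close to three orders of magnitude.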

Couldn't secretive agreements be mostly circumvented simply by asking the person directly whether they signed such an agreement? If they decline to answer, the answer is very likely 'Yes', especially if one would expect them to answer 'Yes' to the parallel question in scenarios where the agreement they signed was not secretive.

Alternatively, you could make the downvote button reduce an existing vote's strength by one when the vote count is positive, and vice versa. For example, after casting a +9 strong upvote on a comment, the user could reduce the vote's strength to +7 by pressing the downvote button twice.
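The proposed mechanic can be sketched as follows (a minimal sketch of the suggestion, not the Forum's actual implementation; the function name and edge-case handling are my assumptions):

```python
def adjust_vote(current: int, button: str) -> int:
    """Proposed behavior: when a vote is already cast, the opposing
    button nudges its strength one step toward zero instead of
    flipping it. Other cases are left unchanged in this sketch."""
    if button == "down" and current > 0:
        return current - 1
    if button == "up" and current < 0:
        return current + 1
    return current

# Example from the comment: a +9 strong upvote, downvote pressed twice.
vote = 9
vote = adjust_vote(vote, "down")
vote = adjust_vote(vote, "down")
print(vote)  # → 7
```

The mirrored case works the same way: a -9 strong downvote could be softened to -7 by pressing the upvote button twice.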

Another option is to let people with a voting power of n cast a vote of any strength between 1 and n. This may be somewhat challenging from a UI perspective, though.

I think many people have a voting power of 9. I do, and I know many people with more karma than me.

That seems like a fully general counterargument against relying on medical diagnoses for anything. There are always facts that confirm a diagnosis, and then the diagnosis itself. Presumably, it is often helpful to argue that the facts confirm the diagnosis rather than simply listing the facts. I don’t see any principled reason for eschewing diagnoses when they are used to support the conclusion that someone's testimony or arguments should be distrusted.

It seems reasonably clear that there are certain psychiatric disorders such that people would be justified in refusing to engage with, or in dismissing the claims of, those who suffer from them. I think the epistemically sound norm would be to ask those who argue that someone suffers from such a disorder to provide adequate evidence for the allegation.

See also Anders’s more personal reflections:

I have reached the age when I have seen a few lifecycles of organizations and movements I have followed. One lesson is that they don’t last: even successful movements have their moment and then become something else, sclerotize into something stable but useless, or peter out. This is fine. Not in some fatalistic “death is natural” sense, but in the sense that social organizations are dynamic, ideas evolve, and there is an ecological succession of things. 1990s transhumanism begat rationalism that begat effective altruism, and to a large degree the later movements suck up many people who would otherwise have been recruited by the transhumanists.

FHI did close before its time, but it is nice to know it did not become something pointlessly self-perpetuating. As we noted when summing up, 19 years is not bad for a 3-year project. Indeed, a friend remarked that maybe all organisations should have a 20-year time limit. After that, they need to be closed down and recreated if they are still useful, shedding some of the accumulated dross.

The ecological succession of organizations and movements is not all driven by good forces. A fresh structure driven by interested and motivated people is often gradually invaded by poseurs, parasites and imitators, gradually pushing away the original people (or worse, they mutate into curators, gatekeepers and administrators). Many ideas develop, flourish, become explored and then forgotten once a hype peak is passed – even if they still have merit. People burn out, lose interest, form families and have to change priorities, or the surrounding context makes the movement change in nature. Dwindling activist movements may suffer “core collapse” as moderate members drift off while the hard core get more radical and pursue ever more extreme activism in order to impress each other rather than the world outside.

FHI did not do any of that. If we had a memetic failure, it was likely more along the lines of developing a shared model of the world and future that may have been in need of more constant challenge. That is one reason why I hope there will be more organizations like FHI but not thinking alike – places like CSER, Mimir, FLI, SERI, GCRI, and many others. We need the focus of a strongly connected organization to build thoughts and systems of substance but separate organizations to get mutual critique and diversity in approaches. Plus, hopefully, metapopulation resilience against individual organizational failures.
