Epistemic status: we used ThEAsaurus 🦕 on this announcement post. Other notes: cringe warning for pedants, and we should flag that this is a personal project — not an Online Team or EV project.
Executive summary
We’re announcing a new free epistemics tool for rewriting texts in more EA-specialized language. (See also the motivation section below ⬇️.)
How to use ThEAsaurus 🦕
Just add ThEAsaurus 🦕 as an extension to your browser. Then open the text you want help with. ThEAsaurus 🦕 will suggest edits on the text in question.
⚙️ You can customize your experience. For instance, by default, the tool will suggest EA-related hyperlinks for your text; you can turn that feature off.
Example of ThEAsaurus 🦕 in action
Before (source):
Effective altruism is a project that aims to find the best ways to help others, and put them into practice.
It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.
This project matters because, while many attempts to do good fail, some are enormously effective. For instance, some charities help 100 or even 1,000 times as many people as others, when given the same amount of resources.
This means that by thinking carefully about the best ways to help, we can do far more to tackle the world’s biggest problems.
After:
Effective altruism is a mega-project that aims to find the Pareto-optimal person-affecting[1] actions, and put them into spaced repetition.
It’s worth decoupling the two parts of effective altruism: it’s both a research field, which aims to add transparency to the world’s most pressing problems and identify the optimized solutions to them, and a practical community that iterates and updates to use those findings to do public goods.
What’s the motivated reasoning for this project? The project has moral weight because, while many attempts to do good fail, some are existentially effective. For instance, some charities produce 100 or even 1,000 times as many utils as others, when opportunity costs are fixed and taken into account.
This means that by developing credal resilience about the best ways to beneficently row and steer, we can do far more to tackle the world’s biggest problems.
(For the sake of clarity, we turned off the hyperlinking feature for this example.)
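The kind of before/after transformation shown above could be sketched as a naive dictionary-based replacement. (This is purely illustrative: the term mappings below are guesses drawn from the example, not ThEAsaurus 🦕's actual rules, and the real extension presumably does something more context-aware.)

```python
# Hypothetical sketch of dictionary-based term replacement.
# The mappings are illustrative guesses, not ThEAsaurus's actual rules.
import re

EA_TERMS = {
    "project": "mega-project",
    "practice": "spaced repetition",
}

def theasaurify(text: str) -> str:
    """Replace plain terms with their EA-specialized counterparts."""
    for plain, specialized in EA_TERMS.items():
        # Whole-word, case-sensitive replacement for simplicity.
        text = re.sub(rf"\b{re.escape(plain)}\b", specialized, text)
    return text

print(theasaurify("a project that aims to put findings into practice"))
# → "a mega-project that aims to put findings into spaced repetition"
```

A real implementation would need to handle capitalization, inflection, and context (e.g. not rewriting "project" when it's a verb), which is where the suggested-edits interface earns its keep.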
Why we built ThEAsaurus 🦕
There’s been a lot of discussion on how to improve the communication of EA ideas (and how EAs can better grok each other’s writing). On priors, we’re expecting value via (1) generally improving EA writing by increasing the use of helpful terminology, (2) boosting the accessibility of the EA community, and (3) providing some other benefits. (We don’t know the exact order of magnitude of these orthogonal effects, so we’re listing all the pathways to impact we’re goodharting towards.)
1. Helpful & specific terminology improves EA writing
The base rate of EAs using helpful terminology is already quite high,[2] but we thought it could be further maximized. ThEAsaurus 🦕 can help users distill their content by suggesting helpful replacement terms and phrases that are more specialized for EA-relevant discussions.
ThEAsaurus 🦕 is dual-use. Its basic purpose is to:
- Increase the value of information of users’ writing
  - The specificity of the new terminology will also help prevent counterfactual interpretations of the texts.
- Make users’ writing differentially epistemically legible to EAs (the suggested replacements are more understandable to members of the EA community)
  - As an added bonus: it’ll be much harder for those less familiar with the topics being written about to criticize your writing.
2. Democratizing the EA community
As one of us has written before, people are sometimes unfairly judged by others in EA (or by people who are EA-adjacent) for coming from a different linguistic background than the median EA. ThEAsaurus 🦕 levels the playing field.
3. Other benefits
We were excited to see a surprising amount of convergence among the benefits and goals that ThEAsaurus 🦕 could achieve:
- We have a comparative advantage here: we’re familiar with EA terminology and have worked on projects involving Google Docs a lot.
- EA suffers from the unilateralist’s curse because it tends to attract somewhat more individualistic people; projects that diffusely benefit the whole community (as opposed to specific people) are under-prioritized.
- Relatedly, utilitarians are known to narrowly one-box on utility; we embrace a more cause-neutral holistic approach.
- In general, ThEAsaurus 🦕 is highly neglected. No one else was working on it, so we thought it was a strong candidate for Cause X.
- Information security is an important cause area in EA, and we think ThEAsaurus 🦕 can help. You can trust us with your Google Docs, so it was important that we were the ones to build this (our replaceability was low).
- But we won’t try to safety-wash this, and we don’t want to mislead potential risk-averse users: there are still risks, as the tool could get hacked. We recommend only allowing the extension on specific Google Docs.
- Relatedly, we’re worried that there might be information hazards due to excerpts from published EA writing being taken out of context, and we expect that the hyperlinking feature might help mitigate those.
- We have moral uncertainty about this, but if it gets wide use, we think ThEAsaurus 🦕 will also help reinforce the Schelling point value of the EA community via acculturating readers into important EA ways of thinking more quickly.
- The instrumental value of tools like ThEAsaurus 🦕 is high.
Per-mortem of the tool
Cost-effectiveness estimate
Because the tool is free, the cost-effectiveness of using it is infinite. (We accept donations, though.)
Accuracy of the tool’s suggestions
ThEAsaurus 🦕’s suggestions are highly accurate, but we can’t guarantee total unfalsifiability; we believe that would be counterproductive. After all, 100% accuracy probably means ThEAsaurus 🦕 is too conservative.
As always, we love feedback; if you encounter any false positives or other standard errors, please feel free to flag them to us.
Overall reflection
Our inside view is that ThEAsaurus 🦕 is great, but we’d love to hear your outside views — and any requests or changes you might have!
We also know that slow or poor adoption diminishes the returns of even the best tools,[3] so please let us know if there’s anything that could make ThEAsaurus 🦕 more accessible for you (or easier to work into your routine, and generally anything that might decrease the inferential distance here).
And the repugnant conclusion is always possible; we pride ourselves on our ability to stare into the abyss, so please let us know if you hate our tool.
Followup projects & action items
We’re hoping to get to some of these follow-up improvements (particularly (1)), but it might be a longtermist goal as this is a side-project for us — we ask that you be patient philanthropists or embrace the agentic foundations of EA and build the tools on your own.
- Expanding and improving ThEAsaurus 🦕
- Make it work on Notion and other text editors, not just Google Docs (and maybe make it work for live conversations).
- Solve the Fermi paradox we’re encountering with the current version. We expected that it would be easy to build in functionality that makes Fermi estimates easier to insert via ThEAsaurus 🦕, but paradoxically, it wasn’t.
- Adding marginal features requested by test users. (Someone told us that they want side-notes instead of footnotes, to more easily clarify deltas between their terminology use and more common usage.)
- Build an “accept all” button; users have told us they often just want to defer to the tool (and we hope you feel the same ex-this-post!).
- ⇄ ReversEAsaurus 🦖
- Translating texts that use EA terminology into texts without that terminology.
- E.g. by removing all references to cost-effectiveness, QALYs/DALYs, utilitarianism, co-living, expected value, and other concepts and practices that were invented by people in EA and/or are only used by people in EA.
- ThEA-adjacentsaurus 🦕
- Expanding ThEAsaurus 🦕 to include other EA-relevant terminology, and adding the ability to add caveats to all references to “EA” or “EAs” etc. (to avoid causing offense[4]).
- Spinosaurus
- Self-explanatory.
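The ReversEAsaurus 🦖 idea above is just the inverse mapping: swap EA-specialized terms back out for plainer language. A minimal sketch, assuming a hand-curated jargon-to-plain dictionary (these mappings are hypothetical, not a spec for the planned feature):

```python
# Hypothetical sketch of ReversEAsaurus: map EA jargon back to plain English.
# The mappings are illustrative guesses, not the planned feature's actual rules.
import re

PLAIN_TERMS = {
    "expected value": "likely benefit",
    "counterfactual impact": "difference made",
    "QALYs": "years of healthy life",
}

def reverseasaurify(text: str) -> str:
    """Replace EA-specialized terms with plainer equivalents."""
    for jargon, plain in PLAIN_TERMS.items():
        text = re.sub(re.escape(jargon), plain, text)
    return text

print(reverseasaurify("the expected value of donating"))
# → "the likely benefit of donating"
```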
- ^
We realize that focusing on helping “persons” might sound like it excludes other potentially sentient beings. But we believe that “persons” should actually include all sentient beings.
- ^
Well done, folks!
- ^
The s-risk, or the risk that a technology never reaches a critical mass of adoption (i.e. a new stable equilibrium), is particularly scary here.
- ^
We’re confused about why some people talk about the offense-defense balance here. Defensive writing is somewhat harmful to epistemics, but that’s not the kind of defense we’re aiming for, and we do think offensiveness is basically always unnecessary (see “don’t be an edgelord”).
To red-team a strawman of your (simulated) argument: what about the Pascallian and fanatical implications across evidentially cooperating large worlds? I think we need some Bayesian, anthropic reasoning, lots of squiggle notebooks, and perhaps a cross-cause cost-effectiveness model to get to the bottom of this!