I have two thoughts that spring from this:
Related to your last paragraph, what do you think about "Have epistemic conditions always been this bad?" In other words, was there a time when the US wasn't like this?
In Six Plausible Meta-Ethical Alternatives, I wrote (as one of the six alternatives):
- Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.
I think in this post you're not giving enough attention to the possibility that there's something that we call "doing philosophy" that can be used to discover all kinds of philosophical truths, and that you can't become a truly powerful civilization without being able to "do philosophy" and being generally motivated by the results. Consider that philosophy seems to have helped the West become the dominant civilization on Earth, for example by inventing logic and science, and more recently has led to the discovery of ideas like acausal extortion/trade (which seem promising albeit still highly speculative). Of course I'm very uncertain of this and have little idea what "doing philosophy" actually consists of, but I've written a few more words on this topic if you're interested.
The point of my comment was that even if you're 100% sure about the eventual interest rate move (which of course nobody can be), you still have major risk from path dependency (as shown by the concrete example). You haven't even given a back-of-the-envelope calculation for the risk-adjusted return, and the "first-order approximation" you did give (which both uses leverage and ignores all risk) may be arbitrarily misleading, even for the purpose of "giv[ing] an idea of how large the possibilities are". (Because if you apply enough leverage and ignore risk, there's no limit to how large the possibilities of any given trade appear to be.)
> We welcome other criticisms to discuss, but comments like your first line are not helpful!
I thought about not writing that sentence, but figured that other readers could benefit from knowing my overall evaluation of the post (especially given that many others have upvoted it and/or written comments indicating overall approval). I'd be interested to know if you still think I should not have said it, or should have said it in a different way.
I think this post contains many errors/issues (especially for a post with >300 karma). Many have been pointed out by others, but I think at least several still remain unmentioned. I only have time/motivation to point out one (chosen for being relatively easy to show concisely):
> Using the 3x levered TTT with duration of 18 years, a 3 percentage point rise in rates would imply a mouth-watering cumulative return of 162%.
Levered ETFs exhibit path dependency, or "volatility drag", because they reset their leverage daily, which means you can't calculate the return without knowing the path interest rates take on the way to the 3 percentage point rise. TTT's website acknowledges this with a very prominent disclaimer:
> Important Considerations
>
> This short ProShares ETF seeks a return that is -3x the return of its underlying benchmark (target) for a single day, as measured from one NAV calculation to the next.
>
> Due to the compounding of daily returns, holding periods of greater than one day can result in returns that are significantly different than the target return, and ProShares' returns over periods other than one day will likely differ in amount and possibly direction from the target return for the same period. These effects may be more pronounced in funds with larger or inverse multiples and in funds with volatile benchmarks.
You can also compare 1 and 2 and note that from Jan 1, 2019 to Jan 1, 2023, the 20-year treasury rate went up ~1%, but TTT is down ~20% instead of up (ETA: and has paid negligible dividends).
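To make the path dependence concrete, here's a minimal simulation sketch (entirely made-up paths and numbers, not TTT's actual history): two underlying paths that start and end at the same levels give a daily-reset -3x fund noticeably different cumulative returns, because the choppier path incurs more volatility drag.

```python
import numpy as np

def daily_reset_levered_return(daily_returns, leverage=-3.0):
    """Cumulative return of a fund that resets to `leverage`-times exposure each day."""
    nav = 1.0
    for r in daily_returns:
        nav *= 1.0 + leverage * r
    return nav - 1.0

# Two made-up price paths for the underlying bond index: both start at 100 and
# end at 90 (same total move), but one is smooth and one is choppy in between.
smooth = np.linspace(100.0, 90.0, 253)              # ~1 year of trading days
rng = np.random.default_rng(0)
noise = np.concatenate(([0.0], rng.normal(0.0, 1.0, 251), [0.0]))
choppy = smooth + noise                              # same endpoints, more volatility

for name, path in [("smooth path", smooth), ("choppy path", choppy)]:
    daily = np.diff(path) / path[:-1]
    print(f"{name}: underlying {path[-1] / path[0] - 1:+.1%}, "
          f"-3x daily-reset fund {daily_reset_levered_return(daily):+.1%}")
```

The underlying loses the same 10% either way, but the levered fund's return differs substantially between the two paths, which is exactly why you can't quote a cumulative return for the fund from the endpoint move alone.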
A related point: The US stock market has averaged 10% annual returns over a century. If your style of reasoning worked, we should instead buy a 3x levered S&P 500 ETF, get 30% return per year, compounding to 1278% return over a decade, handily beating out 162%.
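To get a rough sense of why that extrapolation fails, here's a standard back-of-the-envelope approximation (assuming the index follows geometric Brownian motion with drift $\mu$ and annualized volatility $\sigma$, with continuous rebalancing and ignoring fees and financing costs): the index compounds at about $\mu - \sigma^2/2$ per year in log terms, while a $k\times$ daily-reset fund compounds at about $k\mu - k^2\sigma^2/2$, so the levered fund lags "$k$ times the index's compound return" by roughly

$$\frac{(k^2 - k)\,\sigma^2}{2} \text{ per year.}$$

With $k = 3$ and a ballpark S&P 500 volatility of $\sigma \approx 0.16$, that's about $3\sigma^2 \approx 8$ percentage points of drag per year, which compounds into a large shortfall over a decade relative to the naive 30%-per-year figure, even before the fund's fees and borrowing costs.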
Pure selfishness can't work, since if everyone is selfish, why would anyone believe anyone else's PR? I guess there has to be some amount of real altruism mixed in; it's just that when push comes to shove, people who will make decisions truly aligned with altruism (e.g., try hard to find flaws in their own supposedly altruistic plans, give up power after gaining it for supposedly temporary purposes, forgo hidden bets that have positive selfish EV but negative altruistic EV) may be few and far between.
> Ignaz Semmelweis
This is just a reasonable decision (from a selfish perspective) that went badly, right? I mean, if you have empirical evidence that hand-washing greatly reduced mortality, it seems pretty reasonable to expect that you could convince the medical establishment of this fact, and as a result gain a great deal of status/influence (which could eventually be turned into power/money).
The other two examples seem like real altruism to me, at least at first glance.
> The best you can do is “egoism, plus virtue signalling, plus plain insanity in the hard cases”.
The question is, is there a better explanation than this?
Do you know any good articles or posts exploring the phenomenon of "the road to hell is paved with good intentions"? In the absence of a thorough investigation, I'm tempted to think that "good intentions" is merely a PR front that human brains put up (not necessarily consciously), and that humans deeply aligned with altruism don't really exist, or are even rarer than they appear. See my old post A Master-Slave Model of Human Preferences for a simplistic model that should give you a sense of what I mean... On second thought, that post might be overly bleak as a model of real humans, and the truth might be closer to Shard Theory, where altruism is a shard that only or mainly gets activated in PR contexts. In any case, if this is true, there seems to be a crucial problem of how to reliably do good using a bunch of agents who are not reliably interested in doing good, which I don't see many people trying to solve or even talk about.
(Part of "not reliably interested in doing good" is that you strongly want to do things that look good to other people, but aren't very motivated to find hidden flaws in your plans/ideas that only show up in the long run, or will never be legible to people whose opinions you care about.)
But maybe I'm on the wrong track and the main root cause of "the road to hell is paved with good intentions" is something else. Interested in your thoughts or pointers.
> Over time, I've come to see the top questions as:
In one of your charts you jokingly ask, "What even is philosophy?", but I'm genuinely confused about why this line of thinking doesn't lead a lot more people to view metaphilosophy as a top priority, either in the technical sense of solving the problems of what philosophy is and what constitutes philosophical progress, or in the sociopolitical sense of how best to structure society for making philosophical progress. (I can't seem to find anyone else who often talks about this, even among the many philosophers in EA.)
I'd love to see research into what I called "human safety problems" (or sometimes "human-AI safety"), fleshing out the idea more or giving some empirical evidence as to how much of a problem it really is. Here's a short description of the idea from "AI design as opportunity and obligation to address human safety problems":
I go into a bit more detail in Two Neglected Problems in Human-AI Safety.