An alignment tax (sometimes called a safety tax) is the additional cost of building an aligned AI system, relative to building an unaligned one.
Paul Christiano distinguishes two main approaches to dealing with the alignment tax.[1][2] One approach seeks ways to pay the tax, for example by persuading individual actors to pay it or by facilitating the kind of coordination that would allow groups to pay it. The other approach tries to reduce the tax, either by differentially advancing algorithms that are already alignable or by making existing algorithms more alignable.
...