You might be interested to know that I also wrote a paper with John Kay critiquing EE, which was published in Econ Journal Watch last year. 'What Work is Ergodicity Doing in Economics?' was an earlier and more general survey for a seminar, and, as you note, was quite wordy! The EJW paper is targeted at EE, more mathematical, generally tighter, and introduces a few new points. Peters and some coauthors responded to it but didn't really address any of the substantive points, which I think is further evidence that the theory's just not very good.
Other commenters have made some solid points here about the problems with ergodicity economics (EE) in general. I've also addressed EE in a short paper published last year, so I won't go over all that here. But there are three specific implications for the framework you've presented which, I think, need further thought.
Firstly, a lot of EE work misrepresents expected utility as something that can only be applied to static problems. But this isn't the case, and it has implications for your framework. You say that for situations with an additive dynamic, or which are singular, expected value theory is the way to go, whereas for multiplicative dynamics we should use EE. But any (finite) multiplicative dynamic can be turned into a one-off problem, if you assume (as EE does) that you don't mind the path you take to the end state. (For example: Peters' coin toss multiplies your wealth by 1.5 or 0.6 with equal probability each round, so two rounds collapse into the one-off question 'Do you want to take a bet where you have a 25% chance of shrinking your wealth by 64%, a 50% chance of shrinking it by 10%, and a 25% chance of increasing it by 125%?'.) The sketch below spells out that arithmetic. So your framework doesn't actually specify how to approach these situations.
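To make that collapse concrete, here's a minimal sketch (assuming the standard EE coin-toss multipliers of 1.5 and 0.6; the code and names are mine, not anything from your post) that enumerates two rounds of the toss and recovers the one-off gamble above:

```python
# Two rounds of Peters' coin toss: each toss multiplies wealth by 1.5
# (heads) or 0.6 (tails) with equal probability. Enumerating the four
# equally likely sequences collapses the dynamic into a one-off bet.
from itertools import product

factors = {"H": 1.5, "T": 0.6}
outcomes = {}
for seq in product("HT", repeat=2):
    w = 1.0
    for toss in seq:
        w *= factors[toss]
    w = round(w, 10)  # merge the float keys for HT and TH
    outcomes[w] = outcomes.get(w, 0.0) + 0.25  # each sequence has prob 0.5**2

for mult, prob in sorted(outcomes.items()):
    print(f"P = {prob:.2f}: wealth multiplied by {mult:.2f}")
# P = 0.25: wealth multiplied by 0.36  (shrink by 64%)
# P = 0.50: wealth multiplied by 0.90  (shrink by 10%)
# P = 0.25: wealth multiplied by 2.25  (increase by 125%)
```

The same enumeration works for any finite number of rounds, which is the point: nothing about a multiplicative dynamic forces a 'dynamic' treatment.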
Secondly: ruin. You don't provide a very clear definition of what counts as a 'ruin' problem. For example, say I go to a casino. I believe I have an edge which will, if I can exploit it all night, win me quite a chunk of money, which I will donate to charity. However, I only have so much cash on me and can't get more. If I'm unlucky early on, I lose all my cash and can't play any more. This is an absorbing state (for this specific problem) and so would, I think, count as a ruin problem. But it seems implausible that we can't subject it to the same kind of analysis we would use if the casino let me buy chips with my credit card; the sketch below makes this concrete.
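Here's a quick simulation of that casino example, with illustrative numbers I've made up (a 55% chance of winning each even-money bet, a bankroll of 10 units, 200 bets in a night):

```python
# Gambler with a positive edge: compare a night where running out of
# cash is an absorbing state against one where credit removes it.
import random

def night(bankroll=10, p_win=0.55, n_bets=200, credit=False):
    wealth = bankroll
    for _ in range(n_bets):
        if not credit and wealth <= 0:
            break  # absorbing state: out of cash, can't keep playing
        wealth += 1 if random.random() < p_win else -1
    return wealth

trials = 20_000
cash_only = sum(night(credit=False) for _ in range(trials)) / trials
on_credit = sum(night(credit=True) for _ in range(trials)) / trials
print(f"mean final wealth, cash only:   {cash_only:.1f}")
print(f"mean final wealth, with credit: {on_credit:.1f}")
```

The absorbing state changes the numbers, but not the kind of analysis: in both cases you can compute (or simulate) a distribution over final wealth and reason about it with perfectly ordinary expected-value tools.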
Treating ruin as a totally special case also risks proving too much. It would seem to imply either that we should drop everything and focus on, for example, a catastrophic asteroid strike, or that we just don't have a framework for dealing with decisions in a world where there could, possibly, one day be a catastrophic asteroid strike. Neither of these seems very plausible or useful.
The third issue follows from the first two. There are clear issues with expected value theory, and with expected utility theory more broadly. But there are also clear justifications for their use. It's unclear to me what your justification for using EE is. Since EE is incompatible with expected utility theory in general, which includes expected value maximisation as a special case, it seems curious that both coexist in your framework for making ethical decisions.
I appreciate that you want to avoid the apparent paradoxes that result from applying expected value theory, but I think you still need some positive justification for using EE. Personally, I don't think such a justification exists.