Thanks, really interesting.
Yes, yes. I think the point we wanted to put across is captured by your phrase "to credit the argument". Strict liability here would be "unreasonably unfair" insofar as it imposes liability without first considering the circumstances. I think it's fine for a legal regime to be "unfair" to a party (for the reasons you've outlined) where there's some good-enough rationale for it. Fault-based liability, by contrast, would require that the circumstances be considered first.
Thanks. It might be more useful if you explained why the arguments weren't persuasive to you. Our interest is in a system of liability that can meet AI safety goals while also having a good chance of success in the real world. In any case, even if we start from your premise, it doesn't follow that strict liability would work better than a fault-based liability system (as we argued in Argument 1).
Thanks Ian. Yes, fair point. If the suggestion is that this makes a comparison with nuclear power sensible, I would say: partially. There's still a need to justify why that's the comparative feature that matters most, given that there are other features (for example, potential benefits to humanity at large) that might lead us to conclude that the two aren't really comparable.
Yeah, this is sensible. But I'm still hopeful that work like DeepMind's recent research, or Clymer et al.'s recent work, can help us craft duties for a fault-based system that don't collapse into a de facto zero-liability regime. Worth remembering that the standard of proof will not be perfection: so long as a judge is more convinced than not, liability would be established.