Mechatronics Engineer, recently pivoted from work at a medical robotics startup (semi-autonomous eye surgery robot).
(have only skimmed the post, but sending this link in case it is relevant) - https://cltc.berkeley.edu/seeking-input-and-feedback-ai-risk-management-standards-profile-for-increasingly-multi-purpose-or-general-purpose-ai/
I am part of a team collaborating with him.
Are there examples of standards in other industries where people were quite confused about what "safety" would require?
Yes, medical robotics is one I was involved in. Though there, the answer is often just to wait for the first product to hit the market (nothing out there yet does fully autonomous surgery) and then copy its approach. As it stands, the medical standards don't cover much ML, so companies have to come up with the reasoning themselves to convince the FDA in the audit. In practice that means many companies just don't risk it, and build something robotic but surgeon-controlled, or use classical algorithms instead of deep learning.
I'm confused; we make this caution trade-off all the time - for example, in medical trial ethics. Can we go faster? Sure, but the risks are higher. Yes, that can mean some people miss out on a treatment because it is developed a few years too late.
Another, closer example is gain-of-function research. The point is, we could do a lot, but we choose not to - AI should be no different.
It seems to me that this post is a little detached from real-world caution considerations, even if it isn't making an incorrect point.
Well said, though I think your comment could use that advice :) Specific phrases/words I noticed: "reign in", "tendancy", "bearing in mind", "inhibit", "subtlety", "IQ-signal" (?).
I'm a non-native speaker and I do know these words, but I'm mostly at native level at this point (I've spent half my life in an English-speaking country). I think many non-native speakers won't be as familiar with them.
This makes sense to me, good writeup!