Shaun

Interested in Law and AI
9 karma · Joined

Posts: 1 · Comments: 3

Thanks Max! I actually have not dived as deeply into this question as I would like, but I hope to over the coming months. I agree: the public might intuitively be more repulsed by the treatment of cows and pigs than by that of insects and shrimp. I had not seen the link — thanks for sharing it.

I hadn't thought about the second paragraph, and I think this kind of research might help to mitigate some of those statements. The issue is that gaining access to these kinds of systems will be rather difficult without significant funding and time — so how can farming companies' claims be disproven? And if we can't disprove those claims, can we change the public's general moral view of factory farms?

Answer by Shaun

Similarly to Geoffrey, I like the way this question is set up, but I'm not quite sure I have understood it correctly.

However, as an initial response, I would say that the legal approach to AI is still so much in its infancy that responses to risk have to be more holistic (see the EU AI Act, which uses ‘risk tiers’).

When we think about IP laws, they tend not to play quite the same role in reducing risk. Tight IP protection might have corollary effects on, e.g., how NLP systems can be trained, but I would need to think more carefully to work out whether, if at all, intellectual property laws could have such an effect. Would love to hear your thoughts, though!

Thank you for this post. The short description of AI Ethics is interesting. I spent time thinking about this issue when researching private law and AI — I ended up effectively stumbling into a definition centred on safe design on the one hand and fair output on the other, but that did not feel quite right. I prefer your broader takeaway of fairness and inclusivity.

I really like the diagram! I found myself wondering where we would fit AI Law / AI Policy into that model. I think it is a very useful tool as an explainer.