Hi everyone, I am Adebayo Mubarak, and I am currently pursuing a law degree. I have completed the Intro to EA, In-Depth, and The Precipice reading programs, and I am currently taking Legal Topics in EA, which wraps up next week. I also coordinate my university's reading group (Bayero University, Kano) and serve as the Lead Facilitator for the EA Kano Hub in Nigeria.
I am open to opportunities to deepen my knowledge of EA's core principles and cause areas. Lastly, I am a freelance writer.
I think what matters here is having a kill switch, or some set of parameters like [if <situation> occurs, kill], or some other limit on what a particular model is permitted to do. If we keep churning out models trained in a fully general way, there is a high probability of one running riot one day. Putting hard limits on what models can do would admittedly undermine much of the reason we deploy AI in the first place, but as things stand, we need something urgent to keep this existential risk at bay. Or perhaps it's just our paranoia running riot... Perhaps not.