Jack Mario
Your points raise important considerations about the rapid development and potential risks of AI, particularly LLMs. The idea of deploying AI early to extend the timeline of human control makes sense strategically, especially given the potential for recursive LLMs and their self-improvement capabilities. While companies and open-source communities will no doubt continue experimenting, the real risk lies in humans deliberately turning these systems into agents in service of long-term goals, which could lead to unforeseen consequences. The concern about AI sentience in systems like ChatGPT, and the potential for abuse, is also valid; it highlights the need for strict controls on AI access, transparency, and ethical safeguards. Ensuring that such AIs are never open-sourced in ways that could cause harm, and that interactions with them are monitored, seems essential to preventing malicious use or exploitation.