Thanks for the perspective, Yonatan! I've rearranged the post to better conform to your suggestions.
Because we have multiple roles, here I am trying to balance brevity with answering the question "am I specifically a good fit for this?". My last post went into more detail about the company and one type of role in particular. I couldn't fit all the roles in the title, but I tried to make them as understandable as possible in a few words while linking to more information.
See above for a more general response about existential risk.
To a "concrete intervention" - the current state of AI assessment is relatively poor. Many many models are deployed with the barest of adequacy assessment. Building a comprehensive assessment suite and making it easy to deploy on all productionizing ML systems is hugely important. Will it guard against issues related to existential risk? I don't know honestly. But if someone comes up with good assessments that will probe such an ambiguous risk, we will incorporate it into the product!
Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policy makers who are converging on a set of responsible AI principles that still need to be thought through and implemented.
-
Speaking for myself now: I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policy makers, it has become clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness, either for moral reasons or, more likely, financial reasons (no one wants an article written about their unfair AI system!).
So what to do? I believe supporting companies to incorporate "ethical" principles like fairness into their development process is a first step toward incorporating other, more ambiguous values into their AI systems. In essence, fairness is the first non-performance ethical value most governments and companies are realizing they want their AI systems to adhere to. It isn't generic "value alignment", but it is a big step up from just minimizing a traditional loss function.
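As a toy illustration of that gap, the sketch below computes a demographic parity metric: a property of a model's outputs across groups that a standard training loss never looks at. The helper name and numbers are hypothetical and made up for illustration; this is not Credo AI's code.

```python
# Illustrative sketch (hypothetical helper, not Credo AI's code): fairness is
# evaluated as a property of model outputs across groups, something a standard
# loss function (e.g. cross-entropy on labels alone) never measures.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # e.g. a protected attribute
y_true = rng.integers(0, 2, size=1000)
# A model that leans on the group attribute can score reasonably on accuracy
# while producing very different outcomes for the two groups.
y_pred = np.where(group == 0, y_true, 0)
accuracy = (y_pred == y_true).mean()
print(f"accuracy: {accuracy:.2f}, parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

The point is simply that fairness adds a constraint on behavior that sits outside the usual objective, which is why it is a useful first step toward encoding other values.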
Moving beyond fairness, there are many components of the AI development process, infrastructure, and government understanding that need to improve. Building a tool that can be incorporated into the heart of the development process gives us an avenue to support companies on a host of responsible-AI dimensions: some our customers will ask for (supporting fair AI systems), and some they won't (reducing the existential risk of their systems). All of this will be important for existential risk, particularly in a slow-takeoff scenario.
All that said, if the existential risk of AI systems is your specific focus (and you don't believe in a slow-takeoff scenario where the interventions Credo AI will support could be helpful), then Credo AI may not be the right place for you.
Credo AI is an AI governance company focused on the assessment and governance of AI systems. In addition to our Governance Platform, we develop Lens, an open-source assessment framework. You can find a longer post about the company here.
Roles
We are expanding our data science team and hiring applied AI practitioners! If you believe you have the skills and passion for contributing to the nascent world of AI governance, we want to hear from you!
To help you figure out if that’s you, I’ll describe some of the near-term challenges we are facing:
And some characteristics we look for:
Hiring process and details
Our hiring process starts with you reaching out. We are looking for anyone who reads the sections above and thinks "that's me!". If that's you, send a message to me at ian@credo.ai. Please include "Effective Altruism Forum" in the subject line so I know where you heard about us.