Thanks for the great questions, Sawyer!

I'd like to more fully understand why you've made this a for-profit company instead of a charity.

When Stuart and I were collaborating on AI safety research, I'd occasionally ask him, 'So what's the plan for getting alignment research incorporated into AIs being built, once we have it?' He'd answer that DeepMind, OpenAI, etc. would build it in. Then I'd say, 'But what about everybody else?' Aligned AI is our answer to that question.

We also want to be able to bring together a brilliant, substantial team to work on these problems. A lot of talented people choose the earning-to-give route, and we think it would be fantastic to be a place where they can both go that route and still work at an aligned organisation.


Are there other roads to profit that you're considering? Is this the main one? How much does the success of this approach (or others) hinge on governments adopting particular legislation or applying particular regulations? In other words, if governments don't regulate the thing you're solving, why would companies still buy your product?

The "etc" here doesn't refer just to "other regulations", but also to "other ways that unsafe AI cause costs and risks to companies".

I like to use the analogy of CAD (computer-aided design) software for building skyscrapers and bridges. It's useful even without regulations, because engineers like building skyscrapers and bridges that don't fall down. We can be useful in the same sort of way for AI (companies like profit, but they also like reducing expenses, such as PR costs and settlements when things go wrong).

We're starting with research: the AI equivalent of developing the principles that let civil engineers build taller skyscrapers and longer bridges safely, which we'll then build into our CAD-analogous product.