[Note: I'm posting this on behalf of my friends at CHS.]

The Centre for Human Success (CHS) is a registered non-profit recently launched in Toronto, Canada. Our goal is to create the conditions for effective political action on technological unemployment and AI safety: specifically, raising awareness and advocating for solutions that can mitigate the possible displacement of human labour and the risks inherent in the AGI arms race.

We want to get out in front of the debate around AI and help frame it in as constructive a way as possible.

Why approach AI from the political angle?

As Seth Baum, Executive Director of the Global Catastrophic Risk Institute, wrote in this October's newsletter:

"The best opportunities [for reducing catastrophic risk] are often a few steps removed from academic risk and policy analysis. For example, there is a large research literature on climate change policy, much of which factors in catastrophic risk. However, the United States still has little in the way of actual climate policy, which is due to our political process, not to any shortcomings in the research. Likewise, some of the best opportunities to reduce climate change risk involve engaging with the political process, often in ways that are unrelated to climate change."

What does ‘political action’ involve?

Building a movement. How? 1) By raising awareness - educating the public through events, media, citizen networks, and even door-to-door canvassing; and 2) through advocacy - petitions, letter-writing campaigns, and anything else that can create momentum and help politicians realise there are votes to be gained by taking constructive action.

Who is behind this?

CHS was founded by Wyatt Tessari (engineer, former political candidate and climate activist) and is currently run by a team of volunteers and concerned citizens. Our aim is to grow quickly and become a national (and ultimately international) reference on AI, much like 350.org is for climate.

What are our targets & how will we measure our success?

By the end of June 2018 (our first Annual General Meeting), these are our targets:

Community Building:

  • Events:

    • Create a passionate community of at least 5,000 people in Canada.

    • Hold weekly smaller events (20-50 people) and at least one major event (50-100 people) each month, focusing on the quality of the content.

    • Expand beyond Toronto to other major cities in Canada by the end of 2018.

  • Advocacy:

    • Launch a petition in early 2018 asking the Canadian government to initiate global talks on managing AI risks, aiming for 2,000+ signatures within 3 months. [Currently there has only been one AI petition in Canada (on lethal autonomous weapons), plus 3 smaller Change.org ones, the biggest having <500 signatures.]

    • Be active in the 2018 Ontario provincial election: host all-candidates debates, give scorecards to parties, and ask them to present a vision for navigating the impacts of AI & technological unemployment

    • Gain local and national media attention

Research:

  • Stay on top of the latest policy recommendations (on AI safety and technological unemployment) and share findings on our blog & newsletter

  • Put together a report & event comparing global approaches to these issues

  • Prepare educational materials (infographics, flyers, press releases) for the general public

  • Create an online tool to help people navigate the ongoing job market disruption

Funding:

  • Amount ($CAD):

    • Raise $15k by February 1st, 2018. [We currently consist of one full-time volunteer staff member and six part-time volunteers, and have run over a dozen events in the last 3 months while spending only $700. Our primary goal is to hire at least one full-time staff member to oversee operations by early 2018.]

    • Raise $75k by June 30, 2018 to grow the operation to two full-time staff

  • Sources:

    • Individuals, as well as members within our community

    • Organisations/grants that won't compromise our mission. EA has a number of promising avenues, such as the GiveWell Incubation Grant and OpenPhil, which we are exploring.

What’s next?

November-December (setup & consultations):

  • Administration - Incorporate as a non-profit, set up legal and banking processes, and set up electronic tools such as NationBuilder and our website

  • Fundraising - Build strategy, launch crowdfunding & other fundraising efforts

  • Team - Recruit and onboard new volunteers, train them in Marshall Ganz's organising techniques

  • Communications - Refine message, build media contact list, launch social media accounts

  • Outreach & consultations - Reach out to policy experts and a variety of stakeholders (government, political, NGO) to help shape the focus and strategy of our advocacy efforts

  • Research - Conduct research and share our findings (blog, newsletter, media)

  • Events - Reach out to potential speakers and plan upcoming events

January-February (advocacy launch):

  • Strategy - Create an advisory board, expand and reorganise the team with specialised roles, develop branding

  • Advocacy - Launch and manage the petition, begin media campaign

  • Events - Scale up events

  • Research - Continue to share our research findings (blog, newsletter, media)

  • Fundraising - Adapt and expand efforts as needed

March onwards (expansion & Ontario election):

  • Prepare for the Ontario provincial election in spring 2018

  • Expand operations and community building

  • Establish/strengthen connections with all major organisations involved in AI policy

  • Secure additional funding and plan for 2018/19

What are our Strengths and Weaknesses?

We are the first and only advocacy group in Canada dedicated to AI safety and technological unemployment. As such, we are in a position to help frame these issues as they emerge into the political debate. With Canada one of the most positively viewed countries in the world and Toronto an international AI hub, our country is ideally positioned to take the lead on the world stage and host global talks and agreements.

Our main limitation right now is our small size and lack of a network. For us to be taken seriously, we will need to grow significantly, expand our team, and build relationships with stakeholders throughout Canada and abroad. The other key challenge is the lack of clear policy options in the AI field - the path forward is unclear for everyone, and we will need to keep navigating uncharted waters.

What do you need from EA?

Any and all forms of support (volunteers, funds, mentorship)! This includes critical feedback on the best strategies for growth and on which policies you believe would be most effective to advocate for (or connections to experts who could advise us).

Thanks in advance for your thoughts and feedback (positive or negative)!

P.S. One of our team members, David Yu, will be attending EAG London. If any of you are interested in chatting more at the conference, don’t hesitate to reach out at david.yu@centreforhumansuccess.org

Comments (13)

To what extent have you (whoever's in charge of CHS) talked with the relevant AI Safety organizations and people?

To what extent have you researched the technical and strategic issues, respectively?

What is CHS's comparative advantage in political mobilization and advocacy?

What do you think the risks are to political mobilization and advocacy, and how do you plan on mitigating them?

If CHS turned out to be net harmful rather than net good - what process would discover that, and what would the result be?

Hi Dony,

Great questions! My name is Wyatt Tessari and I am the founder.

1) We are doing that right now. Consultations are a top priority for us before we start our advocacy efforts. It's also part of the reason we're reaching out here.

2) Our main comparative advantage is that, as far as our research shows, there is no one else in the political/advocacy sphere openly talking about the issue in Canada. If there are better organisations than us, where are they? We'd gladly join or collaborate with them.

3) There are plenty of risks - causing fear or misunderstanding, getting hijacked by personalities or adjacent causes, causing backlash or counterproductive behaviour - but the reality is they exist anyway. The general public will eventually clue in to the stakes around ASI and AI safety and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns.

4) This is a tough question. There would likely be a number of metrics - feedback from AI & governance experts, popular support (or lack thereof), and a healthy dose of ongoing critical thought. But if you (or anyone else reading this) has better ideas we'd love to hear them.

In any case, thanks again for your questions and we'd love to hear more (that's how we're hoping to grow...).

Seems like the main argument here is that: "The general public will eventually clue in to the stakes around ASI and AI safety and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns."

One concern about this is that "getting in early in the debate" might move up the time that the debate happens or becomes serious, which could be harmful.

An alternative approach would be to simply build latent capacity - work on issues that are already in the political domain (I think basic income as a solution to technological unemployment is already out there in Canada), but avoid raising new issues until other groups move into that space too. While doing that, you could build latent capacity (skills, networks) and learn how to advocate effectively in spaces that don't carry the same risks of prematurely politicizing AI-related issues. Then, when something related to AI becomes a clear target for policy advocacy, you could move onto it at the right time.

Indeed. Getting in early in the debate also means taking on extra responsibility when it comes to framing and being able to respond to critics. It is not something we take lightly.

Our current strategy, similar to your suggestion, is to start with technological unemployment - experimenting, building capacity, and networking there first before taking on ASI.

This also fits with the election cycle here: there is a provincial election in Ontario in 2018 (the province has more jurisdiction over labour policies) before the federal one in 2019 (where foreign policy and global governance are addressed).

The challenge remains that no one knows when the issue of ASI will become mainstream. There are rumours of an "Inconvenient Truth"-type documentary on ASI coming out soon, and with Elon Musk regularly making news and the plethora of books & TED talks being produced, no one has the time to wait for a perfect message, team or strategy. Some messiness will have to be tolerated (as is always the case in politics).

Hi Wyatt,

I'm a Canadian currently studying public policy in London. I'm planning to write my dissertation on AI policy and gender, so naturally I'm fascinated by your organization.

The topics you're planning to discuss, especially the risk of a general artificial intelligence, seem quite sensitive. You didn't say a lot about your background. What relevant experience does your team have in handling sensitive issues or framing political debates? (I mean in your day jobs; I know the nonprofit is new.)

Kirsten

"I'm a Canadian currently studying public policy in London. I'm planning to write my dissertation on AI policy and gender, so naturally I'm fascinated by your organization."

Out of curiosity, what is the connection between AI policy and gender you're looking at?

I'm tentatively planning to look at the government's role with regard to AI that is discriminating, or is perceived to be discriminating, based on sex. For example, if an AI system were only short-listing men for top jobs, should the government respond with regulation, make it easier for offended parties to challenge it in court, provide incentives for the company to improve its technology, or do something else entirely?

I just started my MA a month ago, though, and won't be seriously focusing on my dissertation until May, so I will have a much better idea in six months. :)

Very interesting! Please share your findings when they're ready. Would love to know more.

Good question. Right now, our team has a wealth of organisational knowledge, but the political experience comes from me - I am a former climate change advocate and three-time political candidate. To get a sense of what that involved, this is a speech I gave at a climate rally in 2015: https://vimeo.com/124727472 (note: I am no longer a member of any party and the CHS is strictly non-partisan)

I also have a bachelor's in mechanical engineering, am fluent in French (important in national media & politics), and have a track record of leading teams of volunteers.

I learned the hard way how difficult it can be to get a complex global challenge like climate change into the political debate, and there are many lessons I intend to apply to campaigning on AI and technological unemployment.

All that said, expanding our team and circle of advisors is essential for us to succeed, and this is our #1 priority at this stage.

I've heard quite a few people say that they were wary of this kind of public outreach because they thought it might politicise the issue and do more harm than good. I'm not saying that this is my position, but what are your thoughts on stopping this from happening?

Further, it isn't clear from the above what kind of political action you intend to push for.

Yes, we've heard this concern as well, and it's a fair one. The challenge is that public outreach on AI has already begun (witness Elon Musk's warnings), and holding back won't stop that.

Our approach is to engage with people across the political spectrum (framing the issue accordingly) and reinforce the message that, when it comes to ASI risks, we're quite literally all in this together.

As for specific government actions we'd be advocating for, this is something we are currently defining but the three areas we've flagged as most likely to help human success this century are technology governance, societal resilience and global coordination.

Thanks for sharing, Liav! We'd love to get some feedback/advice, since we are pretty new at this. If you'd like to discuss further, don't hesitate to email me at david.yu@centreforhumansuccess.org!

I've talked to Wyatt and David; afterwards, I am more optimistic that they'll think about downside risks and be responsive to feedback on their plans. I wasn't convinced that the plan laid out here is a useful direction, but we didn't dig into it in enough depth for me to be certain.
