
Peter

570 karma · Joined · Working (0-5 years)

Bio

Interested in AI safety talent search and development. 

How others can help me

  1. Discuss charity entrepreneurship ideas, nuts & bolts. 
  2. Recommend guest speakers for discussions on AI alignment, biosecurity, animal welfare, AI governance, and charity entrepreneurship.
  3. Connect me with peers, partners, or cowriters for research or fiction. 

How I can help others

Making and following through on specific concrete plans. 

Comments
129

Topic contributions
2

Do you think there's a way to tell the former group apart from people who are closer to your experience (for whom hearing earlier would be beneficial)?

Interesting. People probably aren't at peak productivity, or even working at all, for some part of those hours, so you could probably cut the effective hours by about a quarter. That narrows the gap between what GPT2030 can achieve in a day and what all humans can achieve together.

Assuming 9 billion people each work 8 hours, that's ~8.22 million years of work per day. But given slowdowns in productivity throughout the day, we might want to round that down to ~6 million years.
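
For concreteness, here's the back-of-the-envelope arithmetic in a few lines of Python. The 9 billion workers, 8-hour day, and 25% productivity discount are just the assumptions from this comment, not precise figures:

```python
# Total human work per day, expressed in "years of continuous work"
# (1 year = 8,760 hours), with a 25% discount for below-peak productivity.
people = 9e9               # assumed global workforce
hours_per_person = 8       # assumed hours worked per person per day
hours_per_year = 24 * 365  # hours in one year of round-the-clock work

work_years = people * hours_per_person / hours_per_year  # ~8.22 million
adjusted = work_years * 0.75                             # ~6.2 million

print(f"~{work_years / 1e6:.2f} million work-years per day")
print(f"~{adjusted / 1e6:.2f} million after the productivity discount")
```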

Additionally, GPT2030 might be more effective than even the best human workers at their peak hours. If it's 3x as good as a PhD student at learning, which it might be because of better retention and the connections it can draw, then given its sheer volume of work time it could be learning more than all the PhD students in the world every day. The quality of its work might be 100x or 1000x better, which is difficult to compare in the abstract. In some tasks, like clearing rubble, more work time might translate fairly directly into better outcomes.

With things like scientific breakthroughs, though, more work time might not translate into proportionally more breakthroughs. From that perspective, GPT2030 might end up doing more raw work than all of humanity without producing correspondingly more breakthroughs, since huge breakthroughs are uncommon.

 

This is a pretty interesting idea. I wonder if what we perceive as clumps of 'dark matter' might be or contain silent civilizations shrouded from interference. 

Maybe there is some kind of defense-dominant technology or strategy that we don't yet comprehend.

Interesting post - I particularly appreciated the part about how Szilard's silence didn't really affect Germany's technological development. This was recently mentioned in Leopold Aschenbrenner's manifesto as an analogy for why secrecy is important, but I guess it wasn't that simple. I wonder how many other analogies, there and elsewhere, don't quite hold. That could be a useful analysis if anyone has the background or interest.

Huh, I had no idea this existed.

This exists here, but I haven't updated it in about a year. If someone wants to take it over or automate it, that could be good: EA Talks (formerly EARadio)

I think it's good to critically interrogate this kind of analysis, and I don't want to discourage that. But as someone who publicly expressed skepticism about Flynn's chances, I think there are several differences that warrant closer consideration: the polls are much closer for this race, Biden is well known and experienced at winning campaigns, and the differences between the candidates seem much larger. Given that, it seems a lot more reasonable to think Biden could win and that this will be a close race worth spending some effort on.

  1. Interesting. Are there any examples of what we might consider relatively small policy changes that received huge amounts of coverage - something people normally wouldn't care about? These might be informative to compare against hot-button issues like abortion that tend to get a lot of coverage. I'm also curious whether any big issues somehow got less attention than expected, and how their pass/fail margins compare to states where they got more attention. There are probably some ways of estimating this that are better than others.
  2. I see. 
  3. I was interpreting it as "a referendum increases the likelihood of the policy existing later." My question is about the assumptions behind that view, and about the idea that it might be more effective to run a campaign for a ballot initiative once and never again. Is this estimate of the referendum effect only for the exact same policy (say, an education tax where the percentage is slightly higher or lower), or for similar policies (a fee, a subsidy, a voucher, or something even more different)? How similar do they have to be? What is the most different policy that existed later that you think would still count?

"Something relevant to EAs that I don't focus on in the paper is how to think about the effect of campaigning for a policy given that I focus on the effect of passing one conditional on its being proposed. It turns out there's a method (Cellini et al. 2010) for backing this out if we assume that the effect of passing a referendum on whether the policy is in place later is the same on your first try is the same as on your Nth try. Using this method yields an estimate of the effect of running a successful campaign on later policy of around 60% (Appendix Figure D20).

I'd be curious to hear about potential plans to address any of these, especially talent development and building the pipeline for AI safety and governance work.

Very interesting. 
1. Did you notice an effect of how large/ambitious the ballot initiative was? I remember previous research suggesting consecutive piecemeal initiatives were more successful at creating larger change than singular large ballot initiatives. 

2. Do you know how much the results vary by state?

3. How different do ballot initiatives need to be for the huge first-advocacy effect to take place? Does this work as long as the policies are not identical, or is it more of a cause-specific function, or something in between? Is there a smooth gradient, or is it discontinuous after some tipping point?
