Pato

Bio

Infinite Player

Answer by Pato

I think the bigger question is why we haven't found other species elsewhere in the universe.

Then I see the question of whether they would kill us or not as a separate one.

I really liked the axis that you presented and the comparison between a version of the community that is more cause-oriented versus one that is more member-oriented.

The only caveat I have is that I don't think we can define a neutral point between them that lets us classify communities as one type or the other.

Luckily, I think that is unnecessary, because even though the objective of EA is to have the best impact on the world rather than the greatest number of members, I think we all agree that the best decision is to strike a good balance between being cause-oriented and member-oriented. So the question we should ask is: should EA be MORE big tent, MORE weird, or do we have a good balance right now?

And to achieve that balance, we can be more big tent in some aspects, moments, and orgs, and weirder in others.

I strongly agree with you and, as someone who thinks alignment is extremely hard (not just because of the technical side of the problem, but because of the human values side too), I believe that a hard pause plus studying how to augment human intelligence is actually the best strategy.

Wait, I don't understand. Are 63.6% or 76.6% of respondents left-leaning? And 69.58% or 79.81% non-religious?

Have important "crying wolf" cases actually happened in real life? Regarding societal issues, I mean? Yes, it is a possibility, but the alternatives seem so much worse.

How do we know when we are close enough to the precipice for other people to be able to see it and ask to stop the race toward it? General audiences have lately been talking about how surprised they are by AI, so this seems like perfect timing to me.

Also, if people get used to benefiting from and working with narrow, safe AIs, they could come to oppose stopping or slowing them down.

Even if more people could agree to decelerate in the future, it would take more time to stop or to go really slowly with more stakeholders moving at a higher speed. And of course, by then we would be closer to the precipice than if we had started the deceleration earlier.

I doubt this:

the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex

I mean, if you said it could increase the number of people working on capabilities at first, I would agree, but it probably increases far more the number of people working on safety and wanting to slow or stop capabilities research, which could lead to legislation and, at the end of the day, increase the time until AGI.

With respect to the other cons of the doom memeplex, I agree to a certain extent, but I don't see them coming even close to the pros of potentially having lots of people actually taking the problem very seriously.

What if we fundamentally value other things, but instrumentally that translates into seeking power?

The challenge isn’t figuring out some complicated, nuanced utility function that “represents human values”; the challenge is getting AIs to do what it says on the tin—to reliably do whatever a human operator tells them to do.

Why do you think this? I infer from what I've seen written in other posts and comments that this is a common belief, but I can't find the reasons why.

The fact that there are specific, really difficult problems with aligning ML systems doesn't mean that the original, really difficult problem of finding and specifying the objectives we want for a superintelligence has been solved.

I hate it because it makes it seem like alignment is a technical problem that can be solved by a single team and that, as you put it in your other post, we should just race and win against the bad guys.

I could try to envision what type of AI you are thinking of and how you would use it, but I would prefer that you tell me. So, what would you ask your aligned AGI to do, and how would it interpret that? And how are you so sure that most alignment researchers would ask it the same things as you would?
