I don't know why people keep downvoting my posts. I agree that they could be better, but I don't think my posts' karma accurately reflects their worth. I am biased, however.
I’ve gotten much better at brainstorming.
My general strategy is to use a flowchart to model things such that every possible outcome is accounted for, and to define any given problem numerically so I can see which variables to change.
For example, here's what my process would look like for deciding whether the US should launch nukes at North Korea at any given time (from the perspective of 🇺🇸):
E(value of sending nukes)
= p(🇰🇵 responds by sending nukes | 🇺🇸 sent nukes) * E(value of the world if a nuclear exchange happened)
+ p(🇰🇵 doesn't respond by sending nukes | 🇺🇸 sent nukes) * E(value of the world if 🇺🇸 sent nukes to 🇰🇵, but 🇰🇵 doesn't respond with nukes)
- E(value of the world if nukes aren't sent, from the perspective of 🇺🇸 at the time of deciding)

{Here I split things up into potential outcomes, and the value of doing thing A instead of thing B is E(A) - E(B).}

This in turn equals

p(🇰🇵 responds by sending nukes | 🇺🇸 sent nukes) * [E(value of the world, in the 5-month period after this happens, if a nuclear exchange happened) + E(the same thing, but for all time after the first 5 months)]
+ p(🇰🇵 doesn't respond by sending nukes | 🇺🇸 sent nukes) * [E(value of the world if 🇺🇸 sent nukes to 🇰🇵, but 🇰🇵 doesn't respond with nukes, counting only how it affects people who were in 🇰🇵 at the time) + E(the same thing, but for how it affects everyone who wasn't in 🇰🇵 at the time)]
- E(value of the world if nukes aren't sent, from the perspective of 🇺🇸 at the time of deciding)

{Here I split things up by time and by people (where "people" means "anything that has inherent value in the eyes of 🇺🇸").}
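To make the first decomposition concrete, here's a minimal sketch in Python. Every number in it (the retaliation probability and all the values) is a made-up placeholder, not an estimate of anything real.

```python
# Toy version of: E(launch) - E(don't launch), split by whether the other side retaliates.
# All numbers below are hypothetical placeholders.

p_retaliation = 0.9      # p(other side responds with nukes | we launched)
v_exchange    = -1000.0  # E(value of the world | full nuclear exchange)
v_one_sided   = -300.0   # E(value of the world | we launch, no retaliation)
v_no_launch   = 100.0    # E(value of the world | nobody launches)

# The value of doing thing A instead of thing B is E(A) - E(B).
value_of_launching = (
    p_retaliation * v_exchange
    + (1 - p_retaliation) * v_one_sided
    - v_no_launch
)

print(value_of_launching)  # strongly negative with these placeholders, i.e. don't launch
```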
Here, a conclusion could be to try to increase E(value of the world if nukes aren't sent, from the perspective of 🇺🇸 at the time of deciding) in order to prevent a nuclear exchange, since that's pretty well aligned with many moral perspectives. (The same goes if we replace 🇺🇸 & 🇰🇵 with 🇷🇺 & 🇺🇸, or 🇰🇵 & 🇺🇸, etc., in this example.)
More generally, one could try to increase how much value people ascribe to the world where [said people take the actions you want them to take].[1]
Another, similar strategy I've been using recently is choosing models of the world that account for every scenario, such that the model covers every impact my decision could have, or at least comes close.
In this example, one model might be that any given country has some choices with regard to nukes, namely:

- choosing a probability distribution over when, if ever, a nuke launches from some region of space and lands in another;
- communicating with other relevant decision-making nations, or otherwise manipulating the information they have (often in a way that better reflects reality);
- manipulating their own information;
- manipulating their own and other nations' number and power of nukes;
- manipulating their own and other nations' preferences about what each nation should decide; and
- manipulating their own and other nations' options.

They are also allowed to randomize any of these decisions (the equivalent of having plenty of dice in the decision room). Some of their options are limited, so each nation only has a certain set of options; Canada can't send one million nukes to the moon by 2023, for example, largely because 2023 already happened.
Each nation has a preference over which combination of [decisions that each nation makes] should happen, such that, if given the choice between two such combinations, they would consistently choose one of them, irrespective of factors other than their preferences and information.
Note that this model is specifically tailored so that the lessons of game theory can be applied here. This might be a model that, say, [an ambassador to a nation with nuclear weapons] might use, but a model tailored to a reporter might mainly include the probabilities of certain things that affect consumers of the 🗞️ news (e.g., peace talks, nukes used, jobs created by any given program, and how any given action changes how each scenario affects them; e.g., if it were announced that 🇺🇸 went through with the "Star Wars" idea so that 🇺🇸 wouldn't get hit if a nuke were sent at it, that would affect readers and would be a major news story).
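To make the game-theory framing concrete, here's a minimal sketch of this kind of model: two nations each choose "hold" or "launch", and we look for profiles where neither side wants to deviate. The strategy names and every payoff number below are hypothetical placeholders, not estimates.

```python
# Toy two-player game in the spirit of the model above; payoffs are placeholders.
from itertools import product

strategies = ["hold", "launch"]

# payoffs[(a, b)] = (payoff to nation A, payoff to nation B)
payoffs = {
    ("hold",   "hold"):   (100,   100),
    ("hold",   "launch"): (-1000, -300),
    ("launch", "hold"):   (-300,  -1000),
    ("launch", "launch"): (-2000, -2000),
}

def is_pure_nash(a, b):
    """True if neither nation can do better by unilaterally switching its choice."""
    pa, pb = payoffs[(a, b)]
    a_is_best = all(payoffs[(alt, b)][0] <= pa for alt in strategies)
    b_is_best = all(payoffs[(a, alt)][1] <= pb for alt in strategies)
    return a_is_best and b_is_best

equilibria = [profile for profile in product(strategies, repeat=2) if is_pure_nash(*profile)]
print(equilibria)  # with these placeholder payoffs: [('hold', 'hold')]
```

With these placeholder payoffs, the only stable outcome is mutual holding, which is the kind of conclusion this sort of model is meant to surface (and which changes if you change the payoffs).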
A doctor might use the model of "everything that affects a certain part of the body has an impact; going through the list of parts of the body, we can develop a list of ways any given issue affects each facet of the body."
The doctor can then use this by going through the list of body parts and imagining what might cause each one to stop functioning normally, thus anticipating many of the potential negative health impacts that might exist in the world.
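As a toy illustration of that checklist model (the body parts and failure modes below are just placeholders, not medical advice):

```python
# Walk a fixed list of body parts so that no part is skipped when anticipating impacts.
# The parts and failure modes here are illustrative placeholders.

possible_failure_modes = {
    "heart":   ["arrhythmia", "reduced output"],
    "lungs":   ["reduced oxygen uptake"],
    "liver":   ["impaired detoxification"],
    "kidneys": ["reduced filtration"],
    "brain":   ["impaired cognition"],
}

for part, failures in possible_failure_modes.items():
    for failure in failures:
        print(f"Check: could the issue cause {failure} in the {part}?")
```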
I'm also pretty good with numbers (I feel comfortable thinking about e^(1/0)), so if you have any questions about math, I'm happy to help!
I can also help with work and productivity tips (e.g., to stay awake, do something active: pause your all-nighter, play one round of Call Of Duty, and then get back to work. It'll keep you up and productive all night long).
I can also provide research assistance, summarize information, check things for errors (not very well, though 😅), and more. Feel free to ask if I can help with _______, and hopefully/probably, I'll say yes!
In some cases, the very knowledge that someone might try to [make you want to change your decision to something better for them] might make you hold off on a decision, so you can make a more informed one later, since you know that a lot might change between now and then. One practical application of this: if someone simply promises to improve the version of the world where [some specific decisions that you want made] are made, the relevant decision-makers might hold off on a decision in anticipation of one scenario improving enough that it changes their choice.
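Here's a toy sketch of that effect, with made-up numbers (the options, probability, and values are all hypothetical): a credible promise to improve one option can make waiting the better move, because deciding later lets you pick whichever option looks best once the outcome is known.

```python
# Deciding now vs. waiting, when the other party has promised to improve option B.
# All numbers are hypothetical placeholders.

value_a = 10.0           # value of option A (unchanged either way)
value_b_now = 8.0        # value of option B today
value_b_improved = 15.0  # value of option B if the promise is kept
p_promise_kept = 0.5     # probability the promise is actually kept

ev_decide_now = max(value_a, value_b_now)  # you'd lock in A today: 10.0

# Waiting lets you choose after seeing whether B actually improved.
ev_wait = (
    p_promise_kept * max(value_a, value_b_improved)
    + (1 - p_promise_kept) * max(value_a, value_b_now)
)

print(ev_decide_now, ev_wait)  # 10.0 vs 12.5: with these numbers, waiting wins
```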
I will note that most change of this scale doesn't arise from methods like this, which could aid in giving a rough sense of how likely this is to work. Here are some examples of things like this working:
And here are some examples of efforts that have required broader support:
(Note: this was all off the top of my head.)
Hey, did y’all consider the possibility of a conscious being’s experiences being influenced by:
And have we thought about whether a conscious person can use their free will to alter those things?
If someone is both experiencing and deciding on matters from a different dimension, can’t it be argued that they are sort of living in multiple dimensions?
There is sort of a precedent for this: science used to be much more argumentative, and now most of science is done in very intelligent ways, aimed at getting to the RIGHT answer and not "their answer". This led to many, if not most or all, scientific problems being solved*.
In addition, if you aim to be a powerful scientist, fighting for "your answer" makes it much harder than fighting for the RIGHT answer would. Similarly, if this project worked well, it would be much harder to gain power by fighting for "your values" than by fighting for the RIGHT values!
It seems boggling at first glance that this would work, but in summary, it works like this: sometimes, in an argument, one or more sides doesn't care about reaching the RIGHT conclusion; they just care about reaching a conclusion they approve of. This is often the difficulty with arguments.
However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the correct/RIGHT conclusion is (seemingly) arrived at much more often and much faster, and, as a bonus, the debate is much more respectful!
This project would basically bring world leaders to the table, where they would look for the RIGHT conclusion to major problems, which should mean the correct/RIGHT conclusion is (seemingly) arrived at much more often and much faster, and, as a bonus, the debate is much more respectful!
Message to any world leaders who aren't willing to change their values: if you could successfully stop this from happening by trying, then it wouldn't work anyway, so there's no point in trying to stop me. It would be comparable to voting in an election determined by people's opinions rather than by how they voted (the equivalent of writing on a random piece of paper, "I vote like so: __").
I say this because, in any scenario where every world leader with completely unwavering moral values tried really hard to stop our program AND cooperated with one another, IF such an effort could potentially succeed, then our program would fail anyway.
To expand on that: if your efforts make the difference between our program succeeding and failing, or otherwise affect its success, we have a huge incentive to ensure that this program isn't bad for you. This is because, if [you think it would be better for [your values] to try to prevent any given facet/part of our program], you would logically do so, and we don't want that, so we will make sure [you are happy with each of those facets of the program].
Basically, you don’t need to stop our program. The threat that you might try to stop our program has the same effect.
If we can help you in a way that doesn't come at a cost to us (e.g., rescheduling meetings so the meeting times work better for you), we will!
As an analogy, if you had the option to get rid of a country, then you wouldn't have to worry about it being bad for you, because it would have a massive incentive to be good for you: not getting destroyed.
Here's another analogy: someone is making you food. You don't have to spend thousands of dollars to ensure that they make good food, since you can simply throw the food away if it doesn't taste good, and the cook already has a massive incentive to make food that tastes good to you: not having it thrown out.
All of this goes without saying, but saying it makes it clear.
Another reason world leaders might support this is that they think the program would have a good result (namely, that their current goals would be the goals the program lands on, because they think their goals are right and that the program would land on the right goals, or ones close to them), and that that result would become even better with their participation.