Gentzel

190 karma

Comments (38)

FWIW, Los Alamos claims they replicated the Hiroshima and Berkeley Hills Fire smoke plumes with their fire models to within 1 km of plume height. It's pretty far into the presentation though, and most of their sessions are not public, so I can hardly blame anyone for not encountering this.

This is a really good summary. The main remaining uncertainty I have about these types of arguments is whether technology decisively favors finders or not. There has been a lot of analysis implying that modern nuclear submarines are essentially infeasible to track and destroy, and others argue that AI-enabled modeling and simulation could be applied as a "fog of war" machine: simulating an opponent's sensor systems in order to optimize the concealment of nuclear forces.
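To make the "fog of war machine" idea a bit more concrete, here is a minimal toy sketch. The sensor model, the numbers, and the random-search approach are all my own illustrative assumptions, not anything from the analyses mentioned above; a real system would involve far richer simulation and optimization.

```python
# Toy illustration of a "fog of war machine": model an opponent's sensor
# network as detection probabilities that fall off with distance, then
# search candidate basing locations for the one least likely to be found.
# All numbers and the sensor model are assumptions for illustration only.
import math
import random

random.seed(0)

# Hypothetical opponent sensor sites scattered over a 100 x 100 grid.
sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def detection_prob(point, sensor, max_range=30.0):
    """Assumed model: detection chance decays linearly to zero at max_range."""
    d = math.dist(point, sensor)
    return max(0.0, 1.0 - d / max_range)

def p_found(point):
    """Probability that at least one sensor detects a force based at `point`."""
    p_miss = 1.0
    for s in sensors:
        p_miss *= 1.0 - detection_prob(point, s)
    return 1.0 - p_miss

# Random search over candidate basing locations for the best concealment.
candidates = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(5000)]
best = min(candidates, key=p_found)
print(f"best basing point: ({best[0]:.1f}, {best[1]:.1f}), "
      f"P(detected) = {p_found(best):.3f}")
```

Even this crude version shows the basic dynamic: whoever has the better model of the other side's sensors can optimize against it, which is why the finder-versus-hider question depends so heavily on which technologies actually get deployed.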

Nevertheless, without more detail, these sorts of counterarguments seem highly contingent to me: they depend on the state of technology that actually gets deployed, and there may be decisive counter-counterarguments based on other military capabilities.

It would be interesting to see more EAs "grok" these sorts of arguments and think through the nuclear modernization and arms control strategies implied by trying to assure a desirable long-term future. I've tended to think that more redundant and accurate arsenals of low-yield weapons, like neutron bombs, would strengthen deterrence while eliminating most of the risk of nuclear winter and long-term radiation. But it's easy to imagine there are better directions in which to steer competition and negotiation, since that kind of proposal could also be very destabilizing!

I am not sure we should focus more on this area; I just want to make sure that, in general, people who go into policy or advocacy don't propagate bad ideas or discredit EA with important people who would otherwise be aligned with our goals in the future.

I do think that knowing the history of transformative technologies (and the policies that affected how they were deployed) will have a lot of instrumental value for EAs trying to make good decisions about things like gene editing and AI.

You seem to be missing the part where most people disagree with the post in significant ways.

F-15s and MRAPs still have to be operated by multiple people, which requires incentive alignment among many parties. Some future autonomous weapons may be able to sustain and repair themselves (or be part of a self-sustaining autonomous ecosystem), which would mean they could be used while being aligned with fewer people's interests.

A man-at-arms couldn't take out a whole town by himself if more than a few peasants coordinated with pitchforks, but depending on how lethal autonomous weapon systems (LAWS) are developed, a very small group of people could dominate the world.

I actually agree with many of your arguments, but I don't agree overall. AI weapons will be good and bad in many ways, and whether they are good or bad overall depends on who has control, how well they are made, and the dynamics of how different countries race and adapt.

That is the point.

The reason it is appropriate to call this "ethical reaction time," rather than just "reaction time," is that the focus of planning and optimization is on ethics and future goals. To react quickly to an opportunity that is hard to notice, you have to be looking for it.

"Technical reaction time" is a better name in some ways, but it implies too narrow a focus, while plain "reaction time" implies too wide a focus. There is probably a better name, though.

I just added some examples to make it a bit more concrete.
