KYP

Kinoshita Yoshikazu (pseudonym)

26 karma · Pursuing a doctoral degree (e.g. PhD)

Comments (13)

I think it removes the "Hostile AGI is the Great Filter" scenario, which I recall seeing a few times, though it never made much sense to begin with.

"Food supply collapse" isn't a simple binary switch, though. 

It's possible that whatever food is left over will be distributed in the most militarily efficient way possible, and a large number of civilians will be left to starve so that the remnants of conventional military forces can continue their fight to the death.

 

Of course, I don't think this scenario would lead to outright human extinction. But it does make the post-nuclear-war situation a lot more difficult, even if civilisation nominally survives the ordeal.

Before going too deep into the "should we air strike data centres" issue, I wonder if anyone out there has good numbers on the current availability of hardware for LLM training.

Assuming that the US/NATO is committed to shutting down AI development, how much impact does a serious restriction on chip production/distribution have on the ability of a foreign actor to train advanced LLMs? 

I suspect there are enough old GPUs out there that could be repurposed into training centres, but how much harder would it be if little or no new hardware were coming in? (A rough back-of-envelope sketch is below.)
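As a very rough sense of scale, here is a back-of-envelope sketch in Python. Every number in it (the target training compute, per-GPU throughput, utilisation) is an illustrative assumption I've made up for the sketch, not a real estimate:

```python
# Back-of-envelope: how long would a stock of repurposed GPUs take to deliver a
# frontier-scale training run? Every number below is an illustrative assumption.

TARGET_TRAINING_FLOP = 1e25    # assumed total compute budget for an "advanced" LLM
OLD_CONSUMER_FLOPS   = 7e13    # assumed sustained throughput of an old consumer GPU (FLOP/s)
DATACENTER_FLOPS     = 7e14    # assumed sustained throughput of a modern datacenter GPU (FLOP/s)
UTILISATION          = 0.3     # assumed fraction of peak throughput achieved at scale

def training_days(num_gpus: int, per_gpu_flops: float) -> float:
    """Days of continuous training needed to reach the assumed compute budget."""
    effective_flops = num_gpus * per_gpu_flops * UTILISATION
    return TARGET_TRAINING_FLOP / effective_flops / 86_400

for n in (10_000, 100_000):
    print(f"{n:>7} old consumer GPUs : {training_days(n, OLD_CONSUMER_FLOPS):7.0f} days")
    print(f"{n:>7} datacenter GPUs   : {training_days(n, DATACENTER_FLOPS):7.0f} days")
```

Under these made-up numbers, a large stockpile of old consumer GPUs can still reach a frontier-scale compute budget, it just takes several times longer, which is roughly the shape of answer I'm curious about.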

And for those old GPUs inside consumer machines or crypto farms, is it possible to cripple their LLM-training capability through software modifications?

Assuming that Microsoft and Nvidia/AMD are on board, I think it should be possible to push a firmware modification to almost every GPU installed in internet-connected Windows machines (which should be nearly all of them). If software modification can prevent GPUs from being used effectively in LLM training runs, this would hopefully take most of the existing GPU stock (and all newly manufactured GPUs) out of the equation for at least some time.

I agree with your post in principle: we should take currently unknown, non-human moral agents into account when calculating X-risks.

On the other hand, I personally think leaving behind an AGI (which, after all, is still an "agent" influenced by our thoughts and values and carries them on in some manner) is a preferable endgame for human civilisation compared to a lot of other scenarios, even if the impact of an AGI catastrophe is probably going to span the entire galaxy and beyond.

Grey goo and other "AS-catastrophes" are definitely very bad.

From "worst" to "less bad", I think the scenarios would line up something like this:

1: False vacuum decay, obliterates our light-cone.

2: High velocity (relativistic) grey goo with no AGI. Potentially obliterates our entire light-cone, although advanced alien civilisations might survive.

3: Low velocity grey goo with no AGI. Sterilises the Earth with ease and potentially spreads to other solar systems or the entire galaxy, but probably not beyond (the intergalactic travel time would probably be too long for the goo to maintain its function). Technological alien civilisations might survive.

4: End of all life on Earth from other disasters.

5: AGI catastrophe with spill-over in our light cone. I think an AGI's encounter with intelligent alien life is not guaranteed to follow the same calculus as its relationship with humans, so even if an AGI destroys humanity, it is not necessarily going to destroy (or even be hostile to) some alien civilisation it encounters.

 

For a world without humans, I am a bit uncertain whether the Earth has enough "time left" (roughly 500 million years before the sun's increasing luminosity makes the Earth significantly less habitable) for another intelligent species to emerge after a major extinction event that included humanity (say, one in which large mammals took the same hit the dinosaurs did). And whether the Earth would still have enough accessible fossil fuel for them to develop a technological civilisation.

This is true. I do wonder what could be done to get around the fact that we really can't handle remembering complex passwords (without using some memory aid that could itself be compromised).

Biometrics makes sense for worker/admin access, but I'm not sure about the merits of deploying it en masse to the users of a service. 

Despite all the controversies surrounding that (in?)famous XKCD comic, I would still agree with Randall that passphrases (I'm guilty of using them) are okay if we make them long enough. And the memory aids one might need for passphrases are probably harder to compromise (e.g. 

I imagine it's not too hard for an average human to handle a few passphrases of 10 words each, so maybe bumping the "allowed password length" from 16-30 characters to 100 would solve some problems for security-minded users.

Another tool that I imagine might help is allowing Unicode characters in passwords; mixing Chinese characters into a password could give us "memorable" high-entropy passwords.
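To put rough numbers on the entropy argument, here is a small Python sketch (the word-list size and the CJK character-pool size are illustrative assumptions, and it assumes the words/characters are picked uniformly at random rather than forming a meaningful sentence):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy (bits) of a secret built from `length` independent, uniform
    draws out of a pool of `pool_size` symbols."""
    return length * math.log2(pool_size)

# Illustrative assumptions:
# - 95 printable ASCII characters
# - a diceware-style list of 7776 words
# - a pool of ~20,000 commonly used CJK characters
print(f"16 random ASCII chars : {entropy_bits(95, 16):5.1f} bits")
print(f"10-word passphrase    : {entropy_bits(7776, 10):5.1f} bits")
print(f"8 random CJK chars    : {entropy_bits(20_000, 8):5.1f} bits")
```

With these assumptions, the 10-word passphrase and the short CJK string both beat a 16-character random ASCII password on raw entropy, though any of them gets weaker if the words or characters are chosen to be memorable rather than random.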

I suspect that all three political groups (maybe not the libertarians) you mentioned could be convinced to turn collectively against AI research. After all, governmental capacity is probably the first thing that will benefit significantly from more powerful AIs, and that could be scary enough for ordinary people or even socialists.

Perhaps the only guaranteed opposition to pausing AI research would come from the relevant corporations themselves (they are, of course, immensely powerful, but maybe they'll accept an end to this arms race anyway), their dependents, and maybe some sections of libertarians and progressives (though I doubt many of them are committed to supporting AI research).

Public opinion on AI research is probably not very positive, but perhaps also a bit apathetic about what's happening. Maybe the information in this survey, properly presented in a news article or something, could rally some public support for AI restrictions.

Do you think it's a serious enough issue to warrant some...not very polite responses? 

Maybe it would be better if policy makers just went and shut AI research down immediately, instead of trying to soften its impact with reforms and regulations?

Maybe this information (that AI researchers themselves are increasingly pessimistic about the outcome) could sway public opinion far enough for that?

I do think what Plague Inc. is doing... is far from a simulation of an infectious disease.

The pathogen in PI receives "updates" from a handler, and cannot be cleared from a host without intervention (nobody in PI recovers from the pathogen unless a cure is distributed). This reminds me more of computer malware than of any biological agent.

I would agree with 1., yeah. Generally a disease that doesn't transmit during the "dormant" period would not be much different from a disease that is very acute.

I think "mild acute illness that lays dormat and comes back later" can blur the lines a bit. Say, if we have a disease similar to HIV that causes flu-like illness in the acute phase and was highly infectious at point of time (but doesn't show up to be a serious illness until much later and wasn't transmissive during the dormat period) would probably make the non-transmissive dormat period relevant for our responses. 

 

I'm most curious about 2... I know it is unlikely for herpes/EBV/measles to kill someone after they recover from an acute infection, but I wonder if a very different virus could cause "serious illness later in life" in a larger proportion of people, without being significantly more deadly in the acute phase.

--Which I would call "the Plague Inc. winning disease", since the easiest winning strategy in PI is to infect as many people as possible before bringing in the serious symptoms that kill... It is of course impossible for a disease to "mutate" serious symptoms into every infected person simultaneously, but what if it already has a "delayed payload" built in while it's spreading uncontrolled...

And maybe this is a particular scenario to watch out for when considering possible engineered pathogens.
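To illustrate the "delayed payload" intuition, here is a minimal discrete-time sketch in Python; all of the parameters (growth rate, delay, severe fraction) are made-up illustrative numbers, not estimates for any real pathogen:

```python
# Minimal discrete-time sketch of a "delayed payload" pathogen.
# All parameters are illustrative assumptions, not estimates for any real disease.

POPULATION  = 1_000_000
GROWTH      = 0.1     # assumed daily growth rate while carriers still look healthy
DELAY_DAYS  = 120     # assumed delay before severe symptoms appear
SEVERE_FRAC = 0.5     # assumed fraction of the infected who eventually fall severely ill

def simulate(days: int) -> None:
    cumulative = [1.0]                                    # cumulative infections, starting with one case
    for day in range(1, days + 1):
        susceptible_frac = 1.0 - cumulative[-1] / POPULATION
        new = GROWTH * cumulative[-1] * susceptible_frac  # logistic-style spread
        cumulative.append(cumulative[-1] + new)
        # Severe cases so far come from people infected at least DELAY_DAYS ago.
        severe_so_far = SEVERE_FRAC * (cumulative[day - DELAY_DAYS] if day >= DELAY_DAYS else 0.0)
        if day % 30 == 0:
            print(f"day {day:3d}: infected {cumulative[-1]:>9.0f}, severe so far {severe_so_far:>9.0f}")

simulate(300)
```

With these made-up numbers, by the time severe cases become visible in large numbers, essentially the whole population has already been infected, which is exactly what makes this scenario worrying.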

Vaccination using cowpox seems to be the kind of technology that didn't have many "prerequisites" standing in its way. I wonder how different history would've been if cowpox had been discovered much earlier, and cowpox vaccination had become a widespread practice in at least some regions before the 1000s or so.

Could smallpox eradication be achieved on a national/regional level in a pre-industrial society? And how much would that change the course of history?
