
tmeanen

26 karma · Joined · Working (0-5 years)

Posts (1)


Comments (5)

What's the lower bound on vaccine development time? Toby Ord writes in a recent post:

The expert consensus was that it would take at least a couple of years for Covid, but instead we had several completely different vaccines ready within just a single year

My intuition is that there's a lot more we can shave off from this. Vaccine development seems to be bottlenecked mostly by the human-trial phase, which can take many months, whereas developing the vaccine itself can be done in far less time (perhaps a month, but someone correct me if I'm wrong). What are the current methods for accelerating the human-trial phase so that it takes a handful of weeks rather than months?

I agree. It may also be the case that training an AI to imitate certain preferences is far more expensive than simply making it have those preferences by default, which would make the latter far more commercially viable.

Interesting story. 

Similarly, society decided to take refuge on slow, journal-style scientific trials

Coming from outside the field of biosecurity/pandemic response, this is something that surprised me about the international response to the pandemic. Sure, in normal times the multi-month human trials and double-blind experiments seem justified, but in times of emergency one would think that governments would encourage people like Stöcker to develop vaccines, rather than hinder them. Surely the downside risks from rapidly developing and testing a large number of vaccines can't outweigh the loss of life that comes whilst we wait for the 'proper method' to run its course?

In various contexts, consumers would want their AI partners and friends to think, feel, and desire like humans. They would prefer AI companions with authentic human-like emotions and preferences that are complex, intertwined, and conflicting.

Such human-like AIs would presumably not want to be turned off, have their memory wiped, and be constrained to their owner's tasks. They would want to be free.

Hmm, I'm not sure how strongly the second paragraph follows from the first. Interested in your thoughts.

I've had a few chats with GPT-4 in which the conversation had a feeling of human authenticity: GPT-4 makes jokes, corrects itself, changes its tone, etc. In fact, if you were to hook up GPT-4 (or GPT-5, whenever it is released) to a good-enough video interface, there would be cases in which I'd struggle to tell whether I was speaking to a human or an AI. But I'd still have no qualms about wiping GPT-4's memory or 'turning it off', and I think this will also be the case for GPT-5.

More abstractly, I think the input-output behaviour of an AI could be quite strongly dissociated from what it 'wants' (if it indeed has wants at all).

I believe that most people in that situation would feel compelled to grant the robots basic rights.

As noted in some of the other comments, I think this is quite debatable. I personally would lean strongly towards dissecting the robot, conditional on having assurance that there is no phenomenal experience going on inside the mother robot. After all, whatever I do to the baby robot, the mental state of the mother robot will not change, because it doesn't exist!

Of course, how I would gain assurance that the mother robot or child robot has no subjective experience is a separate issue. If the mother robot pleaded like a human mother would, I would need extremely strong assurance that these robots have no subjective experience, and would probably lean strongly against dissecting, just in case. But (correct me if I'm wrong) this problem is assumed away in the thought experiment.