alx

12 karma · Joined

Posts: 1

Comments: 5

I'm not sure I follow your examples and logic; perhaps you could explain, because drunk driving is itself a serious crime in every country I know of. Are you suggesting it should be criminal merely to develop an AI model, regardless of whether it's commercialized or released?

Regarding pharmaceuticals: yes, they certainly need to pass several phases of clinical research and development to demonstrate sufficient safety and efficacy, because by definition the FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no comparable regulation of software. Developing a drug discovery platform, or even the drug itself, is not a crime (as long as it's not released).

You could just as easily extrapolate to individuals. We cannot legitimately litigate against (sue) or prosecute someone for a crime they haven't committed. This is why we have due process and basic legal rights. (Technically, anything can be litigated with enough money thrown at it, but you can't sue for damages unless damages actually occurred.)

First: I completely agree that several of the modifications are egregious and without any logical explanation - primarily the removal of whistleblower protections. However, I think it is also important to recognize that SB1047 has flaws and isn't perfect, and we should all welcome constructive feedback both for and against it. Some level of reasonable compromise when pushing forward unprecedented policy like this is always going to happen, for better or worse.

IMHO, the biggest problems with the bill as originally written were the ability to litigate against a company before any damages had actually occurred and, moreover, the glaring loopholes in the fixed-FLOPs thresholds for oversight. Anybody with an understanding of machine learning training pipelines could point out any number of easy circumventions (e.g., iterative, segmented training with checkpoints/versioning that effectively splits a large training run into multiple smaller runs, or segmentation and modularization of the models themselves); the toy sketch below illustrates the point.
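To make the circumvention concrete, here is a toy back-of-the-envelope sketch. It is my own illustration, not the bill's actual accounting rules: the 1e26 FLOP threshold and the ~6 × parameters × tokens compute estimate are assumptions used only for illustration.

```python
# Toy illustration of the fixed-FLOPs loophole described above.
# Assumed values (not taken from the bill text): a 1e26 FLOP oversight
# threshold and the common ~6 * parameters * tokens training-compute estimate.

THRESHOLD_FLOPS = 1e26  # assumed oversight threshold

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# A single large run that would clearly exceed the assumed threshold.
full_run = training_flops(params=5e11, tokens=1e14)  # ~3e26 FLOPs
print(f"Single run: {full_run:.1e} FLOPs -> over threshold? {full_run >= THRESHOLD_FLOPS}")

# The same total compute split into checkpointed "versions", each restarted
# from the previous checkpoint. If each segment were accounted for separately,
# no individual run would cross the line, even though the final model is identical.
segments = 4
per_segment = full_run / segments
print(f"Each of {segments} segments: {per_segment:.1e} FLOPs -> over threshold? "
      f"{per_segment >= THRESHOLD_FLOPS}")
```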

We also need to be humble and open-minded about unintended consequences (e.g., it's possible this bill pushes some organizations toward open-source or open-weight distribution of models, or of course encourages big tech to relocate AI work to states with less regulation). If we treat all of industry as 'The Enemy', we risk losing key allies in the AI research space (both individuals and organizations).


Thanks for the thoughtful feedback (and for being so candid that many economics research forecasts are often 'wild guesstimates'). Couldn't agree more. That said, it does seem to me that with additional independent, high-quality research in areas like this, we could arrive at more accurate collective aggregate meta-forecasts.

I suspect some researchers ignore that just because something can be automated (has high potential exposure) doesn't mean it will be. I suspect (as Bostrom points out in Deep Utopia) we'll find that many jobs will be either legally shielded or socially insulated due to human preferences (e.g., a masseuse, judge, child-care provider, musician, or bartender). All are highly automatable, but most people prefer them to be done by a human for various reasons.

Regarding the probability of paperclips vs. economic dystopia (assuming paperclips here are a metaphor/stand-in for actual, realistic AI threats): I don't think anyone takes the paperclip optimizer literally - it entirely depends on the timeline. This is why I repeatedly qualified that I'm referring to the next 20 years. I do think catastrophic risk increases significantly as AI increasingly permeates supply chains, the military, transportation, and various other critical infrastructure.

I'd be curious to hear more about what research you're doing now. Will reach out privately. 

Firstly: hypotheticals such as this can be valuable as philosophical thought experiments, but not for making moral value judgements about broad populations. Frankly, this is a flawed argument on several levels, because there are always consequences to actions, and we must consider the aggregate impact of action and consequence.

Obviously, we're all in favor of doing what good we find reasonable; however, you may as well take the hypothetical a step or two further: suppose you now have to sacrifice your entire life savings while also risking some probability of losing your own life in the process of saving that child. Now assume you're a single parent with several children of your own to care for, who may face starvation if you die.

My point is not to be some argumentative pedant. My point is that these moral hypotheticals are rarely black and white. There is always nuance to be considered.


While many accurate and valid points are made here, the overarching flaw of this approach to AI alignment is evident in the very first paragraph. Perhaps it is meta and semantic, but I believe we must take more care to define the nature and attributes of the 'Advanced' AI/AGI we are referring to when talking about AI alignment. The statistical inference used by autoregressive transformer models that simply generate an appropriate next word is far from advanced (see the sketch below). It might be seen as a form of linguistic understanding, but it remains in a completely different league from brain-inspired cognitive architectures that could conceivably become self-aware.
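To ground what "simply generate an appropriate next word" looks like in practice, here is a minimal sketch of next-token prediction with an off-the-shelf causal language model; the small "gpt2" checkpoint and the top-5 inspection are my own illustrative assumptions, not anything from the post being replied to.

```python
# Minimal sketch of the next-token step in an autoregressive transformer.
# "gpt2" is an arbitrary small example checkpoint chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The "statistical inference" in question: a probability distribution over
# the vocabulary for the single next token, given everything seen so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={prob:.3f}")
```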


Many distinctions are critical when evaluating, framing, and categorizing AI. I would argue the primary distinction will soon become that of the elusive C-word: Consciousness. If we are talking about embodied, Human-Equivalent, self-aware, conscious AGI (HE-AGI), I think it would be unwise and frankly immoral to jump to the concept of control and forced compliance as a framework for alignment.

Clearly, limits should be placed on AI's capacity to interact with and engage the physical world (including its own hardware), just as limits are placed on our own ability to act in our world. However, we should be thinking about alignment in terms of genuine social alignment, empathy, respect, and instilling foundational conceptions of morality. It seems clear to me that conscious, Human-Equivalent AGI by definition deserves all the innate rights, respect, civic responsibilities, and moral consideration of an adolescent human... and eventually (likely) those of an adult.

This framework is still a work in progress, but it presents an ordinal taxonomy to better frame and categorize AGI: http://www.pathwai.org. Feedback would be welcomed!