Heramb Podar

AI Policy Research Fellow/Student @ Center for AI and Digital Policy/IIT Roorkee
133 karma · Joined · Working (0-5 years) · Pursuing a graduate degree (e.g. Master's) · India

Bio

Participation
6

How I can help others

Discussing EA in global south contexts

Getting started in AI governance 

Comments
30

With open-source models being released and the on-ramps to downstream innovation getting lower, the safety challenge may not be a single threshold but rather an ongoing, iterative cat-and-mouse game.

This just underscores the importance of people in the policy/safety field thinking far ahead.

Updated away from this generally - there is a balance.
A good example of why I updated away is 28:27 in the video at:

Yeah, I wish someone had told me this earlier - it would have led me to apply a lot earlier instead of "saving my chance." There are a couple of layers to this thought process, in my opinion:

  • Talented people often feel like they are not the ideal candidates / don't have the right qualifications.
  • The kind of people EA attracts generally have a track record of checking every box, so they carry this "trait" over into the EA space.
  • In general, there's a lot of uncertainty in fields like AI governance, even among experts, from what I can glean.
  • Cultures, particularly in the global south, punish people for being uncertain, let alone for quantifying uncertainty.

A lot of policy research seems to be written with an agenda in mind, aiming to shape the narrative. This undermines the point of policy research, which is supposed to inform stakeholders rather than actively convince or nudge them.

This might cause polarization on some topics and is, in itself, probably eroding the legitimacy of the space.

I have seen similarly concerning parallels in the non-profit space, where some third-sector actors endorse or do things which they see as good but which destroy trust in the whole space.

This gives me scary unilateralist's curse vibes.

Everyone writing policy papers or doing technical work seems to be keeping generative AI at the back of their mind when framing their work or impact.

This narrow focus on gen AI may well be net-negative for us - unknowingly or unintentionally ignoring ripple effects of the gen AI boom in other fields (e.g., robotics companies getting more funding, leading to more capabilities, which in turn leads to new types of risk).

And guess who benefits if we do end up getting good evals/standards in place for gen AI? It seems to me that companies/investors are the clear winners, because we have to go back to the drawing board and advocate for the same kind of standards for robotics or a different AI use case/type, all while the development/capability cycles keep maturing.

We seem to be in whack-a-mole territory now because of the Overton window shifting for investors.

I don't think we have a good answer to what happens after we audit an AI model and find something wrong.

Given that our current understanding of AI's internal workings is at least a generation behind, it's not as if we can isolate which mechanism is causing a given behaviour. (I would really appreciate any input here - I see little to no discussion of this in governance papers; it's almost as if policy folks are oblivious to the technical hurdles that await working groups.)

I see way too many people confusing movement with progress in the policy space. 

A lot of drafts can become bills while still leaving significant room for regulatory capture in the specifics, which get decided later on. Take risk levels, for instance: they are subjective, leaving lots of legal leeway for companies to exploit.

Communicating by keeping human rights at the centre of the AI policy discussion is extremely underappreciated.
For example, the UN Human Rights chief in 2021 called for a moratorium on the sale and use of artificial intelligence (AI) systems until adequate safeguards are put in place.

Respect for human rights is a well-established central norm; leverage it.

Great post!
Do note that, given the context and background, a lot of your peers are probably going to be nudged towards charitable ideas. I would encourage you to be mindful that you are doing things with counterfactual impact, while also taking into account the value of your own time and your potential to do good.

I also encourage you to be cognizant of not epistemically taking over other people's world models with something like "AI is going to kill us all" - I think an uncomfortable amount of the space does this inadvertently and unknowingly, and it is one of the key reasons why I never started an EA group at my university.

Also, here is a link if anyone wants to read more on the China AI registry, which seems to be based on the model cards paper.
