Stanford - A.I. research.
Founder of an A.I. technology company, Brooklyn Artificial Intelligence Research (Skopos Labs, Inc.), which owns the investment management firm Brooklyn Investment Group (https://bkln.com).
Conducted research funded by the U.S. National Science Foundation and the U.S. Office of Naval Research. Created the first A.I. course at the NYU School of Law. Published research on A.I., finance, law, policy, economics, and climate change. Publications at http://johnjnay.com; Twitter at https://twitter.com/johnjnay.
Unfortunately, I think the upside of considering amendments to lobbying disclosure laws to attempt to address the implications of this outweighs the downside of more people learning about it.
Also, better-funded special interest groups are more likely to independently discover and advance AI-driven lobbying than the less well-funded, more diffuse interests of average citizens.
I think AI alignment can draw from existing law to a large degree. New legal concepts may be needed, but I think there is a great deal of legal reasoning, legal concepts, legal methods, etc. that is directly applicable now (discussed in more detail here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4218031).
Also, I think we should keep the involvement of AI in law-making (broadly defined) as limited as we can. And we should train AI to recognize when there is sufficient legal uncertainty that a human is needed to determine the correct action to take.
This is a great post.
Law is the best solution I can think of to address the issues you raise.
This was cross-posted here as well:
A follow-up thought based on conversations catalyzed by this post:
Much of the research on governing AI and managing its potential unintended consequences currently falls into two ends of a spectrum, depending on assumptions about the imminence of transformative AGI. Research operating under the assumption of a high probability of near-term transformative AI (e.g., within 10-15 years) typically focuses on how to align AGI with ideal aggregations of human preferences (through yet-to-be-tested aggregation processes). Research operating under the assumption of a low probability of near-term transformative AI typically focuses on how to reduce the discriminatory, safety, and privacy harms posed by present-day (relatively "dumb") AI systems. The proposal in this post seeks a framework that, over time, bridges these two important ends of the AI safety spectrum.
Related paper: https://law.stanford.edu/publications/large-language-models-as-fiduciaries-a-case-study-toward-robustly-communicating-with-artificial-intelligence-through-legal-standards/
And related post: https://forum.effectivealtruism.org/posts/cWeioTmbs73iZjs25/large-language-models-as-fiduciaries-to-humans