
While I’ve been working on my Law-Following AI Sequence, researchers from Stanford University released a very interesting paper and accompanying dataset and models called “Pile of Law.” Pile of Law contains interesting (and encouraging, in my opinion) evidence about the feasibility of constructing Law-Following AI (“LFAI”) systems, as I have defined them.

Relevant Paper Contents

The Pile of Law paper focuses most directly on the law and ethics of dataset compilation for NLP, including such issues as copyright, privacy preservation, and bias and toxicity management. As the authors correctly note, legal systems face their own versions of these problems when publishing publicly available legal sources.[1] Since legal systems’ solutions to those problems are implicit in the distribution of legal data, the authors hypothesized that training LLMs on such data could cause the models to learn those solutions, and thereby avoid the “need to reinvent the law.”[2]

In a series of small experiments, the researchers tested whether models could learn “contextual privacy rules,” such as whether to pseudonymize a party’s name, from legal corpora. In Case Study 1, an LLM trained on immigration data correctly learned to preferentially pseudonymize the names of asylees, refugees, and victims of torture.[3] Case Study 2 similarly showed that training an LLM on Pile of Law improved the model’s ability to correctly pseudonymize names in “sensitive and highly personal” court cases.[4]
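To make the experimental setup concrete, here is a minimal sketch, assuming the Hugging Face transformers library, of how one might fine-tune a classifier to predict whether a party's name should be pseudonymized given the surrounding case text. This is not the authors' code: the base model, label scheme, and example texts are all hypothetical placeholders, and a real replication would train on the Pile of Law corpora rather than two toy sentences.

```python
# Hypothetical sketch of learning a contextual pseudonymization rule.
# Label scheme (my own, not the paper's): 1 = pseudonymize, 0 = keep name.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "distilbert-base-uncased"  # stand-in base model, not the paper's choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Toy supervision signal: asylum/torture contexts get pseudonymized,
# routine commercial disputes do not.
examples = [
    ("The applicant seeks asylum, alleging torture by state police.", 1),
    ("Plaintiff Corp. sues Defendant Corp. over a breached supply contract.", 0),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for text, label in examples:  # a real run would batch over a large corpus
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```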

Implications for LFAI

In addition to contributing a useful new dataset to the field, Pile of Law provides hints that LFAI is a tractable near-term research direction. As the authors say, “Pile of Law encodes signals about privacy standards that can be learned to produce more nuanced recommendations about filtering.” This accords nicely with one of the driving beliefs behind LFAI: that law and LLM safety have natural, untapped synergies due to the volume, structure, and political legitimacy of legal data. When I first began thinking about LFAI theoretically, I expected LLMs fine-tuned on legal data to both (a) behave better (by legal standards) in certain ways,[5] and (b) be able to augment the legal compliance of AI systems as a whole. The Pile of Law paper provides empirical evidence for (a), and the authors indeed suggest that such systems could be integrated into data workflows to accomplish (b).[6]
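To illustrate what (b) might look like in practice, here is a hedged sketch, entirely my own rather than anything proposed in the paper, of wiring a learned privacy decision into a dataset-filtering workflow. The `should_pseudonymize` heuristic is a placeholder for a fine-tuned classifier like the one sketched above, and person-name detection uses spaCy's off-the-shelf named-entity recognizer.

```python
# Hypothetical data-filtering workflow: redact person names only in
# documents a learned privacy classifier flags as sensitive.
import spacy

nlp = spacy.load("en_core_web_sm")

def should_pseudonymize(context: str) -> bool:
    # Stand-in keyword heuristic; a real pipeline would call a fine-tuned
    # classifier here instead.
    return any(w in context.lower() for w in ("asylum", "torture", "refugee"))

def filter_document(text: str) -> str:
    if not should_pseudonymize(text):
        return text  # benign context: keep names for transparency
    doc = nlp(text)
    redacted = text
    for ent in reversed(doc.ents):  # reversed so character offsets stay valid
        if ent.label_ == "PERSON":
            redacted = (
                redacted[: ent.start_char] + "[NAME REDACTED]" + redacted[ent.end_char :]
            )
    return redacted
```

A filter like this could run over candidate pretraining documents before they enter a corpus, which is roughly the workflow integration the authors gesture at.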

Pile of Law primarily analyzed data privacy and toxicity.[7] LFAI as a long-term safety measure is more ambitious, with the goal of creating AI systems or modules that learn legal rules and help conform agentic AI systems to law. Pile of Law shows a very weak form of legal rule-learning, insofar as the fine-tuned models pick up on contextual law-derived trends in legal data. But full LFAI would need to go well beyond this, to incorporate much more data about both facts and law and explicitly analyze the legal consequences of an agent’s behavior, rather than relying on implicit learning of probabilistic trends. Full LFAI would also need to be embedded into, and constrain, agentic AI systems, which would require nontrivial engineering. Thus, I cannot claim that Pile of Law is a major vindication of or achievement in LFAI. Still, its empirical evidence makes me more bullish on a major premise of LFAI, which had previously been mainly theoretical: there are significant and underexplored synergies between legal corpora, AI safety, and large language models. I am very thankful to the authors for their work, and for releasing the dataset to enable further explorations along this line.
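For concreteness, here is a purely speculative sketch of the embedding step described above: a wrapper that routes an agent's proposed actions through a legal-analysis module before execution. Every class name and interface here is my own invention, and the hard part, a reliable `review` implementation, is exactly what full LFAI would have to supply.

```python
# Speculative LFAI architecture sketch: a compliance module constrains an
# agentic system by vetting each proposed action before it executes.
from dataclasses import dataclass

@dataclass
class LegalReview:
    permitted: bool
    rationale: str

class LawFollowingWrapper:
    def __init__(self, agent, legal_module):
        self.agent = agent                # any agent exposing the methods below
        self.legal_module = legal_module  # hypothetical legal-analysis module

    def act(self, observation):
        action = self.agent.propose_action(observation)
        review = self.legal_module.review(action, observation)
        if not review.permitted:
            # Ask the agent for a lawful alternative rather than executing.
            return self.agent.propose_fallback(observation, review.rationale)
        return action
```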


  1. Peter Henderson & Mark S. Krass et al., Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset 2–3 (“When releasing internal documents concerning individuals, courts and governments have long struggled to balance transparency against the inclusion of private or offensive content. Model creators now face a similar struggle: what content to filter before pretraining a large language model on the data.”). ↩︎

  2. Id. at 3. The authors “do not take the position that legal rules are optimal nor monolithic,” while noting the legitimating procedural benefits of relying on legal sources. See id. I have made or plan to make similar points in the LFAI sequence. ↩︎

  3. Id. at 6–7. Cf. 8 CFR § 208.6(a). ↩︎

  4. Henderson & Krass et al. at 7. ↩︎

  5. Though admittedly I had not specifically thought of learning privacy and toxicity rules from legal corpora! ↩︎

  6. See id. at 7 (“These experiments show that the Pile of Law encodes signals about privacy standards that can be learned to produce more nuanced recommendations about filtering. Such contextualized filters may help ensure that generative models strike the right balance between accuracy and privacy protection, for example by accurately distinguishing benign releases of names and contact information (e.g., in response to queries about government officials) from harmful ones (sensitive circumstances where harm is plausible).”). ↩︎

  7. Toxicity is examined in id. § 4.2. ↩︎

Comments

This is a really interesting piece of research. It is certainly a good omen for access to justice, both directly and indirectly. 

The issue of balancing privacy with transparency is an interesting one, and one I've done a lot of work on within criminal justice. It's never an easy decision to make, and I had never considered what good training material those decisions would make for privacy-centred LLMs.

I'm still not completely sold on LFAI, but I agree that this is a promising factor in bringing it from theory to a more experimental basis.

“The Pile of Law paper provides empirical evidence for (a), and the authors indeed suggest that such systems could be integrated into data workflows to accomplish (b).[6]”

Based on § 4.2, it seems that the AI would help humans catch up with already-accepted antidiscrimination decisionmaking, such as norms mitigating racial bias. As humans internalize these norms, the law could reduce bias further, and AI could then help catch up individuals who would otherwise decide based on earlier legislation.

§ 3.2 suggests that a variety of microaggressions would be detected, which could increase the focus on the subjective experiences of legal subjects. Anonymization in immigration and civil litigation could initially support but eventually work against this effort. If decisionmakers can be biased by a name or a characteristic, then anonymization could improve the legal compliance of their judgments. However, if some individuals need the protection of anonymization, their subjective experience may still be worse than if they did not. Thus, if legal compliance can be improved without de facto anonymization, that would be better; alternatively, anonymization should be gradually phased out for specific groups.
