We look forward to welcoming Professor Duncan Hollis to the Amsterdam Law School on Tuesday 17 June. Duncan B. Hollis is Laura H. Carnell Professor of Law at Temple Law School and co-faculty director of Temple’s Institute for Law, Innovation & Technology (iLIT).
Event details of Large Language Models and International Lawyering

Date: 17 June 2025
Time: 12:00–13:30

Speaker

Professor Hollis' scholarship engages with issues of international law, interpretation, and cybersecurity, with a particular emphasis on treaties, norms, and other forms of international regulation.

Hollis is currently a non-resident scholar at the Carnegie Endowment for International Peace and an appointed member of the U.S. Department of State’s Advisory Committee on International Law. Together with Oxford University Professor Dapo Akande, he is co-convenor of the Oxford Process on International Law Protections in Cyberspace and its accompanying Compendium.

Abstract

Large Language Models (LLMs) have the potential to transform public international lawyering. ChatGPT and similar LLMs can do so in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.  

The article uses two case studies to show how LLMs may work in international legal practice.  First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs’ ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce orthogonal or inaccurate answers.  Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.  
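To make the first case study concrete, here is a minimal sketch of how such a probe might be run programmatically. It assumes the official OpenAI Python client; the model name and prompt wording are illustrative choices of ours, not drawn from the article.

```python
# Hypothetical sketch: probing an LLM on the customary-law status of a norm.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

question = (
    "Does the expectation that states refrain from attacking the health-care "
    "sector in cyberspace rise to the level of customary international law? "
    "Identify supporting state practice and opinio juris, note any persistent "
    "objectors, and flag where the evidence is thin or contested."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a research assistant to a public international lawyer."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```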

Based on our analysis of the five potential functions and the two more detailed case studies, the article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain or defend particular conclusions. Further, LLMs also hold surprising potential to help create new law by offering inventive proposals for treaty language or negotiations.
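As one hedged illustration of what "curation of results" might mean in practice, a reviewer could refuse to rely on any LLM answer that arrives without cited authority. The heuristic below is our own assumption, not a technique the article prescribes, and is no substitute for checking the cited materials themselves, since LLMs can fabricate citations.

```python
# Hypothetical curation step: flag LLM answers that cite no recognizable
# international-law sources for mandatory human review. The marker list is
# a crude illustrative heuristic, not a reliable verification method.
def needs_human_review(answer: str) -> bool:
    markers = ("ICJ", "I.C.J.", "UNTS", "ILC", "art.", "Article")
    return not any(marker in answer for marker in markers)

answer = "States generally accept this norm."  # e.g., a returned LLM response
if needs_human_review(answer):
    print("No citations detected: route to a human lawyer before relying on it.")
```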

Most importantly, we highlight the potential for LLMs to corrupt international law by fostering automation bias in users.  That is, even where analog work by international lawyers would produce different results, LLM results may soon be perceived to accurately reflect the contents of international law.  The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs.  Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs’ potential to assist, reshape, or redefine international legal practice and scholarship.

Location

Roeterseilandcampus, Building A
Nieuwe Achtergracht 166
1018 WV Amsterdam