by Andrew Flake
Last week I attended an excellent conference, a gathering of AAA-ICDR arbitrators from across the country, and much of our discussion was AI-focused. And for good reason: the integration of generative artificial intelligence (GenAI) into arbitration, by both arbitrators and advocates, represents more than just another technological efficiency; it already has been, and will continue to be, transformative.
But GenAI’s infusion into ADR raises fundamental questions we also need to understand and address, ones about the nature of dispute resolution itself. What makes an arbitration process fair, transparent, and just? Which elements of arbitration are essentially human, and which can be enhanced—or perhaps even replaced—by increasingly sophisticated algorithms?
My intention in coming posts is to share my current thinking on these questions, grounded both in the specifics of new GenAI capabilities and in foundational ethical safeguards.
The earliest of those, the ABA-AAA Code of Ethics for Arbitrators in Commercial Disputes, is now complemented by the AAA-ICDR’s “Principles Supporting the Use of AI in Alternative Dispute Resolution” (November 2023) and the Silicon Valley Arbitration & Mediation Center’s “Guidelines on the Use of Artificial Intelligence in Arbitration” (April 2024), both of which expand on the Code’s core principles for the integration of AI in arbitration practice.
In this post, I’ll start with Canon V: An arbitrator should make decisions in a just, independent and deliberate manner. Canon V includes the requirements that an arbitrator “should decide all matters justly, exercising independent judgment, and should not permit outside pressure to affect the decision” and that she “should not delegate the duty to decide to any other person.”
Similarly, Principle V of the AAA-ICDR Principles stresses that while AI can provide valuable insights, ADR professionals must “exercise independent judgment” and “should not unthinkingly rely on AI outputs, but evaluate all critically based on expertise, experience, and judgment.” SVAMC’s Guideline 6 is to the same effect: “An arbitrator shall not delegate any part of their personal mandate to any AI tool,” the use of which “shall not replace their independent analysis of the facts, the law, and the evidence.”
It’s Canon V and these guidelines that I think of when someone asks about an “AI arbitrator,” expressing concern that a tool or platform might take the place of an actual person as the decisionmaker.
Putting technology barriers aside, because those will eventually be overcome, do we want AI deciding our most important personal and business disputes? To what degree would permitting AI to do so sacrifice nuance, empathy, understanding, and the many subtler evaluations a neutral makes, including assessing witness credibility?
If anything, given the extremely rapid pace of technological change – witness the cadence of GPT model improvements and the increasingly near-term predictions of some form of artificial general intelligence (AGI) – it is more important than ever that we protect and support the Canon V duty of independent judgment, staying anchored in first principles and our existing, human-centered ethical framework.
Nor do I believe we’re facing a binary choice between AI and human arbitration. I see us moving toward nuanced hybrid models, in which AI handles routine tasks and analyses, while human arbitrators focus on the aspects of dispute resolution that require judgment, creativity, and ethical and principled reasoning.
It’s a promising vision, and a key consideration in realizing it will be matching the right technology not only to the appropriate dispute complexity but to the right work in a given arbitration. But the final judgment—the weighing of evidence, the application of legal principles to specific facts, the exercise of discretion—should, and I predict will, remain human. ABF