
by Andrew Flake

The pace of development of GenAI, powered by vast amounts of energy and by chips specially configured to process enormous volumes of data, has been astounding. These technologies are already reshaping the way professionals approach dispute resolution, offering new methods for analysis, communication, and decision-making.

At the same time, the swift evolution of AI technologies presents mediators with a unique challenge: how to integrate these tools effectively while ensuring compliance with established mediation standards and core values, like neutrality, confidentiality, and self-determination. 

In mediation and other areas of dispute resolution, for mediators and especially for mediation advocates, GenAI has emerged as a tool not only with transformative potential but with multiple productive uses today. These include document summarization and analysis; case research; document drafting; identifying gaps in information; and even generating options for settlement.

At a program last week for the Atlanta Bar Dispute Resolution Section, my co-panelists and I discussed and demonstrated some of these use cases, addressing for each both its potential and its implications under the ethical standards applicable to the mediation process.

Some of these standards are found in binding rules, like the Georgia Office of Dispute Resolution’s Ethical Standards for Neutrals and the Georgia Rules of Professional Conduct. Others are found in guidance documents, like the ABA Model Standards of Conduct for Mediators or the AAA-ICDR®’s Principles Supporting the Use of AI in Alternative Dispute Resolution. However framed, all of them reflect the same core ethical considerations.

Self-Determination 

Self-determination is a fundamental principle of mediation, ensuring that parties retain control over the outcome of the process. GenAI can potentially enhance party self-determination in mediation by providing tools and insights that empower parties to make informed decisions.  

For instance, AI-powered platforms can offer personalized summaries of key points, identify potential areas of agreement, and suggest creative solutions. This can help parties better understand their options and negotiate more effectively; reflect on proposals and counterproposals at their own pace; and, more generally, reduce pressure and promote more thoughtful decision-making.

On the other hand, the potential for a GenAI tool to inadvertently bias the mediation process is a concern. It is important for parties and mediators to be attuned to the possibility of bias in any given AI tool. If the AI is trained on data that reflects existing biases, for example, it may suggest solutions or approaches that perpetuate those biases.  

There is also the risk that reliance on AI could lead to a decreased emphasis on human interaction and empathy. While AI can provide valuable information and support, it cannot fully replace the interpersonal skills and understanding that human mediators bring to the table. 

Impartiality 

Neutrality is a cornerstone of mediation, requiring mediators to remain impartial and avoid any appearance of favoritism or bias. According to Standard II of the Georgia Ethical Standards for Neutrals, the mediator “must demonstrate impartiality in word and deed.” The use of AI tools must not compromise this neutrality.  

Mediators must therefore ensure that AI-generated suggestions or analyses do not skew the mediation process in favor of one party over another, and they must critically evaluate AI outputs to ensure those outputs do not influence the process in ways that could undermine the mediator’s impartiality.

Competence 

Whether a mediator is using technology herself or receiving technology outputs from the parties, the use of GenAI requires a certain level of competence on the part of the mediator and the mediation advocate. Both Model Standard IV for mediators and Georgia Rule of Professional Conduct (GRPC) 1.1 address competence.

Mediators must be adequately trained in the use of AI tools to ensure they are used effectively and ethically. This includes understanding the limitations of AI, the potential biases inherent in AI systems, and how to interpret AI-generated outputs. For instance, if an AI tool is used to predict the likelihood of a settlement’s success, the mediator must understand the data used to train the AI and how that data might affect the prediction’s accuracy.  

For mediation advocates, I would suggest that the same considerations of professional responsibility and attention to detail – including verifying any AI-generated discussion of legal authority or citation – should apply to submissions to the mediator. The risk of “hallucinations” and other erroneous output from GenAI remains very real, and competence in the use of such tools requires quality control and oversight from both the lawyer and the mediator.

Confidentiality 

Confidentiality is critical in fostering open and honest communication during mediation, and AI tools, particularly those hosted on third-party platforms, may pose risks to confidentiality if not properly managed. Mediation counsel and the mediator must ensure that any data processed by AI systems is secure and that the use of such tools complies with confidentiality agreements made with the parties. 

The confidentiality requirements attendant to mediation – including Standard II, along with GRPC 1.6 and Comment 24 to that Rule – make clear the importance of safeguarding confidential information: that of the advocate’s client and, generally, all information pertaining to the mediation.

That in turn requires understanding how an AI tool uses information, and it may mean adjusting or limiting uploaded content and queries. The content of documents uploaded to the public version of ChatGPT, for example, would need to be far more constrained than content uploaded into a secure legal AI tool, and the user may need to minimize, redact, or de-identify certain information.

At a minimum, before using a GenAI tool in connection with a mediation, the advocate should review the tool’s terms of use and each of its settings governing how uploaded information is used, stored, retained, and remembered.

More than ever, as GenAI permits us to venture further and further toward new technological and practice frontiers, it is these ethical “first principles” and standards that should be lighting our way.