Kluwer Arbitration Blog
Artificial Intelligence (“AI”) follows the logic that if all attributes of learning and intelligence are accurately traced in depth, they can be simulated through a computer program. In other words, ‘what holds good for [Human Intelligence], also applies to AI’.1) AI is increasingly being used in the legal industry for various tasks, including practice management (e.g., Smokeball and Clio), conflicts management (e.g., Conflicts Manager), contract review and due diligence (e.g., ThoughtRiver and Leverton), legal assistance (e.g., Blue J L&E, KNOMOS, and Voicea), e-discovery review (e.g., EDR), and outcome prediction (e.g., Motions). At the outset, AI tools can be categorized into four levels based on their functional complexity.2)
First, simple AI tools used for accurate and efficient legal research (e.g., LexisNexis, DoNotPay, ExaMatch, and Ross Intelligence). Second, AI tools used for selecting suitable experts, counsel, and arbitrators (e.g., Arbitrator Intelligence and BillyBot). Third, AI tools that facilitate procedural automation by translating, transcribing, and summarizing evidence, and even drafting compilatory parts of legal documents and arbitral awards (e.g., Opus2, NDA, and Property Contract Tools). Fourth, AI tools used in the adjudication process itself (including the ‘tools of predictive justice’). The scope and limitations of AI at the first three levels are well defined. In fact, it is said that ‘80% of Top 10 firms have already established or begun piloting artificial intelligence solutions’.3)
This blog discusses the use of AI in arbitration (and the legalities thereof), the fear surrounding this so-called ‘disruptive technology’, and the resilience of arbitration against AI.
1. Understanding the Basics of AI
What differentiates AI from other automation and legal-tech tools is its ability to learn and evolve each time it is deployed. There are primarily two types of AI mechanisms: rule-based learning and machine learning. Unlike the former, which is ideal for static and slowly-changing scenarios, most AI tools today use machine learning, wherein the AI identifies patterns and varies its algorithm based on existing data and user feedback. A subset of machine learning is deep learning models (or artificial neural networks), which are inspired by the structure of the human brain. These models identify features without human intervention by learning from large volumes of pre-existing data, and are most effective for unstructured data. AI tools today often combine deep learning and natural language processing to perform tasks that require human intelligence and present the results in comprehensible form.4)
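The distinction between rule-based systems and machine learning described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical (the keywords, categories, and documents are invented): the rule-based classifier applies a fixed condition, while the ‘learning’ classifier adjusts per-word scores from labelled user feedback — the mechanism by which such tools evolve with each deployment.

```python
# Illustrative contrast between a fixed rule and a system that learns
# from feedback. All data and categories here are hypothetical.

def rule_based_classifier(text: str) -> str:
    """Static rule: flag any document that mentions 'arbitration clause'."""
    return "contract" if "arbitration clause" in text.lower() else "other"

class FeedbackLearner:
    """Tiny machine-learning sketch: per-word scores updated by feedback."""
    def __init__(self):
        self.weights = {}  # word -> score accumulated from user feedback

    def learn(self, text: str, label: str) -> None:
        delta = 1 if label == "contract" else -1
        for word in text.lower().split():
            self.weights[word] = self.weights.get(word, 0) + delta

    def classify(self, text: str) -> str:
        score = sum(self.weights.get(w, 0) for w in text.lower().split())
        return "contract" if score > 0 else "other"

learner = FeedbackLearner()
learner.learn("governing law and arbitration clause", "contract")
learner.learn("meeting agenda for monday", "other")
print(learner.classify("draft arbitration clause"))  # -> contract
```

The rule never changes, however many documents it sees; the learner's behaviour shifts with every labelled example — which is also why, as noted below, performance depends on the volume of training data available.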
Another subset is classical machine learning models, which use probabilities to make predictions, applying statistical methods to obtain the output. The expression ‘tools of predictive justice’ comes from predictive analytics, wherein historic and current facts are used to predict unknown future values or provide actionable insights. Amongst the most commonly used models of predictive analytics is the decision tree model, which determines a course of action, the possible outcomes of a decision, and the consequences thereof (e.g., TreeAge).
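The decision-tree model just mentioned can be sketched in a few lines of pure Python. The questions, thresholds, and outcomes below are invented for illustration and are not drawn from TreeAge or any real tool: each internal node tests a fact of the dispute, and each leaf holds a predicted course of action.

```python
# Minimal decision-tree evaluator. The tree is a toy example:
# internal nodes are (attribute, expected_value, subtree_if_match, subtree_otherwise);
# leaves are plain strings (the predicted course of action).

toy_tree = (
    "claim_amount_over_1m", True,
    ("strong_documentary_evidence", True,
     "proceed to arbitration", "attempt settlement"),
    "negotiate directly",
)

def predict(tree, facts: dict):
    """Walk the tree until a leaf (a string) is reached."""
    if isinstance(tree, str):  # leaf: a predicted outcome
        return tree
    attribute, value, if_match, otherwise = tree
    branch = if_match if facts.get(attribute) == value else otherwise
    return predict(branch, facts)

facts = {"claim_amount_over_1m": True, "strong_documentary_evidence": False}
print(predict(toy_tree, facts))  # -> attempt settlement
```

The appeal of such models for ‘predictive justice’ is visible even in this toy: every prediction can be traced back through the exact sequence of questions asked, which bears on the interpretability trade-off discussed next.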
Regardless of the model employed, there are two universal assumptions in regard to the use of AI: first, model performance will improve only with an increase in the data available for training and testing, and second, there will always be a trade-off between computational efficiency and interpretability, i.e., greater automation will require more reliance on data analytics, which inevitably implies less human intervention and less of the domain knowledge that experts contribute. Thus, AI developers will have to strike the right balance between the two, particularly in law. These assumptions are crucial in determining the resilience of arbitration against AI.
2. Using AI in International Arbitration
The first- and second-level AI tools are very beneficial to arbitration. For example, Arbitrator Intelligence produces AI Reports, generated by analysing data from awards and Arbitrator Intelligence Questionnaires, to accurately assess an arbitrator’s inclinations from their decision-making at different stages of past arbitrations. This, coupled with tracing the arbitrators’ relevant experience with various types of issues and disputes, provides a reliable resource for arbitrator selection. In fact, pre-selecting potential arbitrators based on subject-matter, required expertise, and other defined criteria can lead to better arbitrator selection and efficient resolution of the dispute. The procedural autonomy accorded to parties in arbitration also allows for the use of third-level AI tools (See UNCITRAL Model Law, arts. 19(1), 19(2)). Translation and transcription programs are widely used in arbitrations, drastically augmenting the efficiency of arbitration proceedings. Further, text-mining is used during document production to scan and process documents and assess their contents.5) This way, heavy volumes of evidence can be summarized by simple classification and clustering models. Similarly, the relevant excerpts of facts, common and disputed positions of the parties, and procedural history can be inserted to assist in drafting the award with the help of AI tools.6) The use of AI tools in this manner can, perhaps, go a long way in resolving the problem of ‘user disappointment’ caused by lengthy and costly proceedings, the judicialisation of arbitration, and inflexible formalities.7) However, as Prof. Dr Scherer puts it – ‘[t]ech-savvy arbitrators are as rare as vegan butchers’.
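To make the reference to ‘simple classification and clustering models’ concrete, the sketch below groups short document snippets by word overlap (Jaccard similarity). The snippets, the similarity threshold, and the greedy approach are illustrative assumptions, not the method of any tool named above.

```python
# Greedy clustering of documents by Jaccard word overlap.
# The documents and the 0.3 threshold are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Overlap between two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(docs, threshold=0.3):
    """Assign each document to the first cluster it resembles, else start a new one."""
    clusters = []  # each cluster: list of (index, word_set); first entry is the seed
    for i, doc in enumerate(docs):
        words = set(doc.lower().split())
        for members in clusters:
            if jaccard(words, members[0][1]) >= threshold:
                members.append((i, words))
                break
        else:
            clusters.append([(i, words)])
    return [[i for i, _ in members] for members in clusters]

docs = [
    "invoice payment overdue penalty interest",
    "payment invoice overdue reminder",
    "site inspection report structural defects",
    "inspection report defects concrete",
]
print(cluster(docs))  # -> [[0, 1], [2, 3]]
```

Once grouped this way, a reviewer (or a summarisation model) can treat each cluster as one topic rather than reading every document in isolation — the essence of summarizing heavy volumes of evidence by clustering.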
The use of AI tools in the decision-making process can become problematic when technology is allowed to ‘interfere excessively with the adjudication process’.8) The US courts have even used AI tools in criminal proceedings – predictive analytics has been used to determine bail and parole cases.9) While arbitration only concerns private rights, there are still certain mandatory requirements in place – both procedural and formal – to protect the interests of the parties and the integrity of arbitration. Using AI tools in adjudication risks violating due process rights and the public policy of the seat. Thus, AI’s use in the adjudication process should be very limited. It can be used to research and summarise the law, process and analyse the parties’ submissions, and cross-check the tribunal’s decision against that of the AI.10) AI tools must only be deployed with both parties’ consent and appropriate protocols in place, and without causing any disadvantage (resulting in unequal treatment) to either party in the arbitration.
One of the areas where AI has made a huge impact is e-discovery, wherein AI tools based on predictive coding are employed for efficient document production and review. In Pyrrho Investments Ltd. v. MWB Property Ltd.,11) predictive coding for e-discovery was allowed for the first time in the U.K. This involved sorting documents according to their relevance, determined against parameters and criteria set in the protocol agreed by the parties (e.g., the TeCSA/SCL/TECBAR eDisclosure Protocol and CIArb’s eDisclosure Protocol), thereby narrowing down from millions of documents. The court observed that the cost of the technology must be proportionate and that the final determination must be made on a case-by-case basis.12)
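Predictive coding, in broad strokes, works by training a model on a human-reviewed ‘seed set’ and then ranking the remaining corpus by predicted relevance for prioritised review. The sketch below is a deliberately simplified, hypothetical illustration of that loop (invented documents, naive word weights); real predictive-coding systems use far richer features and iterative review rounds.

```python
# Toy predictive-coding loop: learn word weights from a human-reviewed
# seed set, then rank unreviewed documents by predicted relevance.
# Documents here are invented for illustration.

from collections import defaultdict

def train(seed_set):
    """seed_set: list of (text, is_relevant) pairs reviewed by humans."""
    weights = defaultdict(int)
    for text, is_relevant in seed_set:
        for word in text.lower().split():
            weights[word] += 1 if is_relevant else -1
    return weights

def rank(weights, docs):
    """Return docs sorted by descending predicted relevance."""
    def score(doc):
        return sum(weights[w] for w in doc.lower().split())
    return sorted(docs, key=score, reverse=True)

seed = [
    ("delay damages liquidated clause breach", True),
    ("breach of contract delay notice", True),
    ("canteen menu for the week", False),
]
corpus = [
    "weekly canteen menu update",
    "notice of breach and delay damages",
]
ranked = rank(train(seed), corpus)
print(ranked[0])  # -> notice of breach and delay damages
```

The narrowing effect seen in Pyrrho comes from reviewing documents in ranked order and stopping once relevance tails off, rather than reading all of them — subject, as the court noted, to proportionality and the parties’ agreed protocol.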
3. AI as ‘Most Disruptive Technology’: Myth or Reality?
Despite its benefits, AI is notoriously known as the ‘most disruptive technology’ in the sector, the effects whereof will be seen over the next decade. This resistance has its roots in the lingering fear that AI will take our jobs. It is speculated that in the coming years, AI will perform the tasks of paralegals and associates.13) The credibility of this assertion cannot be ascertained, but nor can it be ruled out in toto. However, as J. Goodman puts it – ‘[y]ou may or may not like your mother-in-law, but she’s going to have an influence on your relationship one way or another’.14) The case of AI in arbitration is no different.
In order to address this fear, we must note that AI necessarily requires a large data set and user feedback (as discussed earlier). This is particularly relevant in the context of arbitration, as most documents are confidential and exist in much smaller data sets than in other practice areas. The multiplicity of laws and diverse practice areas further limits the scope of training and testing. Further, there is no system of precedent in arbitration, and cases are decided on the individual circumstances of each case. Another relevant consideration is that the practical understanding and domain knowledge of experts in specialised fields cannot be fully substituted, as AI tools process information in a manner closer to inductive than deductive reasoning.15) All of these factors make it extremely difficult for AI to mimic many aspects of arbitration.
Thus, we can safely conclude that arbitration has proven to be particularly resilient against AI. While AI has automated many processes to the benefit of practitioners, it is not enough to replace junior associates. As Hugh Carlson puts it, the disruption would arrive when one could say: ‘Alexa, prepare for me three to four paragraphs explaining why cash flow is an inappropriate valuation methodology in this case and send me highlighted PDFs of the authorities upon which you rely’, and the AI would reliably complete such a task.16)
In recent times, the possibility of AI arbitrators (or machine arbitrators) replacing human arbitrators has also been widely discussed. In brief, legal decision-making requires cognitive and emotional capabilities that AI does not possess (and, perhaps, never will). Nevertheless, assuming such a possibility for the sake of argument, current laws across the globe are centred on natural persons and do not allow for it.17) Additionally, parties prefer to understand why arbitrators arrived at a decision, a requirement that is rarely done away with. AI cannot satisfy this requirement of giving reasons for the award, as it is better suited to providing a binary response based on probabilistic inference, which would lack legitimacy.18) As a result, it may obscure many controversies under the guise of objective analysis.19) Thus, reasoning will always be inherently and uniquely a human task.20) Further, the limits of AI dictate that datasets will often include only selective information; algorithms may be based on discriminatory assumptions; and AI tools, though well-designed, can be used in a dysfunctional manner.21) Some commentators believe that AI can deliver absolute independence and impartiality. However, data bias would prove a far greater problem than arbitrator bias, as the latter can be inferred and the arbitrator challenged. For example, if most arbitrators inherently favour investors in investor-state arbitrations, so will the AI. Similarly, data bias may even produce racist or gender-biased machine arbitrators.
4. Concluding Remarks
We must keep in mind that AI is not magic, just glorified statistics. The attempt to automate law (particularly its time-consuming and labour-intensive processes) has been ongoing for decades. So far, it has only been successful in performing bespoke legal tasks and aiding practitioners. AI has revolutionised many processes, like e-discovery, and greatly improved procedural efficiency. Thus, practitioners and law firms should adapt to these changes to increase their efficiency. At the same time, technological interference in adjudication should be very limited, as anything more would do more harm than good. This makes it pertinent to be apprised of AI’s many limitations. AI cannot substitute human practitioners and arbitrators. The much-discussed concept of the machine arbitrator, which follows from a contrary belief, is merely fiction. The day technology allows decision-making by machine arbitrators, we can safely assume it to be the opening of the seventh seal.
This article was first published on the Kluwer Arbitration Blog, available here.