International arbitration has long been valued for its flexibility, neutrality, and adaptability. In recent years, however, the emergence of artificial intelligence (AI) has introduced a new set of opportunities and challenges that are likely to reshape arbitral practice. Unlike earlier waves of technological change, AI has a particularly pervasive impact: it is capable of touching almost every stage of the arbitral lifecycle, from pre-dispute planning and arbitrator selection to evidentiary and document review, hearings, award drafting, and enforcement.
AI in the Context of International Arbitration
AI in arbitration may be grouped into several broad categories:
• Language and speech technologies: real-time transcription, machine translation, speech analytics, and voice synthesis.
• Document and data analysis: technology-assisted review (TAR), document clustering, contract analytics, and predictive search.
• Reasoning and drafting support: summarisation, brief drafting, case law synthesis, and award-structuring tools.
• Forensics and authenticity: detection of manipulated evidence, such as deepfakes, and analysis of metadata.
• Decision support and analytics: outcome prediction, damages modelling, and arbitrator selection analytics.
Each class of AI raises distinct questions regarding admissibility, transparency, fairness, and due process, all of which are central to the credibility, integrity, and legitimacy of arbitral proceedings.
Impact Across the Arbitral Lifecycle
Pre-dispute Planning and Arbitrator Selection
AI is increasingly shaping arbitration before disputes even arise. Contract drafters now anticipate AI-related risks by including specific provisions in arbitration clauses; for example, restrictions on uploading confidential information to public AI systems, or agreement on translation protocols.
In arbitrator selection, AI-driven analytics tools reveal past decision-making patterns, areas of expertise, and potential conflicts. These tools broaden candidate pools and might assist in promoting diversity. However, there is also a danger of over-reliance on statistical patterns, creating feedback loops that favour “safe” or well-documented profiles while sidelining lesser-known but equally qualified candidates. The key challenge going forward will be ensuring that arbitrator appointments remain a human decision, informed (but not dictated) by algorithms.
Pleadings and Written Submissions
AI tools assist counsel in drafting, citation-checking, and issue-spotting, leading to faster production of submissions. However, they also raise the very real risk of ‘hallucinations’, in which non-existent cases or inaccurate authorities are cited. If not carefully verified, such errors may undermine the credibility of submissions and result in disciplinary or costs sanctions.
Tribunals may need to implement integrity protocols requiring parties to certify that authorities cited have been human-verified and that any AI-generated drafting has been carefully reviewed. In short, efficiency gains must not come at the expense of accuracy and reliability.
Evidence and Document Production
One of the most transformative effects of AI is in document review. TAR and clustering tools can reduce costs and streamline discovery, especially in multilingual disputes. But new problems may arise:
• Privilege risks: Uploading confidential or privileged documents into public AI systems may inadvertently waive privilege or breach confidentiality obligations.
• Authenticity concerns: The rise of deepfakes means tribunals must adopt more robust standards for authenticating video, audio, and photographic evidence.
Best practices include adopting AI evidence protocols that require disclosure of the tool used, validation steps, and an auditable chain of custody. Tribunals should also anticipate the need for forensic experts to test the reliability of AI-processed evidence.
Although AI can accelerate document production, it can also magnify risks of privilege breaches and fabricated evidence.
Hearings
AI is already embedded in arbitral hearings through transcription and machine translation. While these tools enhance accessibility, they introduce risks of misinterpretation that may unfairly affect witness credibility. More troubling is the possibility of covert AI assistance during testimony, such as a witness receiving real-time AI-generated prompts.
Tribunals should consider addressing these risks in their procedural orders by:
• approving specific transcription and translation tools,
• prohibiting generative assistance during testimony, and
• ensuring technological parity so that neither party has an unfair advantage.
Going forward, procedural fairness is likely to require careful management of AI use during hearings.
Deliberations and Award Drafting
AI may certainly help arbitrators structure factual chronologies or verify consistency within an award. However, using AI in deliberations themselves raises two fundamental risks:
• Breach of confidentiality: Uploading draft awards to external AI systems may compromise deliberation secrecy.
• Improper delegation: If arbitrators rely on ‘opaque’ algorithms to decide questions of fact or law, the award may be vulnerable to challenge under the New York Convention.
The appropriate role for AI should therefore be limited to clerical or stylistic support, with substantive determinations reserved for the tribunal. Arbitrators must ensure that their awards are demonstrably the product of human reasoning. AI should assist, but never replace, the tribunal’s independent judgment.
Post-Award Challenges and Enforcement
AI use in arbitration could foreseeably feature prominently in set-aside and enforcement proceedings. Parties may challenge awards on the grounds that undisclosed reliance on AI deprived them of due process (New York Convention, Article V(1)(b)) or that the award violates public policy (Article V(2)(b)).
Tribunals should mitigate such risks by keeping sealed records of any AI assistance used in drafting, with that assistance limited to clerical tasks. This approach allows them to rebut speculative challenges without breaching deliberation secrecy.
Regulatory and Ethical Considerations
AI use introduces several cross-border regulatory tensions:
• Data protection: Rules such as the EU’s GDPR, China’s PIPL, and Brazil’s LGPD complicate the use of AI platforms that transfer or store personal data abroad.
• Confidentiality: Many consumer AI systems retain and train on user data, which conflicts with arbitration confidentiality obligations.
• Export controls and sanctions: Some AI technologies are subject to restrictions, which may impact their use depending on the seat of arbitration.
• Professional duties: Counsel must exercise competence and candour when using AI. Submitting unverified AI-generated content may breach professional ethics.
Regulatory compliance and ethical oversight are essential to safeguard the legitimacy of arbitration.
Costs, Time, and Environmental Impact
AI can reduce costs by streamlining document review and shortening timelines, but it can also generate inefficiencies if used inappropriately. For example, hallucinated citations may necessitate costly corrections.
From an environmental perspective, AI may reduce travel by enabling remote hearings, though large-scale computation carries its own carbon footprint. It is likely that, in the future, tribunals will increasingly scrutinise whether parties’ AI-related expenditures are proportionate and recoverable as costs of arbitration.
AI can undoubtedly make arbitration faster and cheaper if deployed responsibly, but careless use can equally have the opposite effect.
Snapshot of Strategic Opportunities and Risks
Opportunities:
• More accurate multilingual proceedings through AI translation.
• Faster and more efficient document review.
• Enhanced damages modelling and tribunal analytics.
• Broader and more diverse arbitrator lists.
Risks:
• Hallucinated citations and unreliable outputs.
• Privilege waivers from inappropriate AI use.
• Undisclosed reliance on AI during testimony or deliberations.
• Awards undermined by improper delegation to AI systems.
Some Recommendations for Good Practice
To integrate AI responsibly into international arbitration, tribunals and parties should adopt the following measures:
• Include explicit AI provisions in Procedural Order No. 1, covering use, disclosure, authentication, and sanctions.
• Require the use of enterprise-grade AI tools that do not train on confidential inputs.
• Approve common translation and transcription platforms to ensure parity.
• Mandate disclosure of method statements and validation for AI-processed evidence.
• Establish forensic protocols for ‘deepfake’ detection.
• Ensure that all substantive decisions remain with the tribunal.
• Maintain audit trails of AI usage for accountability.
• Allocate costs proportionately, rewarding efficient use and penalising misuse.
• Safeguard deliberation secrecy by prohibiting external AI in award drafting.
• Prepare enforcement-ready records to counter challenges under the New York Convention.