Silicon Valley Arbitration & Mediation Center Issues AI Guidelines
May 2, 2024, 6:07 PM
On April 30, 2024, the Silicon Valley Arbitration & Mediation Center (SVAMC) published the first edition of its Guidelines on the Use of AI in Arbitration, which “shall apply when and to the extent that the parties have so agreed and/or following a decision by an arbitral tribunal or an arbitral institution to adopt these Guidelines.”
The Guidelines provide that all participants in an arbitration are responsible for familiarizing themselves with the AI tools they use and those tools’ intended uses, and for making reasonable efforts to understand relevant limitations, biases, and risks, including ensuring that any use of AI tools is consistent with confidentiality obligations. Disclosure of the use of AI tools is not required as a general matter; decisions regarding disclosure are to be made on a case-by-case basis.
Parties and their representatives are directed to observe all applicable ethical rules and professional standards when using AI tools and to refrain from using AI in ways that would affect the integrity of or otherwise disrupt arbitration proceedings, including falsifying or compromising the authenticity of evidence or misleading the arbitral tribunal or opposing parties.
For arbitrators, the Guidelines emphasize that no part of the decision-making process should be delegated to any AI tool, including the analysis of the facts, law, and evidence, and that arbitrators shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties. In deciding how to address submissions containing AI-induced errors or inaccuracies, the tribunal may consider whether an error is legitimately inadvertent or inconsequential, or whether it would compromise the integrity of the proceedings. Arbitrators have a duty to disclose any reliance on AI-generated outputs outside the record that influences their understanding of the case and to allow the parties an opportunity to comment on any AI-generated outputs that are used, subject to the acknowledgment that disclosure requirements may vary depending on the specific AI application.
The Guidelines further indicate that there is no single definition of AI, and the definition adopted is meant to be broad enough to encompass both existing and future foreseeable types of AI but not encompass every type of computer-assisted automation tool.
Any questions or suggestions regarding the Guidelines are directed to AITaskForce@svamc.org.
The publication of these general principles for the use of AI is a fitting tribute to SVAMC's tenth anniversary and its collective industriousness and dedication to promoting fairness, efficiency, and transparency in arbitral proceedings.
