Silicon Valley Arbitration & Mediation Center Issues AI Guidelines
May 2, 2024, 6:07 PM
On April 30, 2024, the Silicon Valley Arbitration and Mediation Center published the first edition of its Guidelines on the Use of AI in Arbitration, which “shall apply when and to the extent that the parties have so agreed and/or following a decision by an arbitral tribunal or an arbitral institution to adopt these Guidelines.”
The Guidelines provide that all participants in an arbitration are responsible for familiarizing themselves with AI tools and their intended uses and for making reasonable efforts to understand relevant limitations, biases, and risks, including ensuring that any use of AI tools is consistent with confidentiality obligations. Disclosure of the use of AI tools is not required as a general matter; decisions regarding disclosure are to be made on a case-by-case basis.
Parties and their representatives are directed to observe all applicable ethical rules and professional standards when using AI tools and to refrain from using AI in ways that would affect the integrity of or otherwise disrupt arbitration proceedings, including falsifying or compromising the authenticity of evidence or misleading the arbitral tribunal or opposing parties.
For arbitrators, the Guidelines emphasize that no part of the decision-making process should be delegated to any AI tool, including analysis of the facts, law, and evidence, and that arbitrators shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties. In deciding how to address submissions containing AI-induced errors or inaccuracies, the tribunal may consider whether an error is legitimately inadvertent or inconsequential or whether it would compromise the integrity of the proceedings. Arbitrators have a duty to disclose any reliance on AI-generated outputs outside the record that influences their understanding of the case and to allow the parties an opportunity to comment on any AI-generated outputs that are used, subject to the acknowledgment that disclosure requirements may vary depending on the specific AI application.
The Guidelines further indicate that there is no single definition of AI, and the definition adopted is meant to be broad enough to encompass both existing and future foreseeable types of AI but not encompass every type of computer-assisted automation tool.
Any questions or suggestions regarding the Guidelines are directed to AITaskForce@svamc.org.
The publication of these general principles for the use of AI is a fitting tribute to SVAMC's tenth anniversary and its collective industriousness and dedication to promoting fairness, efficiency, and transparency in arbitral proceedings.