The U.S. Department of the Treasury has introduced two new resources to help guide the use of artificial intelligence in the financial sector. The releases—a shared Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF)—are intended to support the President’s AI Action Plan, which emphasizes clear standards, a shared understanding, and risk-based governance for responsible deployment of AI.
“Implementing the President’s AI Action Plan requires more than aspirational statements; it requires practical resources that institutions can use,” said Derek Theurer, who is performing the duties of Deputy Secretary of the Treasury. “By establishing a common language for AI and a tailored framework for managing AI risks in financial services, these deliverables help protect consumers while supporting responsible innovation.”
The two resources were developed in collaboration with the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council’s Artificial Intelligence Executive Oversight Group (AIEOG). They are intended to strengthen common terminology and risk management practices for artificial intelligence, supporting faster adoption of AI in financial services while improving cybersecurity and operational resilience.
Increased reliance on artificial intelligence by financial institutions has led to challenges due to inconsistent terminology and varying approaches to risk management. The newly released AI Lexicon sets out standardized definitions for key concepts, capabilities, and risk categories within AI. This standardization seeks to improve communication among regulators, technology providers, legal teams, and business leaders.
“Clear terminology and pragmatic risk management are essential to accelerating AI adoption in financial services,” said Paras Malik, Chief Artificial Intelligence Officer at the U.S. Department of the Treasury. “These resources are designed to help institutions move faster with AI by reducing uncertainty and supporting consistent, scalable implementation.”
The FS AI RMF adapts existing NIST guidelines specifically for financial services environments. It provides practical tools for evaluating potential uses of artificial intelligence within an institution, managing associated risks throughout an AI system's lifecycle, and ensuring transparency and accountability in decision-making processes involving such technologies.
"In an era where AI is rapidly reshaping financial services, ensuring security and building trust are paramount. The FS AI RMF not only aligns closely with NIST standards but also offers practical, scalable guidance tailored to the varying stages of AI adoption," said Josh Magri, CEO of the Cyber Risk Institute. "It's an essential resource for community and multinational institutions alike, empowering them to effectively manage AI risks while driving growth and innovation."
Both documents form part of a broader set of AIEOG initiatives addressing issues such as identity verification, fraud prevention, algorithmic explainability, and data best practices. These efforts reflect ongoing public-private cooperation aimed at strengthening trust and accountability as deployments of artificial intelligence expand.
Treasury officials indicated they will continue collaborating with federal and state regulators, as well as industry representatives, as part of ongoing work under the President’s AI Action Plan to promote the safe expansion of artificial intelligence in U.S. finance.
