U.S. Treasury Releases AI Risk Management Tools for Financial Institutions
March 3, 2026
By: David A. Bowen
The U.S. Department of the Treasury has released two new artificial intelligence (“AI”) resources intended to help banks and other financial institutions adopt AI more securely and consistently: (1) an Artificial Intelligence Lexicon, and (2) a Financial Services AI Risk Management Framework (“FS AI RMF”). These tools are the first deliverables in a broader six-part Treasury initiative focused on AI cybersecurity, governance, and operational resilience for the financial sector.
These resources were developed through the Artificial Intelligence Executive Oversight Group, a public-private partnership led by the U.S. Department of the Treasury in coordination with the Financial Services Sector Coordinating Council and the Financial and Banking Information Infrastructure Committee. The initiative was launched in alignment with the President’s AI Action Plan, which calls for clear standards, collective understanding, and risk-based governance to ensure the safe and responsible deployment of artificial intelligence.
While the Treasury describes these publications as practical guidance rather than strict requirements, they are likely to shape regulator expectations, examinations, and industry practices as AI becomes more widely used in core banking, fraud prevention, cybersecurity, and customer-facing functions.
The Artificial Intelligence Lexicon
The AI Lexicon was created to establish a unified terminology for effective communication and collaboration concerning AI-related issues within the financial sector. This document consolidates widely recognized risk management and technical terminology, with definitions relevant to the implementation of AI within financial services. The definitions are sourced from various standards, academic literature, and government publications, and are informed by contributions from financial institutions, AI service providers, and public-sector entities. The Lexicon will be updated as advancements occur in AI technology.
The Financial Services AI Risk Management Framework (FS AI RMF)
The FS AI RMF aligns with and adapts NIST’s AI Risk Management Framework (“NIST AI RMF”) for financial institutions. Unlike the NIST AI RMF, the FS AI RMF is operational rather than purely principles-based. The accompanying toolkit provides resources to support practical implementation, including an AI adoption questionnaire, a risk and control matrix of approximately 230 control objectives, a user guidebook, and a reference guide with illustrative examples.
The FS AI RMF addresses key risk themes such as AI lifecycle governance, data quality and provenance, third-party and vendor AI risk, cybersecurity and adversarial threats, and human oversight of automated systems. The controls are structured to scale based on an institution’s size, complexity, and level of AI adoption.
Practical Implications
Even as non-binding guidance, these resources are likely to become reference points for regulators, auditors, boards, and risk committees. Institutions deploying AI in higher-risk use cases (e.g., credit decisioning, fraud detection, identity verification, customer communications, or cybersecurity) should expect increased regulatory scrutiny of AI risk management governance and controls.
Financial institutions should begin taking steps to align with these resources, including:
- Conducting a gap assessment comparing existing AI governance and controls to the FS AI RMF;
- Developing an enterprise AI inventory (or updating an existing inventory) covering all AI technology in use across the institution; and
- Refreshing third-party due diligence and contracting standards to reflect AI-specific risks, controls, and audit expectations.
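For institutions starting an enterprise AI inventory, the record below is a minimal illustrative sketch of the kind of information worth capturing per system. The field names and risk tiers are assumptions for illustration only; they are not terms defined by the Treasury Lexicon or the FS AI RMF, and each institution should map fields to its own risk taxonomy and the FS AI RMF control matrix.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative AI inventory record. All field names and example values are
# hypothetical; align them with your institution's own taxonomy and the
# FS AI RMF control objectives.
@dataclass
class AIInventoryEntry:
    system_name: str
    use_case: str                   # e.g., "fraud detection", "credit decisioning"
    risk_tier: str                  # institution-defined, e.g., "high", "medium", "low"
    vendor: Optional[str] = None    # None for internally developed systems
    data_sources: list = field(default_factory=list)
    human_oversight: bool = True    # is a human reviewer in the decision loop?

entry = AIInventoryEntry(
    system_name="TxnGuard",         # hypothetical example system
    use_case="fraud detection",
    risk_tier="high",
    vendor="Example AI Vendor, Inc.",
    data_sources=["core transaction feed"],
)

# Flag higher-risk, vendor-supplied systems for enhanced third-party due diligence.
needs_enhanced_review = entry.risk_tier == "high" and entry.vendor is not None
print(needs_enhanced_review)
```

A structured inventory of this kind also makes the gap assessment above more tractable, since each system can be checked against the FS AI RMF control matrix on a per-entry basis.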
To receive alerts when the Treasury releases additional AI resources, subscribe to Krieg DeVault Financial Institutions Insights. For assistance or guidance on applying these tools to AI governance, vendor contracting, and examination readiness, please contact David A. Bowen or any member of Krieg DeVault’s Financial Services practice group.
Disclaimer: The contents of this article should not be construed as legal advice or a legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult with counsel concerning your situation and specific legal questions you may have.