ARTIFICIAL INTELLIGENCE
Use and Oversight in Financial Services
Report to Congressional Committees
United States Government Accountability Office
For more information, contact Michael E. Clements, clementsm@gao.gov.
Highlights of GAO‑25‑107197, a report to congressional committees
Why GAO Did This Study
AI generally entails machines doing tasks previously thought to require human intelligence. Its use in financial services has increased in recent years, driven by more advanced algorithms, increased data availability, and other factors. Federal financial regulators have also begun using AI tools to oversee regulated entities and financial markets.
The Dodd-Frank Wall Street Reform and Consumer Protection Act includes a provision for GAO to annually report on financial services regulations. This report reviews (1) the benefits and risks of AI use in financial services, (2) federal financial regulators’ oversight of AI use in financial services, and (3) the regulators’ AI use in their supervisory and market oversight activities. GAO reviewed studies by federal agencies, academics, industry, and other groups; examined documentation and guidance from federal financial regulators; and interviewed regulators, consumer and industry groups, researchers, financial institutions, and technology providers.
What GAO Recommends
GAO reiterates its 2015 recommendation that Congress consider granting NCUA authority to examine technology service providers for credit unions. GAO also recommends that NCUA update its model risk management guidance to encompass a broader variety of models used by credit unions. NCUA generally agreed with the recommendation.
What GAO Found
Financial institutions’ use of artificial intelligence (AI) presents both benefits and risks. AI is being applied in areas such as automated trading, credit decisions, and customer service (see figure). Benefits can include improved efficiency, reduced costs, and enhanced customer experience—such as more affordable personalized investment advice. However, AI also poses risks, including potentially biased lending decisions, data quality issues, privacy concerns, and new cybersecurity threats.
Federal financial regulators primarily oversee AI using existing laws, regulations, guidance, and risk-based examinations. However, some regulators have issued AI-specific guidance, such as on AI use in lending, or conducted AI-focused examinations. Regulators told GAO they continue to assess AI risks and may refine guidance and update regulations to address emerging vulnerabilities.
Unlike the other banking regulators, the National Credit Union Administration (NCUA) does not have two key tools that could aid its oversight of credit unions’ AI use. First, its model risk management guidance is limited in scope and does not give its staff or credit unions sufficient detail on how credit unions should manage model risks, including risks from AI models. Developing guidance that is more detailed and covers more model types would strengthen NCUA’s ability to address credit unions’ AI-related risks.
Second, NCUA lacks the authority to examine technology service providers, despite credit unions’ increasing reliance on them for AI-driven services. GAO previously recommended that Congress consider granting NCUA this authority (GAO-15-509), but as of February 2025, Congress had not yet done so. Such authority would enhance NCUA’s ability to monitor and mitigate third-party risks, including those associated with AI service providers.
The federal financial regulators are increasingly integrating AI into their general agency operations and supervisory and market oversight activities, with usage varying across agencies. The regulators use AI to identify risks, support research, and detect potential legal violations, reporting errors, or outliers. Most regulators told GAO that AI outputs inform staff decisions but are not used as sole decision-making sources.
Abbreviations
AI: artificial intelligence
CFPB: Consumer Financial Protection Bureau
CFTC: Commodity Futures Trading Commission
FDIC: Federal Deposit Insurance Corporation
Federal Reserve: Board of Governors of the Federal Reserve System
FINRA: Financial Industry Regulatory Authority
FSOC: Financial Stability Oversight Council
NCUA: National Credit Union Administration
NIST: National Institute of Standards and Technology
OCC: Office of the Comptroller of the Currency
OMB: Office of Management and Budget
SEC: Securities and Exchange Commission
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
May 19, 2025
Congressional Committees
Since the launch of the artificial intelligence (AI) application ChatGPT in November 2022, interest has grown in AI’s potential impact on various industries, including financial services.[1] AI broadly refers to computer systems that are capable of solving problems and performing tasks that traditionally required human intelligence, with the ability to continually improve at these tasks.[2] Advances in algorithms, the availability of larger volumes of data, and improvements in data storage and processing have led to increased use of AI, including in financial services. In addition, federal financial regulators have begun using AI tools to oversee regulated entities and financial markets.
In light of these developments, Members of Congress and other stakeholders have raised questions about the use of AI by financial service institutions and federal financial regulators. Section 1573(a) of the Department of Defense and Full-Year Continuing Appropriations Act, 2011, amended the Dodd-Frank Wall Street Reform and Consumer Protection Act to include a provision for us to annually review financial services regulations, including the impact of regulation on the financial marketplace.[3] This report examines (1) the potential benefits and risks of AI use in financial services, (2) federal financial regulators’ oversight of AI use in the financial services industry, and (3) federal financial regulators’ use of AI in their supervisory and market oversight activities.
For the first objective, we analyzed reports and studies from federal agencies, industry groups, international nongovernment organizations, and research and academic institutes.[4] We also interviewed representatives of seven federal financial regulators; six industry groups representing banks, credit unions, financial technology companies, and securities and derivatives market participants; three consumer and investor advocacy organizations; five research and consulting groups; four depository institutions; and three technology providers.[5] We selected interviewees that could provide perspectives on AI in banking or in securities or derivatives markets. The information gathered from these interviews cannot be generalized to all companies that develop or use AI in financial services.
For the second objective, we reviewed laws, regulations, guidance, and other agency documentation relevant to the oversight of financial institutions’ use of AI.[6] We assessed model risk management guidance issued by the prudential regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), National Credit Union Administration (NCUA), and Office of the Comptroller of the Currency (OCC)—for general alignment with leading practices in the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. In doing so, we focused on the framework’s recommended practices for overseeing regulated entities’ design, development, deployment, and monitoring of AI systems.[7] In addition, we assessed NCUA’s model risk management guidance against the strategic goals and objectives in its 2022–2026 strategic plan. We also reviewed documentation from the federal financial regulators on AI-related training efforts, AI-related committees, and other initiatives.
For the third objective, we reviewed laws, executive orders, agency reports, strategic plans, policies and procedures, and other agency documents related to the federal financial regulators’ current and planned uses of AI.[8] We also reviewed federal financial regulators’ inventories of AI uses and interviewed officials from the seven federal financial regulators regarding use of AI in supervisory activities. For more detailed information about our scope and methodology, see appendix I.
We conducted this performance audit from November 2023 to May 2025 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Background
Definitions of AI
The scientific community and industry have yet to agree on a common definition for AI, and definitions vary within the government.[9] However, the definitions generally entail machines doing tasks previously thought to require human intelligence. AI technologies and uses vary substantially, and there is not always a clear difference between AI and more traditional quantitative modeling. For instance, some regression analysis techniques that have been used for decades could arguably be considered a form of AI.[10]
Two terms are particularly relevant for describing AI used in the financial services industry:
· Machine learning programs automatically improve their performance at a task through experience, without relying on explicit rules-based programming.[11] Machine learning is the basis for natural language processing and computer vision, which help machines understand language and images, respectively. Examples of natural language processing include translation apps and personal assistants on smartphones. Computer vision includes algorithms and techniques to classify or understand image or video content. (For a simple illustration of learning from experience, see the sketch following this list.)
· Generative AI is a type of machine learning that can create content such as text, images, audio, or video when prompted by a user. It differs from other AI systems in its ability to create novel content, its reliance on vast amounts of training data, and the greater size and complexity of its models. Content created through generative AI is based on data, often text and images sourced from the internet at large.[12]
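To make the machine learning definition above concrete, the following minimal Python sketch shows a classifier improving through experience: its accuracy on held-out examples rises as it trains on more labeled cases, with no hand-written rules. The data are synthetic and hypothetical, not drawn from any financial data set.

# Minimal sketch: performance improves with more "experience" (training data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 1500):  # progressively larger training samples
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n} examples: accuracy {model.score(X_test, y_test):.3f}")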
Characteristics of Trustworthy AI
NIST’s AI Risk Management Framework describes leading practices for organizations that design, develop, deploy, or use AI systems.[13] It also identifies key characteristics intended to help manage the risks of AI while promoting trustworthy and responsible AI development and use (see fig. 1).
According to NIST, addressing these characteristics individually will not ensure AI system trustworthiness. Trade-offs are usually involved, as not all characteristics apply equally to every context and some will be more or less important depending on the situation. When managing AI risks, organizations can face difficult decisions in balancing these characteristics. For example, organizations may need to balance privacy against predictive accuracy or fairness in certain scenarios. Approaches that enhance trustworthiness can reduce risks, according to NIST, and neglecting these characteristics can increase the likelihood and magnitude of negative consequences.
Federal Oversight of Financial Institutions
Multiple federal regulators oversee financial institutions based on their individual charters or activities. Table 1 explains the basic oversight functions of the federal regulators within the scope of this review.
Regulator: Board of Governors of the Federal Reserve System
Responsibilities: Safety and soundness, consumer financial protection, and financial stability
Supervised entities and oversight function: Supervises state-chartered banks that opt to be members of the Federal Reserve System; bank and thrift holding companies and the nondepository institution subsidiaries of these institutions; and nonbank financial companies and financial market utilities designated as systemically important by the Financial Stability Oversight Council for consolidated supervision and enhanced prudential standards. Supervises state-licensed branches and agencies of foreign banks and regulates the U.S. nonbanking activities of foreign banking organizations.

Regulator: Federal Deposit Insurance Corporation
Responsibilities: Safety and soundness, consumer financial protection, and financial stability
Supervised entities and oversight function: Insures the deposits of all banks and thrifts approved for federal deposit insurance; supervises insured state-chartered banks that are not members of the Federal Reserve System, as well as insured state savings associations and insured state-chartered branches of foreign banks; resolves all failed insured banks and thrifts; and may be appointed to resolve large bank holding companies and nonbank financial companies supervised by the Federal Reserve. Has backup supervisory responsibility for all federally insured depository institutions.

Regulator: National Credit Union Administration
Responsibilities: Safety and soundness and consumer financial protection
Supervised entities and oversight function: Charters and supervises federally chartered credit unions and insures savings in federal and most state-chartered credit unions.

Regulator: Office of the Comptroller of the Currency
Responsibilities: Safety and soundness and consumer financial protection
Supervised entities and oversight function: Charters and supervises national banks, federal savings associations, and federal branches and agencies of foreign banks.

Regulator: Consumer Financial Protection Bureau
Responsibilities: Consumer financial protection
Supervised entities and oversight function: Regulates the offering and provision of consumer financial products or services under the federal consumer financial laws. Has exclusive examination authority as well as primary enforcement authority for the federal consumer financial laws for insured depository institutions with over $10 billion in assets and their affiliates. Supervises certain nondepository financial entities and their service providers and enforces the federal consumer financial laws. Enforces prohibitions on unfair, deceptive, or abusive acts or practices and other requirements of the federal consumer financial laws for persons under its jurisdiction.

Regulator: Commodity Futures Trading Commission
Responsibilities: Derivatives industry
Supervised entities and oversight function: Regulates derivatives markets and seeks to protect market users and the public from fraud, manipulation, abusive practices, and systemic risk related to derivatives subject to the Commodity Exchange Act. Also seeks to foster open, competitive, and financially sound futures markets.

Regulator: Securities and Exchange Commission
Responsibilities: Securities industry
Supervised entities and oversight function: Regulates securities markets, including offers and sales of securities and regulation of securities activities of certain participants such as securities exchanges; broker-dealers; investment companies; clearing agencies; transfer agents; and certain investment advisers and municipal advisors. Oversees the Financial Industry Regulatory Authority (FINRA). FINRA seeks to promote investor protection and market integrity by developing rules, examining securities firms for compliance, and taking actions against violators. Approves self-regulatory organizations’ rules that govern the conduct of their members.
Source: GAO analysis of laws and agency documents. | GAO‑25‑107197
AI Can Provide Benefits to Consumers, Institutions, and Financial Markets but Also Poses Risks
AI has the potential to bring numerous benefits to financial services, including lower costs, enhanced efficiency, and improved accuracy and customer convenience. However, AI use also poses risks, such as the potential for biased lending decisions and disruptions to financial institutions and markets.
AI Has a Variety of Uses in the Financial Services Industry
Financial institutions are using AI for many activities, according to our review of documents from federal financial regulators, financial institutions, industry and advocacy groups, and international and research organizations, as well as our interviews with these stakeholders. These activities include automated trading, countering threats and illicit finance, credit decisions, customer service, investment decisions, and risk management (see table 2).
Function: Automated trading
Use: Order placement and execution
How AI may be used: Assess how an order for securities or derivatives should be placed to optimize execution. Execute large trade orders in a dynamic way to minimize the impact on price.

Function: Countering threats and illicit finance
Use: Assessing customer risk; detecting illicit activity; cybersecurity
How AI may be used: Identify fake IDs, recognize different photos of the same person, and screen clients against sanctions and other lists. Analyze transaction data (such as bank account and credit card data) and unstructured data (such as email, text, and audio data) to detect evidence of possible money laundering, terrorist financing, bribery, tax evasion, insider trading, market manipulation, and other fraudulent or illegal activities. Detect and mitigate cyber threats through real-time investigation of potential attacks, flagging and blocking of new ransomware, and identification of compromised accounts and files.

Function: Credit decisions
Use: Analyzing creditworthiness
How AI may be used: Analyze data from sources such as credit reports, financial statements, and transaction history to predict an applicant’s creditworthiness. Assess creditworthiness of applicants without a credit score by analyzing information not contained in a traditional credit report, such as rent and utility payments (also known as alternative data).

Function: Customer service
Use: Chatbots; automated response routing
How AI may be used: Provide customer service by simulating human conversation through text and voice commands. Respond to basic customer questions regarding bank account balances and transaction history, investment portfolio holdings, address changes, and password resets. Accept and process trade orders within certain thresholds or initiate account openings. Process and triage customer calls and screen and classify incoming customer emails according to key features, such as the email address or subject line. Respond to emails containing common or routine inquiries.

Function: Investment decisions
Use: Investment strategies; advisory services
How AI may be used: Analyze large amounts of data, including from nontraditional sources like social media and satellite imagery, to predict price movement of assets, such as stocks. Provide financial advice to individual investors by analyzing their assets, spending patterns, debt balances, internet activity, and prior communications (such as emails and chats).

Function: Risk management
Use: Credit risk; liquidity risk
How AI may be used: Analyze loan portfolio and market data to predict borrower defaults and inform decisions about when to write off debt. Analyze substantial historical data along with current market data to identify trends, note anomalies, and optimize liquidity and cash management.
Source: GAO analysis of information from federal agencies, financial institutions, industry and advocacy groups, and international and research organizations. | GAO‑25‑107197
Explainability

Explainability refers to the ability to understand how and why an AI system produces decisions, predictions, or recommendations. According to the Consumer Financial Protection Bureau and the banking and credit union regulators, insufficient explainability in AI models can pose several challenges. It may inhibit a financial institution’s understanding of a model’s conceptual soundness and reliability, inhibit independent review and audit, and make compliance with laws and regulations more difficult.

Source: GAO analysis of federal agency documents. | GAO‑25‑107197
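As a simple, hypothetical illustration of explainability, the Python sketch below decomposes a linear credit model’s score for one applicant into per-feature contributions (coefficient times feature value), the kind of decomposition that can support specific reasons for an adverse action. The feature names and data are invented, and more complex AI models generally require more sophisticated attribution techniques.

# Hypothetical sketch: per-feature contributions for a linear credit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "utilization", "payment_history"]
X = rng.normal(size=(500, 4))  # standardized applicant attributes (synthetic)
y = (X @ np.array([1.0, -1.5, -1.0, 2.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
applicant = X[0]
contributions = model.coef_[0] * applicant  # contribution to log-odds of approval
for name, value in sorted(zip(features, contributions), key=lambda item: item[1]):
    print(f"{name}: {value:+.2f}")  # most negative values suggest top denial reasons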
Most of these AI applications use machine learning, which makes predictions, identifies patterns, or automates processes. In more limited cases, financial institutions are using generative AI, generally to enhance employee productivity. For example, one regional financial institution is piloting an internal generative AI chatbot that answers employees’ questions about policies and procedures. Additionally, some banks are training generative AI models on customer service call center conversations to enhance customer support, according to one banking association.[14] These models can provide recommendations to help employees address customer issues, such as replacing a debit card. One large financial institution is piloting generative AI tools to assist employees with writing code, summarizing customer interactions, searching legal documents, and conducting market research.
Financial institution representatives told us they have been more cautious about adopting AI for activities where a high degree of reliability or explainability is important or where they are unsure how regulations would apply to a particular use of AI. For example, the greater complexity of generative AI models makes explainability more challenging and, as discussed below, can produce inaccurate outputs. According to the Department of the Treasury, financial institutions currently limit generative AI use to activities where the institution deems lower levels of explainability to be sufficient.[15]
AI May Improve Consumer Products and Services and May Enhance Financial Institutions’ Operations
AI may be faster, more efficient, and more accurate than prior methods for performing various tasks, yielding benefits to consumers, investors, financial institutions, and financial markets, according to reports we reviewed and stakeholders we interviewed.
Potential Benefits for Consumers and Investors
Lower cost. AI can enable financial institutions to provide certain products and services at a lower cost. For example, robo-advisers that use AI to automate investment management may provide investment advice at lower fees and require smaller account minimums compared with traditional advisory services.[16] AI could also make lower-cost products and services more accessible. For example, AI-powered automation of credit underwriting services enables credit unions to deliver faster credit decisions, according to one AI provider. This feature allows its customers to avoid higher-cost lending options when funds are needed quickly.
Greater financial inclusion. AI may have the potential to expand financial services to customers typically lacking access. In lending, AI may benefit customers with thin credit files or no credit record at all.[17] Further, representatives of one AI provider told us that credit unions that implemented its AI model reported a 40 percent increase in credit approvals for women and people of color. In addition, AI could introduce investment management to populations not previously using these services.[18]
Greater convenience. AI could enhance the convenience and quality of customer service. For example, chatbots can provide immediate customer support any time of day, and AI can help them better understand and respond to customer questions. In addition, one credit union reported using an AI system to personalize its customer service by recommending a member’s frequent tasks, such as transferring funds, when they interact with the chatbot.
Potential Benefits for Financial Institutions and Financial Markets
Increased security. AI could improve the security of institutions and markets through better detection of cyber threats and illicit finance (e.g., fraud, money laundering, and insider trading). For example, AI can help combat synthetic identity fraud by identifying cases that human analysts cannot easily detect.[19] AI may also reduce false positives, which constitute the vast majority of money laundering and terrorist financing alerts, allowing institutions to focus on genuinely suspicious cases, according to the International Monetary Fund.[20]
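As an illustration of the kind of detection described above, and not any institution’s actual system, the following Python sketch flags anomalous transactions with an isolation forest. All data and parameters are hypothetical, and flagged cases would go to human analysts for review.

# Hypothetical sketch: flag anomalous transactions for analyst review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic features per transaction: amount and hours since prior transaction.
typical = rng.normal(loc=[50.0, 12.0], scale=[20.0, 4.0], size=(1000, 2))
unusual = rng.normal(loc=[5000.0, 0.1], scale=[500.0, 0.05], size=(5, 2))
transactions = np.vstack([typical, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 means flagged as anomalous
print(f"{(flags == -1).sum()} of {len(transactions)} transactions flagged for review")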
Higher profitability. Financial institutions may realize higher profits if AI enhances revenue generation or reduces costs. AI can potentially help improve predictive power and increase profitability for financial institutions.[21] For example, according to one study, chatbots saved approximately $0.70 per customer interaction compared with human agents.[22] The aggregate savings can be substantial—for example, one large financial institution reported that its AI chatbot had handled more than 2 billion customer interactions.[23] In addition, one AI provider told us its AI model reduced the time and resources needed for financial institutions to make credit decisions by up to 67 percent. However, the cost of developing or acquiring AI could be high and may be prohibitive for smaller financial institutions, according to representatives from trade associations representing banks and credit unions.
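Illustratively, applying the study’s per-interaction estimate to that reported volume (an assumption, since the two figures come from different sources) implies aggregate savings on the order of $0.70 × 2 billion, or roughly $1.4 billion.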
Reduced market impact. Capital markets could see less price volatility from large trades when AI is used to place and execute orders. According to the Organisation for Economic Cooperation and Development, AI can minimize the market impact of large trades by dynamically optimizing the size and duration of orders based on market conditions.[24]
Improved compliance and risk management. Financial institutions and markets could benefit if AI enhances risk management and compliance with laws and regulations. According to the International Monetary Fund, AI has improved compliance by leveraging broad sets of data in real time and automating compliance decisions.[25] Improved compliance could lead to safer securities markets, according to the Financial Industry Regulatory Authority (FINRA).[26] Additionally, AI can monitor thousands of risk factors daily and test portfolio performance under thousands of economic scenarios, further enhancing financial institutions’ risk management.[27]
AI Presents Risks for Consumers, Financial Institutions, and Markets
AI has the potential to amplify risks inherent in financial activities, and these risks can affect consumers, investors, financial institutions, and financial markets.[28] Factors that can increase risk include complex and dynamic AI models, poor-quality data, and reliance on third parties. In addition, AI may introduce new vulnerabilities, such as hallucinations and novel cyber threats.[29]
Consumer and Investor Risks
Fair lending risk. Bias in credit decisions is a risk inherent in lending, and AI models can perpetuate or increase this risk, leading to credit denials or higher-priced credit for borrowers, including those in protected classes.[30] For example, one researcher testified that some AI models could infer loan applicants’ race or gender from application data or create complex interactions between variables that could result in disproportionately negative effects on certain groups.[31] Further, the Financial Stability Oversight Council (FSOC) has warned that as AI models grow more complex, identifying and correcting biases can become increasingly difficult.[32] The use of AI to market credit products also carries fair lending risks. According to one consumer advocate, AI could steer borrowers, including those in protected classes, toward inferior or costlier credit products.
Conflict of interest risk. AI models used in investment advisory services could increase the potential for conflicts of interest between advisers or brokers and their clients. For example, AI models could optimize higher profits for advisers, potentially at investors’ expense.[33] The complexity of AI may obscure such conflicts. For example, one consumer advocate we spoke with noted that it may not be apparent when AI is providing conflicted advice that is not in the client’s best interest.
Privacy risk. AI can expose consumers to new privacy risks. For example, according to reports we reviewed, certain machine learning and generative AI models may leak sensitive data directly or by inference, including by deducing identities from anonymized data.[34] To safeguard against such privacy risks, one financial institution we spoke with restricts its employees’ access to publicly available generative AI applications. Further, AI enables financial institutions to collect and analyze increasing amounts of sensitive consumer data, increasing privacy risks. For example, according to FINRA, AI use in the securities industry may involve collecting, analyzing, and sharing personally identifiable information and biometrics, customer website or app usage data, geospatial location, social media activity, and written, voice, and video communications.[35]
Financial institutions also may rely on third parties to develop AI models or to store data, which could heighten privacy risk. For example, cloud computing used in AI exacerbates privacy risks when financial institutions lack the expertise to conduct effective due diligence on cloud services.[36]
False or misleading information risk. AI models may produce inaccurate information about financial products or services, potentially causing harm to consumers or investors. According to the Commodity Futures Trading Commission (CFTC), AI-based customer interactions may lead to unintended biases and deceptive or misleading communications.[37] The prudential regulators and the Consumer Financial Protection Bureau (CFPB) have noted that AI can create or heighten risks of unfair, deceptive, or abusive acts or practices.[38] Further, generative AI can produce outputs that are false but convincing, known as hallucinations, which can be especially problematic for consumer-facing applications.[39]
Financial Institution Risks
Operational and cybersecurity risk. AI could lead to technical breakdowns that disrupt the operations of financial institutions. According to the prudential regulators, the use of AI could result in failures related to internal processes, controls, or information technology, as well as risks associated with the use of third parties and models, all of which could affect a financial institution’s safety and soundness.[40] AI may also increase the scope for cyber threats and introduce new vulnerabilities. Novel threats could allow attackers to evade detection and prompt AI to make wrong decisions or extract sensitive information.[41]
Model risk. AI models may underperform and result in financial losses or reputational harm to a financial institution. The function and outputs of AI can be negatively affected by data quality issues, such as incomplete, erroneous, unsuitable, or outdated data; poorly labeled data; data reflecting underlying human prejudices; or data used in the wrong context. According to the prudential regulators and CFPB, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data, or make incorrect predictions if that data set is incomplete or nonrepresentative.[42] In addition, according to reports we reviewed, some AI models may struggle to adapt to new conditions.[43] For example, these models may produce inaccurate results when encountering different market conditions or changing customer behavior.
Challenges in assessing the quality of AI inputs and outputs could heighten model risk. Financial institutions may find it difficult to evaluate the data sources used to train AI models, especially if the sources are opaque or unavailable, according to the Financial Stability Board.[44] Reports we reviewed indicate that testing and validating certain machine learning models may be more challenging than with traditional models because of their dynamic nature.[45] Specifically, for AI models that learn continuously from live data, shifts in the underlying data, variable relationships, or statistical characteristics can lead to model underperformance or inaccurate outputs. According to FSOC, expert analysis may be needed to evaluate the accuracy of generative AI output.[46]
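One common way to monitor the data shifts described above is a distributional comparison between training data and live inputs. The following Python sketch, using hypothetical data and an illustrative threshold, applies a two-sample Kolmogorov-Smirnov test to flag a model for revalidation when its live inputs drift from the data on which it was trained.

# Hypothetical sketch: test live model inputs for drift from training data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
training_incomes = rng.lognormal(mean=10.5, sigma=0.5, size=5000)
live_incomes = rng.lognormal(mean=10.8, sigma=0.6, size=1000)  # shifted conditions

statistic, p_value = ks_2samp(training_incomes, live_incomes)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible input drift (KS statistic {statistic:.3f}); revalidate the model")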
According to representatives of one large bank we spoke with, hallucinations are a key reason banks avoid using generative AI for activities that warrant a high degree of accuracy, such as credit underwriting or risk management. However, they noted that techniques are being developed to limit hallucinations.[47]
Compliance risk. AI may produce results that violate or inhibit compliance with consumer or investor protection laws or that are inconsistent with safe and sound banking practices or market conduct law. For example, some AI models’ limited explainability may inhibit financial institutions from complying with fair lending laws if they cannot provide specific reasons for denying an application for credit or taking other adverse actions.
In addition, under certain circumstances, AI risk factors such as explainability challenges and poor data quality can introduce risks to safety and soundness, according to FSOC.[48] Financial institutions that use AI face increased risks if they do not adhere to appropriate operational and cybersecurity standards. Further, AI used in trading applications may contribute to inappropriate trading behavior or market disruptions, according to CFTC.[49] One study found that an AI model identified market manipulation as an optimal investment strategy without being explicitly programmed to do so, illustrating how AI can inadvertently violate the law.[50]
Financial Market Risks
Concentration risk. Financial instability could arise from a reliance on a concentrated group of third-party AI service providers (e.g., data providers, cloud providers, and technology firms). This concentration could increase the financial system’s vulnerability to single points of failure.[51] While the risk of concentration is not unique to AI, the technology’s need for significant data and computational power contributes to this concern. The Financial Stability Board has noted that highly concentrated service provider markets exist across important aspects of the AI supply chain, including in hardware, infrastructure, and data aggregation.[52]
Herding risk. AI could lead to herding behavior—where individual actors make similar decisions—resulting in systemic risk to financial markets. While this risk is not unique to AI, such behavior could arise from a concentration of third-party AI providers or homogeneity in data used to train models, and the effect depends on the sector where the behavior takes place, according to reports we reviewed.[53] For example, herding in capital markets could amplify market volatility or trigger crashes. In addition, herding in lending or risk management could amplify economic booms and busts. However, whether AI could lead to herding behavior is subject to debate, and some market participants argue that AI models, datasets, and uses will be sufficiently diverse to mitigate this risk, according to the International Organization of Securities Commissions.[54]
Regulators Use Existing Frameworks to Oversee AI, but NCUA’s Key AI Oversight Tools Are Limited
Regulators Generally Use Existing Regulatory and Supervisory Frameworks
Officials from federal financial regulators noted that existing federal financial laws and regulations generally apply to financial service activities regardless of whether AI is used.[55] For example, the Congressional Research Service explained that lending laws apply both to lenders making decisions with pencil and paper and to those using AI models.[56] Appendix II presents selected laws and regulations that are not specific to AI but may help address AI-related risks.
Similarly, financial regulator officials told us that existing guidance generally applies to financial institution activities regardless of AI use. For example, officials from FDIC, the Federal Reserve, and OCC identified supervisory guidance for managing model risk and third-party risk (discussed below) as relevant to AI use, depending on the facts and circumstances.[57] While these guidance documents may not explicitly mention AI, their principles can help financial institutions manage key AI risks and ensure AI systems operate as intended.
Officials from most of the financial regulators we spoke with told us they believe existing statutory authorities are generally sufficient to supervise regulated entities’ AI use at this time. NCUA officials cited their agency’s lack of third-party oversight authority as a reason they believe existing authorities are not sufficient, as discussed in more detail below.
Representatives from several industry groups and financial institutions generally expressed similar views. However, several of these representatives also noted opportunities to clarify AI-related guidance. For example, some said regulators could clarify how guidance applies to AI technologies posing different levels of risk. Some also suggested that regulators clarify their expectations for explainability and adverse action notice requirements.[58] In addition, some said that financial service companies’ uncertainty about how regulations could apply makes them take a cautious approach to AI adoption.
The Financial Stability Board reported that existing frameworks address many AI-related vulnerabilities, but future developments could introduce new vulnerabilities that current regulatory and supervisory frameworks do not adequately address.[59] Some of the regulators made similar comments and are considering updates to regulations and guidance.[60] For example, SEC officials stated that they are assessing whether existing regulations and guidance adequately address AI-related risks. CFTC staff also reported that AI’s rapid development could necessitate new or supplemental guidance or regulations.[61] Federal Reserve officials stated that they are considering whether additional policy steps or clarification for regulated entities is needed.
Federal financial regulators also told us they rely on their standard risk-based examination processes to assess whether financial institutions’ AI use is consistent with applicable laws and regulations.
· Prudential regulators. FDIC, Federal Reserve, OCC, and NCUA officials told us that if AI is determined to be relevant through risk-based planning and prioritization processes, it would typically be reviewed as part of broader examinations of safety and soundness, information technology, or compliance with applicable laws and regulations. OCC officials added that examinations depend on the extent of a financial institution’s AI use and its risk management practices.
· CFPB. In June 2024, CFPB officials said their examinations focus on compliance and enforcement of federal consumer financial laws, regardless of whether AI is used. They said they typically examine regulated entities by product lines, policies, or practices, which may include reviews of institutions’ AI use.
· Financial market regulators. SEC and CFTC officials said their examinations focus on ensuring regulated entities comply with applicable statutory and regulatory requirements, regardless of AI use.[62] SEC officials added that AI can either be the focus of an examination or part of a broader examination of firm practices. CFTC officials told us that they would examine their regulated entities’ AI use for consistency with regulatory requirements.[63] They said they are in the planning stages of incorporating reviews that specifically focus on a regulated entity’s use of AI into their examinations.
If warranted, concerns or noncompliance identified during examinations or other supervisory activities can result in supervisory findings, corrective actions, or enforcement actions. Examples of recent AI-related actions include the following:
· OCC has issued 17 matters requiring attention related to AI use since fiscal year 2020.[64]
· CFPB has taken six AI-related enforcement actions since 2020, including a 2022 action against a large bank for using an automated fraud detection system that unlawfully froze accounts.[65]
· NCUA has issued one document of resolution related to AI use since fiscal year 2020. In addition, it issued a regional director letter to a credit union for insufficient governance, reporting, and risk mitigation for an AI-driven program that instantly approved loans without traditional underwriting steps, such as income verification.[66]
· SEC brought at least eight AI-related enforcement actions in 2023 and 2024, including charging parties with making false and misleading statements about their purported use of AI in violation of the federal securities laws.[67]
Some Regulators Have Developed Guidance or Conducted Supervisory Activities Specific to AI
In addition to broadly applicable guidance, some regulators have issued guidance specifically addressing financial institutions’ use of AI in certain areas, such as fair lending and derivatives markets. For example, CFPB issued two circulars, in May 2022 and September 2023, to clarify adverse action notification requirements when creditors use AI or other complex credit models in decision-making.[68] Several CFTC divisions issued an advisory in December 2024 discussing legal requirements that may be relevant when regulated entities use AI to facilitate and monitor derivatives transactions.[69]
Some regulators have also conducted AI-focused supervisory activities.[70] For example, in 2023, SEC conducted examinations of the AI disclosures and governance of approximately 30 registered investment advisers. The primary purposes of these examinations were to identify candidates for more intensive examinations and to enhance SEC’s understanding of the industry’s AI uses and associated risks. SEC officials said most of the advisers they examined did not have comprehensive policies and procedures governing their AI use. Additionally, officials said several had misrepresented their use of AI.
The Federal Reserve, OCC, and CFPB have conducted reviews across several financial institutions that focused on AI use. For example, OCC conducted an AI-focused review of seven large banks from 2019 to 2023. OCC did not issue any matters requiring attention during this review, but it provided several AI-related observations and recommendations to the banks. Many of the large banks reviewed had satisfactory risk management practices for their AI and machine learning models. However, OCC found that some banks’ risk assessments did not explicitly capture risk factors and complexities unique to AI models. For example, some banks classified their AI and machine learning models as low risk, meaning they may be evaluated less frequently and be subject to less stringent governance requirements.[71] In addition, OCC found that the seven banks provided limited information on their efforts to evaluate bias and fair lending issues associated with AI or machine learning use. OCC reported plans to continue assessing how banks are managing potential bias and fair lending concerns with the AI models.
Federal Reserve officials stated they have conducted thematic reviews of supervised organizations to understand how AI is being used and risks are being managed. CFPB officials said they conducted reviews across several financial institutions that focused on tools marketed as AI. In June 2023, CFPB issued a report in response to one of these reviews, which discussed how financial institutions use chatbot technologies and the associated challenges chatbots could pose to consumers.[72]
Regulators have also issued warnings about AI misuse. For example, in January 2024, SEC, the North American Securities Administrators Association, and FINRA issued a joint investor alert warning of increased investment fraud involving the purported use of AI and other emerging technologies.[73] On the same day, CFTC issued a customer advisory warning the public about AI-related scams.[74]
NCUA Does Not Have Two Key AI Oversight Tools Available to the Other Prudential Regulators
FDIC, the Federal Reserve, and OCC have issued model risk management guidance that can help inform regulated entities about supervisory expectations for managing risks from the use of AI models, but NCUA’s guidance is limited. Further, NCUA lacks authority to examine third-party services provided to regulated entities.
Model Risk Management Guidance
Model risk management is a key governance and risk management tool that financial institutions use to manage AI risks. Financial institutions use models, including AI, for predictive analytics, creditworthiness assessments, investment decisions, and risk management. However, weaknesses or unreliability in these models can lead to unsound strategic decisions that can affect a financial institution’s performance, profitability, and reputation. Effective model risk management helps organizations mitigate these potential adverse effects.
The banking regulators—FDIC, the Federal Reserve, and OCC—have issued supervisory guidance on model risk management that includes principles applicable to all model types, including AI models.[75] Principles include sound development, consideration of complexity and transparency, and evaluation of performance through testing and independent validation. The banking regulators’ model risk management guidance is not specific to AI and is meant to apply more broadly to all types of models.
The principles described in the banking regulators’ model risk management guidance generally align with NIST’s leading practices for AI models.[76] For example, guidance from all three banking regulators stresses the importance of understanding and documenting model capabilities. In addition, the regulators’ guidance emphasizes the importance of ongoing monitoring and periodic review of the risk management process for models. NIST’s framework states that AI users do not need to implement all of its leading practices to have reasonable risk management. Instead, application of the framework should be tailored to how a firm or sector uses AI. As discussed previously, developments in AI could raise issues that may warrant future updates to regulators’ risk management guidance.
NCUA has also issued guidance on model risk management, but it is limited in scope and detail:
· The guidance addresses only one model type. The model risk management guidance contained in NCUA’s examiner guide addresses only interest rate risk modeling.[77] This stands in contrast to the banking regulators’ model risk management guidance, which broadly addresses all types of models, including AI models. However, NCUA officials noted that credit unions are increasingly using models in most aspects of financial decision-making, including loan underwriting, stress testing, and fraud detection. NCUA officials also stated that their model risk management guidance, last updated in October 2016, focuses on interest rate risk modeling because this was historically the most common type of modeling used by credit unions.
· The guidance provides limited detail. The model risk management guidance contained in NCUA’s examiner guide does not have sufficient detail to ensure examiners and credit unions follow key risk management practices, including those related to managing risks from AI models.[78] The guidance briefly describes some model risk management activities, such as components of a model validation framework. It also states that policies should designate responsibility for model development, implementation, and use. However, it does not provide sufficient detail or examples of what these practices should entail or how examinations should be tailored based on specific model uses and associated risks. Furthermore, NCUA’s guidance does not address other key model risk management activities, such as model selection or the frequency with which policies should be reviewed and updated.
NCUA officials identified additional agency guidance documents, such as their letter on enterprise risk management, as relevant to credit unions’ model risk management practices.[79] However, this guidance also does not provide detail or examples that clearly describe model risk management expectations.
NCUA’s guidance contrasts with that of the banking regulators, which includes details and examples for key model risk management activities. For example, the banking regulators’ guidance outlines expectations for independence, expertise, and authority for those responsible for validating models. It also provides examples of how to evaluate models’ conceptual soundness.
We could not compare NCUA’s guidance against the NIST AI Risk Management Framework because the guidance’s limited scope did not allow for a meaningful comparison. These limitations suggest that the current NCUA guidance is not adequate to ensure effective oversight of credit unions’ model use, including use of AI models. NCUA officials said they have considered updating their guidance but have not yet taken steps to do so.
NCUA’s examiner guide refers examiners to other banking regulators’ model risk management guidance. NCUA officials told us that while examiners are not required to follow that guidance, they can use it as a best practice when applicable. Similarly, federally insured credit unions can refer to other banking regulators’ guidance to inform their model risk management.
However, developing its own more detailed guidance for staff and regulated entities would help NCUA ensure that its expectations for addressing risks are clear, consistently implemented, and effectively monitored. Further, such guidance would strengthen NCUA’s ability to address credit unions’ AI-related risks. This approach would align with NCUA’s 2022–2026 strategic plan, which calls for ensuring that its policies appropriately address emerging financial technologies, including credit unions’ ability to manage opportunities and risks.[80] By establishing more detailed model risk management guidance—designed to also apply to AI models—NCUA could enhance its oversight and reduce potential risks to consumers and to the safety and soundness of credit unions.
Third-Party Risk Management and Oversight Authority
Third-party risk management is another key tool that financial institutions use to manage risks associated with AI.[81] In recent years, FDIC, the Federal Reserve, and OCC issued updated guidance on third-party risk management, highlighting the importance of financial institution oversight of third parties, which may include third parties providing AI technologies.[82]
As part of their standard supervisory processes, FDIC, the Federal Reserve, and OCC review how their regulated entities manage third-party risks. Certain services provided by third parties on behalf of regulated entities are subject to regulation and examination by these agencies to the same extent as if such services were being performed by the regulated entity itself on its own premises.[83] According to interagency guidance, the agencies may pursue corrective measures when necessary to address violations of laws and regulations or unsafe or unsound banking practices by the banking organization or its third party.[84]
Similar to the banking regulators, NCUA evaluates credit unions’ third-party risk management practices during examinations.[85] However, NCUA’s enabling law—the Federal Credit Union Act—does not grant NCUA the same authority as the banking regulators to examine services performed by third parties on behalf of the entities it regulates.[86] According to a 2022 NCUA report, credit unions rely on third parties for a wide range of products, services, and activities.[87] NCUA officials noted that credit unions currently use third-party AI models to support core business functions, such as data processing, stress testing, risk management, loan decisions, and customer support. Credit unions typically do not have the resources to develop AI systems in-house, according to representatives we interviewed from NCUA, a credit union association, and a credit union.
In a 2015 report, we found that third-party services were integral to the operations of many credit unions, and that deficiencies in third-party providers’ operations could quickly result in financial and other harm for credit unions.[88] We concluded that granting NCUA the authority to examine third-party service providers would enhance the agency’s ability to monitor the safety and soundness of credit unions. We recommended that Congress consider modifying the Federal Credit Union Act to give NCUA the authority to examine technology service providers of credit unions. As of February 2025, Congress had not enacted legislation to implement this recommendation. Credit unions’ increasing reliance on third-party technology services—which may include AI-related services—underscores the need to grant NCUA this authority. This would, in turn, enable NCUA to more effectively monitor and mitigate third-party risks, including those associated with AI service providers, and ensure the safety and soundness of credit unions.
Regulators Have Taken Steps to Increase Their AI Knowledge
Regulators are taking steps to enhance their knowledge and understanding of AI, including training and hiring knowledgeable staff, establishing AI-related working groups and offices, and collaborating with domestic and international agencies, as well as nonfederal stakeholders.
Training. Financial regulators have provided staff with training opportunities that help supervisory staff better understand the benefits and risks of AI and how financial institutions are deploying the technology.[89] Some of the training has focused on institutions’ use of cutting-edge applications, such as generative AI. Representatives from three private-sector organizations we interviewed expressed concern that regulators might unnecessarily burden regulated entities—such as through unwarranted examinations or supervisory actions—if supervisory staff lack experience overseeing AI. Training efforts could help address these concerns.[90]
Internal AI-related initiatives. Financial regulators have established AI-related initiatives to help assess and monitor the potential effects of AI in the financial services industry. For example, one of FDIC’s divisions has an AI working group that has developed short- and longer-term strategies aimed at gathering data on and evaluating AI usage by regulated institutions. SEC’s Strategic Hub for Innovation and Financial Technology serves as a central hub for issues related to developments in financial technology, including AI.
Collaboration with domestic and international financial supervisors. Regulators have undertaken several interagency efforts to help increase their understanding of AI. For example, FSOC established an AI working group for FSOC members to share information, and its recent annual reports have included sections focused on AI.[91] Additionally, the Federal Reserve, OCC, FDIC, and NCUA work together on an interagency basis to better understand and monitor the risks and benefits associated with financial institutions’ use of AI. CFPB has also held periodic meetings with supervisors across the federal government focusing on emerging technologies, including a recent discussion on generative AI.[92]
Several financial regulators have also collaborated with their foreign counterparts. For example, staff from SEC are leading the International Organization of Securities Commissions’ FinTech Task Force AI Working Group. In addition, FDIC, Federal Reserve, and OCC officials said they participate in the Basel Committee on Bank Supervision, where they share AI practices across jurisdictions.
External communication and collaboration. Regulators also engage with nonfederal stakeholders to gain insights into AI use in the financial services sector. For example, regulators have issued several requests for comment on the opportunities and risks associated with the sector’s AI use.[93] In addition, OCC recently published a request that solicited academic research papers on the use of AI in banking and finance and planned to invite the authors to present their findings to OCC staff and academic and government researchers.[94]
Regulators Are Using AI to Enhance Their Supervisory Activities and Adopting Different Approaches to Expand Its Use
Federal financial regulators vary in their current and planned uses of AI, including using it for general agency operations and supervisory and market oversight activities. All regulators that were using AI as of December 2024 reported using AI outputs in conjunction with other information to inform decisions.
Regulators Use AI for a Variety of Activities, with Use Varying by Agency
Regulators’ Current and Planned Uses of AI
The extent of regulators’ current and planned AI use varies by agency, according to the federal financial regulators’ inventories of AI uses and other regulator information (see fig. 2).[95] As of December 2024, the Federal Reserve and SEC had the most activities using AI, and FDIC and the Federal Reserve had the most activities for which they plan to use AI in the future.
Figure 2: Number of Activities for Which Federal Financial Regulators Use or Plan to Use AI, as of December 2024
aAs of February 2025, OCC had not published its AI use inventory. The figure reflects information provided by OCC officials and may not represent its full inventory of current and planned AI uses.
Regulators are using or plan to use AI for a variety of general agency operations. For example, FDIC, the Federal Reserve, NCUA, and SEC use AI for activities such as creating and editing graphics, videos, and presentations, and answering staff questions. FDIC may use AI to score job application essays, such as to assess their grammar and mechanics, analysis and reasoning, and organization and structure.
In addition, FDIC, the Federal Reserve, and SEC plan to use AI to extract information from files. For example, the Federal Reserve plans to input large amounts of data from paper forms into a digital system, while FDIC plans to extract data from PDF invoices and contracts for reconciliation purposes.
Supervisory and Market Oversight Uses of AI
The federal financial regulators are using AI for supervisory and market oversight activities, including identifying risks, supporting research, detecting potential legal violations, and detecting reporting errors or outliers.
Identifying risks. NCUA, FDIC, the Federal Reserve, and SEC use AI to analyze data, identify patterns, or make predictions to help identify risks facing regulated financial institutions or financial markets. For example, NCUA uses AI in stress testing, which helps examiners assess the potential impact of various economic conditions on larger credit unions’ financial performance.[96] NCUA officials noted that if AI analysis predicts a credit union will fall short of minimum capital ratio requirements, NCUA examiners consider the prediction alongside other supervisory information before deciding on an action.[97] NCUA officials also told us they are researching AI applications for predicting a credit union’s financial condition over various time frames.
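This flagging logic can be shown as a short sketch. The example below is a hypothetical illustration, assuming a model that outputs one projected capital ratio per stress scenario; the 5 percent floor reflects the requirement described in note 97. The function names, data, and interface are assumptions for this discussion, not NCUA’s actual tooling.

```python
# Hypothetical sketch of flagging AI stress-test projections for examiner
# review. The model interface, field names, and data are illustrative
# assumptions; this is not NCUA's actual system.

MIN_STRESS_CAPITAL_RATIO = 5.0  # percent; the stress test capital ratio
                                # floor for the largest credit unions

def flag_scenarios(projected_ratios: dict[str, float]) -> list[str]:
    """Return the stress scenarios whose projected capital ratio falls
    below the minimum. Examiners weigh any flag alongside other
    supervisory information before deciding on an action."""
    return [scenario for scenario, ratio in projected_ratios.items()
            if ratio < MIN_STRESS_CAPITAL_RATIO]

# Illustrative model output for one credit union
projected = {"baseline": 9.1, "adverse": 6.3, "severely_adverse": 4.4}
for scenario in flag_scenarios(projected):
    print(f"Flag for examiner review: {scenario} "
          f"({projected[scenario]:.1f}% < {MIN_STRESS_CAPITAL_RATIO:.1f}%)")
```

As the report notes, such a flag informs rather than determines supervisory action.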
Supporting research. The Federal Reserve, FDIC, and SEC use AI to search, analyze, and extract relevant information from large collections of documents. Examples of the documents include regulatory filings, examination documents, public comment letters on proposed rulemakings, and news articles, depending on the agency. For example, the Federal Reserve uses AI to search bank examination documents, providing examiners with a fast and flexible method to search for specific keywords and phrases. The volume of documents varies by bank, but most collections range from 100 to over 1,000 documents, with individual documents ranging from several to over 100 pages. Prior to AI adoption, examiners reviewed documents manually. Federal Reserve officials said AI use has improved examiners’ productivity by allowing them to focus on supervisory priorities and more complex tasks.
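As a rough illustration of the search capability described above, the following sketch ranks documents by simple keyword and phrase frequency. It is a minimal stand-in written for this discussion, using plain term matching rather than the Federal Reserve’s actual (and unpublished) AI methods; all document names and data are assumed.

```python
# Minimal sketch of keyword/phrase search over a document collection.
# Purely illustrative; the Federal Reserve's actual search tooling is
# not described in detail in this report.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def rank_documents(docs: dict[str, str], query: str,
                   top_n: int = 5) -> list[str]:
    """Rank documents by how often the query terms appear, weighting
    exact phrase matches more heavily than scattered term matches."""
    terms = tokenize(query)
    phrase = " ".join(terms)
    scored = []
    for name, text in docs.items():
        tokens = tokenize(text)
        counts = Counter(tokens)
        term_score = sum(counts[t] for t in terms)
        # Phrase bonus is only meaningful for multiword queries
        phrase_score = 5 * " ".join(tokens).count(phrase) if len(terms) > 1 else 0
        if term_score + phrase_score > 0:
            scored.append((term_score + phrase_score, name))
    return [name for _, name in sorted(scored, reverse=True)[:top_n]]

# Toy example with two "examination documents"
docs = {
    "exam_2023_q4.txt": "Liquidity risk remains elevated; liquidity buffers thin.",
    "exam_2024_q1.txt": "Credit risk concentrations in commercial real estate.",
}
print(rank_documents(docs, "liquidity risk"))  # most relevant document first
```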
Detecting potential legal violations. FDIC and SEC use AI to complement their other methods for identifying patterns in data that may indicate higher risk of violations of consumer or federal securities laws. For example, officials from SEC told us staff use AI tools in efforts to detect potential insider trading. Subject matter experts review flagged trades to determine if further investigation is warranted, officials said. FDIC also plans to use AI to extract data from credit reports, which will help examiners gather information needed to assist in determining compliance with fair lending laws. As of December 2024, CFPB planned to use AI to help staff analyze consumer complaints.
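The division of labor SEC officials describe, automated flagging followed by expert review, can be illustrated with a toy screen that flags trades that are unusually large for an account and closely precede a major price move. The thresholds, data layout, and method below are assumptions for illustration only, not SEC’s detection logic.

```python
# Toy screen for potentially suspicious trades: unusually large positions
# opened shortly before a large price move. All thresholds and the data
# layout are illustrative assumptions, not SEC's actual methodology.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Trade:
    account: str
    shares: int
    days_before_announcement: int
    post_announcement_move_pct: float

def flag_trades(trades: list[Trade],
                history: dict[str, list[int]]) -> list[Trade]:
    """Flag trades far larger than the account's historical norm that were
    placed shortly before a large price move. Flagged trades go to subject
    matter experts for review, not to any automatic enforcement action."""
    flagged = []
    for t in trades:
        sizes = history.get(t.account, [])
        if len(sizes) < 2:
            continue  # too little history to call anything "unusual"
        mu, sigma = mean(sizes), stdev(sizes)
        unusual_size = sigma > 0 and t.shares > mu + 3 * sigma
        well_timed = (t.days_before_announcement <= 5
                      and abs(t.post_announcement_move_pct) > 10)
        if unusual_size and well_timed:
            flagged.append(t)
    return flagged

# Toy example: a 5,000-share buy two days before an 18 percent move
history = {"acct-1": [100, 120, 90, 110]}
print(flag_trades([Trade("acct-1", 5000, 2, 18.0)], history))
```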
Detecting reporting errors or outliers. NCUA, CFTC, and the Federal Reserve use AI to help detect errors or outliers in data submitted by financial institutions. For example, NCUA uses AI to predict the future value of items in Call Reports (the quarterly financial reports that credit unions must submit) and flag those that fall outside the predicted range. NCUA officials told us that when discrepancies are flagged, they ask the credit union to review the flagged items and correct them if necessary.
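The Call Report check is, in essence, an outlier screen against a model’s predicted range. The sketch below shows the shape of such a check, assuming a naive predicted range built from an item’s recent quarterly history; NCUA’s actual prediction model is not described in this report, and the field names and interval construction here are illustrative assumptions.

```python
# Illustrative sketch of Call Report outlier flagging: build an expected
# range for each reported item and flag submitted values outside it. The
# naive range (mean plus or minus two standard deviations of recent
# quarters) is an assumption for illustration; NCUA's model is not public.
from statistics import mean, stdev

def flag_call_report_items(history: dict[str, list[float]],
                           submitted: dict[str, float]) -> list[str]:
    flagged = []
    for item, past_values in history.items():
        if item not in submitted or len(past_values) < 4:
            continue  # need a few quarters of history for a usable range
        mu, sigma = mean(past_values), stdev(past_values)
        low, high = mu - 2 * sigma, mu + 2 * sigma
        if not low <= submitted[item] <= high:
            flagged.append(item)  # ask the credit union to review/correct
    return flagged

# Example: reported total loans jump far outside the recent range
history = {"total_loans": [410.0, 415.2, 418.9, 421.5, 424.0]}
print(flag_call_report_items(history, {"total_loans": 560.0}))  # ['total_loans']
```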
Officials from each regulator told us they were not using generative AI for supervisory or market oversight activities, although some regulators were considering or exploring generative AI use. OCC officials said they intend to use generative AI to help examiners identify relevant information in supervisory guidance and assess risk identification and monitoring in bank documents. Federal Reserve officials said they are exploring potential ways to use generative AI for supervisory purposes.
Officials from all regulators that were using AI as of December 2024 told us they used AI outputs in conjunction with other supervisory information, such as examination reports and financial data, to inform staff decisions. They said they were not using AI to make decisions autonomously or relying on it as a sole source of input.
According to most of the regulators, AI increases their efficiency and effectiveness. For example, FDIC reported that its use of AI to score job applications may speed up processing and reduce costs. Most regulators also said AI tools can identify issues, patterns, and relationships not easily identified by a human. The U.S. House of Representatives’ Bipartisan Artificial Intelligence Task Force found that AI adoption can improve regulators’ efficiency and productivity, assist regulators in understanding and overseeing AI use in the financial services sector, and help identify regulated institutions in noncompliance with regulations.[98]
Policies and Procedures for AI Use
All of the federal financial regulators told us their existing policies and procedures for the general use of technology apply to their AI use as well.[99] These policies cover data protection, IT security, model risk management, and the acquisition or oversight of third-party software.
In addition, some regulators have developed or are developing AI-specific policies and procedures. The Federal Reserve recently introduced AI-specific policies for the design, development, and deployment of AI systems. NCUA recently developed an AI-specific policy that prohibits its employees from accessing certain publicly available AI tools on NCUA-issued equipment and devices. Officials from OCC and SEC told us they are in the process of developing AI-specific policies and procedures. For example, according to officials, OCC is developing an AI risk framework and considering how to incorporate the NIST AI Risk Management Framework and other AI guidance.
Regulators Are Taking Different Approaches to Identify Additional AI Uses
Financial regulators are using or planning to use different approaches to identify additional AI uses, according to documents we reviewed and discussions with officials.
· CFPB was working to develop strategies to allow for the responsible, safe, and effective use of AI and was approving AI use cases for limited pilots, according to agency documentation from September 2024.[100]
· CFTC launched an initiative that identified and scored 27 potential AI uses, based on their technical feasibility and value to the agency. These include uses such as identifying new potential enforcement actions or saving staff time.
· FDIC developed an AI strategy document, which includes goals to (1) develop the ability to scale up AI projects and (2) give FDIC divisions autonomy and flexibility in deploying AI. The strategy also describes how the agency plans to achieve its goals. Additionally, FDIC has established a working group to review and assess potential AI uses.
· The Federal Reserve developed a road map to identify potential AI uses, experiment with AI systems, and implement value-added AI uses. In addition, the Federal Reserve established a working group to experiment with and identify potential uses for generative AI, manage AI risks, and ensure compliance with external governance policies.
· NCUA is developing a process for offices to request the use of AI tools developed by third parties, according to officials.
· OCC is working on a process to identify and implement AI tools to enhance its supervision and oversight capabilities, according to officials.
· SEC plans to develop an AI strategy in 2025 to help identify and prioritize AI uses, according to officials. SEC divisions and offices would be responsible for prioritizing the implementation of AI uses that they identify and that the AI steering committee approves. The AI steering committee would also advise the Chief AI Officer on prioritizing agencywide uses, they said.
Conclusions
The rapid advancement and widespread adoption of AI warrant robust oversight tools for federal financial regulators. We therefore reiterate our prior recommendation that Congress consider granting NCUA authority to examine technology service providers of credit unions. Such authority would enhance NCUA’s ability to monitor and address third-party risks, including those associated with AI service providers. Further, unlike the banking regulators, NCUA does not have detailed model risk management guidance that covers a broad variety of models, including AI models. Developing such guidance would strengthen NCUA’s ability to address risks to consumers and to the safety and soundness of credit unions arising from the use of AI.
Recommendation for Executive Action
The Chair of NCUA should update the agency’s model risk management guidance to encompass a broader variety of models used by credit unions and provide additional details on key aspects of effective model risk management. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report to CFPB, CFTC, FDIC, the Federal Reserve, NCUA, OCC, and SEC for review and comment. We received written comments from NCUA, which are reproduced in appendix III. In addition, CFPB, CFTC, FDIC, the Federal Reserve, OCC, and SEC provided technical comments, which we incorporated as appropriate.
NCUA generally agreed with our recommendation. In its written comments, NCUA noted that it will review contemporary sound practices on model risk management, such as the banking regulators’ guidance, and provide information and clarity to examiners and credit unions. NCUA’s plans to review current model risk management practices are consistent with the intent of our recommendation. However, we maintain that NCUA should also update its guidance so that it contains more details on key aspects of model risk management and information on additional types of models.
In addition, NCUA acknowledged our recommendation to Congress that it consider providing NCUA authority to examine technology service providers of credit unions. However, the Chairman noted that there are risks to providing NCUA such authority, including a possible reduction in the quality and quantity of services provided to credit unions and financial and operational risks for credit unions. We maintain that examination authority over third-party service providers would enhance NCUA’s ability to monitor and address third-party risks and ensure the safety and soundness of credit unions.
We are sending copies of this report to the appropriate congressional committees, the Acting Director of the Consumer Financial Protection Bureau, Acting Chairman of the Commodity Futures Trading Commission, Acting Chairman of the Federal Deposit Insurance Corporation, Chair of the Board of Governors of the Federal Reserve System, Chairman of the National Credit Union Administration, Acting Comptroller of the Currency, Chairman of the Securities and Exchange Commission, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov.
If you or your staff have any questions about this report, please contact me at clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Michael E. Clements
Director, Financial Markets and Community Investment
List of Committees
The Honorable John Thune
Majority Leader
The Honorable Charles E. Schumer
Minority Leader
United States Senate
The Honorable Mike Johnson
Speaker
House of Representatives
The Honorable Steve Scalise
Majority Leader
The Honorable Hakeem Jeffries
Minority Leader
House of Representatives
The Honorable John Boozman
Chairman
The Honorable Amy Klobuchar
Ranking Member
Committee on Agriculture, Nutrition, and Forestry
United States Senate
The Honorable Tim Scott
Chairman
The Honorable Elizabeth Warren
Ranking Member
Committee on Banking, Housing, and Urban Affairs
United States Senate
The Honorable Ted Cruz
Chairman
The Honorable Maria Cantwell
Ranking Member
Committee on Commerce, Science, and Transportation
United States Senate
The Honorable Bill Hagerty
Chairman
The Honorable Jack Reed
Ranking Member
Subcommittee on Financial Services and General Government
Committee on Appropriations
United States Senate
The Honorable Glenn “GT” Thompson
Chairman
The Honorable Angie Craig
Ranking Member
Committee on Agriculture
House of Representatives
The Honorable Brett Guthrie
Chair
The Honorable Frank Pallone, Jr.
Ranking Member
Committee on Energy and Commerce
House of Representatives
The Honorable French Hill
Chairman
The Honorable Maxine Waters
Ranking Member
Committee on Financial Services
House of Representatives
The Honorable Dave Joyce
Chairman
The Honorable Steny Hoyer
Ranking Member
Subcommittee on Financial Services and General Government
Committee on Appropriations
House of Representatives
This report examines (1) the potential benefits and risks of artificial intelligence (AI) use in financial services, (2) federal financial regulators’ oversight of AI use in the financial services industry, and (3) federal financial regulators’ use of AI in their supervisory and market oversight activities. We focused our review on the use of AI by financial institutions and regulators and oversight of AI use in banking and in the securities and derivatives markets.[101] We did not include state or local laws and regulations in our review.
For our first objective, we analyzed reports, studies, and speeches that address the benefits and risks of AI from federal agencies, international nongovernment organizations, foreign regulators, industry groups, and research and academic institutions. We identified the reports and studies by conducting internet research, reviewing literature search results, and obtaining recommendations from entities we spoke with.
We also interviewed representatives of seven federal financial regulators; six industry groups representing banks, credit unions, financial technology companies, and securities and derivatives market participants; three consumer and investor advocacy organizations; five research and consulting groups; four depository institutions; and three technology providers.[102] We identified potential interviewees by conducting internet research, reviewing literature search results, and obtaining recommendations from our initial interviews. We selected interviewees that could provide perspectives on AI in banking or in the securities or derivatives markets. The information gathered from our interviews cannot be generalized to all companies that develop or use AI in financial services.
To identify common AI uses in financial services, we used a snowball sampling approach in which we identified AI uses as we collected documents and conducted interviews. We limited our review of documents to those published in 2020 or later, given the rapid pace of the development of AI. Upon identifying 168 AI uses in 25 sources, we determined that we had a sufficient number of sources to conclude our snowball sampling exercise. To categorize the AI uses, two analysts independently developed preliminary categories based on a review of the identified uses. We developed final categories by comparing and refining the preliminary categories.
To identify the potential benefits and risks of using AI in financial services, we analyzed the reports, studies, and speeches we collected and interviews we conducted. To categorize the benefits and risks, two analysts independently developed preliminary categories based on their review of the collected evidence. We developed final categories by comparing and refining the preliminary categories.
For our second objective, we reviewed federal laws, executive orders, regulations, guidance, and other agency documentation relevant to the oversight of financial institutions’ use of AI. Agency guidance and documentation included interagency guidance on third-party relationships, agency model risk management guidance, and internal examination guides, among others. We also interviewed officials from the seven federal financial regulators and representatives of the industry stakeholders mentioned above to gather additional insights.
We reviewed the model risk management guidance of the prudential regulators—the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union Administration (NCUA), and Office of the Comptroller of the Currency (OCC).[103] We also assessed the guidance’s general alignment with leading practices in the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.[104]
We limited our review to model risk management guidance documents because the prudential regulators identified them as key oversight guidance for reviewing regulated entities’ AI use. A review of other regulators’ model risk management guidance was outside the scope of this report.
The NIST AI Risk Management Framework describes leading practices that can foster responsible design, development, deployment, and use of AI systems in general. It consists of four functions—Govern, Map, Measure, and Manage—that include a total of 19 leading practice categories. These categories consist of 72 subcategories. Furthermore, the subcategories typically include several discrete components.
To compare the regulators’ guidance against the NIST framework, we developed yes/no questions for each leading practice described in the framework and categorized the leading practices as (1) “yes” if the guidance reflected all of the elements of a category or subcategory, (2) “partial” if some but not all of the elements of a category or subcategory were present, or (3) “no” if the elements of a category or subcategory were not reflected in the guidance. One analyst scored the model risk management guidance for each of the banking agencies. A second analyst then independently reviewed the first analyst’s scores. The analysts discussed any differences of opinion to determine a final score.
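The scoring rule lends itself to a compact restatement. The sketch below encodes the decision rule described above; the boolean-list representation of a category’s elements is an illustrative assumption, since the analysts applied the rule by hand and reconciled any differences.

```python
# Compact statement of GAO's scoring rule for comparing guidance against
# NIST framework categories. The boolean-list representation of a
# category's elements is an illustrative assumption; analysts applied
# the rule manually and reconciled differences.

def score_category(elements_present: list[bool]) -> str:
    """'yes' if the guidance reflects all elements of a category or
    subcategory, 'partial' if some but not all, 'no' if none."""
    if not elements_present or not any(elements_present):
        return "no"
    return "yes" if all(elements_present) else "partial"

# Example: a subcategory with three discrete components, two reflected
print(score_category([True, True, False]))  # -> "partial"
```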
We could not compare NCUA’s model risk management guidance against the NIST framework because NCUA’s guidance lacked sufficient scope and detail to make a meaningful comparison. Instead, we assessed NCUA’s model risk management guidance against relevant goals and objectives described in NCUA’s 2022–2026 strategic plan.[105]
To describe steps federal financial regulators are taking to prepare for increased AI adoption in the financial services industry, we reviewed documentation of their AI-related initiatives, including training efforts, and establishment of internal AI-related entities.
For the third objective, we reviewed federal laws, executive orders, guidance, agency reports, strategic plans, policies and procedures, and other agency documents related to how federal financial regulators are using AI in their supervisory and market oversight activities. Additionally, we interviewed officials from the seven federal financial regulators.
To identify the number of activities for which the seven financial regulators use or plan to use AI, we reviewed their inventories of AI uses.[106] Because OCC had not published its 2024 inventory by the time our audit work was completed, we used information provided by OCC officials on the agency’s supervisory and market oversight AI uses as of December 2024. As a result, the number of uses we report may not represent OCC’s entire inventory of AI uses.
We conducted this performance audit from November 2023 to May 2025 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Table: Examples of Existing Laws, Regulations, and Guidelines That May Help Address Risks of AI Use in the Financial Services Industry

| AI risk | AI risk description | Laws, regulations, and guidelines |
| --- | --- | --- |
| False or misleading information | AI models may produce false or misleading information about financial products and services, potentially harming consumers or investors. | The Consumer Financial Protection Act of 2010 prohibits unfair, deceptive, or abusive acts or practices in connection with the offering or provision of consumer financial products and services.a |
| Privacy and cybersecurity | AI may expose consumers’ private information by unmasking anonymized data or leaking sensitive data directly or by inference. It can also increase cyber threats, introducing vulnerabilities that allow attackers to evade detection or manipulate AI decisions. | The Gramm-Leach-Bliley Act, as implemented through agencies’ regulations and guidelines, restricts financial institutions—such as banks, credit unions, broker-dealers, and investment advisers—from disclosing a consumer’s nonpublic personal information to unaffiliated third parties and requires institutions to implement administrative, technical, and physical safeguards to ensure the security of customer information.b |
| Safety and soundness | AI models can pose safety and soundness risks for financial institutions. For example, flawed models or poor data quality could produce inaccurate results and financial losses. | The Federal Deposit Insurance Act, as implemented through agencies’ regulations and guidelines, requires regulated banking organizations to operate in a safe and sound manner.c Federal banking regulators have established safety and soundness standards in areas such as internal control, information systems, loan documentation, credit underwriting, interest rate exposure, asset growth and quality, and earnings and compensation. |
| Market conduct | AI-driven trading models could contribute to inappropriate trading behavior or market disruptions. | SEC’s Market Access Rule is designed to ensure that broker-dealers that have or provide market access appropriately control related risks so as not to jeopardize their own financial condition, that of other market participants, the integrity of trading on the securities markets, and the stability of the financial system.d |
Source: GAO analysis of relevant laws, regulations, and agency statements. | GAO‑25‑107197
This table presents examples of existing laws, regulations, and guidelines that are not specific to AI but that may help address the risks of AI use in the financial services industry. This table is not an exhaustive list.
a12 U.S.C. §§ 5531, 5536(a)(1)(B). Section 5(a) of the Federal Trade Commission Act also prohibits unfair or deceptive acts or practices in or affecting commerce. 15 U.S.C. § 45(a).
bSee, e.g., 15 U.S.C. §§ 6801, 6802, 6805; 12 C.F.R. app. B to pt. 30, app. D-2 to pt. 208, app. F to pt. 225, app. B to pt. 364, app. A to pt. 748, pt. 1016; 17 C.F.R. subpt. A to pt. 248.
cSee, e.g., 12 U.S.C. §§ 1818, 1831p-1; 12 C.F.R. app. A to pt. 30, pt. 263, app. A to pt. 364. The Federal Credit Union Act regulates the safety and soundness of credit unions. See, e.g., 12 U.S.C. § 1786.
dSecurities and Exchange Commission, Risk Management Controls for Brokers or Dealers With Market Access, Final Rule, 75 Fed. Reg. 69,792 (Nov. 15, 2010) (codified as amended at 17 C.F.R. § 240.15c3-5).
GAO Contact
Michael E. Clements, clementsm@gao.gov
Staff Acknowledgments
In addition to the contact named above, Christine McGinty (Assistant Director), Davis Judson (Analyst in Charge), Aaron Colsher, Marc Molino, Sarah D’Orso, David Raymond, Jennifer Schwartz, and Andrew Stavisky made significant contributions to this report.
The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
Obtaining Copies of GAO Reports and Testimony
The fastest and easiest way to obtain copies of GAO documents at no cost is through our website. Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. You can also subscribe to GAO’s email updates to receive notification of newly posted products.
Order by Phone
The price of each GAO publication reflects GAO’s actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO’s website, https://www.gao.gov/ordering.htm.
Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537.
Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information.
Connect with GAO
Connect with GAO on X, LinkedIn, Instagram, and YouTube.
Subscribe to our Email Updates. Listen to our Podcasts.
Visit GAO on the web at https://www.gao.gov.
To Report Fraud, Waste, and Abuse in Federal Programs
Contact FraudNet:
Website: https://www.gao.gov/about/what-gao-does/fraudnet
Automated answering system: (800) 424-5454
Media Relations
Sarah Kaczmarek, Managing Director, Media@gao.gov
Congressional Relations
A. Nicole Clowers, Managing Director, CongRel@gao.gov
General Inquiries
[1]ChatGPT is an AI application capable of generating text and media based on user prompts.
[2]For more information, see White House, Office of Science and Technology Policy, American Artificial Intelligence Initiative: Year One Annual Report (Washington, D.C.: Feb. 2020), and GAO, Artificial Intelligence: Status of Developing and Acquiring Capabilities for Weapon Systems, GAO‑22‑104765 (Washington, D.C.: Feb. 17, 2022).
[3]Pub. L. No. 112-10, § 1573(a), 125 Stat. 38, 138-39 (codified at 12 U.S.C. § 5496b).
[4]This report focuses on the use of AI by financial institutions and regulators and oversight of AI use in banking and in the securities and derivatives markets. This report does not address (i) the use of AI by financial institutions in the housing and mortgage markets, which is the subject of separate ongoing work by GAO, and (ii) the use of AI to commit fraud or other crimes within the financial services industry.
[5]The seven federal financial regulators are the Board of Governors of the Federal Reserve System, Commodity Futures Trading Commission, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, Office of the Comptroller of the Currency, and Securities and Exchange Commission.
[6]Beginning in January 2025, the President issued a series of Executive Orders and Memoranda in furtherance of government-wide deregulation. See, e.g., Executive Order 14219, Ensuring Lawful Governance and Implementing the President’s “Department of Government Efficiency” Deregulatory Initiative, 90 Fed. Reg. 10,583 (Feb. 25, 2025). These directives relate to federal regulatory actions, including rules, regulations and guidance. As a result, certain rules, regulations, and guidance discussed in this report may be subject to change.
[7]NIST is a Department of Commerce agency that promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology. The AI Risk Management Framework is a resource for organizations designing, developing, deploying, or using AI systems to help manage the risks of AI and promote trustworthy and responsible development and use of AI systems. National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (Jan. 2023).
[8]Federal agencies’ use of AI has been subject to numerous laws, executive orders, and Office of Management and Budget (OMB) memorandums in recent years. For example, the AI in Government Act of 2020, the Advancing American AI Act, and Executive Order 14110 directed OMB to issue AI-related guidance and requirements. Pub. L. No. 116-260, div. U, tit. I, § 104, 134 Stat. 2286, 2288-89 (codified at 40 U.S.C. § 11301 note); Pub. L. No. 117-263, div. G, tit. LXXII, subtit. B, §§ 7224-25, 136 Stat. 3668, 3669-72 (2022) (codified at 40 U.S.C. § 11301 note); Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Nov. 1, 2023). OMB issued memorandums in furtherance of these directives, including Advancing Governance, Innovation and Risk Management for Agency Use of Artificial Intelligence, M-24-10 (Mar. 28, 2024). In January 2025, the President revoked Executive Order 14110 and directed OMB to revise Memorandum M-24-10 to align with the administration’s stated policies. See Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, 90 Fed. Reg. 8,741 (Jan. 31, 2025). In April 2025, OMB published Memorandum M-25-21, which rescinds and replaces Memorandum M-24-10. Office of Management and Budget, Accelerating Federal Use of AI Through Innovation, Governance, and Public Trust, M-25-21 (Apr. 3, 2025).
[9]For example, the John S. McCain National Defense Authorization Act for Fiscal Year 2019 defines AI as including any artificial system (1) that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets; (2) that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; (3) designed to think or act like a human; or (4) designed to act rationally, among other things. Pub. L. No. 115-232, §§ 238(g), 1051(f), 132 Stat. 1636, 1697-98, 1965 (2018). The National Artificial Intelligence Initiative Act of 2020 defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Pub. L. No. 116-283, div. E, § 5002(3), 134 Stat. 4523, 4524 (2021) (codified at 15 U.S.C. § 9401(3)).
[10]In 2018, we reported on characteristics and types of AI and described one conceptualization of AI as having distinct waves of development. For example, the first wave of AI often encompassed expert or rules-based systems, whereby a computer was programmed based on expert knowledge or criteria and produced outputs consistent with its programming. Software programs that prepare taxes are examples of expert systems. The second and current wave of AI systems is based on machine learning and begins with data and infers rules or decision procedures to predict specified outcomes. Self-driving automated vehicles and popular generative AI platforms such as ChatGPT and Gemini are examples of machine learning systems. GAO, Technology Assessment: Artificial Intelligence: Emerging Opportunities, Challenges, and Implications, GAO‑18‑142SP (Washington, D.C.: Mar. 28, 2018).
[11]See Erik Brynjolfsson, Tom Mitchell, and Daniel Rock, “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?,” AEA Papers and Proceedings, vol. 108 (May 1, 2018), pp. 43–47, cited in Congressional Research Service, Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress, R47644 (Aug. 4, 2023).
[12]GAO, Artificial Intelligence: Generative AI Technologies and Their Commercial Applications, GAO‑24‑106946 (Washington, D.C.: June 20, 2024).
[13]NIST’s framework consists of four functions that developers and users of AI can follow to manage risk: (1) Govern: a culture of risk management is cultivated and present; (2) Map: context is recognized and risks related to context are identified; (3) Measure: identified risks are assessed, analyzed, and tracked; and (4) Manage: risks are prioritized and acted upon based on projected impact. Each function has multiple categories and subcategories that describe specific practices. National Institute of Standards and Technology, Risk Management Framework.
[14]An AI model refers to an algorithm “trained” on a set of data. In general, training involves iteratively feeding data (training data) through an optimization process to improve model performance.
[15]Department of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (Mar. 2024).
[16]See for example, Congressional Research Service, Artificial Intelligence and Machine Learning in Financial Services, R47997 (Apr. 3, 2024). Robo-advisers refer to electronic platforms that provide automated investment advisory services to customers pursuant to computer algorithms developed by the platform sponsors, according to the North American Securities Administrators Association. In effect, robo-advisers replace financial services professionals with computer algorithms. See “Robo-advisers,” North American Securities Administrators Association, accessed Oct. 23, 2024, https://www.nasaa.org/investor-education/young-adult-money-mission/robo-advisers/.
[17]For example, see Financial Stability Board, The Financial Stability Implications of Artificial Intelligence (Nov. 14, 2024).
[18]For example, see Congressional Research Service, Artificial Intelligence and Machine Learning, and Global Financial Innovation Network, Key Insights on the Use of Consumer-Facing AI in Global Financial Services (Jan. 2025).
[19]For example, see Artificial Intelligence Public-Private Forum, Final Report (Feb. 2022). The Bank of England and the U.K. Financial Conduct Authority established the AI Public-Private Forum to further the dialogue between the public sector, the private sector, and academia on AI. Synthetic identity fraud is a crime in which perpetrators combine real and fictitious information, such as Social Security numbers and names, to create identities with which they may defraud financial institutions, government agencies, or individuals. See GAO, Highlights of a Forum: Combating Synthetic Identify Fraud, GAO‑17‑708SP (Washington, D.C.: July 26, 2017).
[20]International Monetary Fund, Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance (Oct. 2021).
[21]For example, AI credit card models can result in up to 10 percent greater predictive power compared with logistic regression models. Artificial Intelligence Public-Private Forum, Final Report.
[22]“Chatbot Conversations to Deliver $8 Billion in Cost Savings by 2022,” Juniper Research, July 24, 2017, archived Mar. 23, 2023, at https://web.archive.org/web/20230323114100/https://www.juniperresearch.com/resources/analystxpress/july-2017/chatbot-conversations-to-deliver-8bn-cost-saving.
[23]Bank of America, “B of A’s Erica Surpasses 2 Billion Interactions, Helping 42 Million Clients Since Launch,” news release, Apr. 8, 2024.
[24]Organisation for Economic Cooperation and Development, Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges and Implications for Policy Makers (2021).
[25]International Monetary Fund, Powering the Digital Economy.
[26]Financial Industry Regulatory Authority, Artificial Intelligence (AI) in the Securities Industry (June 2020).
[27]For example, see Organisation for Economic Cooperation and Development, Artificial Intelligence.
[28]While we organize these risks according to the party they primarily affect, they may affect multiple parties depending on the circumstances. For example, biases in AI models or data could pose both fair lending risk to consumers and performance and compliance risks to financial institutions.
[29]Generative AI models may produce “hallucinations”—credible yet erroneous responses—particularly when users request information that was not in the model’s training data. GAO, Science and Tech Spotlight: Generative AI, GAO‑23‑106782 (Washington, D.C.: June 2023).
[30]Protected classes are groups of people that are protected from discrimination under federal law. For example, the Equal Credit Opportunity Act, as amended, prohibits creditors from discriminating against applicants in a credit transaction on the basis of certain protected characteristics, such as race, color, religion, national origin, and sex. 15 U.S.C. § 1691(a)(1). See also 12 C.F.R. § 1002.4(a).
[31]Melissa Koide, CEO, FinRegLab, Artificial Intelligence in Financial Services, testimony before the Senate Committee on Banking, Housing, and Urban Affairs, 118th Cong., Sept. 20, 2023. A neutral policy or practice that has a disproportionately adverse effect on a protected class is said to have a disparate impact. In the context of AI, disparate impact may occur when an AI model uses a variable that is not a direct measure of a protected class, like race or sex, but still leads to disparate outcomes for certain groups. This can arise from using seemingly neutral data that correlate with those attributes, increasing the risk of bias in the system’s predictions and decisions. The Supreme Court has recognized disparate impact claims under certain federal laws. E.g., Tex. Dep't of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 576 U.S. 519 (2015) (holding that disparate-impact claims were cognizable under the Fair Housing Act). In April 2025, the President issued an Executive Order establishing a policy to eliminate the use of disparate impact liability in all contexts to the maximum degree possible. Executive Order 14281, Restoring Equality of Opportunity and Meritocracy, 90 Fed. Reg. 17,537 (Apr. 28, 2025).
[32]Financial Stability Oversight Council, Annual Report 2023 (Dec. 14, 2023).
[33]For example, see Congressional Research Service, Artificial Intelligence and Machine Learning.
[34]See Artificial Intelligence/Machine Learning Risk & Security Working Group, Artificial Intelligence Risk and Governance (Jan. 11, 2023); Congressional Research Service, Artificial Intelligence and Machine Learning; and International Monetary Fund, Powering the Digital Economy.
[35]Financial Industry Regulatory Authority, Artificial Intelligence.
[36]See International Organization of Securities Commissions, The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers (Sept. 2021).
[37]Commodity Futures Trading Commission, “Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets” (Jan. 25, 2024) (citing Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, Special Publication 1270 (National Institute of Standards and Technology, Mar. 2022), section 3.1).
[38]Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency, “Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning” (Mar. 31, 2021).
[39]In one example unrelated to financial services, a Canadian civil resolution tribunal found Air Canada liable for inaccurate information provided by its automated chatbot, according to literature we reviewed. The tribunal found that a passenger relied on the information—which was related to bereavement fares—and suffered damages as a result. Barry Sookman, McCarthy Tétrault LLP, “Moffatt v. Air Canada: A Misrepresentation by an AI Chatbot” (Canada: Feb. 19, 2024), available at https://www.lexology.com/library/detail.aspx?g=2b5e5902-5a23-4ed4-91b1-b45e494f1a11, accessed Feb. 21, 2025.
[40]Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency, “Request for Information and Comment” (Mar. 31, 2021).
[41]Novel threats include data poisoning, input attacks, and model inversion attacks. Data poisoning attacks intend to influence an AI model during the training stage by adding special samples to its training data set. These attacks cause the AI to incorrectly learn to classify or recognize information. Data poisoning also may be used to create Trojan models, which hide malicious actions that wait for special inputs to be activated. Input attacks allow attackers to introduce small changes to data inputs and mislead AI systems during operations. For example, attackers could alter images with elements unperceivable to human vision, but which provoke AI image recognition systems to mislabel the images. Model inversion attacks attempt to recover training input and data, or the model itself. International Monetary Fund, Powering the Digital Economy.
[42]Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency, “Request for Information and Comment” (Mar. 31, 2021).
[43]See Financial Stability Oversight Council, Annual Report 2023; International Monetary Fund, Powering the Digital Economy; and Melissa Koide, Artificial Intelligence in Financial Services.
[44]Financial Stability Board, Financial Stability Implications.
[45]See Artificial Intelligence/Machine Learning Risk & Security Working Group, Artificial Intelligence; Artificial Intelligence Public-Private Forum, Final Report; and Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency, “Request for Information and Comment” (Mar. 31, 2021).
[46]Financial Stability Oversight Council, Annual Report 2023.
[47]For example, they mentioned that one technique to limit hallucination risk is using a second generative AI model to verify the output of the first. Another technique is to limit the number of sources a generative AI model is trained from so responses are focused on a specific domain.
[48]Financial Stability Oversight Council, Annual Report 2024 (Dec. 6, 2024).
[49]Commodity Futures Trading Commission, “Request for Comment” (Jan. 25, 2024) (citing Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services (Nov. 1, 2017), 32–34).
[50]Takanobu Mizuta, “Can an AI Perform Market Manipulation at Its Own Discretion? A Genetic Algorithm Learns in an Artificial Market Simulation,” 2020 IEEE Symposium Series on Computational Intelligence (Dec. 2020).
[51]For example, see Congressional Research Service, Artificial Intelligence and Machine Learning; Financial Stability Board, Financial Stability Implications; International Monetary Fund, Powering the Digital Economy; and Organisation for Economic Cooperation and Development and Financial Stability Board, OECD–FSB Roundtable on Artificial Intelligence in Finance: Summary of Key Findings (Sept. 2024).
[52]Financial Stability Board, Financial Stability Implications.
[53]See European Central Bank, Financial Stability Review (May 2024); Financial Stability Board, Financial Stability Implications; Inaki Aldasoro, Leonardo Gambacorta, Anton Korinek, Vatsala Shreeti, and Merlin Stein, “Intelligent Financial System: How AI Is Transforming Finance,” BIS Working Papers No. 1194 (Bank for International Settlements, June 2024); International Monetary Fund, Powering the Digital Economy; and Organisation for Economic Cooperation and Development, Artificial Intelligence.
[54]International Organization of Securities Commissions, Artificial Intelligence in Capital Markets: Use Cases, Risks, and Challenges (Mar. 2025).
[55]See also CFTC Staff Advisory No. 24-17, which states that CFTC-regulated entities must maintain compliance with applicable statutory and regulatory requirements whether they choose to deploy AI or any other technology. Commodity Futures Trading Commission, Use of Artificial Intelligence in CFTC-Regulated Markets, Staff Advisory No. 24-17 (Dec. 5, 2024).
[56]See Congressional Research Service, Artificial Intelligence and Machine Learning.
[57]Unlike a law or regulation, supervisory guidance does not have the force and effect of law, and regulators do not take enforcement actions based on supervisory guidance. Rather, supervisory guidance outlines regulators’ supervisory expectations or priorities and articulates general views regarding appropriate practices for a given subject area. 12 C.F.R. app. A to subpt. F of pt. 4, app. A to pt. 262, app. A to pt. 302, app. A to subpt. D of pt. 791, app. A to pt. 1074.
[58]Federal consumer financial laws impose various adverse action notification requirements on creditors and other parties. For example, creditors are required to provide credit applicants with the specific reasons for taking certain adverse actions, such as denying a credit application or making an unfavorable change to the terms of an account. See, e.g., 15 U.S.C. § 1691(d); 12 C.F.R. § 1002.9.
[59]Financial Stability Board, Financial Stability Implications.
[60]In some cases, regulators have already begun to issue AI-specific guidance, which we discuss later.
[61]Commodity Futures Trading Commission, Staff Advisory No. 24-17.
[62]SEC and CFTC examine market participants alongside self-regulatory organizations, which are nongovernmental entities that generally create and enforce industry regulations and standards. A review of how self-regulatory organizations oversee financial institutions’ AI use was outside the scope of this report.
[63]CFTC-regulated entities include designated contract markets, derivatives clearing organizations, swap execution facilities, and swap data repositories.
[64]OCC uses matters requiring attention to communicate concerns about a bank’s deficient practices. A deficient practice is a practice, or lack of practices, that deviates from sound governance, internal control, or risk management principles and has the potential to adversely affect the bank’s condition, including financial performance or risk profile, if not addressed, or that results in substantive noncompliance with laws or regulations, enforcement actions, or conditions imposed in writing in connection with the approval of any applications or other requests by the bank.
[65]These actions consist of consent orders issued in administrative proceedings and available at https://www.consumerfinance.gov/enforcement/actions, accessed Apr. 29, 2025.
[66]Documents of resolution and regional director letters are two types of informal administrative actions. According to NCUA, the documents of resolution section of NCUA examination reports is the equivalent of matters requiring immediate attention used by the banking regulators. National Credit Union Administration, Role of Supervisory Guidance, Final Rule, 86 Fed. Reg. 7,949, 7,951 (Feb. 3, 2021). Regional director letters are issued to credit unions that have serious or persistent problems that are not being resolved through field supervision alone.
[67]These actions consist of complaints filed in federal court and administrative orders, available at https://www.sec.gov/enforcement-litigation, accessed Apr. 30, 2025.
[68]Consumer Financial Protection Bureau, Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms, Consumer Financial Protection Circular 2022-03 (Washington D.C.: May 26, 2022), and Adverse Action Notification Requirements and the Proper Use of the CFPB’s Sample Forms Provided in Regulation B, Consumer Financial Protection Circular 2023-03 (Washington D.C.: Sept. 19, 2023). See also Consumer Financial Protection Bureau, Limited Applicability of Consumer Financial Protection Act’s “Time or Space” Exception with Respect to Digital Marketing Providers, Interpretive Rule (Aug. 10, 2022). In May 2025, CFPB withdrew the above circulars and interpretive rule. Consumer Financial Protection Bureau, Interpretive Rules, Policy Statements, and Advisory Opinions; Withdrawal, 90 Fed. Reg. 20,084 (May 12, 2025).
[69]Commodity Futures Trading Commission, Staff Advisory No. 24-17.
[70]In addition, the Federal Reserve and OCC have created internal documents to further assist examiners during examinations. These documents provide additional resources to support identifying and assessing banks’ use of AI, such as an overview of effective AI risk management practices.
[71]OCC stated that the banks provided a rationale to support lower risk ratings. However, OCC recommended evaluating the model tiering methodology to also consider reputational, compliance, operational, and other risks when determining model risk.
[72]Consumer Financial Protection Bureau, Chatbots in Consumer Finance (Washington, D.C.: June 6, 2023).
[73]Securities and Exchange Commission, the North American Securities Administrators Association, and the Financial Industry Regulatory Authority, “Artificial Intelligence and Investment Fraud” (Washington, D.C.: Jan. 25, 2024), https://www.finra.org/investors/insights/artificial-intelligence-and-investment-fraud.
[74]Commodity Futures Trading Commission, “CFTC Customer Advisory Cautions the Public to Beware of Artificial Intelligence Scams,” Release No. 8854-24, Jan. 25, 2024.
[75]Board of Governors of the Federal Reserve System, Supervisory Guidance on Model Risk Management, Supervision and Regulation Letter 11-7 (Washington D.C.: Apr. 4, 2011); Office of the Comptroller of the Currency, Sound Practices for Model Risk Management: Supervisory Guidance on Model Risk Management, Bulletin 2011-12 (Washington D.C.: Apr. 4, 2011); and Federal Deposit Insurance Corporation, Adoption of Supervisory Guidance on Model Risk Management, Financial Institution Letter-22-2017 (Washington D.C.: June 7, 2017). FDIC, the Federal Reserve, and OCC incorporate their model risk management supervisory guidance into their examination manuals for use by their examiners during examinations. See Board of Governors of Federal Reserve System, Commercial Bank Examination Manual (Washington, D.C.: Nov. 28, 2023); Federal Deposit Insurance Corporation, Risk Management Manual of Examination Policies (Washington, D.C.: Oct. 28, 2024); and Office of the Comptroller of the Currency, Comptroller’s Handbook: Safety and Soundness: Model Risk Management, version 1.0 (Washington, D.C.: Aug. 2021).
[76]National Institute of Standards and Technology, Risk Management Framework. For additional detail on our assessment of the banking regulators’ model risk management guidance, see app. I.
[77]National Credit Union Administration, Examiner’s Guide (Alexandria, VA: Oct. 11, 2016).
[78]In addition to supervisory guidance issued by banking regulators and NCUA, the regulators make their examination manuals publicly available. These documents offer information about the examination and supervision process that regulated entities may find useful.
[79]National Credit Union Administration, “Implementing Section 704.21—Enterprise Risk Management,” Letter No. 2013-2 (Jul. 2013). NCUA officials also identified the final rule on Quality Control Standards for Automated Valuation Models that it issued in conjunction with five other regulators as relevant to model risk management. We did not review the final rule because it was out of scope for this report, as it is related to the use of automated valuation models in the mortgage market. See Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, Federal Housing Finance Agency, National Credit Union Administration, and Office of the Comptroller of the Currency, Quality Control Standards for Automated Valuation Models, 89 Fed. Reg. 64,538 (Aug. 7, 2024).
[80]National Credit Union Administration, NCUA Strategic Plan 2022–2026 (Alexandria, VA: 2022). See goal 1, objectives 1.4 and 1.5.
[81]Third-party risk is the possibility that a business will be negatively affected by a third party’s actions or inactions. Negative effects can include financial loss, a data breach, operational disruption, or reputational damage. Third-party risk management is a process that helps organizations identify, analyze, and reduce risks associated with working with third parties.
[82]Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, and Office of the Comptroller of the Currency, Interagency Guidance on Third-Party Relationships: Risk Management, 88 Fed. Reg. 37,920 (June 9, 2023). In addition, FDIC, the Federal Reserve, and OCC published guidance for community banks engaging with third parties. The agencies issued this joint guidance to promote consistency in supervisory approaches; the guidance replaced each agency's existing general guidance on the topic. Id. See also Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, and Office of the Comptroller of the Currency, Third-Party Risk Management: A Guide for Community Banks (Washington, D.C.: May 2024).
[83]See, e.g., 12 U.S.C. § 1867(c).
[84]Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, and Office of the Comptroller of the Currency, Interagency Guidance on Third-Party Relationships: Risk Management, 88 Fed. Reg. at 37,936-37,937 (June 9, 2023).
[85]When evaluating third-party arrangements, NCUA focuses on ensuring that credit unions conduct third-party risk assessment and planning, due diligence, and risk measurement, monitoring, and control in a manner that is commensurate with the credit union’s size, complexity, and risk profile.
[86]See Pub. L. No. 73-467, 48 Stat. 1216 (1934) (codified as amended at 12 U.S.C. §§ 1751-1795k). NCUA previously had third-party oversight authority but it expired on Dec. 31, 2001. See Examination Parity and Year 2000 Readiness for Financial Institutions Act, Pub. L. No. 105-164, § 3(b), 112 Stat. 32, 35-36 (1998).
[87]National Credit Union Administration, Third-Party Vendor Authority (Alexandria, VA: Mar. 2022).
[88]GAO‑15‑509. We had previously recommended that Congress consider granting NCUA authority to examine third-party service providers that provide services to credit unions in 2003. See GAO, Credit Unions: Financial Condition Has Improved, but Opportunities Exist to Enhance Oversight and Share Insurance Management, GAO‑04‑91 (Washington, D.C.: Oct. 27, 2003).
[89]SEC officials noted that federal agencies have also leveraged government-wide AI trainings, such as those led by the General Services Administration’s AI Community of Practice.
[90]We previously reported that opportunities exist for agencies to improve their workforce planning processes associated with financial technology, such as AI. In September 2023, we recommended that CFPB, FDIC, the Federal Reserve, NCUA, and OCC take steps to fully incorporate leading workforce planning practices in their offices involved in policymaking and oversight related to financial technology. NCUA agreed with the recommendations. CFPB, FDIC, the Federal Reserve, and OCC did not agree or disagree with the recommendations but indicated they would take actions to implement them. As of June 2024, the recommendations had not been addressed. See GAO, Financial Technology: Agencies Can Better Support Workforce Expertise and Measure the Performance of Innovation Offices, GAO‑23‑106168 (Washington, D.C.: Sept. 6, 2023). Additionally, in December 2023, we recommended that SEC prepare a new workforce planning strategy that is aligned with the agency’s 2022–2026 strategic and performance plans. SEC implemented the recommendation. See GAO, Financial Technology: SEC Should Prepare a Workforce Plan, Document Oversight Controls, and Set Goals for Innovation Office, GAO‑24‑106635 (Washington, D.C.: Dec. 15, 2023).
[91]Leadership of the Federal Reserve, CFPB, CFTC, FDIC, NCUA, OCC, and SEC are all voting members of FSOC. FSOC’s 2023 and 2024 annual reports provided an overview of AI use in financial services and made AI-related recommendations. For example, the 2024 annual report recommended that FSOC member agencies continue to monitor the rapid development of the usage of AI technologies in financial services to ensure policies are updated to address emerging risks to the financial system while facilitating efficiency. The report also supported interagency development of expertise to analyze and monitor potential systemic risks associated with the use of AI in the financial services sector and engagement with international counterparts on the risks and benefits of AI in financial services. See Financial Stability Oversight Council, Annual Report 2023 and Annual Report 2024.
[92]See Consumer Financial Protection Bureau, Data Enforcers Convening: Generating Harms: What to do when Generative AI has Harmful Impacts (Washington, D.C.: Oct. 2024), available at https://www.consumerfinance.gov/about-us/events/archive-past-events/data-enforcers-convening-generating-harms-what-to-do-when-generative-ai-has-harmful-impacts, accessed Apr. 29, 2025.
[93]Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency, “Request for Information and Comment” (Mar. 31, 2021); Commodity Futures Trading Commission, “Request for Comment” (Jan. 25, 2024); and Department of the Treasury, “Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector” (June 6, 2024).
[94]Office of the Comptroller of the Currency, OCC Solicits Research on Artificial Intelligence in Banking and Finance (Washington, D.C.: Oct. 7, 2024).
[95]In furtherance of the Advancing American AI Act and other directives, OMB required agencies to report and publish inventories of their AI uses annually, starting in December 2024. Pub. L. No. 117-263, § 7225, 136 Stat. 2395, 3672-3673. Specific requirements were set forth in OMB Memorandum M-24-10 and related reporting instructions. As previously discussed, the President directed OMB to revise Memorandum M-24-10 to align with the administration’s stated policies. In April 2025, OMB published Memorandum M-25-21, which rescinds and replaces Memorandum M-24-10.
[96]In 2014, NCUA issued a final rule on capital planning and stress testing that requires credit unions with $15 billion or more in assets to complete annual self-run supervisory stress tests according to NCUA’s instructions. NCUA, Capital Planning and Stress Testing, Final Rule, 79 Fed. Reg. 24,311 (Apr. 30, 2014) (codified as amended and redesignated at 12 C.F.R. pt. 702, subpt. C). NCUA also conducts its own supervisory stress tests on those credit unions.
[97]For example, credit unions with $20 billion or more in assets must demonstrate the ability to maintain a stress test capital ratio of 5 percent or more. See 12 C.F.R. § 702.306(f).
[98]U.S. House of Representatives, Bipartisan House Task Force Report on Artificial Intelligence (Washington, D.C.: Dec. 2024).
[99]We did not assess whether regulators’ policies and procedures for the general use of technology are sufficient for the use of AI. We previously developed an AI accountability framework to help agencies and other entities develop policies and procedures for AI use. This framework helps ensure accountability and responsible use of AI by federal agencies and other entities involved in the design, development, deployment, and continuous monitoring of AI systems. The framework is organized around four complementary principles, which address governance, data, performance, and monitoring. For each principle, the framework describes key practices for federal agencies and other entities that are considering, selecting, and implementing AI systems. GAO, Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities, GAO‑21‑519SP (Washington, D.C.: June 30, 2021).
[100]Consumer Financial Protection Bureau, Compliance Plan for OMB Memorandum M-24-10 (Washington, D.C.: Sept. 2024).
[101]This report does not address (1) the use of AI by financial institutions in the housing and mortgage markets, which is the subject of separate ongoing work by GAO, and (2) the use of AI to commit fraud or other crimes within the financial services industry.
[102]The seven federal financial regulators are the Board of Governors of the Federal Reserve System, Commodity Futures Trading Commission, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, Office of the Comptroller of the Currency, and Securities and Exchange Commission. The six industry groups are the American Bankers Association, America’s Credit Unions, Financial Technology Association, Futures Industry Association, Independent Community Bankers of America, and Securities Industry and Financial Markets Association. The three advocacy organizations are Better Markets, National Consumer Law Center, and Public Citizen. The five research and consulting groups are the Alliance for Innovative Regulation, Brookings Institution, Cato Institute, Cornerstone Advisors, and FinRegLab. One of the technology providers opted to respond to our questions in writing in lieu of participating in an interview.
[103]Board of Governors of the Federal Reserve System, Supervisory Guidance on Model Risk Management, Supervision and Regulation Letter 11-7 (Washington, D.C.: Apr. 4, 2011); Office of the Comptroller of the Currency, Sound Practices for Model Risk Management: Supervisory Guidance on Model Risk Management, Bulletin 2011-12 (Washington, D.C.: Apr. 4, 2011); Federal Deposit Insurance Corporation, Adoption of Supervisory Guidance on Model Risk Management, Financial Institution Letter 22-2017 (Washington, D.C.: June 7, 2017); and National Credit Union Administration, Examiner’s Guide (Alexandria, VA: Oct. 11, 2016), and “Implementing Section 704.21—Enterprise Risk Management,” Apr. 2013. FDIC, the Federal Reserve, and OCC incorporate their model risk management supervisory guidance into their examination manuals for use by their examiners during examinations. See Board of Governors of the Federal Reserve System, Commercial Bank Examination Manual (Washington, D.C.: Nov. 28, 2023); Federal Deposit Insurance Corporation, Risk Management Manual of Examination Policies (Washington, D.C.: Oct. 28, 2024); and Office of the Comptroller of the Currency, Comptroller’s Handbook.
[104]National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (Jan. 2023).
[105]National Credit Union Administration, NCUA Strategic Plan 2022–2026 (Alexandria, VA: 2022). See goal 1, objectives 1.4 and 1.5.
[106]The Office of Management and Budget (OMB) required agencies to report and publish inventories of their AI uses annually, starting in December 2024. Specific requirements were set forth in OMB Memorandum M-24-10 and related reporting instructions. Office of Management and Budget, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, M-24-10 (Mar. 28, 2024). OMB was directed to issue Memorandum M-24-10 pursuant to provisions of the Advancing American AI Act, Executive Order No. 14110, and other authorities. Pub. L. No. 117-263, div. G, tit. LXXII, subtit. B, §§ 7224-25, 136 Stat. 3668, 3669-72 (2022) (codified at 40 U.S.C. § 11301 note); Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Nov. 1, 2023). In January 2025, the President revoked Executive Order 14110 and directed OMB to revise Memorandum M-24-10 to align with the administration’s stated policies. See, e.g., Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, 90 Fed. Reg. 8,741 (Jan. 25, 2025). In April 2025, OMB published Memorandum M-25-21, which rescinds and replaces Memorandum M-24-10. Office of Management and Budget, Accelerating Federal Use of AI Through Innovation, Governance, and Public Trust, M-25-21 (Apr. 3, 2025).