Opening Statement of Commissioner Christy Goldsmith Romero, Meeting of the CFTC Market Risk Advisory Committee
December 10, 2024
Remarks as Prepared for Delivery
I am pleased to welcome back the members of the Market Risk Advisory Committee (MRAC). I very much appreciate your service. I also welcome our distinguished speakers. I am so pleased to see Commissioner Johnson leading today’s meeting and want to thank her for bringing us together and for her sponsorship of MRAC.
U.S. Treasury Markets
Today we welcome Treasury Assistant Secretary Josh Frost back to the CFTC for an update on Treasury’s work to build resilience in U.S. Treasury markets, which he first presented at a CFTC Global Markets Advisory Committee (GMAC) meeting. I am looking forward to his remarks, as well as to the discussion among industry and other MRAC members as they share their perspectives and views on Treasury markets and effective risk management practices.
As a longstanding markets regulator at the Securities and Exchange Commission (SEC) and the CFTC, and having served at Treasury for 12 years, I have direct experience with how Treasury markets are foundational to the U.S. financial system. Treasury markets are the deepest and most liquid markets in the world.
The U.S. Treasury futures markets, which the CFTC regulates, have had an average daily trading volume of $750 billion.[1] These markets are transparent and subject to central clearing. In the CFTC’s experience, central clearing has reduced systemic risk and improved transparency. During a time of stress in Treasury markets a few years ago, the Treasury futures market remained resilient. I have talked with Assistant Secretary Frost in the past about how the Treasury futures market can serve as a model for the broader Treasury markets. Treasury has advocated for an expansion of central clearing, and the SEC has now expanded central clearing in cash and repo Treasury transactions. I look forward to continuing the dialogue with industry on these markets and to continuing to explore how we as regulators can effectively oversee Treasury markets.
As the sponsor of the CFTC Technology Advisory Committee (TAC), I want to touch on two areas on today’s agenda that TAC covers: cybersecurity and AI.
Cyber Resilience
I believe that regulators are at their best when they are engaged in ongoing dialogue with registered entities and other stakeholders to understand their perspectives, their challenges, and what they are doing to meet those challenges. Over the last two years, I have had many conversations with banks, commodities producers, futures commission merchants, clearinghouses, and other registered entities to understand their perspectives on the challenge of cybersecurity and the steps they are taking. This dialogue, the ION Markets cyber attack, the TAC’s work on cyber resilience, and my conversations with all of my fellow Commissioners informed my work in leading the CFTC staff to draft the Commission’s first-ever proposal on operational resilience for swap dealers (including banks) and brokers.[2] That proposal, adopted unanimously by the Commission in a 5-0 vote, recognized the need for a framework to address third-party service provider cyber risk.
Today, we welcome a number of speakers to give their perspectives, including Treasury Deputy Assistant Secretary of Cyber and Chief AI Officer Todd Conklin. DAS Conklin is a member of TAC, has presented at TAC meetings on the ION Markets cyber attack and Treasury’s response, and participated in TAC’s exploration of how best to promote cyber resilience. He and I talk often about cyber resilience. I welcome him back to the CFTC, as well as the other speakers.
AI
I understand that DAS Conklin will update the Commission on Treasury’s efforts related to AI, which he discussed earlier at a TAC meeting. DAS Conklin is one of several well-respected AI experts who serve on TAC and who contributed to drafting the TAC’s 65-page report entitled “Responsible AI in Financial Markets: Opportunities, Risks, and Recommendations,” issued in May 2024.[3] My understanding is that Treasury’s work dovetails with several of the recommendations in the TAC report, and I look forward to hearing more about it.
The TAC report was a first-of-its-kind comprehensive report by a U.S. government entity on AI in financial markets. The report acknowledges that AI has been used for years in U.S. markets and that we are at the forefront of understanding how the evolution of AI, including generative AI, will be used in markets. It recognizes the opportunities that the evolution of AI presents, while balancing those opportunities against the need for risk management, particularly in areas like AI-enabled market manipulation, cyber attacks, and fraud.
I believe there are great opportunities in the evolution of AI to solve many of the world’s toughest problems. To realize those opportunities, it is critical for regulators to be engaged in dialogue with our registered entities about how they are using AI, how they are thinking about using generative AI, and how they are thinking about the concept known as Responsible AI. Responsible AI is generally understood to include five properties: privacy, explainability, transparency, fairness, and robustness. In my discussions with our registered entities, there is agreement that these are important properties, but entities note that they may face practical challenges in implementing them. I have focused on foundational best practices like governance, development of expertise, training, testing, and consideration of the role of humans.
TAC Recommendations for the CFTC in “Responsible AI in Financial Markets: Opportunities, Risks, and Recommendations”
The TAC took a foundational, iterative approach in its findings and recommendations to the CFTC. In the report, the TAC provides a list of potential use cases for generative AI and addresses the need for risk management and for responsible design and deployment of AI. The recommendations in the TAC report are based on high-level principles rather than prescriptive rules:
(1) Engage in regulatory dialogue and outreach with CFTC-regulated entities to inform the CFTC about key technical and policy considerations for AI in financial markets, develop common understanding and frameworks, build on the TAC’s report, and establish relationships;
(2) Consider NIST’s AI Risk Management Framework, similar to how the CFTC employs NIST’s Cybersecurity Framework for its cyber requirements;
(3) Develop an inventory of existing regulations that already apply to AI and conduct a gap analysis, recognizing that existing regulations require registrants to manage risks but may need to be clarified or amended;
(4) Strive to align any CFTC policies with those of other federal agencies; and
(5) Engage in ongoing AI dialogue and develop CFTC technical expertise.
The CFTC is proceeding to implement some of these recommendations. I have been, and will continue to be, engaged in dialogue with our registered entities to understand their perspectives, challenges, and efforts.
TAC Findings for the CFTC on AI in the Report[4]
The CFTC should pursue appropriate industry engagement and consider relevant guardrails to ensure that potential vulnerabilities from using AI applications and tools do not erode public trust in financial markets, services, and products.
AI has been used in impactful ways in the financial industry for more than two decades. AI represents a potentially valuable tool to improve automated processes governing core functions and to improve efficiency, for example in risk management, surveillance, fraud detection, back-testing of trading strategies, predictive analytics, credit risk management, customer service, and the analysis of information about customers and counterparties. As AI continues to learn from trusted datasets, it can adapt and optimize its algorithms to new market conditions. As automation and digitization proliferate in financial markets, it is crucial that markets simultaneously prioritize operational resilience, such as cybersecurity measures that are robust to AI-enabled cyberattacks. AI can monitor transactional data in real time, identifying and flagging unusual activity. Advanced machine learning algorithms can aid in predicting future attack vectors based on existing patterns, providing an additional layer of cybersecurity.
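To make the real-time monitoring idea concrete, here is a minimal sketch of unsupervised anomaly detection on transaction features. It is illustrative only and not drawn from the TAC report; the feature choices, contamination rate, and data are assumptions.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=7)
    # Illustrative features per transaction: order size and seconds since last order.
    normal = rng.normal(loc=[1.0, 2.0], scale=[0.2, 0.5], size=(1000, 2))
    unusual = np.array([[5.0, 0.01], [4.2, 0.02]])  # outsized, rapid-fire orders

    # Train an unsupervised anomaly detector on historical "normal" activity.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal)

    # Score new activity as it arrives: -1 flags an anomaly for review.
    print(model.predict(unusual))  # expected: [-1 -1]

In practice, flagged transactions would be routed to human surveillance staff rather than acted on automatically.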
AI systems can be less beneficial and, in some instances, dangerous if challenges and embedded biases in AI models undercut financial gains or, in worst-case scenarios, prompt significant market instability, whether through the introduction of flawed training data or the efforts of bad actors seeking to disrupt markets.
Other well-known AI risks include:
- Lack of transparency or explainability of AI models’ decision-making process (the “black box”);
- Risks related to data relied on by AI systems, including overfitting of AI models to their training data (illustrated in the sketch after this list) or “poisoning” of real-world data sources encountered by the AI model;
- Mishandling of sensitive data;
- Fairness concerns, including the AI system reproducing or compounding biases;
- Concentration risks that arise from the most widely deployed AI foundation models relying on a small number of deep learning architectures, as well as the relatively small number of firms developing and deploying AI foundation models at scale; and
- Potential to produce false or invalid outputs, whether because the AI relies on inaccurate “synthetic” data to fill gaps or for reasons that remain unknown (hallucinations).
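As one illustration of the data risks above, a simple guard against overfitting is to compare a model’s accuracy on its training data with its accuracy on held-out data; a large gap is a warning sign that the model has memorized history rather than learned durable patterns. The sketch below is a hypothetical example, not from the TAC report.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An unconstrained tree can memorize its training data.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    train_acc = model.score(X_train, y_train)  # typically ~1.00
    test_acc = model.score(X_test, y_test)     # noticeably lower
    print(f"train={train_acc:.2f} test={test_acc:.2f} gap={train_acc - test_acc:.2f}")
    # A large train/test gap signals a model that may fail on live
    # market data it has never seen.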
Additionally, where AI resembles more conventional forms of algorithmic decision-making, the risks likely include a heightened risk to market stability and, especially when combined with high frequency trading, potential institutional and wider market instability.
Where firms use their own versions of generative AI, there could be increased risks of institutional and market instability if registered entities do not have a complete understanding of, or control over, the design and execution of their trading strategies and/or risk management programs. If proper data governance is not currently possible for some registered entities employing generative AI models, the CFTC may consider proposals to enable firms to access the AI models in a semi-autonomous way, balancing the intellectual property protections of the AI model provider against the imperative that the firm using the model properly manage and report its trading and clearing data.
Additionally, new issues and concerns emerge with the use of generative AI, particularly if the AI system hallucinates or humans use AI to produce fake content that is indistinguishable from reality (deepfakes). Generative AI also raises a host of legal implications for civil and criminal liability; for example, it may be unclear who, if anyone, is legally liable for misinformation generated by AI.
More specific areas of risk can be identified within the context of specific use cases in CFTC-regulated markets. It is important to identify the risks with high salience for CFTC-regulated markets and to measure the potential harm if those risks are insufficiently managed. To aid in these efforts, the Committee identified a partial list of potential use cases for AI in trading and investment, customer advice and service, risk management, regulatory compliance, and back office and operations.
These use cases may introduce novel risks to companies, markets, and investors, especially in high-impact, autonomous decision-making scenarios: for example, business continuity risks posed by dependence on a small number of AI firms; procyclicality or other risks caused by multiple firms deploying similar AI models in the same market; erroneous AI output, or errors caused by sudden, substantial losses to a particular firm, asset class, or market, such as a flash crash; data privacy risks; and the potential inability to provide a rationale demonstrating that fiduciary duties were met.
The Commission (including through the Technology Advisory Committee) should start to develop a framework that fosters safe, trustworthy, and responsible AI systems.
Responsible AI is typically defined by five properties that speak to how AI models are designed and deployed:
(1) Fairness refers to the processes and practices that ensure that AI does not make discriminatory decisions or recommendations;
(2) Robustness ensures that AI is not vulnerable to attacks on its performance;
(3) Transparency refers to sharing information collected during development that describes how the AI system has been designed and built, and what tests have been done to check its performance and other properties;
(4) Explainability is the ability of the AI system to explain to users and other interested parties what led to certain outputs of the AI’s modeling (which is essential to generate trust in AI from users, auditors, and regulators); and
(5) Privacy ensures that AI is developed and used in a way that protects users’ personal information.
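As a small, hypothetical illustration of testing the fairness property, one common check is the demographic parity gap: the difference in favorable-outcome rates between groups. The data and review threshold below are invented for illustration; real programs would use multiple metrics alongside human review.

    import numpy as np

    # Hypothetical model decisions (1 = favorable) and a protected attribute
    # (0/1 group membership); both arrays are illustrative, not real data.
    decisions = np.array([1, 0, 1, 1, 1, 1, 0, 0, 0, 1])
    group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    # Demographic parity gap: difference in favorable-outcome rates.
    rate_g0 = decisions[group == 0].mean()  # 0.80
    rate_g1 = decisions[group == 1].mean()  # 0.40
    gap = abs(rate_g0 - rate_g1)            # 0.40

    # An illustrative review threshold; a large gap triggers investigation.
    print(f"parity gap = {gap:.2f}", "-> review model" if gap > 0.10 else "-> ok")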
The use of AI by CFTC-registered entities will require further exploration and discussion, particularly to raise awareness of how automated decision-making models function and how they are governed. Even where firms disclose their use of technologies, the type of AI they are using (e.g., generative or predictive) is not always clear. Additionally, financial institutions should take care to respect the privacy of customers’ financial data and behaviors, particularly in the collection and surveillance of financial information. They should be encouraged to follow proper procedures and comply with disclosure requirements to the federal government, especially in the face of concerns about national security and financial risk management. The responsible and trustworthy use of AI will also require the creation of a talent pipeline of professionals trained in the development and use of AI products.
Governance can set guiding principles for standards and practices, such as the NIST AI Risk Management Framework, and applies at various stages of the technology. The first type of AI governance focuses on a series of checks, consultations, reporting, and testing at every phase of the lifecycle to make sure that the resulting AI is trustworthy and responsible. This includes “value alignment”—restricting an AI model to pursue only goals aligned with human values, such as operating ethically. It also includes determining where humans fit in the design, deployment, and oversight of models; in particular, the role of a human-in-the-loop or human-out-of-the-loop will shape governance strategies (see the sketch below). The second type of AI governance refers to corporate or public policies and regulations, including the concept of responsible AI. The third is the AI governance that companies put in place internally. The fourth is the set of guardrails established by governments that can influence or set requirements for AI governance.
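Here is a rough sketch of what a human-in-the-loop gate could look like in practice, assuming a model that reports a confidence score with each output. The thresholds, field names, and routing logic are hypothetical, not a prescribed design.

    from dataclasses import dataclass

    @dataclass
    class ModelOutput:
        decision: str      # e.g., "execute_trade"
        confidence: float  # model-reported confidence in [0, 1]
        notional: float    # dollar impact of acting on the decision

    # Illustrative thresholds; real values would flow from a firm's risk
    # appetite and its AI risk management framework.
    MIN_CONFIDENCE = 0.90
    MAX_AUTONOMOUS_NOTIONAL = 1_000_000.0

    def route(output: ModelOutput) -> str:
        """Automate only low-impact, high-confidence decisions."""
        if output.confidence < MIN_CONFIDENCE or output.notional > MAX_AUTONOMOUS_NOTIONAL:
            return "escalate_to_human_review"
        return "proceed_autonomously"

    print(route(ModelOutput("execute_trade", confidence=0.97, notional=50_000.0)))
    print(route(ModelOutput("execute_trade", confidence=0.62, notional=50_000.0)))

The design choice here is that automation is the exception, granted only when both confidence and impact checks pass, so the default posture keeps a human in the loop.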
Given the collective decades of AI experience of TAC members, their findings and recommendations regarding responsible AI practices, the role of humans, governance, data quality, data privacy, and AI risk management frameworks should be taken seriously. These expert findings and recommendations are aimed at realizing the opportunities that the evolution of AI can bring, while balancing the need for oversight to safeguard financial markets. I hope that the TAC report will drive future work on this evolving technology.
[1] CME Group, “Treasury Futures,” available at https://www.cmegroup.com/markets/interest-rates/us-treasury.html
[2] See Commissioner Christy Goldsmith Romero, “Advancing Cyber Resilience to Thwart the Continuously Changing Threat of Cybercrime and Protect Critical Infrastructure,” (Dec. 18, 2023).
[3] CFTC Technology Advisory Committee, “Responsible AI in Financial Markets: Opportunities, Risks, and Recommendations” (May 2, 2024).
[4] The following findings come from the TAC report and were not presented at the MRAC meeting.
-CFTC-