Release Number 8905-24

CFTC Technology Advisory Committee Advances Report and Recommendations to the CFTC on Responsible Artificial Intelligence in Financial Markets

Commissioner Christy Goldsmith Romero Heralds the AI Expertise of the Committee and the Committee’s Foundational, Iterative Approach to Recommendations

May 02, 2024

Washington, D.C. — The Commodity Futures Trading Commission’s Technology Advisory Committee (TAC),[1] sponsored by Commissioner Christy Goldsmith Romero, released a Report on Responsible AI in Financial Markets (Report).  The TAC, whose membership includes many well-respected AI experts, issued a Report that facilitates an understanding of the impact and implications of the evolution of AI in financial markets.  The Committee made five recommendations to the Commission on how the CFTC should approach this AI evolution to safeguard financial markets.  The Committee urges the CFTC to leverage its role as a market regulator to support the current AI efforts of the White House and Congress.

Commissioner Goldsmith Romero said, “I herald the foundational, iterative approach of the Committee to recognize both that AI has been used in financial markets for decades, and that the evolution of generative AI introduces new issues and concerns, as well as opportunities.  Given the collective decades of AI experience of Committee members, their findings regarding the need for responsible AI practices, as well as the importance of the role of humans, governance, data quality, data privacy, and risk-management frameworks targeting AI-specific risks, should be taken seriously by the financial services industry and regulators.  These expert recommendations are geared towards more responsible AI systems with greater transparency and oversight to safeguard financial markets.  I am tremendously grateful for the subcommittee who drafted this report.  I hope that this report will drive future work by the CFTC and other financial regulators as we navigate this evolving technology together.”

Findings of the Committee

Without appropriate industry engagement and relevant guardrails (some of which have been outlined in existing national policies), potential vulnerabilities from using AI applications and tools within and outside the CFTC could erode public trust in financial markets, services, and products.

AI has been used in impactful ways in the financial industry for more than two decades.  In theory, AI represents a potentially valuable tool to improve automated processes governing core functions and to increase efficiency.  Examples include risk management, surveillance, fraud detection, back-testing of trading strategies, predictive analytics, credit risk management, customer service, and the analysis of information about customers and counterparties.  As AI continues to learn from trusted datasets, it can adapt and optimize its algorithms to new market conditions.  As automation and digitization proliferate in financial markets, it is crucial that markets simultaneously prioritize operational resilience, such as cybersecurity measures that are robust to AI-enabled cyberattacks.  AI can monitor transactional data in real time, identifying and flagging any unusual activities.  Advanced machine learning algorithms can aid in predicting future attack vectors based on existing patterns, providing an additional layer of cybersecurity.
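For illustration only, the sketch below shows the kind of real-time anomaly flagging described above: an unsupervised detector is fit to historical activity and then flags transactions that deviate from the learned pattern.  The model, library, data, and thresholds are all assumptions made for the example; the Report does not prescribe any of them.

    # Minimal sketch of AI-assisted transaction monitoring (illustrative only).
    # Assumes scikit-learn is installed; the transaction data is synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Synthetic transaction features: [notional size, time of day in hours]
    normal = rng.normal(loc=[100.0, 13.0], scale=[20.0, 3.0], size=(1000, 2))
    unusual = np.array([[900.0, 3.0], [850.0, 2.5]])  # large, off-hours trades
    transactions = np.vstack([normal, unusual])

    # Fit on historical (normal) activity, then flag deviations in new data.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal)

    flags = detector.predict(transactions)  # -1 = flagged as unusual
    print(f"Flagged {(flags == -1).sum()} of {len(transactions)} transactions for review")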

AI systems can be less beneficial and, in some instances, more dangerous if potential challenges and embedded biases in AI models erode financial gains and, in worst-case scenarios, prompt significant market instability, whether through the injection of misguided training data or the efforts of bad actors to disrupt markets.

Other well-known AI risks include:

  • Lack of transparency or explainability of AI models’ decision-making process (the “black box”);
  • Risks related to data relied on by AI systems, including overfitting of AI models to their training data (illustrated in the sketch following this list) or “poisoning” of real-world data sources encountered by the AI model;
  • Mishandling of sensitive data; 
  • Fairness concerns, including the AI system reproducing or compounding biases;
  • Concentration risks that arise from the most widely deployed AI foundation models relying on a small number of deep learning architectures, as well as the relatively small number of firms developing and deploying AI foundation models at scale; and
  • Potential to produce false or invalid outputs, whether because of AI reliance on inaccurate “synthetic” data to fill gaps or because of unknown reasons (hallucinations).
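As a concrete illustration of the overfitting risk noted in the list above, the following sketch (on synthetic data, and not drawn from the Report) shows how a large gap between training and held-out accuracy can reveal a model that has memorized noise rather than learned signal.

    # Minimal sketch of surfacing overfitting via a train/test accuracy gap.
    # Illustrative only: the labels below contain no real signal, so an
    # unconstrained tree can only "learn" by memorizing its training data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(seed=1)
    X = rng.normal(size=(500, 10))      # noisy features
    y = rng.integers(0, 2, size=500)    # random labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    model = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)

    train_acc = model.score(X_train, y_train)  # near 1.0 (memorization)
    test_acc = model.score(X_test, y_test)     # near 0.5 (chance)
    print(f"train={train_acc:.2f} test={test_acc:.2f} gap={train_acc - test_acc:.2f}")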

Additionally, where AI resembles more conventional forms of algorithmic decision-making, the risks likely include a heightened risk to market stability and, especially when combined with high-frequency trading, potential institutional and wider market instability.

Where firms use their own versions of generative AI, there could be increased risks of institutional and market instability if registered entities do not have a complete understanding of, or control over, the design and execution of their trading strategies and/or risk management programs.  The Financial Stability Oversight Council discussed this in its 2023 Annual Report, stating, “With some generative AI models, users may not know the sources used to produce output or how such sources were weighted, and a financial institution may not have a full understanding or control over the data set being used, meaning employment of proper data governance may not be possible.”  If proper data governance is not currently possible for some registered entities employing generative AI models, the CFTC may have to propose rules or guidance to enable firms to access the AI models in a semi-autonomous way that balances the intellectual property protection of the AI model provider with the imperative of the firm using the model to properly manage and report its trading and clearing data.

Additionally, new issues and concerns emerge with the use of generative AI, particularly if the generated content is deemed offensive, the AI system hallucinates, and/or humans use AI to produce fake content that is indistinguishable from reality (deepfakes).  Generative AI also raises a host of legal implications for civil and criminal liability.  It may also be unclear who, if anyone, is legally liable for misinformation generated by AI.

More specific areas of risk can be identified within the context of specific use cases in CFTC-regulated markets; it should therefore be a central focus for the CFTC and registered entities to identify the specific risks with high saliency for CFTC-regulated markets and to measure the potential harm if those risks are insufficiently managed.  To aid in these efforts, the Committee identified a partial list of AI use cases and their likely-relevant risks.  These use cases fall in the areas of trading and investment, customer advice and service, risk management, regulatory compliance, and back office and operations.

These use cases may introduce novel risks to companies, markets, and investors, especially in high-impact, autonomous decision-making scenarios.  Examples include business continuity risks posed by dependence on a small number of AI firms; procyclicality or other risks caused by multiple firms deploying similar AI models in the same market; erroneous AI output or errors causing sudden, substantial losses to a particular firm, asset class, or market, such as a flash crash; data privacy risks; and the potential inability to provide a rationale demonstrating fulfillment of fiduciary duties.

The Commission (including through the Technology Advisory Committee) should start to develop a framework that fosters safe, trustworthy, and responsible AI systems.

Responsible AI is defined by five typical properties that speak to how AI models are designed and deployed:

  1. Fairness refers to the processes and practices that ensure that AI does not make discriminatory decisions or recommendations;
  2. Robustness ensures that AI is not vulnerable to attacks on its performance;
  3. Transparency refers to sharing information that was collected during development and that describes how the AI system has been designed and built, and what tests have been done to check its performance and other properties;
  4. Explainability is the ability of the AI system to provide an explanation to users and other interested parties who inquire about what led to certain outputs in the AI’s modeling (essential to generate trust in AI from users, auditors, and regulators); and
  5. Privacy ensures that AI is developed and used in a way that protects users’ personal information.
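By way of illustration only, the fairness property is often checked with simple quantitative metrics.  The minimal sketch below computes a demographic parity difference, comparing favorable-outcome rates across two groups on hypothetical model decisions; the Report does not mandate any particular metric, and all data here are synthetic.

    # Minimal sketch of a demographic parity check (illustrative only;
    # synthetic data; no specific metric is mandated by the Report).
    import numpy as np

    rng = np.random.default_rng(seed=2)

    # Hypothetical protected attribute (0/1) and model decisions that are
    # deliberately skewed against group 1 for illustration.
    group = rng.integers(0, 2, size=1000)
    approved = np.where(group == 0,
                        rng.random(1000) < 0.60,
                        rng.random(1000) < 0.45)

    rate_0 = approved[group == 0].mean()
    rate_1 = approved[group == 1].mean()
    print(f"approval rates: group0={rate_0:.2f} group1={rate_1:.2f}")
    print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")

A large difference would prompt the kind of review and documentation that the transparency and explainability properties contemplate.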

The use of AI by CFTC-registered entities will require further exploration and discussion, particularly to raise awareness of how automated decision-making models function and what governance they require.  Even where firms disclose their use of technologies, the type of AI they are using (e.g., generative or predictive) is not always clear.  Additionally, financial institutions should take care to respect the privacy of customers’ financial data and behaviors, particularly in the collection and surveillance of financial information.  They should be encouraged to follow proper procedures and comply with disclosures to the federal government, especially in the face of concerns about national security and financial risk management.  The responsible and trustworthy use of AI will also require the creation of a talent pipeline of professionals trained in the development and use of AI products.

The typical properties and deployment practices of AI are beginning to require governance structures that enable relevant guardrails protecting both the consumers and the contexts in which the technology is deployed.  Governance can set guiding principles for standards and practices, such as the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights” and the NIST AI Risk Management Framework.

Governance also applies to various stages of the technology.  The first type of AI governance focuses on a series of checks, consultations, reporting, and testing at every phase of the lifecycle to ensure that the resulting AI is trustworthy and responsible.  This includes “value alignment”—restricting an AI model to pursue only goals aligned with human values, such as operating ethically.  It also includes determinations of where humans fit in the design, deployment, and oversight of models; in particular, the role of a human-in-the-loop or human-out-of-the-loop will shape governance strategies.  Second, AI governance can refer to corporate or public policies and regulations, including the concept of responsible AI.  Third is the AI governance that companies put in place internally.  Fourth are guardrails set by governments that can influence or set requirements for AI governance.
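To make the human-in-the-loop concept concrete, the minimal sketch below shows one way a firm might gate AI outputs so that low-confidence or high-impact actions are escalated to a human reviewer rather than executed automatically.  All names, fields, and thresholds are hypothetical assumptions made for the example, not a prescribed design.

    # Minimal sketch of a human-in-the-loop gate (hypothetical thresholds).
    from dataclasses import dataclass

    @dataclass
    class ModelOutput:
        action: str        # e.g., "submit_order"
        notional: float    # dollar impact of the proposed action
        confidence: float  # model's self-reported confidence, 0..1

    def route(output: ModelOutput,
              min_confidence: float = 0.9,
              max_auto_notional: float = 1_000_000.0) -> str:
        """Return 'auto' to execute, or 'human_review' to escalate."""
        if output.confidence < min_confidence or output.notional > max_auto_notional:
            return "human_review"
        return "auto"

    print(route(ModelOutput("submit_order", notional=50_000.0, confidence=0.95)))     # auto
    print(route(ModelOutput("submit_order", notional=5_000_000.0, confidence=0.99)))  # human_review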

The Committee’s Recommendations

  1. The CFTC should host a public roundtable discussion and CFTC staff should directly engage in outreach with CFTC-registered entities to seek guidance and gain additional insights into the business functions and types of AI technologies most prevalent within the sector.

    The intended purpose of these roundtables, supervisory discussions, and consultations is to inform the CFTC about key technical and policy considerations for AI in financial markets, develop common understanding and frameworks, build upon the Committee’s Report, and establish relationships.  Discussion topics should include, but not be limited to: humans-in-or-around-the-loop of the technology; acceptable training data use cases; and the development of best practices and standards as they relate to the role of AI.  This will aid the CFTC in ascertaining how AI systems are used in markets and how future AI developments may impact markets.
  2. The CFTC should consider defining and adopting an AI Risk Management Framework (RMF) for the sector, in accordance with the guidelines and governance aspects of the National Institute of Standards and Technology’s (NIST) AI RMF, to assess the efficiency of AI models and potential consumer harms as they apply to regulated entities, including but not limited to governance issues.

    The intended purpose of this recommendation is to ensure some certainty, understanding, and integration of the norms and standards being developed by NIST, and to introduce these practices to regulated industries and firms.  A potential outcome is a proposed CFTC rule implementing the NIST framework, thus ensuring financial markets and a regulatory system that are more resilient to emerging AI technologies and associated risks.

    The Committee is not recommending that the CFTC add enumerated AI-related risks to existing risk management requirements (at least initially), but rather that the CFTC develop appropriate firm-level governance standards over AI systems.  This would accord with the NIST Framework.
  3. The CFTC should create an inventory of existing regulations related to AI in the sector and use it to develop a gap analysis of the potential risks associated with AI systems, determining where compliance is already addressed, where further dialogue on the regulations’ relevancy is warranted, and where clarifying staff guidance or potential rulemaking may be needed.

    The Committee recognizes that existing regulations require registrants to manage risks—regulations that likely already reach many AI-associated risks.  In other areas, regulations may need to be clarified through staff guidance or amended through rulemaking.  The intended purpose of this recommendation is to confirm the CFTC’s oversight and jurisdiction over increasingly autonomous models and to make compliance levers more explicit.
  4. The CFTC should establish a process to align its AI policies and practices with those of other federal agencies, including the SEC, Treasury, and other agencies interested in the financial stability of markets.

    The intended purpose of this recommendation is to leverage best practices across agencies and potentially drive more interagency cooperation (including through interagency meetings) and enforcement.  The Committee notes the strong nexus between the remits of the SEC and the CFTC and the presence of many dual registrants.
  5. The CFTC should work toward engaging staff as both observers and potential participants in ongoing domestic and international dialogues around AI and, where possible, establish budget supplements to build the internal capacity and necessary technical expertise of agency professionals to support the agency’s endeavors in emerging and evolving technologies.

    The intended purpose is to build the pipeline of AI experts and to ensure necessary resources for responsible engagement by internal and external stakeholders.

About the TAC

The Technology Advisory Committee (TAC) was created in 1999 to advise the Commission on complex issues at the intersection of technology, law, policy, and finance.  The TAC’s objectives and scope of activities shall be to conduct public meetings, to submit reports and recommendations to the Commission, and to otherwise assist the Commission in identifying and understanding the impact and implications of technological innovation in the financial services, derivatives, and commodity markets. The TAC will provide advice on the application and utilization of new technologies in financial services, derivatives, and commodity markets, as well as by market professionals and market users.  The TAC may further provide advice to the Commission on the appropriate level of investment in technology at the Commission to meet its surveillance and enforcement responsibilities, and inform the Commission’s consideration of technology-related issues to support the Commission’s mission of ensuring the integrity of the markets and achievement of other public interest objectives.

In September 2022, Commissioner Goldsmith Romero reconstituted the TAC after it had been dormant, including well-known AI experts as members.  Commissioner Goldsmith Romero included the study of AI in financial services in every TAC meeting.  Additionally, she created the TAC Subcommittee on Emerging and Evolving Technologies, which drafted this Report.  Tony Biagioli serves as the TAC Designated Federal Officer.  Ben Rankin serves as the TAC Assistant Designated Federal Officer for the Subcommittee on Emerging and Evolving Technologies.  Scott Lee serves as the Commissioner’s senior counsel advising on the TAC.

There are five active Advisory Committees[2] overseen by the CFTC.  They were created to provide advice and recommendations to the Commission on a variety of regulatory and market issues that affect the integrity and competitiveness of U.S. markets.  These Advisory Committees facilitate communication between the Commission and market participants, other regulators, and academics.  The views, opinions, and information expressed by the Advisory Committees are solely those of the respective Advisory Committee and do not necessarily reflect the views of the Commission, its staff, or the U.S. government.


[1] A complete list of Members of the Technology Advisory Committee is available at https://www.cftc.gov/About/AdvisoryCommittees/TAC.

[2] A list of the active Advisory Committees is available at https://www.cftc.gov/About/AdvisoryCommittees/index.htm.

-CFTC-