Opening Remarks of Commissioner Kristin N. Johnson at GAIM Ops AI Summit: Using AI To Combat Cybersecurity and Fraud Risks
April 07, 2025
Good afternoon. Thank you to the event organizers for the generous invitation to join you to kick off the AI Summit. The Summit will explore critical topics—data quality and security, good governance for AI, critical third-party service providers, and the integration of generative AI in operating infrastructure, trade execution, clearing and settlement, and trade surveillance, among others.
I’d like to highlight two risks implicated by the integration of AI in our markets—cybersecurity and fraud risks.
Cyber and fraud risks are ever-present in our markets. Sophisticated AI models can generate high-quality, near-flawless synthetic content, enabling stunning heists. AI models train, test, and refine their functionality by aggregating and analyzing vast amounts of data, creating enticing targets for cyber intrusion campaigns.
While the threats are well-documented, we have not yet fully explored the potential for AI to address cyberthreats and AI-driven fraud. At the very least, carefully studying coordinated efforts to develop cyber resilience may teach us important lessons about how to use AI to mitigate cyber and fraud threats in our markets.
We are witnessing an increasing number of cyber and fraud threats executed using AI technologies. In some instances, the technology that drives these cyber and fraud threats may be an important offensive and defensive tool.
Your agenda rightly aims to identify pathways to good AI governance and best practices for individual firms and the broader financial ecosystem.[1]
AI and Financial Markets
Over the last few years, markets have witnessed the increasing potential for AI to engender efficiencies, reduce costs, harness and analyze vast amounts of data, and enable personalized access to markets. Many firms quickly discovered the potential for AI to streamline trade reporting, anti-money laundering (AML), and other regulatory compliance obligations. Financial services firms have used AI tools for many years, but “maturity in utilization and deployment of AI systems varies by institution and continues to evolve.”[2]
In addition, financial services firms use AI tools in both cyber and fraud threat assessments. Integrating innovative AI into legacy systems may, however, create vulnerabilities.
In recent years, firms have discovered that AI may become a tool for addressing these vulnerabilities. Machine learning or generative AI may replace or enhance legacy tools for fraud and cyber detection and risk management strategies. AI is enabling firms to educate employees and customers and to identify gaps in their cybersecurity and fraud detection and prevention measures.[3]
These issues are at the heart of the work of the U.S. Commodity Futures Trading Commission (CFTC) and its mission[4] and resonate with my experiences as a lawyer in private practice, in-house, and my service as a Commissioner.[5] At the CFTC, I sponsor the Market Risk Advisory Committee (MRAC), a multi-stakeholder group of market participants that examines risk management issues and makes recommendations on how to improve market structure, mitigate risks, and enhance market integrity and stability for global derivatives markets.[6] MRAC has spent a significant amount of time considering cybersecurity and recommendations to enhance cyber resilience.[7] Fraud-related risks and applications are part of these conversations.
We know that algorithmic models that may be accurately described as AI have long been employed in financial services markets[8] and that these applications include regulatory surveillance and compliance monitoring.[9] In recent years, however, the use and integration of predictive technologies have increased.
In January of 2024, the CFTC issued a request for comment seeking to learn more about the uses of AI in CFTC-regulated markets.[10] I applaud the Commission for issuing the RFC as a pathway to increase visibility and better understand the implications of AI use in our markets. This dialogue between the Commission and market participants aims to enable markets and the Commission to leverage the benefits of evolving AI models while mitigating risks.
AI fraud and cyber threat prevention, detection, and mitigation represent common ground areas where the Commission and market participants are focused on the potential for AI to enhance market integrity.[11]
AI-Fueled Cyber and Fraud Threats
About a year ago, the U.S. Department of the Treasury (Treasury) released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.[12] Several of the observations in the Treasury Report are unlikely to surprise this audience—cyber and fraud-related incidents continue to increase and, in parallel, the losses that firms experience as a result of these threats increase.[13]
Surveyed market participants indicate that cyberthreat actors benefit from lower barriers to entry, increasingly sophisticated automation, and decreasing time-to-exploit.[14] Firms face cyberthreats from actors ranging from opportunistic fraudsters with access to advanced AI tools to sophisticated nation-state hackers who deploy targeted attacks.
AI-Driven Fraud
Evidence suggests that hackers are repurposing AI-based tools previously used in cyber defense tactics to identify weaknesses in networks and cybersecurity applications.[15] These weaknesses open back doors for cyber attacks. Generative AI may enable sophisticated actors to execute more convincing phishing campaigns. Deep fakes and similar campaigns may be more difficult to identify. Generative AI may accelerate the creation of new malware variants, lowering the barrier to entry and empowering a greater number of less sophisticated threat actors.[16] As a result, time-to-exploit is shrinking and the overall risk level to financial organizations is climbing. Notwithstanding many AI developers’ efforts to prevent the adaptation of their models to facilitate fraud, there is a rising tide of misuse of AI technologies.
Vulnerabilities of Technology
In addition to cyber threats, the vulnerability of AI systems themselves is equally concerning. Through data poisoning, model evasion, and model extraction, attackers may introduce false training data, tamper with model weights, or employ similar tactics to corrupt AI models, manipulating outputs to their benefit and distorting or stealing from AI-driven processes.[17] These tactics potentially undermine the reliability of the models as well as features designed to enable cybersecurity and fraud detection. Data privacy also presents a notable concern.
Synthetic Identities and Impersonation
Identity impersonation and synthetic identity fraud are becoming ever more sophisticated. “Fraudsters can use AI to mimic voice, video, and other behavioral identity factors that financial institutions use to verify a customer’s identity.”[18] The ability to generate near-flawless fake credentials and believable digital appearances raises the stakes for banks, insurers, payment processors, and other financial entities that have traditionally relied on physical or behavioral markers for identification. Fraudsters posing as CEOs and CFOs have caused millions in losses by using AI-generated synthetic identities in elaborate schemes to convince company employees to make unauthorized transfers.[19] In response to these concerns, the Commission has issued customer education and outreach announcements to enhance market participants’ and customers’ awareness of these threats.[20]
Third Party Risks
Addressing these threats requires a comprehensive and collaborative approach to third-party risk management and data security.
According to the Treasury Report, “financial institutions should appropriately consider how to assess and manage the risks of an extended supply chain, including potentially heightened risks with data and data processing of a wide array of vendors, data brokers, and infrastructure providers.”[21]
In some instances, there may be high barriers to entry for providing third-party services. For example, few firms have the capability to offer globally accessible cloud-based services that demonstrate the requisite security protocols to enable financial services market participants to comply with substantial data security, integrity, and transfer standards.
As a result, only a few service providers may have the capability to deliver the quality of services needed or to respond to the vast amounts of data or information stored or processed by financial services firms. The limited competition for services may lead to a significant percentage of market participants relying on a handful of service providers.
We may describe these concerns as concentration risks.[22] While CFTC-regulated entities must “assess the risks of using AI and update policies, procedures, controls, and systems, as appropriate, under applicable CFTC statutory and regulatory requirements,”[23] the Commission, as a regulator, should also take an active role in understanding these risks.
Each of these links in the supply chain introduces potential vulnerabilities, especially with the increasing volume of data and the complexity of AI models. I have repeatedly raised these concerns.[24] It is important that all partners adhere to robust data protection, privacy guidelines, and contingency planning. These protocols are not only essential for safeguarding financial services firms, but also crucial for the resilience of the entire financial system.
Next Steps
The Treasury Report suggested next steps that identify both challenges and opportunities. I’d like to highlight a few of them that resonate with me and some proposals that I have advocated for during my service at the CFTC.
As I have intimated, while we study market participants’ use of AI, we are increasingly thoughtful about the Commission’s own use of AI. As I’ve noted previously:
The CFTC has on staff surveillance analysts, forensic economists, and futures trading investigators, each of whom identify and investigate potential violations. These groups use supervisory technology (SupTech) in support of their work. Over the past few years, the CFTC has transitioned much of its data intake and data analysis to a cloud-based architecture. This increases the flexibility and reliability of our data systems and allows us to scale them as necessary. This transition will allow the Commission to store, analyze, and ingest this data more cost-effectively and efficiently.[25]
Coordination
I have consistently encouraged both inter-agency and international coordination on issues related to AI.[26]
I have advocated for “the creation of an inter-agency task force composed of financial regulators…. [to develop] guidelines, tools, benchmarks, and best practices for the use and regulation of AI in the financial services industry.”[27] As I have noted, “this approach promises efficiencies and a needed clarity for market participants trying to navigate diverse and sometimes divergent regulatory and compliance frameworks.”[28]
Financial services firms have indicated a desire to clarify regulatory approaches to innovative technologies. As reported to Treasury, “[s]ome financial institutions, however, expressed concern about the possibility of regulatory fragmentation as different financial sector regulators at both the state and federal level consider regulations around AI. This concern also extends to firms operating under different international jurisdictions.”[29]
Collaboration can help address significant issues and problems of scale, as well as some smaller changes that can help along the way. For example, the Treasury Report notes that “[a]s Generative AI increases in usage, there appears to be a significant gap in data available to financial institutions for training their models to prevent fraud…. Ramifications of this data divide are especially apparent for anti-fraud use cases where larger institutions generally have much more internal data.”[30] This is not something that can be solved overnight; it will require thoughtful consideration and coordinated efforts.
The Treasury Report also encourages clarifying how we understand AI by advocating for a common, AI-specific lexicon. An agreed-upon definition would benefit financial institutions, regulators, and consumers alike. It would “not only facilitate appropriate discussion with third parties and regulators but could help improve understanding of the capabilities AI systems may have to improve risk management or to amplify new risks,” and it “may help address the current lack of clarity around measuring and identifying risks, especially with the rapid adoption of Generative AI. As noted in the introduction, terminology can have implications for the common understanding of AI technology and its associated risks as well.”[31]
Conclusion
I usually offer a standard disclaimer at the start of my remarks—something like, my thoughts are my own and do not reflect the perspectives of others. Today, however, I feel compelled to disclose that I used ChatGPT to draft this speech. Just kidding.
The research and development of this speech reflects weeks of effort by my staff and their patience with my not-so-gentle editing. However, as someone who spends significant amounts of time reading, studying, and processing data, I am tempted, at times, to defer to an increasingly capable generative AI model to serve as my speechwriter-in-chief. Assuming others will find tempting uses for AI as well, let’s figure out the best, responsible path for bringing this technology into our markets.
[1] The thoughts and perspectives that I share with you today are my own; they are not the views and perspectives of my fellow Commissioners, the Commission, or the staff of the CFTC.
[2] U.S. Dep’t of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (Mar. 2024), at 12, https://home.treasury.gov/system/files/136/Managing-Artificial-Intelligence-Specific-Cybersecurity-Risks-In-The-Financial-Services-Sector.pdf (Treasury Report).
[3] Treasury Report at 12-15.
[4] See, e.g., 7 U.S.C. § 5.
[5] See, e.g., Keynote Remarks of Commissioner Johnson for Governing Data at Iowa Innovation and Business Law Center and Yale Law Journal of Law & Technology at Yale Law School: Twin Peaks – Emerging Technologies (AI) and Critical Third Parties (Apr. 4, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson16.
[6] For more information, see https://www.cftc.gov/About/AdvisoryCommittees/MRAC.
[7] See, e.g., CFTC Market Risk Advisory Committee, Recommendations on DCO System Safeguards Standards for Third Party Service Providers (Dec. 2024), https://www.cftc.gov/media/11666/mrac121024_DCOThirdPartySystemSafeguards/download.
[8] U.S. Commodity Futures Trading Commission, Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Jan. 25, 2024), https://www.cftc.gov/PressRoom/PressReleases/8853-24 (citing Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, Manuel F. Cohen Lecture, George Washington University Law School (Oct. 17, 2023) (describing the historic development and integration of increasingly complex algorithms including supervised and unsupervised machine learning algorithms in financial markets)).
[9] Commissioner Kristin N. Johnson Statement on the CFTC RFC on AI: Building a Regulatory Framework for AI in Financial Markets (Jan. 25, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement012524.
[10] U.S. Commodity Futures Trading Commission, Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Jan. 25, 2024), https://www.cftc.gov/PressRoom/PressReleases/8853-24.
[11] For example, a joint letter from trade associations and exchanges referred to the use of AI for compliance processes and controls and the World Federation of Exchanges identified compliance as a use case, stating “AI can be used to reduce manual inputs for trade documentation and regulatory reporting, as well as reducing market manipulation….” See Letter from World Federation of Exchanges to CFTC, Regarding Response to Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Apr. 24, 2024), https://comments.cftc.gov/PublicComments/ViewComment.aspx?id=73447; Letter from Futures Industry Association, FIA Principal Traders Group, CME Group, Inc., and Intercontinental Exchange Inc. to CFTC, Regarding Release No. 8853-24 (Jan. 25, 2024) Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Apr. 24, 2024), https://comments.cftc.gov/PublicComments/ViewComment.aspx?id=73444. The Bank Policy Institute stated that “… AI models, including generative AI tools, are being evaluated or piloted [by banking organizations] to enhance operational efficiencies and risk mitigation in the cybersecurity and fraud prevention contexts.” See Letter from Bank Policy Institute to CFTC, Regarding Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (CFTC Release No. 8553-24) (Apr. 17, 2024), https://comments.cftc.gov/PublicComments/ViewComment.aspx?id=73424.
[12] See Treasury Report.
[13] Treasury Report at 10-11. Responses to the CFTC’s RFC also highlighted AI-driven fraud risk. For example, Letter from Institute for Agriculture and Trade Policy to CFTC, Regarding Request for Comment on the Use of Artificial Intelligence in CFTC Regulated Markets (Apr. 24, 2024), https://comments.cftc.gov/PublicComments/ViewComment.aspx?id=73457.
[14] Treasury Report at 16.
[15] See, e.g., id. at 17.
[16] Id. at 16.
[17] Id. at 17-18.
[18] Id. at 16.
[19] Id. at 18.
[20] Customer Advisory: Criminals Increasing Use of Generative AI to Commit Fraud, CFTC (March 19, 2025), https://www.cftc.gov/sites/default/files/2025/03/AI_fraud.pdf.
[21] Treasury Report at 19.
[22] See Keynote Remarks of Commissioner Johnson for Governing Data at IIB&L Center and Yale Law Journal of Law & Technology at Yale Law School (Apr. 4, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson16.
[23] CFTC, Letter No. 24-17: Use of Artificial Intelligence in CFTC-Regulated Markets (Dec. 5, 2024), https://www.cftc.gov/PressRoom/PressReleases/9013-24.
[24] See, e.g., Keynote Remarks of Commissioner Johnson for Governing Data at IIB&L Center and Yale Law Journal of Law & Technology at Yale Law School (Apr. 4, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson16; Commissioner Kristin Johnson’s Keynote Address at the University of Chicago Law School: Charting the Future of Financial Regulation (Jan. 24, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson15.
[25] Commissioner Kristin N. Johnson Statement on the CFTC RFC on AI: Building a Regulatory Framework for AI in Financial Markets (Jan. 25, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement012524 (citing Commissioner Kristin Johnson, Opening Statement on Measuring Benefits and Mitigating the Risks of Integrating Artificial Intelligence (July 18, 2023), https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement071823; Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, Manuel F. Cohen Lecture, George Washington University Law School (Oct. 17, 2023)).
[26] See, e.g., Speech of Commissioner Kristin Johnson: Building A Regulatory Framework for AI in Financial Markets (Feb. 23, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson10; Statement of Commissioner Kristin N. Johnson on Future-Proofing Financial Markets: Assessing the Integration of Artificial Intelligence in Global Derivatives Markets (Dec. 5, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement120524; Remarks of Commissioner Kristin Johnson at the Global Blockchain Business Council’s 8th Annual Blockchain Central Davos: Collaboration for the Intelligent Age (Jan. 20, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson14.
[27] Speech of Commissioner Kristin Johnson: Building A Regulatory Framework for AI in Financial Markets (Feb. 23, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson10.
[28] Remarks of Commissioner Kristin Johnson at the Global Blockchain Business Council’s 8th Annual Blockchain Central Davos: Collaboration for the Intelligent Age (Jan. 20, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson14.
[29] Treasury Report at 35.
[30] Id. at 34.
[31] Id. at 33.
-CFTC-