Resistant AI Secures $25M to Combat Fraud with AI Agents


Resistant AI has secured $25 million for its financial crime/fraud prevention-oriented artificial intelligence (AI) models. The Prague, Czech Republic-based company’s Series B funding will allow Resistant to grow its existing products focused on document fraud detection and transaction monitoring in new areas and partnerships, while also advancing its threat intelligence developments, according to a press release on Monday (Oct. 13).

“The new investment arrives as the anti-fraud and regtech market is being reshaped by fully-native or bolted-on agentic solutions that convert static user workflows into more affordable, intelligent adaptive ones,” the company said in a release.

However, those large language model (LLM)-based agents are not built to perform the quantitative risk analysis required for fraud and financial crime (fincrime) remediation, Resistant said. They also have high systemic hallucination rates (10% to 30%) and have been shown to be difficult to defend against “adversarial manipulation.”

Resistant says it is working to “protect and empower” those AI agents, and the “overwhelmed fraud, risk and compliance teams” behind them, with machine learning models designed to spot fraud in documents, transactions and behavior.

“The risk-prevention part of financial crime has experienced a tectonic shift with the introduction of LLMs and AI agents, as well as adversarial use of GAI by fraudsters,” said the company’s founder and CEO Martin Rehak in a statement.

“Our fraud and fincrime models give any institution the ability to enable both their human and agentic co-pilots to stand against such AI-powered threats at scale.”

The company’s last fundraising round was in 2023, when it raised $11 million in a Series A. The latest round was led by DTCP, with participation from existing investors including Experian, GV and Notion Capital.

When asked by Mid Breaker about the ways in which artificial intelligence (AI) poses difficulties for financial institutions as they look to fight fraud, Jenna Kaye-Kauderer, managing vice president and head of AirKey at Capital One, said one could start with perception.

“With gen AI we’re seeing new threats and new fraud vectors we never fathomed before in history. And we’re seeing those today,” she added.

“While the use of fraudulent social media accounts and impersonation attacks is nothing new,” that report said, “generative AI has accelerated this time curve, enabling malicious actors to build convincing simulated personas and scale their campaigns in minutes.” The old approach of fielding a static defense and hoping it holds no longer works.

The practice of verifying identity has to change, rather than simply raising the ante on technology and the mechanics of physical verification. It must be treated “not as a product,” Mid Breaker wrote, but “as an ongoing discipline that continues to evolve.”

“One of the most critical lessons for financial institutions and banks is building flexible authentication systems that enable them to react quickly as new vectors show up,” Kaye-Kauderer told Mid Breaker. “And the second is having a very broad set of authentication tools, so really having a big toolkit,” she said.
