AI is helping many different businesses grow and innovate, but the burgeoning tech is also helping criminals conjure up sophisticated means of fraud, according to financial services firm Plaid.

Presentation, or "liveness," attacks have jumped 38% this year alone, a Plaid spokesman told Fortune. In a liveness attack, bad actors attempt to trick the video portion of a verification process by impersonating someone else: holding up a photo, wearing a realistic mask, or displaying a deepfake image on-screen. About 12% of all liveness attacks used AI-generated faces, and roughly 25% of fraudulent ID document attempts used generative AI, the spokesman added.

Alain Meier, head of identity at Plaid, told Fortune that given the availability of AI-based resources and online data, plus what can be found on the dark web, it’s only getting easier to commit fraud.

“The bar for committing a sophisticated fraud attack is getting lower year after year,” he added. 

About 90% of Plaid’s customers require some form of “know your customer,” or KYC, process, the spokesman said. This means companies must verify customers when new accounts are created. For example, many banking and financial services companies now require users to submit a video of themselves as part of the verification process. Companies will then match the video to, say, a customer’s driver’s license.

But this is where the more sophisticated fraudsters have found a work-around: They’re using AI to create deepfake videos of potential victims to access accounts. “Fraud has become very professional,” Meier added.

Meier shared a story in which Plaid's technology, together with a human analyst, helped spot a potential fraud. In late 2023, a financial services client was signing up new users and using Plaid's identity verification software to ensure real people were behind the accounts. Part of the process included a "liveness check," in which the company asked users to send a selfie video. Plaid's software noticed that multiple users had similar IP addresses, raising a red flag.
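Plaid hasn't published the details of how its flagging works, but the basic idea of grouping new signups by network origin can be sketched simply. The snippet below (function name and threshold are hypothetical, purely for illustration) clusters signups by /24 subnet and flags subnets with an unusual number of distinct new accounts:

```python
from collections import defaultdict

def flag_shared_subnets(signups, threshold=3):
    """Group signups by /24 subnet and flag subnets with suspiciously
    many distinct new accounts (a crude stand-in for real IP clustering)."""
    by_subnet = defaultdict(set)
    for user_id, ip in signups:
        subnet = ".".join(ip.split(".")[:3])  # e.g. "203.0.113"
        by_subnet[subnet].add(user_id)
    return {s: users for s, users in by_subnet.items() if len(users) >= threshold}

signups = [
    ("alice", "203.0.113.5"),
    ("bob", "203.0.113.9"),
    ("carol", "203.0.113.77"),
    ("dave", "198.51.100.4"),
]
print(flag_shared_subnets(signups))  # flags the 203.0.113.x cluster
```

A production system would weigh many more signals (device fingerprints, timing, geolocation), but the principle is the same: independent strangers rarely sign up from the same small network block in quick succession.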

A Plaid analyst then reviewed the videos and discovered that the background for several was the same: a brick wall with many mounted phones and devices. Further investigation revealed that an organized crime group based in Eastern Europe was behind the fakes.

But sometimes deepfakes evade human review. When that happens, Meier said, machine learning tools can detect subtle alterations in fake documents, photos, or videos, as well as analyze background elements, associated file metadata, and the way the materials were submitted.
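The metadata angle is the easiest to illustrate. The sketch below (the field names follow common EXIF conventions; the keyword list and rules are illustrative assumptions, not anything Plaid has described) scans extracted file metadata for signs that an image was edited or generated rather than captured by a camera:

```python
# Illustrative only: real systems use trained models, not keyword lists.
SUSPICIOUS_SOFTWARE = ("photoshop", "gimp", "stable diffusion", "midjourney")

def metadata_red_flags(metadata: dict) -> list:
    """Scan extracted file metadata (e.g. EXIF fields) for editing or
    generation tooling, missing camera info, or timestamp mismatches."""
    flags = []
    software = metadata.get("Software", "").lower()
    if any(tool in software for tool in SUSPICIOUS_SOFTWARE):
        flags.append("edited/generated with: " + software)
    if not metadata.get("Make") and not metadata.get("Model"):
        flags.append("no camera make/model recorded")
    if metadata.get("DateTimeOriginal") != metadata.get("CreateDate"):
        flags.append("capture and creation timestamps disagree")
    return flags

# A genuine phone photo typically passes; a doctored image trips flags.
doctored = {"Software": "Adobe Photoshop 25.0", "Make": "", "Model": "",
            "DateTimeOriginal": "2023:11:02 10:00", "CreateDate": "2023:11:02 10:00"}
print(metadata_red_flags(doctored))
```

No single flag is proof of fraud; the point is that each weak signal, stacked with others, raises the cost of passing off a fake.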

Still, it’s hard for many companies, especially small businesses, to keep up with the pace of evolving fraud tactics. Many simply lack the resources, Meier said. “The fraud is so advanced,” he added, “it would require every company to have an in-house team of crack fraud experts.”
