Treasury’s financial crimes enforcement arm on Wednesday issued an alert to help financial institutions identify fraud schemes targeting them that involve deepfake media created with generative artificial intelligence (GenAI) tools.
The Financial Crimes Enforcement Network (FinCEN) said it has seen an increase since 2023 in suspicious activity reports describing the suspected use of deepfake media, particularly fraudulent identity documents used to circumvent identity verification and authentication methods.
The agency issued an alert (FIN-2024-Alert004) to explain typologies associated with these schemes, provide red flag indicators to assist with identifying and reporting related suspicious activity, and remind institutions of their reporting requirements under the Bank Secrecy Act (BSA).
GenAI-rendered content, which is commonly referred to as “deepfake” content or “deepfakes,” can manufacture what appear to be real events, such as a person doing or saying something they did not actually do or say, the FinCEN alert says. The agency notes that while leading developers and companies producing GenAI tools have committed to implementing oversight and controls intended to mitigate malicious deepfakes and other misuse, criminals may develop methods to evade such safeguards. It added that some AI tools are open-source and can be modified by users, potentially circumventing controls.
FinCEN noted that no single red flag necessarily indicates illicit or suspicious activity, so financial institutions should consider the surrounding facts and circumstances before determining whether a specific transaction is suspicious or associated with illicit use of GenAI tools. Red flags noted include:
- A customer’s photo is internally inconsistent (e.g., shows visual tells of being altered) or is inconsistent with their other identifying information (e.g., a customer’s date of birth indicates that they are much older or younger than the photo would suggest).
- A customer presents multiple identity documents that are inconsistent with each other.
- A customer uses a third-party webcam plugin during a live verification check. Alternatively, a customer attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.
- A customer declines to use multifactor authentication to verify their identity.
- A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
- A customer’s photo or video is flagged by commercial or open-source deepfake detection software.
- GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
- A customer’s geographic or device data is inconsistent with the customer’s identity documents.
- A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.
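The last red flag lends itself to automated screening. The sketch below is not part of FinCEN's alert; it is a minimal illustration of how an institution might flag a new account showing rapid transactions, heavy payment volume to risky payee categories, or a high chargeback rate. All thresholds, category names, and data structures are hypothetical placeholders, not regulatory guidance.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative payee categories only; a real program would use the
# institution's own risk taxonomy.
RISKY_PAYEE_CATEGORIES = {"gambling", "digital_asset_exchange"}

@dataclass
class Txn:
    day: date
    amount: float
    payee_category: str
    chargeback: bool = False

def account_red_flags(opened: date, txns: list[Txn],
                      new_account_days: int = 30,
                      rapid_txn_threshold: int = 20,
                      risky_share_threshold: float = 0.5,
                      chargeback_rate_threshold: float = 0.1) -> list[str]:
    """Return which illustrative red flags fire for an account's history.

    Thresholds are arbitrary placeholders chosen for demonstration."""
    flags: list[str] = []
    if not txns:
        return flags
    # Rapid activity on a newly opened account.
    account_age_days = (max(t.day for t in txns) - opened).days
    if account_age_days <= new_account_days and len(txns) >= rapid_txn_threshold:
        flags.append("rapid transactions on a new account")
    # High share of payment volume going to potentially risky payees.
    total_volume = sum(t.amount for t in txns)
    risky_volume = sum(t.amount for t in txns
                       if t.payee_category in RISKY_PAYEE_CATEGORIES)
    if total_volume and risky_volume / total_volume >= risky_share_threshold:
        flags.append("high payment volume to potentially risky payees")
    # High rate of chargebacks or rejected payments.
    chargebacks = sum(1 for t in txns if t.chargeback)
    if chargebacks / len(txns) >= chargeback_rate_threshold:
        flags.append("high volume of chargebacks or rejected payments")
    return flags
```

In practice any one of these signals would feed a case review alongside the customer's identity-verification history, not trigger a filing on its own, consistent with FinCEN's note that no single red flag is determinative.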
FinCEN said financial institutions, when reporting such suspected activity, should reference Wednesday’s alert by including the key term “FIN-2024-DEEPFAKEFRAUD” in SAR field 2 (“Filing Institution Note to FinCEN”) and in the narrative to indicate a connection between the suspicious activity being reported and this alert. Institutions should also include in the narrative any applicable key terms indicating the underlying typology.