Innovation spotlight: Providing adverse action notices when using AI/ML models

The role of artificial intelligence (AI) and, within it, machine learning (ML) in the credit underwriting process is under review at the Consumer Financial Protection Bureau (CFPB), which highlighted in a blog post Wednesday the regulatory uncertainties surrounding federally required adverse action notices to consumers who are denied credit.
The post, focusing on current requirements of the Equal Credit Opportunity Act (ECOA) and its implementing rules under Regulation B, notes that the existing regulatory framework “has built-in flexibility that can be compatible with AI algorithms.” For example, while a creditor must provide the specific reasons for an adverse action, the Official Interpretation to Regulation B “provides that a creditor need not describe how or why a disclosed factor adversely affected an application, 12 CFR pt. 1002, comment 9(b)(2)-3, or, for credit scoring systems, how the factor relates to creditworthiness,” the bureau stated.
The post also notes the potential pros and cons of using AI to expand the types of information considered in evaluating applications from individuals whose credit histories are so thin that they are unscorable under traditional underwriting techniques. “Consideration of such information may lead to more efficient credit decisions and potentially lower the cost of credit. On the other hand, AI may create or amplify risks, including risks of unlawful discrimination, lack of transparency, and privacy concerns,” the post states. “Bias in the source data or model construction can also lead to inaccurate predictions. In considering AI or other technologies, the Bureau is committed to helping spur innovation consistent with consumer protections.”
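To make the discrimination risk concrete, fair-lending reviews often begin with a simple outcome comparison across applicant groups. The sketch below computes an adverse impact ratio, the ratio of approval rates between two groups; the decisions, group labels, and the 0.8 screening threshold (a common rule of thumb, not a legal standard) are illustrative assumptions, not anything prescribed in the bureau's post.

```python
# Illustrative only: a first-pass screen for outcome disparities in model
# decisions. All data here is synthetic; real fair-lending analysis goes
# well beyond this single statistic.
import numpy as np

rng = np.random.default_rng(42)
approved = rng.random(1000) < 0.6          # hypothetical model decisions
group = rng.choice(["A", "B"], size=1000)  # hypothetical group labels

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

# Adverse impact ratio: approval rate of the less-favored group divided by
# that of the more-favored group. Values below ~0.8 are commonly treated
# as a flag for closer review.
air = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={air:.2f}")
```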
The bureau is hoping to see providers use the tools the agency has deployed to promote innovation in the provision of consumer financial products, particularly regarding required disclosures. Launched last September, these include the bureau’s revised Policy to Encourage Trial Disclosure Programs (TDP Policy); a revised No-Action Letter Policy (NAL Policy); and the Compliance Assistance Sandbox Policy (CAS Policy). The bureau notes that the TDP Policy and CAS Policy provide for a legal safe harbor that could reduce regulatory uncertainty in the area of AI and adverse action notices. The TDP Policy also specifically identifies adverse action notices as a type of federal disclosure requirement covered by the policy.
In particular, the bureau says it is interested in exploring at least three areas:
- The methodologies for determining the principal reasons for an adverse action. The example methods currently provided in the Official Interpretation date to 1982, and there may be uncertainty in how those examples apply to current AI models and explainability methods. See 12 CFR pt. 1002, comment 9(b)(2)-5. (The first sketch following this list illustrates one traditional scorecard-style method.)
- The accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models.
- How to convey the principal reasons in a manner that accurately reflects the factors used in the model and is understandable to consumers, including how to describe varied and alternative data sources, or their interrelationships, in an adverse action reason. (The second sketch following this list pairs model attributions with plain-language reason statements.)
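The two sketches below are minimal illustrations of the methodological gap the bureau describes, not bureau-endorsed techniques. The first shows a scorecard-style approach in the spirit of the 1982-era examples: rank factors by how far the applicant's points fell below the maximum attainable on each factor. All factor names and point values are hypothetical.

```python
# A minimal sketch of a points-below-maximum ranking for a hypothetical
# points-based scorecard. Factor names and point values are invented.

MAX_POINTS = {
    "payment_history": 40,
    "credit_utilization": 30,
    "length_of_history": 20,
    "recent_inquiries": 10,
}

def principal_reasons(applicant_points: dict, top_n: int = 4) -> list:
    """Rank factors by points lost relative to the maximum attainable."""
    gaps = {
        factor: MAX_POINTS[factor] - earned
        for factor, earned in applicant_points.items()
    }
    return sorted(gaps, key=gaps.get, reverse=True)[:top_n]

applicant = {
    "payment_history": 35,
    "credit_utilization": 10,  # largest shortfall -> first principal reason
    "length_of_history": 12,
    "recent_inquiries": 9,
}
print(principal_reasons(applicant))
# ['credit_utilization', 'length_of_history', 'payment_history', 'recent_inquiries']
```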
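The second sketch speaks to the second and third bullets: for a complex ensemble model, a post-hoc attribution method such as SHAP is one way to estimate which inputs drove a denial, and the attributions can then be mapped to plain-language reason statements. The feature names, the reason-text mapping, and the synthetic training data are all assumptions for illustration; whether such attributions are accurate enough to satisfy Regulation B is exactly the open question the bureau raises.

```python
# A hedged sketch, assuming the shap and scikit-learn packages: attribute a
# denial by a gradient-boosted model to its inputs, then translate the two
# most negative contributions into hypothetical reason statements.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["utilization", "delinquencies", "age_of_file", "inquiries"]
REASON_TEXT = {  # hypothetical consumer-facing reason statements
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Delinquent past or present credit obligations",
    "age_of_file": "Length of credit history is too short",
    "inquiries": "Too many recent inquiries",
}

# Synthetic data: label 1 = approve, 0 = deny; high utilization and
# delinquencies push toward denial, a longer file pushes toward approval.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (0.25 * X[:, 2] - X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Pick one denied applicant and attribute the denial to the inputs.
denied = np.where(model.predict(X) == 0)[0][0]
contributions = shap.TreeExplainer(model).shap_values(X[denied : denied + 1])[0]

# The most negative contributions pushed the score toward denial.
for i in np.argsort(contributions)[:2]:
    print(REASON_TEXT[FEATURES[i]])
```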
“The Bureau intends to leverage experiences gained through the innovation policies to inform policy,” the bureau stated in its post. “For example, applications granted under the innovation policies, as well as other stakeholder engagement with the Bureau, may ultimately be used to help support an amendment to a regulation or its Official Interpretation.”