The CFPB should challenge lenders to improve on their baseline models in just this way. We would argue that some accommodation should be made to prefer gains in fairness, even if the new iteration forces lenders to make a modest concession to model quality.
The intervention should occur in the most sensitive elements of lending: underwriting, marketing, and collections. To that end, we propose a three-step process for reviewing models. First, the CFPB should consider using a dataset of its own for testing lenders' models.
Additionally, the CFPB should update that data frequently. Second, the CFPB should establish a threshold above which a loan decision warrants further examination for potential fair lending concerns. Third, by iterating on candidate models, the CFPB will soon see whether there are changes that significantly reduce disparate impact with little or no loss of model quality.
We propose that regulators, at a minimum, compel lenders to alter their model when a change from the most predictive approach can produce an accommodation ratio of 2 or more; an accommodation ratio of 5 would present an even stronger case for accommodation. The review process outlined above helps a lender find a model that more accurately defines risk while reducing disparate impact.
This test would create a systematic way of requiring lenders to place fair lending concerns on equal footing with repayment risk. Under any such compliance procedure, lenders would have to understand the level of bias in their models, moving their underwriting closer to the previously mentioned goal of explainability. The recommendations we made for adding explainability to modeling in Question 10 also apply to the issues raised here.
Reviewing hundreds of models on an ongoing basis requires a framework for review. We have recommended that the CFPB publish guidance to lenders on our proposed accommodation ratio.
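As an illustration only, and assuming the accommodation ratio is defined as the relative reduction in a disparate-impact metric divided by the relative loss in predictive quality (that definition, the metric names, and the figures below are our own assumptions for this sketch), a reviewer's check against the proposed floor might look like the following:

```python
# Hypothetical accommodation-ratio check for a challenger model against the
# most predictive baseline. All numbers and metric choices are illustrative.

def accommodation_ratio(baseline_auc: float, challenger_auc: float,
                        baseline_disparity: float, challenger_disparity: float) -> float:
    """Fairness gained per unit of predictive quality conceded (assumed definition)."""
    quality_loss = (baseline_auc - challenger_auc) / baseline_auc
    fairness_gain = (baseline_disparity - challenger_disparity) / baseline_disparity
    if quality_loss <= 0:          # challenger is at least as accurate: no trade-off at all
        return float("inf")
    return fairness_gain / quality_loss

# Illustrative numbers: a 10 percent reduction in approval-rate disparity for a
# 1 percent concession in AUC yields a ratio of 10, well above a 2.0 floor.
ratio = accommodation_ratio(baseline_auc=0.80, challenger_auc=0.792,
                            baseline_disparity=0.20, challenger_disparity=0.18)
print(round(ratio, 2))
```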
Additionally, other elements of the guidance could provide clarity for lenders as they develop their baseline models. Still, models can only draw conclusions from the data they are given. If the set of loans used in analysis differs from lending in the broader marketplace, the algorithm may draw conclusions that lack real-world validity.
Monotonic constraints safeguard against nonsensical conclusions. A monotonic constraint controls the direction of the effect that any input can have on the outcome. For example, if a small dataset included many refinance mortgage loans that fell into foreclosure even though those loans had loan-to-value (LTV) ratios well below 50 percent, the model might associate higher risk with lower LTV ratios.
Such a model would penalize borrowers who made higher down payments. A model builder should proactively intervene to prevent this kind of overfitting error by applying a monotonic constraint specifying that lower LTV ratios cannot increase the predicted default rate. In essence, a monotonic constraint is a tool to safeguard against errors that are the product of unrepresentative datasets.
Notably, monotonic constraints are easy to implement, increase explainability, lead to models that are more readily understood by loan applicants, and improve model quality.
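As a minimal sketch of how such a constraint might be applied, the example below uses scikit-learn's HistGradientBoostingClassifier, whose monotonic_cst parameter encodes the per-feature directions described above. The synthetic data and feature set are illustrative, not drawn from any lender's model.

```python
# Constrain the model so that a lower loan-to-value (LTV) ratio can never
# increase the predicted default risk, regardless of quirks in the sample.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000
ltv = rng.uniform(0.2, 1.0, n)            # loan-to-value ratio
dti = rng.uniform(0.05, 0.6, n)           # debt-to-income ratio
X = np.column_stack([ltv, dti])
# Hypothetical default outcome: risk rises with both LTV and DTI.
p_default = np.clip(0.05 + 0.25 * ltv + 0.30 * dti, 0, 1)
y = rng.binomial(1, p_default)

# monotonic_cst: +1 = prediction may only rise with the feature,
# -1 = only fall, 0 = unconstrained. Both features are constrained here
# to be non-decreasing in predicted default risk.
model = HistGradientBoostingClassifier(monotonic_cst=[1, 1], random_state=0)
model.fit(X, y)

# Sanity check: holding DTI fixed, predicted risk should not fall as LTV rises.
grid = np.column_stack([np.linspace(0.2, 1.0, 50), np.full(50, 0.3)])
risk = model.predict_proba(grid)[:, 1]
assert np.all(np.diff(risk) >= -1e-9)
```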
The CFPB should, as part of testing models, examine how the base datasets used by lenders may lead to biased outcomes from the models built on them. The risk of overfitting to biased data may be substantial when datasets include only a small number of borrowers from protected classes.
Lenders should incorporate processes in their AI training to guard against sociological bias as part of fair lending compliance. Stress-testing: given the relative infancy of AI in the financial sector, some models may not remain predictive during times of crisis, such as the current pandemic.
Moreover, the risk grows when lenders do not understand the logic of their own models. A lack of technological capacity to identify fair lending violations does not justify non-compliance.
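One way to operationalize such stress-testing is an out-of-time evaluation: train on loans originated before a crisis cutoff and measure how well the model still ranks defaults afterward. The sketch below assumes a hypothetical loans DataFrame with an origination_date column, a defaulted flag, and numeric feature columns; it is illustrative rather than a prescribed procedure.

```python
# Minimal out-of-time stress-test sketch. Assumptions: `loans` is a pandas
# DataFrame with a datetime `origination_date` column, a binary `defaulted`
# flag, and numeric feature columns; all names are hypothetical.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def stress_test_auc(loans: pd.DataFrame, features: list[str], cutoff: str) -> float:
    """Train on loans originated before `cutoff`, then score loans originated after it."""
    cutoff_ts = pd.Timestamp(cutoff)
    train = loans[loans["origination_date"] < cutoff_ts]
    stress = loans[loans["origination_date"] >= cutoff_ts]
    model = HistGradientBoostingClassifier(random_state=0)
    model.fit(train[features], train["defaulted"])
    scores = model.predict_proba(stress[features])[:, 1]
    # AUC on the stress period shows whether pre-crisis patterns still rank risk well.
    return roc_auc_score(stress["defaulted"], scores)

# Example call (hypothetical column names and cutoff date):
# auc = stress_test_auc(loans, ["ltv", "dti", "utilization"], cutoff="2020-03-01")
```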
Generating clear and explainable adverse action notices can improve fair lending policy and model review practices. A meaningful adverse action notice meets four criteria: it is accurate, explainable, interpretable, and actionable. Each of these criteria creates challenges for translation. With so many variables, limiting an adverse action notice to four or five reasons may be too simplistic, and when models evolve continuously, it is hard to create stable reason-code systems. Traditionally, financial institutions (FIs) have relied on three primary methods for determining which variables were the most significant factors in an adverse decision.
Some FIs have reviewed the inputs to their models to ensure there is no prima facie bias and no grounds to believe that an input could carry proxy bias.
While the latter approach requires more rigor and may be more statistically sound, neither is a sufficiently robust method for selecting an adverse action reason with AI. The contribution of a variable to the final result, expressed as a Shapley value, may differ by applicant and by FI.
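As one illustration of this per-applicant variability, the sketch below computes Shapley contributions for a single applicant with the shap package and ranks candidate adverse action reasons. The model, the feature names, and the rule of treating the largest positive contributions as denial reasons are assumptions made for demonstration only.

```python
# Hypothetical per-applicant reason selection from Shapley contributions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["utilization", "delinquencies", "inquiries", "income", "tenure"]
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Per-applicant Shapley contributions (in log-odds space for this model).
explainer = shap.TreeExplainer(model)
applicant = X[:1]                                  # one hypothetical denied applicant
contributions = explainer.shap_values(applicant)[0]

# Rank features by how strongly they pushed the score toward the adverse class
# (class 1 is assumed to represent default here).
order = np.argsort(contributions)[::-1]
top_reasons = [(feature_names[i], round(float(contributions[i]), 3)) for i in order[:4]]
print(top_reasons)
```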
It is important for each FI to be able to demonstrate not only that the method it uses to explain its decisions is clearly described, but also that it can be understood by the applicant.
FIs should publish adverse action notices that contain information to help an applicant improve their creditworthiness so that the applicant could receive a more favorable outcome in the future. The leading regression-based underwriting models use actionable variables, but many AI-based models include variables that are not actionable. Even so, modelers can create explainable models without compromising the overall predictive power of the algorithm.
That point was illustrated recently during a several-day competition in which data scientists produced models from data provided by a credit bureau. The team that built an explainable model lost only 1 percent in overall accuracy, a difference within the overall margin of error. That matters if we want adverse action (AA) notices to remain actionable for consumers. AA notices only have room to list a handful of reason codes, and when variables are collapsed to fit within those constraints, the effectiveness of the notices is compromised.
The CFPB should address the problems that occur when well-intentioned modelers attempt to explain their models within the constraints of the standard AA notice. The table below illustrates the issue. It highlights a structural problem with adverse action notices: most notices include only four or five reason codes.
In this case, by reducing the number of reason codes to four, the modelers had to collapse variables in ways that reduced the accuracy of the explanation. In some cases the summarization is a distinction without a difference, as when two delinquency variables are collapsed into one. Other collapses, however, reveal real problems.
Not all cash flows should qualify as income, but when the adverse action notice collapses all income-related variables, applicants cannot be sure how the lender viewed their data. Some consumers receive cash flows from account-to-account transfers that look like income, and a subset of P2P transfers may indeed represent informal sources of earned income.
The problem is that the consumer does not know how the model interpreted those transfers. Such a situation — where the consumer does not see the basis of the credit assessment — is a suboptimal outcome.
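A small sketch can make the information loss concrete. The variable names, groupings, and contribution values below are hypothetical; the point is that once per-variable contributions are rolled up into a four-code notice, the applicant can no longer see whether, for example, P2P transfers were discounted as income.

```python
# Hypothetical illustration: collapsing per-variable contributions into a
# four-code notice discards the detail an applicant would need to act on.
contributions = {                      # per-applicant contributions toward denial (illustrative)
    "payroll_income": 0.10,
    "p2p_transfers_as_income": 0.35,   # the model discounted these inflows
    "delinquencies_30d": 0.20,
    "delinquencies_90d": 0.15,
    "utilization": 0.25,
    "inquiries": 0.05,
}

groups = {                             # mapping used to fit the notice's reason-code limit
    "payroll_income": "Insufficient income",
    "p2p_transfers_as_income": "Insufficient income",
    "delinquencies_30d": "Delinquency history",
    "delinquencies_90d": "Delinquency history",
    "utilization": "High revolving utilization",
    "inquiries": "Recent credit inquiries",
}

collapsed: dict[str, float] = {}
for var, value in contributions.items():
    collapsed[groups[var]] = collapsed.get(groups[var], 0.0) + value

# The collapsed notice says "Insufficient income" but cannot tell the applicant
# that the driver was how P2P transfers were classified.
top_codes = sorted(collapsed.items(), key=lambda kv: kv[1], reverse=True)[:4]
print(top_codes)
```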