Credit Scoring Series Part Two: Credit Scorecard Modeling Methodology
In the world of credit scoring and risk management, data scientists are responsible for designing and developing accurate, useful, and stable credit risk models. At the same time, they must ensure their models are intuitive enough that other data scientists can assess their work and reproduce their results.
The model development process is scientific in some respects, but in others it can be quite subjective. Part of being a data scientist is making informed hypotheses and regularly testing new ideas and conjectures. While this process is an invaluable part of model development, it can be easy to lose track of all the questions and hypotheses that get tested, and hard to reconcile contradictory results.
That’s why data scientists need a trusted, tested, organized model development method that helps them keep track of results, outcomes, and other crucial observations. In other words, every data scientist should ask themselves three questions:
- How can I be confident I won’t miss important answers to my questions and hypotheses?
- How can I be sure my model will pass a peer-to-peer audit?
- How can I be certain my colleagues will be able to replicate my model’s results?
To satisfy the above points, data scientists need three crucial things:
- Systematic steps – also called a methodology – that all users can follow to ensure best practices
- A supporting structure – also called a theoretical framework – that people can use to fill in answers
- A model design: a description of a credit risk model that sets out its important characteristics and proves the model’s business value
Once a data scientist has identified these important elements, they can start placing their questions within the theoretical framework and begin designing and building the model. The process a data scientist might go through, and the questions they’d need to ask themselves, might look something like this:
- Question 1: How can I tell “good” customers from “bad” customers? Is a customer “bad” once they’re 60, 90, or 180 days past due?
- Answer 1: This is part of my model design – I’ll seek the answer from other areas of the business/organization and I’ll document it under “operational definition.”
- Question 2: When the model predicts “good” and “bad” customers, how long should the outcome period be? Do I need to fix or adjust that period’s length?
- Answer 2: This is also part of my model design. Again, I need to check with the business/organization to verify what they want the model to predict; once that’s done, I’ll file the answer under “performance window.” Once I’ve established the definition and the outcome period, I can derive the outcome variable from my data – which will form part of my framework.
- Question 3: Who should I include in the analysis? Do I need to exclude fraudulent customers or those who fall between the “good” and “bad” definitions?
- Answer 3: In my model design, I need to keep a list of all the assumptions I’m making so I can ask the business/organization to confirm them.
- Question 4: What are the main aspects that differentiate “good” customers from “bad” customers?
- Answer 4: This is part of my theoretical framework. Specifically, this is the identification of independent variables. I’ll carry out data exploration to establish the relationship between customers’ characteristics and the outcome variable. For example, I’ll test “customers that have regular incomes are less likely to default” or “older customers are less likely to default.” By testing these, I’m carrying out significance testing on hypotheses using statistical methods like logistic regression (see the sketch after this list). Based on my statistical analyses, I can decide which variables to retain in the model.
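To make that last answer concrete, here’s a minimal sketch of this kind of significance testing in Python. The data file and column names (age, has_regular_income, is_bad) are hypothetical, and the is_bad flag is assumed to have already been derived from the operational definition and performance window established earlier:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical application data; file path and column names are illustrative only.
df = pd.read_csv("applications.csv")

# Candidate predictors drawn from the hypotheses above.
X = sm.add_constant(df[["age", "has_regular_income"]])
y = df["is_bad"]  # 1 = "bad" per the operational definition, 0 = "good"

# Fit a logistic regression and inspect coefficient significance.
model = sm.Logit(y, X).fit(disp=0)
print(model.summary())

# Keep predictors whose coefficients are statistically significant (p < 0.05).
pvals = model.pvalues.drop("const")
significant = pvals[pvals < 0.05].index.tolist()
print("Candidate variables to retain:", significant)
```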
Of course, this process can go on, but this is an example of basic questions a data scientist could ask themselves to start building a useful model. In the next sections, we’ll dive deeper into scorecard modeling methodology.
Development Methodologies
Any business, research, or software project requires a sound methodology, often in the form of a theoretical or conceptual framework. The purpose of the framework is to describe the order of steps and their interactions. This framework ensures all important stages are carried out, gives users a better understanding of the project, identifies important milestones, and helps project stakeholders collaborate better.
Often, there’s more than one established methodology that organizations can adopt. Data mining projects are a good example of a project where multiple conceptual frameworks are available. Data mining usually involves developing a predictive model for business purposes; because such projects are so multidisciplinary, they require a variety of viewpoints. For example, a data mining project could require feedback from:
- Business teams: To assess potential business benefits
- Data science teams: To create a theoretical model
- Software development teams: To develop viable software solutions
Each viewpoint may require a separate methodology; at a minimum, two methodologies are needed to accommodate the above perspectives. Two popular examples are Agile-scrum and the Cross Industry Standard Process for Data Mining (CRISP-DM). Generally, the former is used to address business and software development questions, while the latter guides the building of the predictive model itself.
The Agile-scrum methodology is a time-boxed, iterative approach that builds software incrementally, with the main purpose of delivering value to the business. The methodology promotes active user involvement, effective interactions between stakeholders, and frequent deliveries. As such, it’s well suited for data mining projects, which are usually carried out in short time frames and require frequent updates to cope with dynamic economic environments.
CRISP-DM is the leading industry-standard data mining process model. It consists of six interconnected phases:
- Business understanding
- Data understanding
- Data preparation
- Modeling
- Evaluation
- Deployment
Figure 1. CRISP-DM data mining framework
Theoretical Framework and Model Design
A theoretical framework is a building-block foundation that identifies the important factors and their relationships in a (hypothesized) predictive model like a credit risk model. The objective is to formulate a series of hypotheses and decide on a modeling approach (such as logistic regression) to test those hypotheses. But more importantly, theoretical frameworks establish methods to replicate and validate findings to help users feel confident about the model’s accuracy and rigor.
Key elements of this framework are:
- The dependent variable (criterion); for example, “credit status”
- The independent variables/predictors; aspects like age, residential status, payment history, etc.
- The testable hypotheses; for example, “homeowners are less likely to default”
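As an illustration of testing the third element, here’s a minimal sketch using statsmodels’ two-sample proportions z-test; the default counts below are invented for the example:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: defaults and group sizes for homeowners vs. non-homeowners.
defaults = [120, 340]       # defaults among homeowners, non-homeowners
group_sizes = [4000, 6000]  # sample size of each group

# One-sided test of "homeowners default at a lower rate than non-homeowners."
stat, p_value = proportions_ztest(defaults, group_sizes, alternative="smaller")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The data support the hypothesis; residential status is a candidate predictor.")
```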
The model design should follow the accepted principles of research design methodology, which serves as the blueprint for data collection, measurement, and data analysis; this allows users to test the model for reliability and validity. Here, “reliability” asks, “Would the model produce consistent results if it were rebuilt on comparable data?” while “validity” asks, “Does the model actually capture the phenomenon we’re trying to predict, and did we build the right model for our objectives?”
A good model design should document the following:
- The unit of analysis (such as customer or product level)
- Population frame and sample size
- Operational definitions (what are “good”/“bad” customers?) and modeling assumptions (did this model include/exclude fraudulent customers?)
- Observational time horizon (such as customers’ payment history over the last two years) and performance windows (such as the timeframe for which the “bad” definition applies)
- Data sources and data collection methods
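One lightweight way to keep these design decisions auditable, and easy for colleagues to replicate, is to record them in code next to the model itself. A minimal sketch, where every value is an illustrative assumption to be confirmed with the business:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelDesign:
    """Records a scorecard's design decisions for audit and replication."""
    unit_of_analysis: str
    population_frame: str
    sample_size: int
    bad_definition: str
    assumptions: tuple
    observation_window_months: int
    performance_window_months: int
    data_sources: tuple

# Illustrative values only; each would be confirmed with the business.
design = ModelDesign(
    unit_of_analysis="customer",
    population_frame="all accounts opened between 2019-01 and 2020-12",
    sample_size=50_000,
    bad_definition="90+ days past due within the performance window",
    assumptions=("fraudulent accounts excluded", "indeterminates excluded"),
    observation_window_months=24,
    performance_window_months=12,
    data_sources=("internal billing system", "credit bureau"),
)
```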
Figure 2. Utilizing historical data to predict future outcomes
The length of the observation and performance windows will depend on the industry sector for which the model is designed. For example, both windows are typically longer in the banking sector than in the telecommunications sector, where frequent product changes require shorter observation and performance windows.
Application scorecards are typically applied to new customers and have no observation window, because customers are scored using information known at the time of application. External data, such as bureau data, dominate internal data for this type of scorecard. Behavioral scorecards, by contrast, have an observation window, rely mainly on internal data, and tend to have better predictive power than application scorecards.
Different scorecards can be applied throughout the entire customer journey, starting with acquisition campaigns, where models predict the likelihood that a customer will respond to a marketing offer. During the application stage, customers can be scored against multiple predictive models, such as the likelihood of defaulting on a credit obligation or of being fraudulent. For existing customers, a range of behavioral scorecard models can predict the probability of default in order to set credit limits and interest rates, plan upsell and cross-sell campaigns, determine churn probability for retention campaigns, predict the likelihood a customer will repay their debt or go into collections, and more.
| CRISP-DM phase | Steps |
|---|---|
| Data preparation | 1. Data integration |
| | 2. Exploratory data analysis |
| | 3. Data cleansing |
| | 4. Data transformation |
| Modeling | 5. Training data (partitioning) |
| | 6. Selection of predictors |
| | 7. Weight of evidence transformation |
| | 8. Model build (for example, logistic regression) |
| | 9. Reject inference (optional) |
| | 10. Scorecard model scaling |
| Evaluation | 11. Model evaluation and validation |
| | 12. Credit risk strategies |
| | 13. ROI analysis |
| Deployment | 14. Deployment code |
| | 15. Model scoring, testing, and implementation |
| | 16. Model monitoring |
Table 1. Typical steps in building a standard credit risk scorecard model
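Most of these steps are covered later in the series, but step 10 deserves a quick illustration here: scorecard scaling conventionally converts a model’s log-odds into points using a base score, base odds, and “points to double the odds” (PDO). A minimal sketch, where the base values are common but arbitrary choices:

```python
import math

def scale_score(log_odds, base_score=600, base_odds=50, pdo=20):
    """Convert log-odds of being 'good' into scorecard points.

    base_score: points assigned when good:bad odds equal base_odds.
    pdo: additional points awarded when the odds of being good double.
    """
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * log_odds

# Doubling the odds (50:1 -> 100:1) adds exactly PDO = 20 points.
print(round(scale_score(math.log(50))))   # 600
print(round(scale_score(math.log(100))))  # 620
```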
Conclusion
Credit scoring is a dynamic, flexible, and powerful tool for lenders, but there are plenty of ins and outs that are worth covering in detail. To learn more about credit scoring and credit risk mitigation techniques, read the next installment of our credit scoring series, Part Three: Data Preparation and Exploratory Data Analysis.
Read prior Credit Scoring Series installments: