Cover
Title Page
Copyright Page
Contents
About This Book
About the Author
Acknowledgments
Chapter 1 Introduction
1.1 Book Overview
1.2 Overview of Credit Risk Modeling
1.3 Regulatory Environment
1.3.1 Minimum Capital Requirements
1.3.2 Expected Loss
1.3.3 Unexpected Loss
1.3.4 Risk Weighted Assets
1.4 SAS Software Utilized
1.5 Chapter Summary
1.6 References and Further Reading
Chapter 2 Sampling and Data Pre-Processing
2.1 Introduction
2.2 Sampling and Variable Selection
2.2.1 Sampling
2.2.2 Variable Selection
2.3 Missing Values and Outlier Treatment
2.3.1 Missing Values
2.3.2 Outlier Detection
2.4 Data Segmentation
2.4.1 Decision Trees for Segmentation
2.4.2 K-Means Clustering
2.5 Chapter Summary
2.6 References and Further Reading
Chapter 3 Development of a Probability of Default (PD) Model
3.1 Overview of Probability of Default
3.1.1 PD Models for Retail Credit
3.1.2 PD Models for Corporate Credit
3.1.3 PD Calibration
3.2 Classification Techniques for PD
3.2.1 Logistic Regression
3.2.2 Linear and Quadratic Discriminant Analysis
3.2.3 Neural Networks
3.2.4 Decision Trees
3.2.5 Memory Based Reasoning
3.2.6 Random Forests
3.2.7 Gradient Boosting
3.3 Model Development (Application Scorecards)
3.3.1 Motivation for Application Scorecards
3.3.2 Developing a PD Model for Application Scoring
3.4 Model Development (Behavioral Scoring)
3.4.1 Motivation for Behavioral Scorecards
3.4.2 Developing a PD Model for Behavioral Scoring
3.5 PD Model Reporting
3.5.1 Overview
3.5.2 Variable Worth Statistics
3.5.3 Scorecard Strength
3.5.4 Model Performance Measures
3.5.5 Tuning the Model
3.6 Model Deployment
3.6.1 Creating a Model Package
3.6.2 Registering a Model Package
3.7 Chapter Summary
3.8 References and Further Reading
Chapter 4 Development of a Loss Given Default (LGD) Model
4.1 Overview of Loss Given Default
4.1.1 LGD Models for Retail Credit
4.1.2 LGD Models for Corporate Credit
4.1.3 Economic Variables for LGD Estimation
4.1.4 Estimating Downturn LGD
4.2 Regression Techniques for LGD
4.2.1 Ordinary Least Squares – Linear Regression
4.2.2 Ordinary Least Squares with Beta Transformation
4.2.3 Beta Regression
4.2.4 Ordinary Least Squares with Box-Cox Transformation
4.2.5 Regression Trees
4.2.6 Artificial Neural Networks
4.2.7 Linear Regression and Non-linear Regression
4.2.8 Logistic Regression and Non-linear Regression
4.3 Performance Metrics for LGD
4.3.1 Root Mean Squared Error
4.3.2 Mean Absolute Error
4.3.3 Area Under the Receiver Operating Curve
4.3.4 Area Over the Regression Error Characteristic Curves
4.3.5 R-square
4.3.6 Pearson’s Correlation Coefficient
4.3.7 Spearman’s Correlation Coefficient
4.3.8 Kendall’s Correlation Coefficient
4.4 Model Development
4.4.1 Motivation for LGD models
4.4.2 Developing an LGD Model
4.5 Case Study: Benchmarking Regression Algorithms for LGD
4.5.1 Data Set Characteristics
4.5.2 Experimental Set-Up
4.5.3 Results and Discussion
4.6 Chapter Summary
4.7 References and Further Reading
Chapter 5 Development of an Exposure at Default (EAD) Model
5.1 Overview of Exposure at Default
5.2 Time Horizons for CCF
5.3 Data Preparation
5.4 CCF Distribution – Transformations
5.5 Model Development
5.5.1 Input Selection
5.5.2 Model Methodology
5.5.3 Performance Metrics
5.6 Model Validation and Reporting
5.6.1 Model Validation
5.6.2 Reports
5.7 Chapter Summary
5.8 References and Further Reading
Chapter 6 Stress Testing
6.1 Overview of Stress Testing
6.2 Purpose of Stress Testing
6.3 Stress Testing Methods
6.3.1 Sensitivity Testing
6.3.2 Scenario Testing
6.4 Regulatory Stress Testing
6.5 Chapter Summary
6.6 References and Further Reading
Chapter 7 Producing Model Reports
7.1 Surfacing Regulatory Reports
7.2 Model Validation
7.2.1 Model Performance
7.2.2 Model Stability
7.2.3 Model Calibration
7.3 SAS Model Manager Examples
7.3.1 Create a PD Report
7.3.2 Create an LGD Report
7.4 Chapter Summary
Tutorial A – Getting Started with SAS Enterprise Miner
A.1 Starting SAS Enterprise Miner
A.2 Assigning a Library Location
A.3 Defining a New Data Set
Tutorial B – Developing an Application Scorecard Model in SAS Enterprise Miner
B.1 Overview
B.1.1 Step 1 – Import the XML Diagram
B.1.2 Step 2 – Define the Data Source
B.1.3 Step 3 – Visualize the Data
B.1.4 Step 4 – Partition the Data
B.1.5 Step 5 – Perform Screening and Grouping with Interactive Grouping
B.1.6 Step 6 – Create a Scorecard and Fit a Logistic Regression Model
B.1.7 Step 7 – Create a Rejected Data Source
B.1.8 Step 8 – Perform Reject Inference and Create an Augmented Data Set
B.1.9 Step 9 – Partition the Augmented Data Set into Training, Test and Validation Samples
B.1.10 Step 10 – Perform Univariate Characteristic Screening and Grouping on the Augmented Data Set
B.1.11 Step 11 – Fit a Logistic Regression Model and Score the Augmented Data Set
B.2 Tutorial Summary
Appendix A Data Used in This Book
A.1 Data Used in This Book
Chapter 3: Known Good Bad Data
Chapter 3: Rejected Candidates Data
Chapter 4: LGD Data
Chapter 5: Exposure at Default Data
Index