Session 3 Flashcards
(19 cards)
How does a creditor’s perspective differ from a shareholder’s, and why is a neutral balance sheet format important?
Creditor’s Perspective vs. Shareholder’s:
Shareholders focus on:
- Profitability and value growth (e.g., ROE, stock price)
- Accept higher risk for higher return
Creditors focus on:
- Solvency and risk, including:
- Interest coverage
- Leverage
- Liquidity (e.g., debt ratios, cash flows)
- Prioritize downside protection and repayment capacity
Key Insight: A neutral balance sheet format is needed to compare financials across borrowers consistently in credit analysis.
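A minimal Python sketch of the creditor-side screen, computing the solvency and liquidity metrics named above from purely illustrative balance-sheet figures:

```python
# Minimal sketch: creditor-side ratios from hypothetical balance-sheet figures.
# All inputs are illustrative, not from any real borrower.

def creditor_ratios(ebit, interest_expense, total_debt, total_assets,
                    current_assets, current_liabilities):
    """Return the solvency/liquidity metrics a creditor typically screens first."""
    return {
        "interest_coverage": ebit / interest_expense,            # ability to service interest
        "debt_ratio": total_debt / total_assets,                 # leverage
        "current_ratio": current_assets / current_liabilities,   # short-term liquidity
    }

print(creditor_ratios(ebit=120, interest_expense=30, total_debt=600,
                      total_assets=1_000, current_assets=250,
                      current_liabilities=200))
# {'interest_coverage': 4.0, 'debt_ratio': 0.6, 'current_ratio': 1.25}
```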
What is junior debt, and how does it compare to senior debt in terms of risk, repayment, and use?
Junior Debt (a.k.a. subordinated or mezzanine debt):
- Repaid only after senior debt in liquidation → higher risk
- Acts as a risk buffer for senior debt
- Probability of default (PD) may be similar
- Loss given default (LGD) is higher for junior debt
- Higher interest rates & more volatile pricing
- Often includes flexible terms (e.g., conditional on issuer profitability, unsecured)
Comparison vs. Senior Debt:
- Lower repayment priority
- Higher risk and return
- Sometimes analyzed similarly to equity due to flexible structure
Common Uses:
- Recapitalizations
- Acquisitions
- Start-up or growth financing
- Capital structure optimization
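The higher LGD of junior debt follows mechanically from the repayment order. A hedged sketch of a liquidation waterfall, with hypothetical claim sizes and liquidation value:

```python
# Sketch of a liquidation waterfall (hypothetical figures): senior debt is repaid
# first, junior debt absorbs the shortfall, so its loss given default (LGD) is higher.

def waterfall_lgd(liquidation_value, senior_claim, junior_claim):
    senior_recovery = min(liquidation_value, senior_claim)
    junior_recovery = min(liquidation_value - senior_recovery, junior_claim)
    return {
        "senior_LGD": 1 - senior_recovery / senior_claim,
        "junior_LGD": 1 - junior_recovery / junior_claim,
    }

# Firm liquidates for 70 with 60 senior and 30 junior debt outstanding.
print(waterfall_lgd(liquidation_value=70, senior_claim=60, junior_claim=30))
# {'senior_LGD': 0.0, 'junior_LGD': 0.666...} -> junior lenders lose two thirds of their claim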
What are the key differences between qualitative and quantitative credit ratings, including criteria used?
Qualitative Ratings:
- Concept: Subjective opinion of creditworthiness over time (e.g., 1 or 5 years)
- Data basis: Manual analysis of financial + non-financial info
- Use: By banks and rating agencies
Qualitative Criteria:
- Management quality & business model
- Industry outlook & regulation
- Event risk & corporate governance
Quantitative Ratings:
- Concept: Objective measure of default probability over time
- Data basis: Derived from models:
- Structural (e.g., Merton model)
- Statistical (historical default data)
- Use: Basel II internal ratings, insurers, corporate self-rating
Quantitative Criteria:
- Leverage, liquidity, profitability (e.g., Debt/EBITDA, EBIT margin)
- Cash flow strength, interest coverage
- Revenue & earnings growth
Why are input drivers important when linking financial statements to credit analysis?
Financial statements reflect underlying business and market drivers like:
- Leverage
- Pricing
- Competitive environment
In credit analysis, the focus is on identifying risks to a firm's ability to meet its obligations, not on upside potential.
Therefore, input drivers such as:
- Liquidity
- Market dynamics
- Governance
are critical for assessing financial stability and credit risk.
What does individual credit rating analysis involve, and how do rating agency methodologies adapt?
Individual Credit Rating Analysis:
- Involves forming an informed opinion based on internal + external factors
- Forecasts are often limited to client-provided data
- More explicit forecasts typically come from rating agencies
Rating Agencies:
- Use slightly different methods and variables
- Methodologies are not static; they are revised after regime shifts (e.g., changes in the causal structure of input factors)
In the simple scoring approach Step 1, what are the hard and soft facts used to calculate a Preliminary Client Rating?
Information is split into two categories:
- Hard facts: quantitative data from the financial statements (e.g., ratios for debt, liquidity, profitability, and asset management)
- Soft facts: qualitative information (e.g., management quality, business model, market position)
Both are combined into the total score behind the Preliminary Client Rating.
How are input factors scored in Step 2 of the Simple Scoring Approach, and what categories are evaluated?
Each input factor is scored from 0 to 4 points based on its percentile within the sector (a scoring sketch follows the note below):
Scoring is repeated across these categories:
- Debt Management (e.g., debt ratio, equity ratio)
- Liquidity (e.g., cash ratio, quick ratio)
- Profitability (e.g., ROA, ROE)
- Asset Management (e.g., collection/payment periods)
Note: Sometimes scoring includes caps/floors based on parent support, business model, or country of origin — but not in this example.
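A minimal sketch of this scoring step; the quintile cut-offs and the "higher is better" direction are illustrative assumptions, not the course's actual calibration:

```python
# Sketch of Step 2 scoring: each ratio is scored 0-4 by its position within the
# sector distribution. The quintile cut-offs below are illustrative assumptions.

from bisect import bisect_right

def score_input(value, sector_values, higher_is_better=True):
    """Score 0-4 points depending on which sector quintile the value falls into."""
    ranked = sorted(sector_values)
    pct = sum(v <= value for v in ranked) / len(ranked)   # percentile rank in sector
    if not higher_is_better:                              # e.g., debt ratio: lower is better
        pct = 1 - pct
    return bisect_right([0.2, 0.4, 0.6, 0.8], pct)        # 0..4 points

sector_roa = [0.01, 0.02, 0.03, 0.05, 0.06, 0.08, 0.09, 0.11, 0.12, 0.15]
print(score_input(0.10, sector_roa))   # ROA sits in the 8th decile -> 3 points
```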
What are Steps 3 and 4 in the Simple Scoring Approach to credit rating, and how is the Preliminary Rating adjusted?
Step 3: Total Score → Preliminary Client Rating
Total score = sum of points from all input factors
Converted to a rating class (e.g., A–E) via a conversion table
Step 4: Individual Client Rating
Adjust Preliminary Rating using additional client-specific insights:
- Key Influencing Factors:
- Cash Flow / Debt Ratio → repayment ability
- Negative Information → legal issues, default history, reputation
➡ Final result = Individual Client Rating
A refined, realistic credit risk assessment
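A hedged sketch of Steps 3 and 4; the score bands and the one-notch adjustment rules are hypothetical placeholders for the conversion table that is not reproduced here:

```python
# Sketch of Steps 3-4: convert the total score into a preliminary rating class and
# then adjust it for client-specific information. The score bands and the one-notch
# downgrade rules are illustrative assumptions, not the course's actual table.

RATING_CLASSES = ["E", "D", "C", "B", "A"]   # worst -> best

def preliminary_rating(total_score, max_score):
    band = int(total_score / max_score * len(RATING_CLASSES))
    return RATING_CLASSES[min(band, len(RATING_CLASSES) - 1)]

def individual_rating(prelim, weak_cash_flow_to_debt=False, negative_information=False):
    idx = RATING_CLASSES.index(prelim)
    if weak_cash_flow_to_debt:
        idx = max(idx - 1, 0)                # notch down for weak repayment ability
    if negative_information:
        idx = max(idx - 1, 0)                # notch down for legal issues, defaults, reputation
    return RATING_CLASSES[idx]

prelim = preliminary_rating(total_score=26, max_score=40)
print(prelim)                                                 # "B"
print(individual_rating(prelim, negative_information=True))   # notched down to "C"
```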
What is Step 5 in the credit rating process, and how are rating levels mapped to default probabilities?
Step 5: Estimate Probability of Default (PD)
- Use the final rating (A–E) to assign a statistical likelihood of client default
- PD values are based on historical or model-based data
Use of PD:
- Lending decisions
- Pricing
- Regulatory capital requirements
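A minimal sketch of the rating-to-PD lookup; the PD values are illustrative placeholders, since real values come from historical or model-based default data:

```python
# Sketch of Step 5: map the final rating class to a probability of default.
# The PD values below are illustrative placeholders.

PD_BY_RATING = {"A": 0.001, "B": 0.005, "C": 0.02, "D": 0.08, "E": 0.25}

rating = "C"
print(f"1-year PD for rating {rating}: {PD_BY_RATING[rating]:.1%}")   # 2.0%
```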
How does S&P determine a credit rating, and what is the relationship between credit ratings and risk?
S&P Credit Rating Process:
- Issuer requests rating → analyst assigned
- Data exchange and analysis
- Draft reviewed for factual accuracy
- Final rating approved by committee and published
Rating Methodology:
Anchor = Combination of:
- Business Risk Profile (country, industry, competition)
- Financial Risk Profile (leverage, cash flow)
- Modifiers adjust Anchor (e.g., diversification, governance)
- Result = Stand-Alone Credit Profile, possibly adjusted for external support
- ESG factors included (no separate score since 2023)
Anchor Grid Insight:
Risk ↑ in either profile → Rating (Anchor) ↓
E.g., Strong Business Risk (2) + Modest Financial Risk (2) → Anchor = a+/a
Credit Risk Principles:
- Higher rating = lower PD (e.g., an AAA issuer is more likely to repay its debt than a B issuer)
- Longer horizon = higher PD
- Lower rating = higher LGD relevance (riskier firms → bigger potential creditor losses)
What are the main advantages and disadvantages of individual credit risk analysis?
Advantage:
- Flexible & qualitative → adapts to different industries and major events (e.g., crises)
Disadvantages:
- Risk of information overload and inaccurate forecasts → requires expert judgment
- Time-consuming (~1 week per rating)
- Subjective → needs rating committees
- Methodologies require frequent updates
- Often lags market changes (e.g., CDS spreads, stock prices)
How do structural models estimate credit risk, and how is equity modeled as a call option?
Core Idea of Structural Models:
Firm has:
- Assets (A) = what the firm owns
- Debt (D) = what the firm must repay at maturity
Key Question: Will Assets ≥ Debt at debt maturity?
- Uncertainty comes from asset value volatility (like a stock price)
Modeling Equity as a Call Option (Merton Model):
Shareholders have a call option on the firm’s assets:
At maturity:
- If Assets > Debt → firm repays debt, shareholders keep the rest:
=> Payoff = Assets – Debt
- If Assets < Debt → shareholders walk away, firm defaults:
=> Payoff = 0
➡ Shareholders have limited liability (can’t lose more than they invested)
Modeling equity as a call option turns default risk into a priced option problem.
→ Equity value reflects the firm's default risk, driven by asset value, debt level, and asset volatility
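A minimal Merton-model sketch, valuing equity as a European call on the firm's assets with the debt face value as strike; all inputs are hypothetical:

```python
# Merton-model sketch: equity is a European call on firm assets with strike equal
# to the debt face value. Inputs (asset value, volatility, debt, horizon, rate)
# are hypothetical.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_equity_value(A, D, sigma_A, T, r):
    """Equity value = Black-Scholes call on assets A with strike D maturing at T."""
    d1 = (log(A / D) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * sqrt(T))
    d2 = d1 - sigma_A * sqrt(T)
    equity = A * norm_cdf(d1) - D * exp(-r * T) * norm_cdf(d2)
    risk_neutral_pd = norm_cdf(-d2)          # probability assets end up below debt
    return equity, risk_neutral_pd

equity, pd_rn = merton_equity_value(A=120, D=100, sigma_A=0.25, T=1.0, r=0.02)
print(f"Equity value: {equity:.2f}, risk-neutral PD: {pd_rn:.1%}")
```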
What are the drivers of credit risk in structural models, and how is Distance to Default (DD) calculated and interpreted?
Drivers of Credit Risk in Structural Models:
- Higher earnings → ↓ PD
- Higher volatility of earnings → ↑ PD
- Higher assets → ↓ PD
- Higher debt → ↑ PD
PD = Probability of default
Interpretation:
DD is expressed in standard deviations
⇒ Measures how many std. devs. asset value is above the default point (debt)
⇒ Higher DD = safer company
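A sketch of the DD calculation using a common KMV-style simplification (asset value minus default point, divided by one standard deviation of asset value); figures are hypothetical:

```python
# Sketch of Distance to Default using a common KMV-style simplification:
# DD = (asset value - default point) / (asset value * asset volatility).
# All figures are hypothetical.

def distance_to_default(asset_value, default_point, asset_volatility):
    """How many standard deviations of asset value lie between assets and debt."""
    return (asset_value - default_point) / (asset_value * asset_volatility)

dd = distance_to_default(asset_value=120, default_point=100, asset_volatility=0.25)
print(f"DD = {dd:.2f} standard deviations")   # 0.67 -> relatively close to default
```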
How is Distance to Default (DD) converted into a Probability of Default (PD), and what are the strengths and weaknesses of structural models like Moody’s KMV?
Conversion:
- PD = f(DD) → Mapping function
- Converts Distance to Default into Probability of Default
- Based on empirical data (e.g., historical credit databases)
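A sketch of two possible mappings: a Merton-style normal approximation PD = N(-DD), and an empirical lookup table whose values are purely illustrative (not Moody's KMV data):

```python
# Two ways to turn DD into a PD: the Merton-style normal mapping PD = N(-DD),
# and an empirical lookup table. The table values below are illustrative only.

from math import erf, sqrt

def pd_normal(dd):
    return 0.5 * (1.0 + erf(-dd / sqrt(2.0)))

EMPIRICAL_PD = [(1.0, 0.10), (2.0, 0.04), (3.0, 0.01), (4.0, 0.002)]  # (min DD, PD)

def pd_empirical(dd):
    pd = 0.25                                  # assumed PD for very low DD
    for threshold, mapped_pd in EMPIRICAL_PD:
        if dd >= threshold:
            pd = mapped_pd
    return pd

print(pd_normal(2.0), pd_empirical(2.0))       # ~0.023 vs 0.04 (illustrative)
```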
Advantages of Structural Models (e.g., Moody’s KMV):
- Intuitive link to financial structure
- Strong theoretical foundation when using market-implied data (e.g., options)
Disadvantages:
- Complex & data-intensive (math, IT, empirical calibration)
- Needs a developed stock + options market
- Not suitable for non-listed firms
- Poor fit for new or inefficient markets (e.g., startups, bubbles)
How do statistical models estimate credit risk, and what are the key development steps?
General Overview:
- Use historical borrower data to forecast default probabilities over time
- Combine financial + non-financial inputs
- Must ensure risk differentiation and use current, reliable data
- Used in Basel II internal ratings since 2004
Development Steps:
- Variable Selection – Identify relevant input factors
- Method Selection – Choose appropriate statistical model
- Data Reduction – Use 5–7 optimal inputs for predictive accuracy
- Classification Accuracy – Test model using forecast metrics
- Rating Scale Construction – Convert output into rating classes
- PD Assignment – Match classes to PDs (e.g., via Moody’s, S&P, models)
- Smooth Rating Function – Ensure rating stability over time
- Testing & Updating – Validate and re-estimate with out-of-sample data
What are three statistical model types used for estimating credit risk, and how do they work?
1. Discriminant Models (e.g., Altman’s Z-Score):
Classifies firms into groups (e.g., bankrupt vs. non-bankrupt)
Formula (based on financial ratios):
Z = 1.2·X1 + 1.4·X2 + 3.3·X3 + 0.6·X4 + 1.0·X5
- X1 = Net Working Capital / Total Assets
- X2 = Retained Earnings / Total Assets
- X3 = EBIT / Total Assets
- X4 = Market Value of Equity / Book Value of Liabilities
- X5 = Sales / Total Assets
Interpretation:
- Z < 1.81 → High risk of bankruptcy
- Z > 2.67 → Low risk
- 1.81 < Z < 2.67 → Gray zone (uncertain)
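A short sketch applying the Z-Score formula and cut-offs above to hypothetical ratios:

```python
# Altman Z-Score as defined above; the sample ratios are hypothetical.

def altman_z(nwc_ta, re_ta, ebit_ta, mve_bl, sales_ta):
    return 1.2*nwc_ta + 1.4*re_ta + 3.3*ebit_ta + 0.6*mve_bl + 1.0*sales_ta

def classify(z):
    if z < 1.81:
        return "High risk of bankruptcy"
    if z > 2.67:
        return "Low risk"
    return "Gray zone"

z = altman_z(nwc_ta=0.10, re_ta=0.20, ebit_ta=0.12, mve_bl=0.80, sales_ta=1.10)
print(round(z, 2), classify(z))   # 2.38 Gray zone
```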
2. (Non-)Linear Regression Models:
Predict continuous outcome (e.g., Probability of Default, PD)
Equation: PD = β1·X1 + β2·X2 + … + βn·Xn
- Xi = risk indicators (e.g., leverage, liquidity)
- βi = coefficients estimated from historical data
- Produces a PD for each company
- Often grouped into rating classes (e.g., BBB, CCC)
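A sketch of such a scoring equation; the logistic form below is one common non-linear specification, and the coefficients are hypothetical rather than estimated:

```python
# Sketch of a (non-)linear regression scoring function. The logistic form is one
# common non-linear specification; the coefficients are hypothetical.

from math import exp

COEFFS = {"intercept": -3.0, "debt_to_assets": 2.5, "quick_ratio": -1.2, "ebit_margin": -4.0}

def pd_logistic(debt_to_assets, quick_ratio, ebit_margin):
    score = (COEFFS["intercept"]
             + COEFFS["debt_to_assets"] * debt_to_assets
             + COEFFS["quick_ratio"] * quick_ratio
             + COEFFS["ebit_margin"] * ebit_margin)
    return 1.0 / (1.0 + exp(-score))           # squashes the linear score into (0, 1)

print(f"PD ≈ {pd_logistic(debt_to_assets=0.7, quick_ratio=0.9, ebit_margin=0.05):.1%}")
```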
3. Empirical Mapping of Ratios:
- Sort companies into deciles based on 1 ratio (e.g., Quick Ratio)
- Plot: PD vs. decile
- Helps evaluate predictive power of financial ratios
- Supports variable selection for models above
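A sketch of the decile mapping on synthetic data, where weaker liquidity is wired to produce more defaults so the pattern is visible:

```python
# Sketch of empirical ratio mapping: sort firms by one ratio (here the quick ratio),
# split into deciles, and compute the observed default rate per decile. Data is synthetic.

import random

random.seed(0)
firms = [{"quick_ratio": random.uniform(0.2, 2.0)} for _ in range(1000)]
for f in firms:
    # synthetic rule: weaker liquidity -> more likely to default
    f["defaulted"] = random.random() < 0.15 / f["quick_ratio"]

firms.sort(key=lambda f: f["quick_ratio"])
decile_size = len(firms) // 10
for d in range(10):
    bucket = firms[d * decile_size:(d + 1) * decile_size]
    default_rate = sum(f["defaulted"] for f in bucket) / len(bucket)
    print(f"Decile {d + 1}: default rate {default_rate:.1%}")
```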
What are Steps 3 to 5 in building a statistical model for credit risk estimation?
Step 3: Input Variable Selection
Not all financial ratios help predict default; some may cause multicollinearity or overfitting
Use:
- Statistical tests (e.g., ANOVA, Kolmogorov-Smirnov)
- Factor analysis and multicollinearity checks
- Keep only the most useful and independent variables
Step 4: Model Evaluation
- Use accuracy metrics, like the Gini coefficient, to assess predictive power
- Gini = 1 → perfect model
- Gini = 0 → no better than random
- Goal: Select best variable combination for high accuracy and interpretability
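A minimal sketch of the Gini calculation, computed as 2·AUC − 1 from predicted PDs and observed defaults on toy data:

```python
# Sketch of Step 4: measure discriminatory power with the Gini coefficient,
# computed as 2*AUC - 1 from predicted PDs and observed defaults (toy data).

def gini(predicted_pds, defaults):
    """Gini = 2*AUC - 1, with AUC estimated by pairwise comparison."""
    pos = [p for p, d in zip(predicted_pds, defaults) if d == 1]   # defaulted firms
    neg = [p for p, d in zip(predicted_pds, defaults) if d == 0]   # surviving firms
    concordant = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    auc = concordant / (len(pos) * len(neg))
    return 2 * auc - 1

pds      = [0.01, 0.02, 0.05, 0.10, 0.30, 0.40]
defaults = [0,    0,    0,    1,    0,    1   ]
print(f"Gini = {gini(pds, defaults):.2f}")     # 1 = perfect ranking, 0 = random
```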
Step 5: Rating Classification
- Group outputs (e.g., predicted PDs) into rating classes (e.g., AAA to C)
- Use clustering or thresholds
- Regulatory requirement (Basel rules):
- At least 7 distinct risk classes for non-defaulting firms
What are Steps 6–8 in credit risk model development using statistical methods?
Step 6: Assign PDs to Rating Classes
Calculate average historical PD per rating class (e.g.):
- AAA ≈ 0.01%
- A ≈ 0.10%
- BB ≈ 1.00%
Makes the scale comparable to S&P/Moody’s, linking PD to credit quality
Step 7: Smooth the PD Curve
Raw PDs may be bumpy or inconsistent (e.g., the PD for BBB coming out higher than the PD for BB)
Apply calibration function to:
- Ensure higher PD = worse rating
- Produce smooth, concave curve for stability and usability
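A sketch of one possible calibration: fitting an exponential curve PD(k) = a·exp(b·k) to bumpy class-level PDs by least squares on log-PDs; the raw PD values are hypothetical:

```python
# Sketch of Step 7: smooth bumpy class-level PDs with an exponential calibration
# curve PD(k) = a * exp(b * k), fitted by least squares on log-PDs. Raw PDs are
# hypothetical and deliberately non-monotonic.

from math import log, exp

raw_pds = {1: 0.0001, 2: 0.0008, 3: 0.0120, 4: 0.0090, 5: 0.0400}  # class -> raw PD

ks = list(raw_pds)
ys = [log(pd) for pd in raw_pds.values()]
k_mean, y_mean = sum(ks) / len(ks), sum(ys) / len(ys)
b = sum((k - k_mean) * (y - y_mean) for k, y in zip(ks, ys)) / sum((k - k_mean) ** 2 for k in ks)
a = exp(y_mean - b * k_mean)

smoothed = {k: a * exp(b * k) for k in ks}     # strictly increasing across classes
for k in ks:
    print(k, f"raw {raw_pds[k]:.4f}  smoothed {smoothed[k]:.4f}")
```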
Step 8: Model Validation & Maintenance
Test out-of-sample: Does it generalize beyond training data?
Regularly:
- Recalibrate yearly using new data
- Retest every ~2 years to ensure variable relevance
- Required for banks to meet regulatory standards (e.g., Basel)
What are the key advantages and disadvantages of statistical credit risk models?
Advantages
- Data-driven & objective – Reduces subjective bias using historical/statistical data.
- High predictive power – Models like those using the Gini coefficient effectively rank default risk.
- Regulatory compliance – Meets Basel/CRD standards (e.g., minimum rating classes, testing).
Disadvantages
- Overfitting risk – May work well on training data but poorly on new data if too complex.
- High maintenance – Needs regular updates, recalibration, and expert oversight.
- Data quality dependence – Poor/inconsistent inputs make results unreliable.