Introduction to Probability Theory for Risk Modeling Flashcards
(21 cards)
Sample Space (Ω)
The set of all possible outcomes of an experiment.
Event (A)
Any subset of the sample space Ω.
Probability Measure Axioms
P(A) ≥ 0 for every event A, P(Ω) = 1, and countable additivity: for pairwise disjoint events A₁, A₂, …, P(A₁ ∪ A₂ ∪ …) = P(A₁) + P(A₂) + …. Consequences include P(∅) = 0 and P(A) ∈ [0,1].
Complement Rule
P(A^c) = 1 - P(A).
Union Rule
P(A∪B) = P(A) + P(B) - P(A∩B).
Conditional Probability
P(A|B) = P(A∩B) / P(B), provided P(B)>0.
Independence
A and B are independent if P(A∩B) = P(A)P(B).
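A minimal Python sketch (a hypothetical fair-die example, not part of the cards) that verifies the complement, union, conditional-probability, and independence rules by enumerating the sample space:

    from fractions import Fraction

    # Hypothetical example: one roll of a fair six-sided die.
    omega = {1, 2, 3, 4, 5, 6}

    def P(event):
        # Equally likely outcomes, so P(A) = |A| / |omega|.
        return Fraction(len(event & omega), len(omega))

    A = {2, 4, 6}      # event "roll is even"
    B = {1, 2, 3, 4}   # event "roll is at most 4"

    assert P(omega - A) == 1 - P(A)              # complement rule
    assert P(A | B) == P(A) + P(B) - P(A & B)    # union rule
    assert P(A & B) / P(B) == Fraction(1, 2)     # conditional probability P(A|B)
    assert P(A & B) == P(A) * P(B)               # these two events happen to be independent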
Bayes’ Theorem
P(A|B) = P(B|A)P(A) / P(B), provided P(B)>0.
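A short worked example of Bayes' theorem in Python; the fraud base rate and alert rates below are hypothetical numbers chosen only for illustration:

    # Hypothetical fraud-screening numbers (illustrative only).
    p_fraud = 0.01               # P(A): prior probability a transaction is fraudulent
    p_alert_given_fraud = 0.95   # P(B|A): alert rate on fraudulent transactions
    p_alert_given_legit = 0.05   # P(B|A^c): false-alert rate on legitimate transactions

    # Law of total probability: P(B) = P(B|A)P(A) + P(B|A^c)P(A^c)
    p_alert = p_alert_given_fraud * p_fraud + p_alert_given_legit * (1 - p_fraud)

    # Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
    p_fraud_given_alert = p_alert_given_fraud * p_fraud / p_alert
    print(f"P(fraud | alert) = {p_fraud_given_alert:.3f}")   # about 0.161 despite the 95% alert rate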
Random Variable (X)
A function X: Ω → ℝ assigning numerical values to outcomes.
PMF (discrete)
p(x) = P(X = x).
PDF (continuous)
f(x) such that P(a ≤ X ≤ b) = ∫ₐᵇ f(x) dx.
CDF
F(x) = P(X ≤ x).
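A quick numerical check, assuming SciPy is available, that the PDF and CDF cards are consistent: P(a ≤ X ≤ b) computed as F(b) − F(a) matches integrating f over [a, b]:

    from scipy.integrate import quad
    from scipy.stats import norm

    X = norm(loc=0.0, scale=1.0)     # standard normal, used only as an example
    a, b = -1.0, 2.0                 # arbitrary interval for the check

    prob_from_cdf = X.cdf(b) - X.cdf(a)     # F(b) - F(a)
    prob_from_pdf, _ = quad(X.pdf, a, b)    # ∫ₐᵇ f(x) dx by numerical integration
    assert abs(prob_from_cdf - prob_from_pdf) < 1e-9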
Expectation (discrete)
E[X] = Σ x p(x).
Expectation (continuous)
E[X] = ∫ x f(x) dx.
Variance
Var(X) = E[X²] - (E[X])².
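A minimal sketch computing E[X] and Var(X) for a hypothetical discrete loss distribution (the PMF below is made up for illustration):

    # Hypothetical loss amounts x and their probabilities p(x); the probabilities sum to 1.
    pmf = {0: 0.90, 100: 0.07, 1_000: 0.02, 10_000: 0.01}

    mean = sum(x * p for x, p in pmf.items())              # E[X] = Σ x p(x)
    second_moment = sum(x**2 * p for x, p in pmf.items())  # E[X²]
    variance = second_moment - mean**2                     # Var(X) = E[X²] - (E[X])²

    print(f"E[X] = {mean:.2f}, Var(X) = {variance:.2f}")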
Bernoulli Distribution
Models a single binary outcome (success with probability p, failure otherwise).
Binomial Distribution
Models the number of successes in n independent Bernoulli trials (e.g., number of defaults in a loan portfolio).
Poisson Distribution
Models the count of rare events in a fixed interval (e.g., number of fraudulent transactions per day).
Normal Distribution
Symmetric, bell-shaped distribution with mean μ and variance σ²; widely used for asset returns and risk modeling.
Exponential Distribution
Models time between events (e.g., waiting time for defaults).
Lognormal Distribution
Distribution of a variable whose logarithm is normally distributed; used for modeling stock prices.
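A sketch tying the distribution cards to code, assuming NumPy is available; every parameter value below is hypothetical:

    import numpy as np

    rng = np.random.default_rng(seed=42)
    n = 100_000

    bern     = rng.binomial(n=1, p=0.02, size=n)             # Bernoulli: a Binomial with one trial
    defaults = rng.binomial(n=100, p=0.02, size=n)            # Binomial: defaults among 100 loans
    frauds   = rng.poisson(lam=3.0, size=n)                   # Poisson: fraud events per day
    waits    = rng.exponential(scale=1 / 3.0, size=n)         # Exponential: time between events
    returns  = rng.normal(loc=0.0005, scale=0.01, size=n)     # Normal: daily asset returns
    prices   = 100 * rng.lognormal(mean=0.0, sigma=0.2, size=n)  # Lognormal: simulated stock prices

    print("mean number of defaults:", defaults.mean())        # close to 100 × 0.02 = 2
    print("99th percentile daily loss:", np.quantile(-returns, 0.99))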