Random Variables and Distributions Flashcards

1
Q

Sample vs population - which letters to denote?

A

sample is a subgroup of population
Greek letters for properties of POPULATION
roman letters for properties of SAMPLE

2
Q

define probability

A

using known properties of the POPULATION to predict properties of a SAMPLE

3
Q

define statistics

A

deducing information about the POPULATION from the sample

4
Q

define cumulative probability (both discrete and continuous)

A

F(x) = P(X≤x)
sum of all probabilities up to X=x for discrete

integration from -inf to x for continuous
CDF F(x) obtained by integrating the PDF f(x); differentiating the CDF recovers the PDF
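
A minimal Python sketch of both cases (the example PMF, and the exponential PDF with λ = 2, are arbitrary illustrative choices, not from the cards):

# discrete: CDF as a running sum of the PMF
pmf = {0: 0.2, 1: 0.5, 2: 0.3}                 # example PMF (sums to 1)
F = {}
running = 0.0
for value in sorted(pmf):
    running += pmf[value]
    F[value] = running                         # F(x) = P(X <= x)
print(F)                                       # ~{0: 0.2, 1: 0.7, 2: 1.0}

# continuous: CDF by numerically integrating the PDF up to x
import math
lam = 2.0
def pdf(x):
    return lam * math.exp(-lam * x)            # exponential PDF, support x >= 0
x, n = 1.0, 10_000
dx = x / n
F_x = sum(pdf(i * dx) * dx for i in range(n))  # crude Riemann sum from 0 to x
print(F_x, 1 - math.exp(-lam * x))             # both ~0.8647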

5
Q

manipulating expectations:
E[aX+b] =

A

a E[X] + b

6
Q

expectation of a sum E[A+B] is equal to…

A

the sum of expectations
= E[A] + E[B]

7
Q

Manipulating variances: Var[aX+b] =

A

a^2 Var[X]

proof by expanding Var[X] = E[X^2] - (E[X])^2 definition
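
A quick Monte Carlo check of both rules, assuming nothing beyond the standard library (the uniform variable and the values a = 3, b = 5 are arbitrary):

import random, statistics

a, b = 3.0, 5.0
xs = [random.uniform(0, 1) for _ in range(100_000)]             # X ~ U(0,1)
ys = [a * x + b for x in xs]                                    # Y = aX + b

print(statistics.mean(ys), a * statistics.mean(xs) + b)         # E[aX+b] = a E[X] + b
print(statistics.variance(ys), a**2 * statistics.variance(xs))  # Var[aX+b] = a^2 Var[X]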

8
Q

under what conditions is it true that Var[X+Y] = Var[X] + Var[Y] = Var[X-Y] ?

A

only for INDEPENDENT X and Y
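
A small simulation sketch of the independent case (two independent standard normals chosen arbitrarily):

import random, statistics

xs = [random.gauss(0, 1) for _ in range(100_000)]
ys = [random.gauss(0, 1) for _ in range(100_000)]              # independent of xs

print(statistics.variance([x + y for x, y in zip(xs, ys)]))    # ~2
print(statistics.variance([x - y for x, y in zip(xs, ys)]))    # also ~2
print(statistics.variance(xs) + statistics.variance(ys))       # ~2 = Var[X] + Var[Y]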

9
Q

standard deviation quantifies the…

A

width of a probability distribution

10
Q

Define skewness. How is it calculated? What do -ve, +ve and 0 skewness mean?

A

skewness is a measure of the asymmetry of a distribution.

Skew[X] = E[(X-µ)^3] / σ^3

negative skew = tail to the left
positive skew = tail to the right
skew of 0 usually means a symmetric distribution
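
A sketch of the formula applied to a sample, using sample moments in place of the expectations (the exponential sample is an arbitrary example of a right-skewed distribution):

import random, statistics

xs = [random.expovariate(1.0) for _ in range(100_000)]         # right-skewed sample
mu = statistics.mean(xs)
sigma = statistics.pstdev(xs)

skew = statistics.mean([(x - mu) ** 3 for x in xs]) / sigma ** 3
print(skew)    # ~ +2 for an exponential distribution (tail to the right)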

11
Q

define kurtosis. how is it calculated?

A

kurtosis is a measurement of how much WEIGHT of a distribution lies in its TAILS
(ie. how likely it is to observe extreme values)

Kurt[X] = E[(X-µ)^4] / σ^4

12
Q

define excess kurtosis - how is it calculated?

A

comparing kurtosis to that of a normal distribution, ie. how much more weight is in the tails

XS Kurt = Kurt[X] - 3

-ve XS kurt means less weight is in the tails compared to normal
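
The same sample-moment approach covers both this card and the previous one (the Laplace-style sample, an exponential with a random sign, is an arbitrary heavy-tailed example):

import random, statistics

xs = [random.expovariate(1.0) * random.choice([-1, 1]) for _ in range(100_000)]
mu = statistics.mean(xs)
sigma = statistics.pstdev(xs)

kurt = statistics.mean([(x - mu) ** 4 for x in xs]) / sigma ** 4
print(kurt)        # ~6 for a Laplace distribution
print(kurt - 3)    # excess kurtosis ~ +3: more weight in the tails than a normal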

13
Q

define moment of a distribution. how is the nth moment calculated?

A

nth moment of a distribution:
M(n) = E[X^n]

14
Q

what is a moment generating function?

A

a function m(t) whose nth derivative with respect to t, evaluated in the limit t → 0, gives the nth moment M(n).

15
Q

how to find the moment generating function m(t)

A

m(t) = E[exp(tX)] = integral of exp(tx) f(x) dx over all x, for a continuous PDF f(x)
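
A numerical sketch of this definition: estimate m(t) = E[exp(tX)] from a sample and take a finite-difference derivative near t = 0 to recover the first moment (the exponential sample with λ = 2 is an arbitrary test case, so E[X] should come out near 1/λ = 0.5):

import math, random, statistics

lam = 2.0
xs = [random.expovariate(lam) for _ in range(100_000)]

def m(t):                                        # Monte Carlo estimate of E[exp(tX)]
    return statistics.mean([math.exp(t * x) for x in xs])

h = 1e-4
print((m(h) - m(-h)) / (2 * h))                  # dm/dt at t ~ 0  ->  ~0.5 = E[X]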

16
Q

what does the random variable for the Binomial distribution describe?

A

the number of times, m, that a particular event happens in n independent measurements
constant probability of success, p
only two possible outcomes per measurement
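
A sketch of the binomial PMF built directly from this description (n = 10 trials with p = 0.3 are arbitrary example numbers):

import math

def binom_pmf(m, n, p):
    # P(exactly m successes in n independent trials, each with success probability p)
    return math.comb(n, m) * p ** m * (1 - p) ** (n - m)

print(binom_pmf(3, 10, 0.3))                          # ~0.267
print(sum(binom_pmf(m, 10, 0.3) for m in range(11)))  # probabilities sum to 1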

17
Q

the random variable in the Poisson distribution describes…

A

the number of times, m, a random event occurs in a specified time interval.
Constant chance of occurrence.

probability of event occurring is proportional to length of time period.

you will often be given the average number of occurrences per interval (i.e. λ)
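
The corresponding Poisson PMF, P(m occurrences) = λ^m exp(-λ) / m!, as a short sketch (λ = 4 per interval is an arbitrary example value):

import math

def poisson_pmf(m, lam):
    # P(exactly m occurrences in the interval, given an average of lam per interval)
    return lam ** m * math.exp(-lam) / math.factorial(m)

print(poisson_pmf(2, 4.0))                           # ~0.147
print(sum(poisson_pmf(m, 4.0) for m in range(50)))   # ~1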

18
Q

examples of exponential distributions

A

residence time of molecules in a CSTR
lifetime of reactant molecules in batch reactor for first order reaction

19
Q

moment generating function of exponential distributions

A

m(t) = λ / (λ - t)
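
A quick Monte Carlo check of this result (λ = 2 and t = 0.5 are arbitrary values with t < λ):

import math, random, statistics

lam, t = 2.0, 0.5
xs = [random.expovariate(lam) for _ in range(100_000)]

print(statistics.mean([math.exp(t * x) for x in xs]))   # E[exp(tX)] ~ 1.333
print(lam / (lam - t))                                   # λ/(λ-t) = 1.333...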

20
Q

how to change variables for normal distribution to standardise to 𝛷(z)

A

z = (x-µ) / σ

new random variable: Z = (X-µ)/σ
such that Z~N(0,1) which has a CDF of 𝛷(z)

where X~N(µ,σ)
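
A sketch using the standard library's NormalDist to confirm the change of variables (µ = 10, σ = 2 and x = 13 are arbitrary example values):

from statistics import NormalDist

mu, sigma, x = 10.0, 2.0, 13.0
z = (x - mu) / sigma                       # z = 1.5

print(NormalDist(mu, sigma).cdf(x))        # P(X <= x) for X ~ N(mu, sigma)
print(NormalDist(0, 1).cdf(z))             # Φ(z) -- identical, ~0.9332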

21
Q

a binomial dist can be approximated as normal if…

A

np > 5 and
nq = n(1-p) >5

22
Q

a poisson dist can be approximated as normal if…

A

λ > 15

23
Q

when approximating a discrete distribution as continuous (normal), one must…

A

APPLY A CONTINUITY CORRECTION

P(X ≤ k): move the limit 0.5 higher (use k + 0.5)
P(X ≥ k): move the limit 0.5 lower (use k - 0.5)
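
A sketch combining this card with card 21: approximate a binomial probability with the normal, with and without the correction (n = 40, p = 0.3 and P(X ≤ 10) are arbitrary example values satisfying np > 5 and n(1-p) > 5):

import math
from statistics import NormalDist

n, p, k = 40, 0.3, 10
mu = n * p                                 # 12
sigma = math.sqrt(n * p * (1 - p))         # sqrt(8.4)

exact = sum(math.comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k + 1))
print(exact)                               # exact P(X <= 10)
print(NormalDist(mu, sigma).cdf(k + 0.5))  # with continuity correction: close to exact
print(NormalDist(mu, sigma).cdf(k))        # without correction: noticeably further off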

24
Q

Bayes’ Theorem

A

think probability tree branches
P(A|B) = P(B|A) * P(A)/P(B)
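
A minimal sketch of the formula with invented numbers (a hypothetical test where P(A) is the prevalence and P(B|A) the detection rate):

p_A = 0.01              # P(A): prevalence of a condition (hypothetical)
p_B_given_A = 0.95      # P(B|A): test positive given the condition
p_B_given_notA = 0.05   # P(B|not A): false positive rate

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)   # total probability (the two tree branches)
p_A_given_B = p_B_given_A * p_A / p_B                  # Bayes' theorem
print(p_A_given_B)                                     # ~0.16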

25
Q

what is a joint probability distribution?

A

probability of X and Y having particular outcomes: fXY(x, y)

cumulative FXY(x, y) calculated by a double sum (discrete) or double integral (continuous)

26
Q

define marginal probability distribution

A

fX(x): probability that X takes a particular value, irrespective of the Y value
i.e. fXY(x, y) integrated / summed over all y values

27
Q

definition of conditional probability P(A|B)

A

P(A|B) = P(A∩B) / P(B)

analogous for conditional joint probabilities:
fX|y(x) = fXY(x,y) / fY(y)
where fX|y(x) is the prob dist of X given Y has outcome y
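
A small discrete sketch tying cards 25-27 together (the 2x2 joint table is invented for illustration):

# joint distribution fXY(x, y) for a discrete pair (X, Y)
joint = {(0, 0): 0.10, (0, 1): 0.30,
         (1, 0): 0.20, (1, 1): 0.40}

# marginal fX(x): sum the joint probabilities over all y values
fX = {}
for (x, y), p in joint.items():
    fX[x] = fX.get(x, 0.0) + p
print(fX)                                           # {0: 0.4, 1: 0.6}

# marginal fY(y), then conditional fX|y(x) = fXY(x, y) / fY(y)
fY = {}
for (x, y), p in joint.items():
    fY[y] = fY.get(y, 0.0) + p
print({x: joint[(x, 1)] / fY[1] for x in (0, 1)})   # f(x | Y = 1): {0: ~0.43, 1: ~0.57}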
28
Q

Pearson Correlation coefficient

A

𝜌 = Cov[X,Y] / √(Var[X] Var[Y])
= covariance divided by the product of the standard deviations

29
Q

covariance of two independent variables

A

covariance = 0, so that E[XY] = E[X]E[Y]
(note that Cov = 0 does not necessarily mean independent)

30
Q

Var[X+Y] = ? (general case)

A

Var[X+Y] = Var[X] + Var[Y] + 2Cov[X,Y]
if independent, Cov = 0
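
A simulation sketch tying cards 28-30 together (Y is built as X plus independent noise, an arbitrary way to make the two variables correlated):

import random, statistics

xs = [random.gauss(0, 1) for _ in range(100_000)]
ys = [x + random.gauss(0, 1) for x in xs]                   # correlated with X

mx, my = statistics.mean(xs), statistics.mean(ys)
cov = statistics.mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
rho = cov / (statistics.pstdev(xs) * statistics.pstdev(ys))
print(rho)                                                  # ~0.71 here

print(statistics.pvariance([x + y for x, y in zip(xs, ys)]))
print(statistics.pvariance(xs) + statistics.pvariance(ys) + 2 * cov)   # same value (~5)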