EXAM 1 Flashcards

(79 cards)

1
Q

Econometrics

A

The science of using statistics and economic theory to analyze economic data

2
Q

Causality

A

An action is said to cause an outcome if the outcome is the direct
result, or consequence, of that action

3
Q

Controlled Experiment
- control group
- treatment group
- random assignment

A

In a controlled experiment, the control group doesn’t receive the treatment, while the treatment group does. The assignment to each group is random.

4
Q

Causal Effect

A

Effect on an outcome of a given action/treatment

5
Q

Experimental Data

A

Data from an experiment designed to evaluate a treatment

6
Q

Observational Data

A

Observes actual behavior outside an experimental setting
- treatments are not assigned

7
Q

Cross sectional data

A

data on different entities for a single period of time

8
Q

Time series data

A

data for a single entity at multiple time periods

9
Q

Panel data

A

data for multiple entities, in which each entity is observed in 2+ time periods

10
Q

Probability Theory

A

basic language of uncertainty + forms the basis for statistical inference

11
Q

Outcomes

A

mutually exclusive potential results of a random process

12
Q

Sample Space

A

set of all possible outcomes

13
Q

Event

A

A subset of the sample space; a set of one or more outcomes

14
Q

Probability of Outcome

A

the proportion of the time that the outcome occurs in the long run

15
Q

Probability of Event

A

the sum of the probabilities of the outcomes in the event

16
Q

Random Variable

A

numerical summary of a random outcome

17
Q

Probability Distribution

A

The probability distribution of a discrete random variable is the list of all possible values of the variable and the probability that each value will occur.

18
Q

Cumulative Probability Distribution

A

the probability that the random variable is less than
or equal to a particular value.

19
Q

Bernoulli Distribution

A

The distribution of a binary (0/1) random variable: the outcome 1 occurs with probability p and the outcome 0 occurs with probability 1 - p
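One way to internalize the Bernoulli distribution is to simulate it; this is a minimal Python sketch (the function name `bernoulli_sample` and the seed are illustrative, not from the course), in which the long-run share of 1s approaches p:

```python
import random

def bernoulli_sample(p, n, seed=0):
    """Draw n Bernoulli(p) outcomes: 1 with probability p, 0 with probability 1 - p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

draws = bernoulli_sample(0.3, 100_000)
share_of_ones = sum(draws) / len(draws)
print(share_of_ones)  # close to 0.3
```

The share of 1s is itself a sample average, so it converges to p as n grows.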

20
Q

Probability Density Function (continuous)

A

The area under the probability density function between two points is the probability that the random variable falls between those two points

21
Q

CDF for continuous

A

the probability that the random variable is less than or equal
to a particular value

22
Q

Expected Value of Discrete RV

A
- long run average value of the random variable over many repeated trials
- weighted average of the possible outcomes of that random variable, where the weights are the outcomes’ probabilities
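The weighted-average definition translates directly into code; a small sketch (the `pmf` dictionary representation is an assumption for illustration):

```python
def expected_value(pmf):
    """E[Y]: weighted average of the possible values, weighted by their probabilities."""
    return sum(y * p for y, p in pmf.items())

# Example: a fair six-sided die
fair_die = {y: 1 / 6 for y in range(1, 7)}
print(expected_value(fair_die))  # 3.5
```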
23
Q

Expected Value of Continuous RV

A

Defined analogously to the discrete case, but because a continuous random variable has uncountably infinitely many possible values, the probability-weighted average is computed as an integral rather than a sum

24
Q

mean+SD

A

measure the center of the
distribution and its spread

25
Skewness
measures the lack of symmetry of a distribution
- symmetric: skewness = 0
- long right tail: + skewness
- long left tail: - skewness
26
Kurtosis
how thick or heavy the tails of a distribution are
- the greater the kurtosis, the more likely the outliers
- the kurtosis of a normally distributed RV is 3
27
Standard Deviation + Variance
measures of the dispersion of a distribution
- the variance is the expected value of the squared deviation of Y from its mean
- the standard deviation is the square root of the variance
28
Moments of Distribution
1. Mean of Y: E[Y] is the first moment
2. E[Y^2] is the second moment
3. E[Y^r] is the rth moment
- variance is a function of the 1st and 2nd moments
- skewness is a function of the 1st through 3rd moments
- kurtosis is a function of the 1st through 4th moments
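A sketch of how the summary measures follow from the raw moments E[Y^r], using the standard central-moment expansions (the function name `moments_summary` is illustrative, not from the course):

```python
def moments_summary(values, probs):
    """Mean, variance, skewness, and kurtosis of a discrete RV,
    computed from its raw moments E[Y^r] = sum of p * y^r."""
    m = lambda r: sum(p * y ** r for y, p in zip(values, probs))
    mu = m(1)
    var = m(2) - mu ** 2                                    # 1st and 2nd moments
    sd = var ** 0.5
    skew = (m(3) - 3 * mu * m(2) + 2 * mu ** 3) / sd ** 3   # 1st-3rd moments
    kurt = (m(4) - 4 * mu * m(3) + 6 * mu ** 2 * m(2) - 3 * mu ** 4) / sd ** 4
    return mu, var, skew, kurt

# A symmetric two-point distribution: mean 0, variance 1, skewness 0, kurtosis 1
print(moments_summary([-1, 1], [0.5, 0.5]))
```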
29
Joint Probability Distribution
probability that the random variables simultaneously take on certain x and y values
30
Marginal Probability Distribution of Y
The probability that Y takes on a specific value, obtained by adding up the joint probabilities over all possible values of X
31
Conditional Distribution
The distribution of Y conditional on X taking a specific value: P(Y|X) = P(X, Y)/P(X)
32
Law of Iterated Expectation
weighted average of E[Y|X], weighted by the distribution of X
- the expected value of Y is equal to the expectation of the conditional expectation of Y given X: E[Y] = E[E[Y|X]]
33
LIE
- the inner expectation is computed using the conditional distribution of Y given X, and the outer expectation is computed using the marginal distribution of X
- implies that if the conditional mean of Y given X is zero, then the mean of Y is zero
- applies to expectations that are conditioned on multiple random variables
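The law of iterated expectations can be checked numerically on a small joint distribution; a sketch with a hypothetical joint pmf over binary X and Y (the numbers are illustrative):

```python
# Hypothetical joint pmf P(X = x, Y = y) over binary X and Y
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distribution of X
p_x = {}
for (x, _), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p

def cond_mean_y(x):
    """E[Y | X = x], from the conditional distribution P(Y|X) = P(X,Y)/P(X)."""
    return sum(y * p for (xx, y), p in joint.items() if xx == x) / p_x[x]

# Outer expectation of E[Y|X] over the marginal of X equals E[Y] computed directly
lie = sum(p_x[x] * cond_mean_y(x) for x in p_x)
e_y = sum(y * p for (_, y), p in joint.items())
print(lie, e_y)  # both 0.7
```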
34
Independence
2 RVs are independent if knowing the value of one variable provides no info about the other
35
Covariance
The extent to which the two variables move together
- if X and Y are independent, the covariance is zero
36
Correlation
How strongly one variable is linearly associated with the other
- X and Y are uncorrelated if corr(X, Y) = 0
- correlation always lies between -1 and 1
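Sample analogues of covariance and correlation are short to write out; a sketch (function names are illustrative, and the n - 1 divisor is the usual sample-covariance convention):

```python
def sample_covariance(xs, ys):
    """Average product of deviations from the sample means (divided by n - 1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def sample_correlation(xs, ys):
    """Covariance scaled by the standard deviations; always lies in [-1, 1]."""
    sx = sample_covariance(xs, xs) ** 0.5
    sy = sample_covariance(ys, ys) ** 0.5
    return sample_covariance(xs, ys) / (sx * sy)

print(sample_correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfect linear association)
```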
37
Fun fact
If the conditional mean of Y does not depend on X , then Y and X are uncorrelated
38
Standard normal distribution
The normal distribution with mean 0 and variance 1; a convenient representation because any normally distributed RV can be standardized to it
39
Multivariate Normal Distribution FOUR IMPORTANT PROPERTIES
Represents the joint distribution of two (bivariate normal) or more normally distributed random variables
40
MULTIVARIATE NORMAL DISTRIBUTION- PROPERTY 1
If X and Y have a bivariate normal distribution with covariance σXY, then for constants a and b, the linear combination aX + bY has a normal distribution
41
MULTIVARIATE NORMAL DISTRIBUTION- PROPERTY 2
If a set of variables has a multivariate normal distribution, then the marginal distribution of each of the variables is normal.
42
MULTIVARIATE NORMAL DISTRIBUTION- PROPERTY 3
If variables with a multivariate normal distribution have covariances that equal zero, then the random variables are independent
- if X and Y have a bivariate normal distribution with σXY = 0, then X and Y are independent
- so for joint normality, zero correlation implies independence (not true in general)
43
MULTIVARIATE NORMAL DISTRIBUTION- PROPERTY 4
If X and Y have a bivariate normal distribution, then the conditional expectation of Y given X is linear in X; that is, E[Y|X = x] = a + bx, where a and b are constants. Joint normality implies linearity of conditional expectations, but linearity of conditional expectations does not imply joint normality
44
Chi Squared Distribution
sum of M squared independent standard normal RVs
- let Z1, Z2, and Z3 be independent standard normal random variables; then Z1^2 + Z2^2 + Z3^2 has a chi-squared distribution with 3 degrees of freedom
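The definition suggests a direct simulation; a sketch (the function name and seed are illustrative) that uses the fact that the mean of a chi-squared RV equals its degrees of freedom:

```python
import random

rng = random.Random(0)

def chi_squared_draw(m):
    """One draw from a chi-squared distribution with m degrees of freedom:
    the sum of m squared independent standard normal draws."""
    return sum(rng.gauss(0, 1) ** 2 for _ in range(m))

draws = [chi_squared_draw(3) for _ in range(50_000)]
print(sum(draws) / len(draws))  # near 3, the mean of a chi-squared(3) RV
```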
45
Student T Distribution
the ratio of a standard normal random variable to the square root of an independently distributed chi-squared random variable with M degrees of freedom divided by M
46
F Distribution
with M and N degrees of freedom, denoted FM,N: defined to be the distribution of the ratio of a chi-squared random variable with M degrees of freedom, divided by M, to an independent chi-squared random variable with N degrees of freedom, divided by N
47
Random Sampling
A sampling procedure in which n objects are selected at random from a population, yielding Y1, ..., Yn, where Y1 is the first observation, Y2 is the second observation, and so forth. Each of these Yi's is a random variable.
48
Identically distributed
each Yi has the same marginal distribution
49
Sample Average
Ȳ = 1/n (Y1 + ... + Yn)
50
Mean + variance of the sample average
E[Ȳ] = μY; Var[Ȳ] = σȲ^2 = σY^2/n
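The σY^2/n formula can be checked by simulating many sample averages; a sketch under illustrative assumptions (Y ~ Normal(0, 4) and n = 10 are choices for the demo, not from the deck):

```python
import random

rng = random.Random(1)
sigma2 = 4.0  # population variance of Y (here Y ~ Normal(0, 4), an assumption)
n = 10        # sample size

# Simulate many sample averages of size n; their variance should be near sigma2 / n
means = []
for _ in range(20_000):
    sample = [rng.gauss(0, sigma2 ** 0.5) for _ in range(n)]
    means.append(sum(sample) / n)

grand = sum(means) / len(means)
var_of_mean = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
print(var_of_mean, sigma2 / n)  # both close to 0.4
```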
51
When the distribution of Y is not normal
the exact distribution of the sample mean is typically complicated and depends on the distribution of Y
52
Large Sample Approximation
The large-sample approach uses approximations to the sampling distribution that rely on the sample size n being large (n > 30)
53
Asymptotic Distribution
the distribution to which the large-sample approximation becomes exact as n -> infinity
54
Law of Large Numbers
when the sample size is large, the sample average will be very close to the mean with very high probability
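A quick illustration of the law of large numbers in Python (the Uniform(0, 1) population and the seed are assumptions for the demo):

```python
import random

rng = random.Random(42)

def sample_mean(n):
    """Average of n draws from Uniform(0, 1), whose population mean is 0.5."""
    return sum(rng.random() for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))  # drifts toward 0.5 as n grows
```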
55
Central Limit Theorem
when the sample size is large, the sampling distribution of the standardized sample average is approximately normal
56
Asymptotic Theory
While exact sampling distributions are complicated and depend on the distribution of Y , asymptotic distributions are simple.
57
Convergence in Probability
Ȳ converges in probability to μY (equivalently, Ȳ is consistent for μY) if the probability that the sample average Ȳ is in the range μY − c to μY + c becomes arbitrarily close to 1 as n increases, for any constant c > 0
58
Statistics
science of using data to learn about the world around us
59
Estimator
a function of a sample of data to be drawn from a population
- an estimator is a RV because it is a function of the random sample observations
60
Estimate
the numerical value of the estimator when it is actually computed using data from a specific sample
- an estimate is not random
61
ESTIMATION PROPERTY 1: Unbiasedness
We say μ̂Y is an unbiased estimator of μY if E[μ̂Y] = μY
- the bias of the estimator is E[μ̂Y] − μY
- if we compute the value of the estimator for different samples, on average we get the right number
62
ESTIMATION PROPERTY 2: Consistency
Let μ̂Y be an estimator of μY. We say μ̂Y is a consistent estimator of μY if μ̂Y converges in probability to μY
- when the sample size is large, the uncertainty about the value of μY arising from random variation in the sample is very small

Convergence in probability: a sequence of random variables {Xn} converges in probability to X if, for all ε > 0, lim n→∞ Pr(|Xn − X| > ε) = 0
63
ESTIMATION PROPERTY 3: Efficiency
Let μ̂Y and μ̃Y be unbiased estimators of μY. We say that μ̂Y is more efficient than μ̃Y if V[μ̂Y] < V[μ̃Y]. In other words, an estimator is more efficient than another if it has a tighter sampling distribution. (SBU Econometrics, Fall 2021)
64
Properties of the average Y
- The sample mean Ȳ is an unbiased and consistent estimator of μY
- The sample mean Ȳ is the best linear unbiased estimator (BLUE), where “best” means most efficient here
- The sample mean Ȳ is also the least squares estimator of μY
65
P Value
probability of drawing a statistic at least as unfavorable to the null hypothesis as the value actually computed with your data, assuming that the null hypothesis is true. One often “rejects the null hypothesis” when the p-value is less than the significance level α
66
Significance level
The significance level of a test is a pre-specified probability of incorrectly rejecting the null, when the null is true. - probability of type 1 error
67
Critical Value
the value of the test statistic for which the test just rejects the null hypothesis at the chosen significance level
68
Sample Variance
The sample variance is an unbiased and consistent estimator of the population variance; the sample standard deviation is the square root of the sample variance
69
Standard Error
an estimator of the standard deviation of an estimator, denoted by SE; for the sample mean, SE(Ȳ) = sY/√n
70
p-value when the variance is unknown
p-value = 2Φ(−|(Ȳ − μY,0)/SE(Ȳ)|)
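The two-sided p-value formula is easy to evaluate with the standard normal CDF, which Python's math module can express through the error function; a sketch (the function names `phi` and `p_value` are illustrative):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def p_value(ybar, mu0, se):
    """Two-sided p-value for H0: E[Y] = mu0, using the normal approximation."""
    t = (ybar - mu0) / se
    return 2 * phi(-abs(t))

print(p_value(1.96, 0.0, 1.0))  # about 0.05
```

A t-statistic of 1.96 giving a p-value of about 0.05 matches the familiar 5% two-sided critical value.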
71
Type 1 Error
Rejecting null when it's true
72
Type 2 Error
Failing to reject the null when it's false
73
Rejection Region
the set of values of test statistics for which the null hypothesis is rejected
74
Acceptance region
the set of values of test statistics for which null hypothesis is not rejected
75
Size of Test
probability of type 1 error
76
Power of test
probability of rejecting the Ho when alternative is true
77
Confidence Interval
an interval that contains the true value of the mean in 95% of repeated samples (for a 95% confidence level)
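Under the normal approximation, a 95% confidence interval for the mean is Ȳ ± 1.96 SE(Ȳ); a minimal sketch (the function name is illustrative):

```python
def confidence_interval_95(data):
    """95% CI for the mean: Ybar +/- 1.96 * SE(Ybar), normal approximation."""
    n = len(data)
    ybar = sum(data) / n
    s2 = sum((y - ybar) ** 2 for y in data) / (n - 1)  # sample variance
    se = (s2 / n) ** 0.5                               # standard error of the mean
    return ybar - 1.96 * se, ybar + 1.96 * se

print(confidence_interval_95([1, 2, 3, 4, 5]))
```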
78
When to use t statistics
when the sample size is really small
79
Sample covariance + correlation
The sample covariance is a consistent estimator of the population covariance. The sample correlation coefficient lies between −1 and 1 and measures the strength of the linear association between X and Y in the sample of n observations.