Quiz 1 Flashcards

(56 cards)

1
Q

State Central Limit Theorem

A
  • CLT
  • Given a population with finite mean μ (mu) and finite variance σ² (sigma squared), the sampling distribution of the mean approaches a normal distribution with mean μ and variance σ²/N as N, the sample size, increases
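A minimal simulation sketch of the theorem (not part of the original card; the skewed exponential population and the sample sizes are assumptions chosen only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma2 = 1.0, 1.0             # exponential(1) population: mean 1, variance 1, clearly non-normal

    for n in (2, 10, 50, 200):        # as N grows, X-bar's distribution approaches N(mu, sigma2 / n)
        means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
        print(f"N={n:4d}  mean of X-bar={means.mean():.3f}  "
              f"var of X-bar={means.var():.4f}  predicted sigma2/N={sigma2 / n:.4f}")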
2
Q

Statistic

A

-Quantity calculated from a sample

3
Q

Population

A

-Set of all objects that we’re interested in researching

4
Q

Parameter

A

-Quantity calculated from a population

5
Q

Significance

A

-Unlikely to have occurred by chance alone

6
Q

Sample

A

-Subset of a population

7
Q

Random Sample

A

-Each member of a population has equal likelihood of being chosen

8
Q

X̄ (X-bar)

A

-Sample mean

9
Q

S^2

A

-Sample variance

10
Q

S

A

-Sample standard deviation

11
Q

μ (mu)

A

-Population mean

12
Q

σ² (sigma squared)

A

-Population variance

13
Q

σ (sigma)

A

-Population standard deviation

14
Q

Descriptive Statistics

A

-Numbers that summarize or describe data

15
Q

Inferential Statistics

A
  • Concerned with hypothesis testing
  • Allow us to test hypotheses about the differences between groups on the variable being measured

16
Q

Measures of Central Tendency

A
  • Mean: arithmetic average
  • Median: middlemost score
  • Mode: most frequently occurring score
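A minimal sketch computing all three with Python's standard library (the score list is an assumed example, not course data):

    import statistics

    scores = [2, 3, 3, 5, 7]              # assumed example scores

    print(statistics.mean(scores))        # arithmetic average          -> 4
    print(statistics.median(scores))      # middlemost score            -> 3
    print(statistics.mode(scores))        # most frequently occurring   -> 3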
17
Q

Measures of Dispersion

A
  • Range
  • Standard Deviation
  • Variance
18
Q

Range

A

-Largest score minus the smallest score

19
Q

Variance

A

-Average of the squared deviations from the mean

20
Q

Standard Deviation

A

-Square root of variance
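A minimal sketch tying the three dispersion measures together (the scores are an assumed example; ddof=1 gives the sample versions of variance and standard deviation):

    import numpy as np

    scores = np.array([4, 7, 8, 5, 6])             # assumed example scores

    data_range = scores.max() - scores.min()       # largest score minus smallest score -> 4
    variance = scores.var(ddof=1)                  # average squared deviation from the mean -> 2.5
    std_dev = scores.std(ddof=1)                   # square root of the variance -> ~1.58

    print(data_range, variance, round(std_dev, 2))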

21
Q

Types of Frequency Distributions

A
  • Leptokurtosis
  • Platykurtosis
  • Normality
  • Skew
  • Kurtosis
  • Bimodal
22
Q

Kurtosis

A

-The peakedness or flatness around the mode of a frequency distribution

23
Q

Leptokurtosis

A
  • More scores in the tails and fewer scores in the middle as compared to the corresponding normal distribution
  • Tends to happen more in smaller samples
24
Q

Platykurtosis

A

-Fewer scores in the tails and more scores in the middle as compared to the corresponding normal distribution

25
Normality (Normal Distribution)
  • The left and right sides look alike (if you split the graph down the middle)
  • a.k.a. bell curve or Gaussian distribution
26
Skew
-Refers to the amount of asymmetry of the distribution
27
Positively Skewed
-Vast majority of the data is on the left (or low side) and the tail points to the right
28
Negatively Skewed
-Vast majority of the data is on the right (or high side) and the tail is pointing to the left
29
Frequency Polygon
-Shows the distribution of subjects' scores across various intervals
30
Histogram
-A graphical representation of the data using bars
31
Bar Chart
-Used when the x-axis is categorical in nature
32
Bimodal Distributions
  • Most common distribution found in psychology
  • Two humps
33
Sampling Distribution
-The distribution of any statistic under repeated sampling with equal-sized samples
34
Z-Scores
  • Statistic that allows us to determine how far a score is from the mean
  • a.k.a. standard scores
  • Express how far scores are from the mean in standard-deviation units
  • May be both descriptive and inferential
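The defining formula (a standard result stated here for reference; it is not written out on the card):

    z = (X − μ) / σ        for a population
    z = (X − X̄) / s        for a sample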
35
Characteristics of standard normal distribution
  • Has a mean (μ) of 0
  • Has a standard deviation (σ) of 1
36
Confidence Interval
-A 95%, 99% (or some stated) probability that the interval falls around (or contains) the parameter (μ)
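A common form of the interval for the mean (a standard textbook formula added for reference; it combines the t critical value and the standard error of the mean):

    X̄ ± t(α/2, N−1) · s / √N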
37
df
  • Degrees of freedom (independent pieces of information)
  • Can change, but for a confidence interval it will be N − 1
  • If N ≥ 100, use infinity
38
T-Distribution
  • William Gosset
  • Is leptokurtic, and as you increase the sample size it becomes normal, such that t(∞) = z
  • Kate Moss
39
Standard Error
-Standard deviation of a sampling distribution
40
Hypothesis
-Educated guess about which group will be significantly higher on a measure, or whether there will be a positive or negative relationship between two measurements
41
Independent Variable
-A variable that is manipulated by the experimenter
42
Dependent Variable
  • A variable that is measured
  • A score
43
Null Hypothesis
  • H₀ (the 0 is a subscript)
  • "Null" means "no"
  • No difference in rates, scores, etc.
  • No statistically significant difference between or among population means on a particular measurement
44
Type I Error
-Probability of rejecting the null hypothesis when in fact it’s true
45
Type II Error
  • Inability to detect a difference if in fact one exists
  • Insensitivity of the experiment
46
Power
  • Ability to detect a difference if in fact one exists
  • Sensitivity of the experiment
47
Determinants of Alpha
  • α (alpha)
  • 1. The experimenter
  • 2. Journal editors
  • We usually set our alpha = .05
48
Determinants of Power
  • Power and β (beta) move in opposite directions (see notes and the simulation sketch below)
  1. N goes up → power goes up, β goes down
  2. σ² goes up → power goes down, β goes up
  3. Skew goes up → power goes down, β goes up
  4. Outliers go up → power goes down, β goes up
  5. Difference between group means goes up → power goes up, β goes down
  6. Alpha goes up → power goes up, β goes down
  7. One- vs. two-tailed tests: one tail gives more power, two tails give less power
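A minimal Monte Carlo sketch of determinant 1 (every number here, including the group means, σ, alpha, and the sample sizes, is an assumption chosen only to illustrate the pattern):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu_a, mu_b, sigma, alpha, reps = 0.0, 0.5, 1.0, 0.05, 5_000

    for n in (10, 30, 100):                     # determinant 1: as N goes up, power goes up
        rejections = 0
        for _ in range(reps):
            a = rng.normal(mu_a, sigma, n)      # two groups whose population means really differ
            b = rng.normal(mu_b, sigma, n)
            if stats.ttest_ind(a, b).pvalue < alpha:
                rejections += 1
        print(f"N per group = {n:3d}   estimated power = {rejections / reps:.2f}")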
49
Heuristic Formula of F (in words)
-(Number of folks in a group) × (variance among the group means) ÷ (average variance within groups)
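A tiny worked example (all numbers assumed for illustration): with n = 10 people per group, group means of 4, 5, and 6 (variance among the means = 1.0), and within-group variances averaging 2.5:

    F = (10 × 1.0) / 2.5 = 4.0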
50
Is the F-Ratio that you compute larger than the critical value in the table? (Conclusion).
  • If yes, statistically significant: Group A is significantly higher than Group B on the DV (p < .05; p < .01)
  • If no, not statistically significant (nonsignificant): there is no statistically significant difference between Group A and Group B on the DV (p > .05; n.s.)
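A minimal sketch of the table lookup using scipy (the alpha level and degrees of freedom are assumed example values):

    from scipy.stats import f

    alpha = 0.05
    df_between, df_within = 2, 27                        # assumed example degrees of freedom

    critical_value = f.ppf(1 - alpha, df_between, df_within)
    print(round(critical_value, 2))                      # compare the computed F-ratio against this value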
51
Assumptions of ANOVA. Why did Fisher make them?
  1. Normality in the population
     a. That's how Fisher derived the critical values of F
     b. From the CLT, the sampling distribution of the mean approaches normality as N increases
     • Checked with Kolmogorov-Smirnov or Shapiro-Wilk's W = 1.0 (see the check sketch below)
  2. Homogeneity (homoscedasticity) of variance in the population: σ₁² = σ₂² = σ₃²
     • Why? Fisher averaged "like commodities" (within-sample variation) to obtain his best estimate of σ² from σ²/N
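A minimal sketch of checking these assumptions with scipy (the three groups are simulated example data, and Levene's test for homogeneity is one standard choice, not necessarily the one used in the course packet):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group1, group2, group3 = (rng.normal(50, 10, 30) for _ in range(3))   # assumed example groups

    # Normality: Shapiro-Wilk (W near 1.0 suggests normality) and Kolmogorov-Smirnov
    w, p_shapiro = stats.shapiro(group1)
    ks, p_ks = stats.kstest(group1, "norm", args=(group1.mean(), group1.std(ddof=1)))

    # Homogeneity of variance across the groups
    lev, p_levene = stats.levene(group1, group2, group3)

    print(f"Shapiro-Wilk W = {w:.3f} (p = {p_shapiro:.3f}), K-S p = {p_ks:.3f}, Levene p = {p_levene:.3f}")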
52
What happens when you violate the assumptions?
  1. Robust: when you violate the assumption, the Type I error rate doesn't change appreciably from the nominal (stated) level
  2. Liberal: when you violate the assumption, the Type I error rate is higher than the nominal level (we don't like these tests)
  3. Conservative: when you violate the assumption, the Type I error rate is lower than the stated level
  • When you violate these assumptions, ANOVA is still robust
53
Characteristics of F
  1. The F distribution is positively skewed
  2. X̄_F (the mean of the F distribution) = df_w / (df_w − 2)
  3. F(∞, ∞) = 1.00, always
  • Any ratio below 1.00 will be nonsignificant
54
Correction Factor
-Takes the raw data and converts it into deviational scores
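The usual hand-calculation form (a standard formula assumed here, since the card gives only the verbal definition):

    CF = (ΣX)² / N
    SS = ΣX² − CF        (raw sum of squares converted to a deviation sum of squares)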
55
Is the t value that you compute higher than the critical value in the table? (Determinants of Alpha).
LOOK IN PACKET
56
1 − Alpha (goes after Type I error)
Probability of not rejecting the null hypothesis when in fact it's true