Exam Flashcards

(100 cards)

1
Q

To work out F statistic

A

MStreatment / MSerror

Between-groups MS / Within-groups MS
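A minimal Python sketch of this ratio, computed by hand for a one-way design (the scores are illustrative, not from the cards):

```python
import numpy as np

# Sketch: one-way ANOVA F computed by hand (illustrative scores).
groups = [np.array([4., 5., 6.]), np.array([6., 7., 8.]), np.array([9., 10., 11.])]
scores = np.concatenate(groups)
grand_mean = scores.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1              # a - 1
df_within = len(scores) - len(groups)     # N - a

ms_between = ss_between / df_between
ms_within = ss_within / df_within         # the error term (MSerror)
F = ms_between / ms_within                # between-groups MS / within-groups MS
print(F)
```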

2
Q

If means are the same is there a significant difference?

A

No

3
Q

When is a result significant?

A

When p is below the alpha level of .05 or .01

4
Q

How do you work out how many participants?

A

(Within-groups df / number of groups) + 1 = participants per group

(total N = within-groups df + number of groups)
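A minimal sketch of this rule, assuming an equal-n between-groups design (the df and group count are illustrative, as if read off an ANOVA table):

```python
# df_within = a * (n - 1), so n per group = df_within / a + 1.
df_within = 27      # within-groups (error) degrees of freedom
num_groups = 3      # number of treatment groups (a)

n_per_group = df_within / num_groups + 1    # 10.0 participants per group
total_n = df_within + num_groups            # 30 participants in total
print(n_per_group, total_n)
```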

5
Q

What are a priori contrasts used for?

A

Planned comparisons used when the experimenter can make predictions about the means in advance

6
Q

In a between groups two way ANOVA how many error terms are there?

A

1

7
Q

MS axs

A

Within subjects error term for main effect of variable A

8
Q

MS a

A

Within subjects main effect of variable A

9
Q

MS s

A

Within subjects: the subjects (individual differences) term for variable A

10
Q

Familywise and per comparison equation

A

αFW = c(αPC), where c = number of comparisons
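A minimal sketch comparing the card's equation with the exact rate for independent tests (c and alpha are illustrative):

```python
# Familywise error rate for c comparisons at a per-comparison alpha.
alpha_pc = 0.05
c = 6

alpha_fw_approx = c * alpha_pc             # the card's equation: aFW = c(aPC) -> 0.30
alpha_fw_exact = 1 - (1 - alpha_pc) ** c   # exact rate for c independent tests -> ~0.26
print(alpha_fw_approx, alpha_fw_exact)
```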

11
Q

What is a family wise error?

A

The Type 1 error rate across a family of multiple tests (it increases with the number of tests)

12
Q

What is per comparison error?

A

The Type 1 error rate for a single test (comparison)

13
Q

What is ANCOVA

A

Statistical control of error variability

When experimental control of error not possible

14
Q

Ancova assumptions

A

Homoscedasticity - equal scatter
Homogeneity of regression coefficients (equal slopes across groups)
No multicollinearity between covariates

15
Q

ANOVA assumptions

A
Normal distributions
Independent observations
DV on an interval / ratio scale
IV categorical
Homogeneity of variance (see the sketch below)
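A minimal sketch of checking the last assumption with SciPy's Levene test (the data are illustrative; assumes scipy is installed):

```python
from scipy import stats

# Sketch: checking homogeneity of variance with Levene's test.
# A non-significant p (> .05) suggests the assumption is tenable.
group1 = [4, 5, 6, 5, 4, 6]
group2 = [6, 7, 8, 7, 6, 8]
group3 = [9, 10, 12, 10, 9, 11]

stat, p = stats.levene(group1, group2, group3)
print(stat, p)
```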
16
Q

Test reliability is a precursor of

A

Validity

17
Q

Vectors set to 90 degrees have what rotation?

A

Orthogonal

18
Q

Rotations converged in 3 iterations are

A

Orthogonal

19
Q

Types of orthogonal rotation

A

Varimax
Equamax
Quartimax

20
Q

Types of oblique rotation

A

Direct oblimin

Promax

21
Q

Which regression has a priori sequence of entry?

A

Hierarchical regression

22
Q

What is a scree plot?

A

Eigenvalues on Y axis

Factors on X axis
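A minimal sketch of producing a scree plot from random illustrative data (assumes numpy and matplotlib; the data are not from the cards):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: eigenvalues of the correlation matrix on the Y axis,
# factor (component) number on the X axis.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))            # 200 cases, 8 variables
eigenvalues = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```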

23
Q

What is a cross loading

A

A loading greater than .4 on more than one factor, with a difference of less than .2 between the loadings

24
Q

What does KR-20 measure?

A

Internal reliability for measures with dichotomous choices (Yes/No)
Values up to +1.00
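A minimal sketch of the KR-20 calculation for dichotomous (0/1) item data (the response matrix is illustrative, not from the cards):

```python
import numpy as np

# Sketch of KR-20: rows = respondents, columns = items (1 = correct/yes, 0 = incorrect/no).
items = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
])

k = items.shape[1]                      # number of items
p = items.mean(axis=0)                  # proportion answering each item correctly
q = 1 - p
total_var = items.sum(axis=1).var()     # variance of respondents' total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(kr20)                             # values closer to +1.00 = better internal reliability
```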

25
What does KR-20 stand for?
Kuder-Richardson (formula 20)
26
What is good internal reliability with regard to KR-20?
Anything greater than .7
27
Three kinds of factorial design
Completely randomised factorial design (1 treatment condition per group / between groups)
Randomised block factorial design (all treatments in randomised order within each group)
Mixed factorial design (both between-groups and within-subjects factors)
28
Criteria for different populations
At least some of the rules are different
29
Mean square equation of error
MSs/ab = SSs/ab / dfs/ab
30
What is α?
The significance level (alpha)
31
When is it more important to allow type 1 errors?
When important to find new facts
32
When is it important to allow type 2 errors?
When it is important not to clog up the literature with false positives
33
What is probability (p)
The probability of the observed effect | assuming the null hypothesis is true
34
The normal distribution
A mathematical function describing the distribution of scores, defined by 2 population parameters: μ = mean, σ = SD
35
Different normal distributions are generated whenever
The pop mean or pop SD are different
36
What is normal distribution used for?
In order to make standardised comparisons | Across different populations and treatments
37
If shared area of normal distribution large
Populations similar
38
If shared area of normal distribution small
Populations different
39
In terms of normal distribution It is mathematically impossible for the shared area to
Ever equal zero
40
What is chi square distribution used for?
Testing whether a sample variance and the population variance are the same or different
41
F distribution based on
The ratio of two chi-square (variance) distributions, each divided by its df
42
If 2 populations are the same then F ratio will be
1
43
If two populations are different then F ratio will be
More than 1
44
The F ratio will increase further as
The difference between the two populations increases
45
The F ratio depends on knowing the
Variances for two samples | Degrees of freedom associated with each sample, based on sample sizes
46
Chi squared and F distribution used for...
A more statistical approach (not the normal distribution) for looking at treatment populations
47
Chi squared 2 assumptions
Population normally distributed | Measure taken on interval / ratio scale
48
F ratio further 2 assumptions
Homogeneity of variances | Independent measures
49
The F equation
F = (χ²a / (sa − 1)) / (χ²b / (sb − 1))
50
Chi squared equation
χ² = (s − 1) σ²sample / σ²population
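A worked sketch of both equations with illustrative numbers (not taken from any dataset):

```python
# Chi square: compares a sample variance with the population variance.
s = 10                  # sample size
sample_var = 12.0       # sample variance
pop_var = 8.0           # population variance
chi_sq = (s - 1) * sample_var / pop_var         # X^2 = (s - 1) * var_sample / var_pop
print(chi_sq)                                   # 13.5

# F: the ratio of two chi squares, each divided by its degrees of freedom (s - 1).
chi_sq_a, s_a = 18.0, 10
chi_sq_b, s_b = 12.0, 13
F = (chi_sq_a / (s_a - 1)) / (chi_sq_b / (s_b - 1))
print(F)                                        # 2.0
```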
51
The F ratio is the ratio of
Two sample based variances
52
F value observed and critical value meaning
If Observed F value greater than critical F value then significant
53
The general problems of rejecting null hypothesis
Can always attribute some portion of difference to chance factors / error
54
What is the name of uncontrolled sources of variability in experiment
Experimental errors
55
Two types of error
Individual differences error | Experimental error
56
Experimental error is shown by
Within group variability
57
Two estimates of experimental error are
Independent from each other | But both reflect the same value of experimental error
58
A systematic source of variability comes from the
Treatment effect
59
Unsystematic source of variability comes from
Experimental error of subjects and measurement
60
When population means are not equal this is the result of the ...
Treatment effects
61
When population means are equal reflects
Experimental error alone
62
Sum of squares equation
SStotal = SSwithin + SSbetween
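A minimal sketch verifying this partition on illustrative data:

```python
import numpy as np

# Sketch: SStotal = SSbetween + SSwithin.
groups = [np.array([2., 3., 4.]), np.array([5., 6., 7.]), np.array([8., 9., 10.])]
scores = np.concatenate(groups)
grand_mean = scores.mean()

ss_total = ((scores - grand_mean) ** 2).sum()                              # 60.0
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)    # 54.0
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)               # 6.0
print(ss_total, ss_between + ss_within)                                    # both print 60.0
```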
63
Basic ratio of variance
A squared score or sum, divided by the number of observations that contribute to that score or sum
64
For the purpose of ANOVA, variance is defined as
variance = SS / df
65
SSt is the
Total sum of squares
66
SSa is the
Between group sum of squares
67
SSs/a is the
Within group sum of squares
68
If null hypothesis true, ratio of between groups and within groups variability equal to
1
69
Partitioning the variability means
Subdividing total deviation
70
Total deviation equation
AS - T
71
Between groups deviation equation
A - T
72
Within groups deviation equation
AS - A
73
What do the parts of the deviation equation mean?
T = grand mean
A = group mean
AS = individual score
(AS − T = total deviation; A − T = between-groups; AS − A = within-groups)
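A minimal numeric sketch of the deviation identity for one score (the values are illustrative):

```python
# Total deviation (AS - T) = between-groups part (A - T) + within-groups part (AS - A).
AS = 12.0   # an individual score
A = 10.0    # that score's group mean
T = 7.0     # the grand mean

total_dev = AS - T          # 5.0
between_dev = A - T         # 3.0
within_dev = AS - A         # 2.0
print(total_dev == between_dev + within_dev)   # True
```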
74
Transforming the data reduces the chances of making which error?
Type 2 error
75
If ANOVA assumption broken it fails gracefully, eg
Miss real effects (Type 2 error) | But does not increase chances of making Type 1 error
76
Transformation of positive and negative skews | Moderate Substantial Severe
Moderate - square root
Substantial - logarithm
Severe - reciprocal
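A minimal sketch applying the three transformations to positively skewed scores (the values are illustrative; negatively skewed data are usually reflected first, e.g. max(x) + 1 − x, before transforming):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 9.0, 25.0])   # positively skewed scores

moderate = np.sqrt(x)      # moderate skew: square root
substantial = np.log(x)    # substantial skew: logarithm (add a constant first if zeros occur)
severe = 1.0 / x           # severe skew: reciprocal
print(moderate, substantial, severe)
```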
77
Designs which include multiple IVs are called
Factorial designs
78
For 2 way between groups design | F ratio is calculated for :
Main effect of IV 1
Main effect of IV 2
Interaction effect of IV 1 and IV 2
79
F ratio for main effect A
MSa / MSs/ab
80
F ratio for main effect of variable B
MSb / MSs/ab
81
F ratio for main interaction
MSab / MSs/ab
82
Mean square equation for main effect A
MSa = SSa / dfa
83
Mean square equation for main effect B
MSb = SSb / dfb
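A minimal sketch combining the MS and F-ratio formulas from the two-way between-groups cards (the SS and df values are illustrative, as if read off an ANOVA summary table):

```python
# MS = SS / df, then each effect is tested against the single error term MS S/AB.
ss = {"A": 40.0, "B": 20.0, "AxB": 10.0, "S/AB": 60.0}
df = {"A": 1,    "B": 2,    "AxB": 2,    "S/AB": 30}

ms = {effect: ss[effect] / df[effect] for effect in ss}

F_A = ms["A"] / ms["S/AB"]        # main effect of A
F_B = ms["B"] / ms["S/AB"]        # main effect of B
F_AxB = ms["AxB"] / ms["S/AB"]    # A x B interaction (same single error term)
print(F_A, F_B, F_AxB)
```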
84
Within subjects | F ratio for main effect
Fa = MSa / MSaxs
85
Within subjects | F ratio for subject variables
Fs = MSs / MSaxs
86
A significant main effect of subject variable is a problem when
Specific predictions are made about performance | When there is a hidden aptitude-treatment interaction
87
Within subjects additional assumptions
Sphericity
Homogeneity of treatment-difference variances
Compound symmetry
88
No need to test for sphericity of IV if it only has
2 levels
89
What do we want from Mauchly's test of sphericity?
For value to be non significant | Homogeneity of variance
90
With mixed designs, what is there no such thing as?
One factor mixed design
91
What test for homogeneity of variance is used for split plot?
Box’s M
92
What test is used to test for normality?
Kolmogorov-Smirnov or Shapiro-Wilk
93
How do we control in experimentation
Randomisation
94
Mean square equation for interaction
MSab = SSab / dfab
95
Each sum of squares is calculated by combining two quantities called
Basic ratios
96
Degrees of freedom are the
Number of observations that are free to vary | When we already know something about those observations
97
Actual variance estimates are called
Mean squares
98
If means are the same there is no
Significant difference
99
If only 2 levels of a factor then
No analytical comparisons are required | t-tests can be performed instead
100
Definition of interaction effect
The effect of one IV on the DV differs across the levels of the other IV