Research Methods Flashcards

1
Q

content validity

A

Evidence that the content of a test corresponds to the content of the construct it was designed to cover

2
Q

Ecological validity

A

evidence that the results of a study, experiment or test can be applied, and allow inferences, to real-world conditions.

3
Q

reliability

A

the ability of the measure to produce the same results under the same conditions.

4
Q

test-retest reliability

A

The ability of a measure to produce consistent results when the same entities are tested at two different points in time.

5
Q

Correlational research

A

observing what naturally goes on in the world without directly interfering with it.

6
Q

Cross-sectional research

A

This term implies that data come from people at different age points with different people representing each age point

7
Q

Experimental research

A
  • One or more variables are systematically manipulated to see their effect (alone or in combination) on an outcome variable.
  • Statements can be made about cause and effect
8
Q

Systematic variation

A

differences in performance created by a specific experimental manipulation

9
Q

unsystematic variation

A

Differences in performance created by unknown factors. (age, gender, IQ, Time of Day, Measurement error etc.)

10
Q

Randomization

A

Minimizes unsystematic variation
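As a sketch of how random allocation works in practice, the snippet below (with hypothetical participant IDs and condition names) shuffles participants and deals them into groups, so unknown factors such as age, IQ, or time of day spread across conditions by chance:

```python
import random

def randomly_allocate(participants, conditions=("control", "treatment"), seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    so unsystematic variation is spread evenly across groups by chance."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_allocate(list(range(1, 21)), seed=42)
print({c: len(g) for c, g in groups.items()})   # 10 participants per group
```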

11
Q

Frequency distributions (AKA Histograms)

A

A graph plotting values of observations on the horizontal axis, with a bar showing how many times each value occurred in the data set.
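A frequency distribution can be tallied directly. This minimal sketch (with made-up scores) counts how many times each value occurs and prints text 'bars' in place of the histogram's bars:

```python
from collections import Counter

scores = [3, 4, 4, 5, 5, 5, 6, 6, 7]   # hypothetical observations
freq = Counter(scores)                  # value -> how many times it occurred

for value in sorted(freq):              # values along the 'horizontal axis'
    print(f"{value}: {'#' * freq[value]}")
```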

12
Q

The ‘Normal’ Distribution

A
  • Bell shaped
  • Symmetrical around the centre
13
Q

Properties of frequency distributions

A
  • Skew
  • Kurtosis
14
Q

Skew

A
  • The symmetry of the distribution
  • Positive skew (scores bunched at low values with the tail pointing to high values)
  • Negative skew (scores bunched at high values with the tail pointing to low values)
15
Q

Kurtosis

A
  • the ‘heaviness’ of the tails
  • Leptokurtic = heavy tails
  • Platykurtic = light tails
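Both properties can be computed from the moments of the data. The sketch below is a hand-rolled, stdlib-only version using made-up scores: skewness is positive when the tail points to high values, and excess kurtosis is negative for a flat (platykurtic) distribution:

```python
def skewness(xs):
    """Third standardized moment: > 0 for positive skew, < 0 for negative."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def kurtosis_excess(xs):
    """Fourth standardized moment minus 3 (so a normal distribution gives 0)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3

bunched_low = [1, 1, 2, 2, 2, 3, 3, 9]    # tail points to high values
bunched_high = [1, 7, 7, 8, 8, 8, 9, 9]   # tail points to low values
print(skewness(bunched_low) > 0, skewness(bunched_high) < 0)   # True True

flat = [1, 2, 3, 4, 5, 6, 7, 8]
print(kurtosis_excess(flat) < 0)   # platykurtic = light tails, so True
```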
16
Q

Deviance

A
  • we can calculate the spread of scores by looking at how different each score is from the centre of the distribution, e.g. the mean
17
Q

Sum of squared errors (SS)

A
  • indicates the total dispersion, or total deviance of scores from the mean
  • its size is dependent on the number of scores in the data
  • it is more useful to work with the average dispersion, known as the variance
18
Q

The sum of squares, variance, and standard deviation represent the same thing

A
  • the ‘fit’ of the mean to the data
  • the variability in the data
  • how well the mean represents the observed data
  • error
19
Q

Population

A
  • The collection of units (be they people, plankton, plants, cities, suicidal authors etc.) to which we want to generalize a set of findings or a statistical model.
20
Q

Sample

A

a smaller (but hopefully representative) collection of units from a population used to determine truths about that population

21
Q

calculating ‘error’

A
  • a deviation is the difference between the mean and an actual data point
  • deviations can be calculated by taking each score and subtracting the mean from it:
    deviance_i = outcome_i - model_i
22
Q

Sum of squared errors

A
  • we could add the deviations to find out the total error
  • deviations cancel out because some are positive and others negative
  • therefore, we square each deviation
  • if we add these squared deviations we get the Sum of Squared Errors (SS)
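The steps above can be sketched with hypothetical scores: the raw deviations cancel to zero, so we square them before summing, and from the SS we get the variance and standard deviation:

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]          # made-up data
n = len(scores)
mean = sum(scores) / n                      # the 'model' of the data: 5.0

deviations = [x - mean for x in scores]     # deviance = outcome - model
print(sum(deviations))                      # 0.0 -- deviations cancel out

ss = sum(d ** 2 for d in deviations)        # Sum of Squared Errors: 32.0
variance = ss / (n - 1)                     # average dispersion (sample variance)
sd = math.sqrt(variance)                    # back in the original units
print(ss, round(variance, 3), round(sd, 3))
```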
23
Q

Mean squared error

A

Although the SS is a good measure of the accuracy of our model, it depends on the amount of data collected. To overcome this problem we use the mean squared error (the variance): the SS divided by the degrees of freedom.

24
Q

The standard error

A
  • SD tells us how well the mean represents the sample data
  • but, if we want to estimate this parameter in the population, we need the standard error: the standard deviation of sample means, which tells us how well a sample mean represents the population mean
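A small simulation illustrates the idea. Assuming a hypothetical population with mean 100 and SD 15, the usual formula SE = SD / √n comes out close to the SD of many actual sample means:

```python
import math
import random
import statistics

random.seed(1)
# hypothetical population (e.g. IQ-like scores, mean 100, sd 15)
population = [random.gauss(100, 15) for _ in range(100_000)]

sample = random.sample(population, 50)
sd = statistics.stdev(sample)
se = sd / math.sqrt(len(sample))            # standard error of the mean

# the SE estimates the SD of the sampling distribution of the mean:
means = [statistics.mean(random.sample(population, 50)) for _ in range(1000)]
print(round(se, 1), round(statistics.stdev(means), 1))   # similar values
```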
25
why can't we prove certainty in stats
- because it's inferential statistics - it's based on probability
26
Type I error
- occurs when we believe that there is a genuine effect in our population, when in fact there isn't - the probability is the α-level (usually .05)
27
Type II error
- occurs when we believe that there is no effect in the population when, in reality, there is - the probability is the β-level (often .2)
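The α-level is a long-run rate: if the null hypothesis is true and we run many tests, about 5% come out 'significant' by chance. A minimal simulation sketch (using a z-test with σ known to be 1, for simplicity):

```python
import math
import random
import statistics

random.seed(0)
n, trials = 30, 10_000
false_positives = 0

for _ in range(trials):
    # the null hypothesis is TRUE here: population mean really is 0 (sd = 1)
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) * math.sqrt(n)   # z-test with known sigma = 1
    if abs(z) > 1.96:                            # 'significant' at alpha = .05
        false_positives += 1

rate = false_positives / trials
print(rate)   # close to .05, the alpha-level
```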
28
regression has no IV/DV terminology
Predictor = IV; Outcome = DV
29
misconceptions around p-values No1
A significant result means that the effect is important = no, because significance depends on sample size
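This can be demonstrated directly. The sketch below uses a z-test with known σ and a deliberately trivial true effect (0.02 SDs): at n = 30 the effect is usually not significant, while at n = 200,000 the very same tiny effect is 'significant':

```python
import math
import random
import statistics

random.seed(2)

def z_for(n, true_mean=0.02):   # a tiny, practically unimportant effect
    sample = [random.gauss(true_mean, 1) for _ in range(n)]
    return statistics.mean(sample) * math.sqrt(n)

z_small = z_for(30)         # expected z near 0.1: usually not significant
z_big = z_for(200_000)      # expected z near 8.9: 'significant' every time
print(abs(z_small) > 1.96, abs(z_big) > 1.96)
```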
30
misconceptions around p-values No 2
A non-significant result means that the null hypothesis is true = no, a non-significant result tells us only that the effect is not big enough to be found (given our sample size); it doesn't tell us that the effect size is zero.
31
misconceptions around p-values No 3
A significant result means that the null hypothesis is false = no, it is not logically possible to conclude this
32
Researcher degrees of freedom
A scientist has many decisions to make when designing and analysing a study
33
Continuous DV with a categorical IV
ANOVA
34
NOIR
Nominal Ordinal Interval Ratio
35
Measurements of error
Deviances
36
how to stop deviances summing to zero
square each one
37
Standard deviation
- estimate of error
38
Cronbach's alpha
tests internal consistency (reliability)
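Cronbach's alpha compares the variance of the individual items to the variance of the total score: α = k/(k−1) · (1 − Σ item variances / total variance). A minimal sketch, using a made-up 5-respondent × 4-item questionnaire:

```python
import statistics

# rows = respondents, columns = questionnaire items (hypothetical ratings)
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(responses[0])                                   # number of items
item_vars = [statistics.variance(col) for col in zip(*responses)]
total_var = statistics.variance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))   # high alpha: the items behave consistently
```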
39
random allocation
- attempt to control for individual difference - each person has an equal chance
40
Matched pairs
- a within-subjects design - doesn't test the same subjects but matches them on characteristics
41
mean squared error
refers to variance
42
mean
- the one number that best represents a normal distribution - best represents central tendency - it needs to be thought of as a model, not a number
43
Degrees of Freedom
The number of scores that are free to vary
44
Variance
SS/df
45
within group variance
the estimate of error
46
test statistic for an ANOVA
F statistic
47
Big F
- a big F is more likely to be significant - it is the ratio of the experimental effect divided by the error
48
levels
the different conditions within an IV
49
ANOVA
- tests the mean differences between levels - controls for Type I error
50
Scheffé's
- Post-hoc - the most conservative (least likely to make a Type I error) - keeps a large estimate of error
51
when to do post-hoc
After - 3 or more levels - if and only if the main effect is significant
52
factorial designs
- have multiple IVs (factors)
53
interaction
- the effect of one IV depends upon the level of another IV - more important to interpret than main effects - should be interpreted first - IV-DV-IV
54
how to check interaction
- visual inspection, if the lines are parallel there is no interaction - the differences between cell means (if the differences are the same there may be no interaction)
55
how many hypotheses for a two-way ANOVA
- 3 - one for IV A on the DV - one for IV B on the DV - one for the interaction effect on the DV
56
If there is a significant interaction
- use a simple effect analysis
57
Independent design
- different entities in all conditions
58
Repeated measures design
- the same entities in all conditions
59
Mixed design
- different entities in all conditions of at least one IV, the same entities in all conditions of at least one other IV - SPANOVA
60
you can have a significant interaction without having
- a significant main effect
61
ANOVA (Between)
main effect
62
ANOVA (Within)
Error
63
F
- F = MS between / MS within
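The whole chain (SS → MS → F) for a one-way design can be sketched with hypothetical data for three groups of five scores each:

```python
import statistics

# made-up scores for three conditions
groups = {
    "placebo":   [3, 2, 4, 3, 3],
    "low dose":  [5, 4, 5, 6, 5],
    "high dose": [7, 6, 8, 7, 7],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = statistics.mean(all_scores)
k = len(groups)
n_total = len(all_scores)

# systematic variation: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())
# unsystematic variation: spread of scores around their own group mean
ss_within = sum((x - statistics.mean(g)) ** 2
                for g in groups.values() for x in g)

ms_between = ss_between / (k - 1)        # df between = k - 1
ms_within = ss_within / (n_total - k)    # df within = N - k
f = ms_between / ms_within
print(round(f, 1))
```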
64
all statistical statements
Statistic, degrees of freedom, value, significance, effect size
65
tests of between-Subjects effects
- the number of statistics reported depends on the number of factors and levels
66
repeated measures attempt to control for
- individual differences
67
when accounting for differences in statistical equations
- you must make changes to both numerator and denominator
68
Advantages of repeated measures
- Sensitivity (unsystematic variance is reduced, making the design more sensitive to experimental effects) - Economy (requires fewer participants)
69
disadvantage of repeated measure
- practice effect - fatigue
70
can you use post hoc for repeated measures?
- no, you need to use contrasts (pre-planned comparisons)
71
Covariance
the extent to which scores vary together - the covariate needs to be measured before everything else
72
how to tell if it is an ANCOVA
- the covariate does not interact with the IV (no interaction term)
73
when you have a significant covariate
- make an adjustment to the means - compare the estimated marginal means
74
what N is needed to reach normality
30