# Error and Control Flashcards

1
Q

What are the two categories of error associated with measurement?

A
• Random errors
• Constant/systematic errors

2
Q

What do Random errors do?

A

Obscure the results

3
Q

What do Constant errors do?

A

Bias the results

4
Q

Which are more problematic, constant or random errors and why?

A

Constant, because random errors usually average out across measurements, whereas constant errors bias all the results in the same direction

5
Q

What are extraneous variables?

A

undesirable variables that add error to our experiments and the measurement of the dependent variable

6
Q

What is a way to control the influence of extraneous variables?

A
• Random allocation
• Counterbalancing
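
Random allocation can be sketched with the standard library; the participant IDs and group labels below are hypothetical, and the shuffle-then-deal approach is one common way to do it:

```python
import random

def randomly_allocate(participants, groups=("control", "treatment"), seed=None):
    """Shuffle participants and deal them out to the groups in turn,
    so each participant has an equal chance of landing in each group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    allocation = {g: [] for g in groups}
    for i, p in enumerate(shuffled):
        allocation[groups[i % len(groups)]].append(p)
    return allocation

# Hypothetical participant IDs P1..P20, split into two equal groups
allocation = randomly_allocate([f"P{n}" for n in range(1, 21)], seed=42)
```

Because allocation is random rather than based on any participant characteristic, extraneous variables should spread evenly across groups on average.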

7
Q

What are confounding variables?

A
• extraneous variables that disproportionately affect one level of the IV more than other levels
• Add systematic error at the level of the IV
8
Q

What do confounding variables do?

A

introduce a threat to the internal validity of our experiments

9
Q

What can confounding variables result in us measuring?

A
• An effect of the IV on the DV when it is not present
• No effect of the IV on the DV when it is present

10
Q

What are ways researchers eliminate/control confounding variables (turning them into extraneous variables)?

A
• Choice of subject design depending on the main concerns (e.g. individual differences vs order effects)
• Where this is not possible, aim to control the groups through random allocation, matching, counterbalancing, and control groups
11
Q

What is internal validity?

A

Whether the variable we are interested in (the IV) is the only thing that has an effect on the DV

12
Q

How can the many sources of confounding variables be categorised?

A
• Selection
• History
• Maturation
• Instrumentation
13
Q

What is selection?

A
• Bias resulting from the selection or assignment of participants to different levels of the IV
• Results if participants who are assigned to different levels of the IV differ systematically in some way that could influence the measurement of the DV (other than the manipulation of interest)
14
Q

How can you help control selection and what is it a particular problem for?

A
• Random allocation helps control it
• A particular problem for quasi-experimental designs

15
Q

What is history?

A
• Uncontrolled events that take place between testing conditions (that aren’t related to the participants themselves, e.g. testing conditions differing from morning to afternoon)
16
Q

What is maturation?

A
• Intrinsic changes in the characteristics of participants between different test occasions (e.g. having had practice between the two, or they get older)
17
Q

What is a way to control maturation?

A

Counterbalancing
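
Full counterbalancing can be sketched as rotating every possible condition order across participants; the condition labels and participant IDs below are invented for illustration:

```python
from itertools import permutations, cycle

conditions = ["A", "B", "C"]  # hypothetical condition labels
orders = list(permutations(conditions))  # all 3! = 6 possible orders

# Assign each participant the next order in rotation, so every order
# is used equally often and order effects cancel out across the sample.
participants = [f"P{n}" for n in range(1, 13)]
schedule = {p: o for p, o in zip(participants, cycle(orders))}
```

With 12 participants and 6 orders, each order is used by exactly two participants, so practice and fatigue effects are balanced across conditions rather than eliminated for any individual.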

18
Q

What is instrumentation?

A
• Changes in the sensitivity or reliability of measurement instruments during the course of the study
• Could also be human error
19
Q

What is reactivity and when can it threaten internal validity?

A
• Participants’ awareness that they are being observed may alter their behaviour
• can threaten internal validity if participants are more influenced by reactivity at one level of the IV than the other
20
Q

What are demand characteristics?

A
• Results from reactivity
• Participants do what they think the experimenter wants them to do

21
Q

What does experimenter bias result from?

A

Reactivity

22
Q

What is a way to overcome reactivity?

A

Use blind designs (only possible with between-Ps designs), e.g. Ps don’t know whether a drink contains alcohol. In a double-blind design, the experimenter doesn’t know either

23
Q

What is precision?

A

Exactness (consistency)

24
Q

What is accuracy?

A

correctness (truthfulness)

25
Q

What is reliability?

A

(precision) the extent to which our measure would provide the same results under the same conditions

26
Q

What is validity?

A

(accuracy) the extent to which it is measuring the construct we are interested in

27
Q

What are two questions when considering reliability?

A
• Does it have temporal consistency? (Is it stable over time and across different conditions?)
• Does it have internal consistency? (Do all the elements of our measure tap into the same area of interest?)
28
Q

What are the different forms of reliability?

A
• test-retest reliability
• inter-rater reliability
• parallel forms reliability
• internal consistency
29
Q

What is test-retest reliability and what is it important for?

A
• measures fluctuations from one time to another
• If we administered our measure to the same participants on separate occasions, would we obtain the same results?
• Important for constructs which we expect to be stable
30
Q

What is inter-rater reliability?

A
• measures fluctuations between observers
• If two different raters/observers measured the variable of interest, would they obtain the same results?

31
Q

What is Parallel forms reliability?

A
• If we administer different versions of our measure to the same participants would we obtain the same results?
• Different versions can be useful to help eliminate memory effects as the questions are different
32
Q

What could parallel forms reliability be subject to?

A
• order effects
• fatigue effects

33
Q

What is internal consistency?

A

Determines whether all items (e.g. in a questionnaire) are measuring the same construct

34
Q

How can internal consistency be assessed?

A

Split-half reliability: questionnaire items are split into two groups and the two halves are administered to Ps on separate occasions
• We would want the two halves to generate similar results
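
A minimal sketch of the split-half idea in pure Python, assuming an odd/even item split and invented item scores (real analyses would typically also apply a Spearman-Brown correction, omitted here):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical questionnaire: one row of item scores per participant
responses = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 4, 5, 5],
    [3, 2, 3, 3, 2, 2],
]
half_a = [sum(row[0::2]) for row in responses]  # 1st, 3rd, 5th items
half_b = [sum(row[1::2]) for row in responses]  # 2nd, 4th, 6th items
split_half_r = pearson_r(half_a, half_b)  # high r suggests internal consistency
```

The two half-scores here correlate strongly, which is what we would want if all items tap the same construct.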

35
Q

What should you beware of with internal consistency?

A
• order effects
• fatigue effects

36
Q

What are the different forms of validity?

A
• content validity
• face validity
• criterion validity
• construct validity
37
Q

What is content validity?

A
• does our test measure the construct fully?
• E.g. the RM exam should cover knowledge of quantitative and qualitative methods

38
Q

What is face validity?

A
• Does it look like a good test?
• Do the questions in the RM exam reflect the RM knowledge students should have learnt
• We don’t always want our test to have face validity, as it could lead to demand characteristics
39
Q

What is criterion validity?

A
• does the measure give results which are in agreement with other measures of the same thing?
• Do RM exam quiz scores relate to final exam grades
40
Q

What is concurrent criterion validity?

A

comparison of new test with established test

41
Q

What is predictive criterion validity?

A

Does the test predict outcome on another variable

42
Q

What is construct validity?

A
• Is the construct we are trying to measure valid?
• Does the construct itself exist?
• The validity of a construct is supported by cumulative research evidence collected over time
• Together supporting the existence of the construct itself
• E.g. is happiness a construct?
43
Q

What are the two terms in which construct validity can be assessed?

A
• convergent validity
• discriminant validity

44
Q

What is convergent validity?

A

correlates with tests of the same and related constructs (e.g. a measure of satisfaction should correlate with a measure of happiness)

45
Q

What is discriminant validity?

A

doesn’t correlate with tests of different or unrelated constructs (e.g. a measure of depression shouldn’t correlate with a measure of happiness)

46
Q

What does true causation need to satisfy?

A

Necessary and sufficient criteria:

• The manipulation of the IV in the absence of other factors will always result in the DV change (sufficient)
• The DV change will not be measured in the absence of the IV manipulation, i.e. in response to other factors (necessary)
47
Q

What does it mean if something is sufficient?

A

Y is adequate to cause X

48
Q

What does it mean if something is necessary?

A

Y must be present to cause X

49
Q

Give an example of something that is necessary but not sufficient

A
• To be good at psychology you need to be good at RM (RM is necessary to make you good at psychology)
• But to be good at psychology, you also need to be good at other subjects in psychology (RM is not sufficient to make you good at psychology)
50
Q

Give an example of something that is sufficient but not necessary?

A
• Completing and passing an undergraduate degree in psychology at UoM will get you a BSc (the degree is sufficient in order to obtain a BSc)
• There are other universities and other courses that award a BSc upon completion, therefore studying and completing an undergraduate psychology degree at UoM is not necessary
51
Q

Give an example of something that is necessary and sufficient

A
• To obtain full marks on the final RM and statistics exam it is necessary to answer every question correctly
• To obtain full marks on the final RM and Statistics exam, it is sufficient to answer every question correctly
52
Q

What is a reason that it is usually hard to control other factors?

A
• human behaviour is complex
• it is hard to identify them

53
Q

What is multifactorial causation?

A
• Phenomenon is determined by many interacting factors
54
Q

What are some questions to bear in mind when sampling and then trying to generalise to the population?

A
• What is the population of interest?
• Is the sample representative?
• Is the sample free from bias?
55
Q

What are populations?

A
• The entire collection of people, animals, plants, or objects that we are interested in and that share a common characteristic
• Defined by population parameters (measurements which describe the population)
• Vary in size (e.g. all students, all UoM students, all UoM psychology students)
56
Q

What is a sample?

A
• selection of individuals from the larger population
• for any population there are many possible samples
• vary in size
57
Q

What are sample statistics and what are they used to do?

A
• measurements which describe the sample
• used to infer population parameters

58
Q

Why do we sample?

A
• Time (difficult to collect data from everyone)
• Money (expensive to collect data from everyone)
• Access (not always possible to reach all members of a population)
• Sufficiency (the pattern of results doesn’t change much even if we have data from everyone)
59
Q

How do we sample?

A
• clearly define the population we are interested in
• try to avoid sampling bias

60
Q

What are the different sampling methods?

A
• random sample
• systematic
• stratified sample
• cluster sample
• opportunity/convenience sample
• snowball sampling
61
Q

What is a random sample?

A
• The gold standard
• Each member of the population has an equal chance of being selected
• Usually quasi-random, as it is impractical to get a fully random sample because we don’t have access to all the members of a population
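
Drawing a simple random sample from a known sampling frame can be sketched with the standard library; the population of 1000 numbered members here is invented:

```python
import random

# Hypothetical sampling frame of 1000 population members
population = list(range(1, 1001))

# Each member has an equal chance of selection; no member is drawn twice
sample = random.sample(population, k=50)
```

In practice the hard part is the frame itself: if we cannot list every member of the population, the draw is at best quasi-random.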
62
Q

What is a systematic sample and where might it be a problem?

A
• Draw from the population at fixed intervals
• Problematic in populations with a periodic function (a pattern that might influence results)
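
Fixed-interval sampling is a one-line slice once the frame is ordered; the member labels, interval, and starting point below are made up. Note that if the frame itself has a repeating pattern whose period matches the interval, the sample will be systematically biased:

```python
# Hypothetical ordered sampling frame
population = [f"member_{n}" for n in range(1000)]

k = 20     # sampling interval: take every 20th member
start = 7  # randomly chosen starting point within the first interval

sample = population[start::k]  # member_7, member_27, member_47, ...
```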

63
Q

What is a stratified sample?

A
• Proportional: specified groups appear in numbers proportional to their size in the population
• Disproportional: Specified groups which are not equally represented in the population, are selected in equal proportions
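
The proportional case can be sketched as sampling from each stratum in proportion to its share of the population; the strata, sizes, and helper function below are hypothetical (and simple rounding can make the total drift slightly from the target in other configurations):

```python
import random

def proportional_stratified_sample(strata, total_n, seed=None):
    """Sample from each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        n = round(total_n * len(members) / pop_size)
        sample[name] = rng.sample(members, n)
    return sample

# Hypothetical population: 800 undergraduates, 200 postgraduates
strata = {
    "undergrad": [f"ug_{i}" for i in range(800)],
    "postgrad": [f"pg_{i}" for i in range(200)],
}
sample = proportional_stratified_sample(strata, total_n=100, seed=1)
```

A disproportional version would instead draw the same number (e.g. 50) from each stratum regardless of its population share.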
64
Q

What is a cluster sample and what is the problem associated with it?

A
• Researcher samples an entire group or cluster from the population of interest
• Lots of effects that can influence the generalisability of our results
65
Q

What is an opportunity/convenience sample and what can it lead to?

A
• People who are easily available
• But can lead to a biased sample
• Most common
66
Q

What is Snowball Sampling?

A
• recruit small number of Ps and then use those initial contacts to recruit further Ps
• Biases the sample but useful if you want to recruit very specific populations
67
Q

What does external validity refer to?

A

the ability to generalise our results

68
Q

What is population validity?

A

Is our sample representative of the population?

69
Q

What is ecological validity?

A

Does the behaviour measured reflect naturally occurring behaviour (lab studies often accused of having a lack of ecological validity)

70
Q

What is external validity made up of?

A
• population validity
• ecological validity

71
Q

In regards to validity what is there usually a trade off between?

A

internal and external validity

72
Q

What can occur if your sample is not large enough?

A

Sampling error

73
Q

What is there a trade off between in sample size?

A

size and time/cost

74
Q

What are the factors in deciding on sample size?

A
• Design (subjects design, number of IVs or IV levels)
• Response rate (not everyone will reply or take part)
• Heterogeneity of population (how varied is population)