Research Methods A2 L1 - 4 (analysis, case study, reliability, validity) Flashcards

1
Q

Content analysis:

A

A systematic research technique for analysing data, in which the researcher creates a coding system of predetermined categories that can be applied to the content

2
Q

What is a pilot study often used to test?

A

The coding system, to check that the categories do not overlap and are clearly separate from one another

3
Q

What type of data does content and thematic analysis generate?

A

Content: Quantitative
Thematic: Qualitative

4
Q

Thematic analysis:

A

Analysis of qualitative data in which recurring themes are identified and then used to collect new sets of data

5
Q

Theme:

A

An idea that recurs throughout the interviews

6
Q

Stages of content analysis:

A

1) Sampling → time-interval or event sampling
2) Decide how the data will be recorded
3) Analyse/categorise the data
4) Tally up the totals for each category (see the sketch below)
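To make the coding-and-tallying stages concrete, here is a minimal Python sketch. It is not part of the original cards: the coding categories, keywords and sampled material are hypothetical, and in a real content analysis the material would usually be coded by hand against operationalised categories rather than by keyword matching.

```python
from collections import Counter

# Hypothetical predetermined coding categories and keywords that signal them
coding_system = {
    "aggression": {"shout", "hit", "argue"},
    "affection": {"hug", "smile", "praise"},
}

# Hypothetical sampled material (e.g. lines from interview transcripts)
sampled_material = [
    "the parent praised the child and gave them a hug",
    "the children argue and shout over the toy",
    "a smile and praise from the teacher for the class",
]

tallies = Counter()
for item in sampled_material:
    words = set(item.lower().split())
    for category, keywords in coding_system.items():
        # Tally one occurrence of a category per sampled item containing a keyword
        if words & keywords:
            tallies[category] += 1

print(dict(tallies))  # e.g. {'affection': 2, 'aggression': 1}
```

Tallying once per sampled item is only one possible convention; an event-sampling design might instead tally every separate occurrence of a behaviour.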

7
Q

Strengths and weaknesses of content analysis: (+3, -2)

A

+ Reliable way of analysing data → the predetermined coding units are not open to interpretation
+ Trends and patterns can be identified because the data produced are quantitative
+ Not time consuming
- Causality cannot be established because it merely describes the data
- Cannot extract any deeper meaning from the data patterns that arise

8
Q

Case studies:

A

A detailed investigation of a single individual or group

9
Q

Aim of case studies:

A

To be scientific and objective in their methodology

10
Q

What different types of data can be collected from case studies?

A

Qualitative and quantitative data

11
Q

Strengths and weaknesses of case studies: (+3, -2)

A

+ Provides rich, detailed research that can be used to support key theories
+ Use of in-depth qualitative data
+ Allows study of cases that would usually be deemed impractical or unethical
- Individual differences between people mean findings cannot be generalised
- Difficult to replicate as it is based on just one individual or group

12
Q

How can psychologists assess for reliability in observations?

A
  • Test retest method
  • Pilot study
13
Q

How can psychologists improve reliability in observations?

A
  • Inter observer reliability
14
Q

How can psychologists assess for reliability in self-reports?

A

Test retest

15
Q

How can psychologists improve reliability in self-reports?

A
  • Altering questions used in the interview
  • Inter researcher reliability
16
Q

How can psychologists assess for reliability in experiments?

A

Test retest

17
Q

How can psychologists improve reliability in experiments?

A

Standardisation of instructions

18
Q

Test retest method: (4)

A
  • Conduct the observation/experiment/self-report once and collect the results
  • Repeat it a few weeks later with the same participants in the same way
  • Compare the two sets of results
  • If the results are similar, there should be a correlation coefficient of 0.8 or more (a quick check is sketched below)
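A minimal sketch of that final comparison, assuming hypothetical scores and Python 3.10+ (where statistics.correlation returns Pearson's r). The same kind of check can be applied when correlating two observers' tallies or a new test against an established one.

```python
from statistics import correlation  # Pearson's r, available from Python 3.10

# Hypothetical scores from the same participants on two occasions a few weeks apart
first_test = [12, 18, 9, 15, 20, 11, 17, 14]
second_test = [13, 17, 10, 14, 19, 12, 18, 15]

r = correlation(first_test, second_test)
print(f"Test-retest correlation: r = {r:.2f}")

# Rule of thumb from the card: r of 0.8 or more suggests the measure is reliable
if r >= 0.8:
    print("Scores are consistent across occasions -> measure appears reliable")
else:
    print("Scores differ across occasions -> review the measure or procedure")
```
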
19
Q

Pilot study: (4)

A
  • Conduct a small-scale trial of the observation
  • May include standardised instructions, debriefing and planning of procedures
  • Check that behavioural categories are operationalised
  • Minimises human error and variation
20
Q

How can inter observer reliability be improved and what should the results show?

A
  • Use more than one observer
  • Ensure behavioural categories are properly operationalised
  • Give observers further training on which behaviours should be recorded
  • Results should show a correlation coefficient of 0.8 or more between observers
21
Q

How should reliability be improved if results from pilot study are not clear in an observation?

A

Give more training to observers

22
Q

What factors should be considered when reviewing the questions used in the interview?

A
  • Type of data required → qualitative/quantitative affects whether open/closed questions are asked
  • Ambiguity → avoid vague questions
  • Double-barrelled questions → avoid asking two questions in one
  • Leading questions → avoid questions that steer the respondent towards a particular answer
  • Complexity → avoid jargon
23
Q

How can reliability be improved through standardisation of instructions?

A

If the experiment is conducted twice:
- repeat exactly the same standardised procedures both times
- use operationalised variables

24
Q

Internal validity:

A

The extent to which a study investigates the true effect of the independent variable on the dependent variable

25
Q

Factors that can reduce internal validity: (5)

A

- Investigator effects
- Demand characteristics
- Confounding variables
- Social desirability bias
- Lack of operationalisation

26
Q

Give 2 examples of internal validity:

A

- Face validity
- Concurrent validity

27
Q

How can concurrent validity be assessed?

A

- Scores from the new test are compared against an older, established test whose validity is already known
- If there is a strong positive correlation between the two sets of scores, concurrent validity is demonstrated

28
Q

How can face validity be assessed?

A

One or more experts in the field examine the test questions and judge whether they appear to be measuring what they should 'on the face of it'

29
Q

How can concurrent and face validity be improved?

A

Remove, rewrite or re-word irrelevant questions

30
Q

External validity:

A

- Refers to factors outside of the research setting
- How well the results gained can be generalised to other settings, people and time eras

31
Q

Types of external validity:

A

1) Ecological validity
2) Temporal validity

32
Q

Ecological validity:

A

The ability to generalise the findings of a piece of research to other settings

33
Q

Temporal validity:

A

The extent to which the results of a study can be generalised to people in today's contemporary society