Research methods Flashcards

1
Q

define operationalisation of variables

A
  • clearly defining variables and how they are intended to be measured.
2
Q

define EV.

A

an extraneous variable: any variable other than the IV that may affect the DV, eg light, sound

3
Q

Define confounding variable

A
  • an EV that cannot be controlled and varies systematically with the IV.
4
Q

Define and explain investigator effects

A
  • any conscious/unconscious behaviours from the researcher that may affect answers from pps
5
Q

what are the 3 types of experimental design?

A
  • independent groups
  • matched pairs
  • repeated measures
6
Q

Describe what an independent groups design is.

A
  • each group only takes part in one condition
  • the mean of the DV is compared between these different groups.
7
Q

Give pros and cons of using an independent groups design .

A
  • more time efficient, as conditions can run simultaneously
  • less chance of order effects and demand characteristics
  • however, pp variables are not controlled, and more pps are needed
8
Q

define a repeated measures design

A

each pp takes part in each condition, and their results are then compared.

9
Q

give a pro and con of repeated measures

A

pp variables are controlled, so higher validity
order effects might come into play, along with demand characteristics.

10
Q

define a matched pairs design

A
  • pps are partnered with someone with a similar, relevant variable, so pps only take part in one condition
11
Q

give a pro and con of matched pairs

A
  • pre tests and other matching process may be time consuming
  • order effects and demand characteristics are reduced.
12
Q

define a quasi experiment

A
  • they have an IV that is based on an already existing factor that can’t be changed- eg gender.
13
Q

give the strengths and limitations of quasi experiments

A

- cannot determine for sure whether the IV is what caused the change in the DV
- have some control, which increases validity.

14
Q

define a natural experiment

A
  • an experiment where a researcher uses an IV that is already in existence, but an environmental one that MAY be manipulated, eg drug addicts or left/right-handed people.
15
Q

Give the pros and cons of a natural experiment.

A
  • high ecological validity: since IV cannot be changed, can be applied to real life well.
  • lack of control: there may not be control over EVs and confounding variables, leading to low internal validity.
16
Q

Compare and contrast a NATURAL and QUASI experiment.

A
  • both involve choosing already occurring variables.
  • however in a natural experiment, the chosen variable is environmental and could in principle change, eg a preference for coffee over tea, whereas in quasi experiments, a natural IV is chosen that cannot be manipulated whatsoever, eg age.
17
Q

define systematic sampling.

A
  • every nth person from a population being chosen within a sampling frame.
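As a minimal sketch, choosing every nth person from a sampling frame can be written as follows; the frame of 20 pps is hypothetical:

```python
# Minimal sketch of systematic sampling (hypothetical sampling frame).
# After the interval n is decided, every nth person in the frame is chosen.

def systematic_sample(frame, n):
    """Select every nth member of the sampling frame."""
    return frame[n - 1::n]  # start at the nth person, then every nth after

frame = [f"pp{i}" for i in range(1, 21)]  # 20 people in the frame
print(systematic_sample(frame, 5))       # ['pp5', 'pp10', 'pp15', 'pp20']
```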
18
Q

define stratified sampling

A
  • population divided into strata, with pps from each stratum being selected using random sampling until the sample size is reached.
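The selection above can be sketched as follows; the strata names and sizes are made up for illustration, and each stratum contributes pps in proportion to its share of the population:

```python
import random

# Minimal sketch of stratified sampling (hypothetical strata and sizes).
# Each stratum gets a proportional share of the sample, filled by random
# sampling within that stratum. round() may drift slightly for awkward
# proportions; here the shares are exact (5 + 3 + 2).

def stratified_sample(strata, sample_size):
    """strata: dict mapping stratum name -> list of members."""
    total = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        k = round(len(members) * sample_size / total)  # proportional share
        sample.extend(random.sample(members, k))       # random within stratum
    return sample

strata = {"16-25": list(range(50)), "26-40": list(range(50, 80)),
          "41+": list(range(80, 100))}
print(len(stratified_sample(strata, 10)))  # 10
```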
19
Q

give pros and cons of systematic sampling

A

+: little bias because after interval has been decided, researcher has little to no control over who is selected.
-: time consuming and sampling frame needed.

20
Q

give pros and cons of stratified sampling

A
  • representative of the population, with each stratum present in proportion, so easier to generalise.
  • time consuming to divide the population into strata, and some pps may not fit neatly into one stratum, making them hard to group.
21
Q

What are the BPS ethical guidelines? (4)

A
  • protection from physical and mental harm
  • informed consent
  • no deception
  • confidentiality and privacy.
22
Q

How do we get around informed consent?(3)

A
  • presumptive consent: a similar group not in the study is asked whether the procedure sounds acceptable
  • prior general consent: consenting to multiple studies, even those that involve deception
  • retrospective consent: asked for after the study is done, during the debrief
23
Q

Dealing with deception and protection from harm? (3)

A
  • provide counselling if the study was potentially traumatic
  • fully debrief pps
  • pps should be made aware that they have the right to withhold their data
24
Q

How do we deal with confidentiality? (2)

A
  • use initials when listing patient specific data
  • during debriefing, patients should be made aware that their data will be completely private and not shared with others unless their prior consent is given.
25
Define pilot studies
- a smaller scale investigation carried out before the actual investigation.
26
What are the aims of a pilot study? (3)
- to check that the experimental design is appropriate
- to check that any questionnaires are appropriate
- to make changes to any techniques if needed
27
Define a naturalistic observation Pros and cons?
- observing behaviour in a natural environment, eg a classroom
- +: higher external validity; applies well to real life because it's a natural environment
- -: lack of control over EVs, so lower internal validity
28
Define a controlled observation pros and cons?
- observing in an environment where some control is used, eg a lab
- +: high internal validity; controlled conditions make it easier to replicate
- -: low ecological validity, due to artificial stimuli and environment
29
Define an OVERT observation pros and cons?
- when pps KNOW they're being observed
- +: easier to carry out, as the researcher does not need to blend into the sample as a pp
- -: demand characteristics are likely
- -: ethical issues around privacy may still arise
30
Define COVERT observation pros and cons?
- when pps DON'T KNOW they're being observed
- +: less chance of demand characteristics, as pps are unaware they're being watched
- -: ETHICS: deception and invasion of privacy
31
Define pp observation pros and cons?
- when the researcher becomes part of the sample they are observing (eg poses as a student)
- +: increased external validity, because less chance of demand characteristics; gives real-life insights
- -: researcher might lose objectivity, so conclusions might be biased
32
What is a criticism of ALL types of observation? how can it be minimised?
- observer bias: the observer's interpretation may affect the results of the study
- can be reduced by involving several observers
33
define behavioural categories
when a target behaviour is broken down into observable and measurable components
34
define event sampling
- data that is recorded every time an event occurs
35
define time sampling
- data that is collected at a certain time interval, eg every 5 mins
36
Define the following terms: structured interviews unstructured interviews semi structured interviews
- structured: a fixed list of questions asked in a fixed order
- unstructured: no fixed list of questions; the pp's response determines the next question
- semi-structured: certain questions are fixed, but there is room for flexibility in changing questions
37
Define acquiescence bias
- a yes-saying tendency: agreeing with questions without properly understanding them
38
how do you design good interviews/questionnaires?
- avoid overuse of jargon
- avoid leading questions and emotive language
- avoid double-barrelled questions (two questions in one)
39
What are features of a sign test? What situations should it only be used in?
- repeated measures design
- looking at a difference in data
- must have nominal (categorical) data
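A sign test on the criteria above can be sketched in Python; the before/after scores are hypothetical:

```python
from math import comb

# Minimal sketch of a two-tailed sign test for a repeated-measures design.
# Only the DIRECTION of each pp's change counts (nominal data); ties drop out.

def sign_test(before, after):
    """Return (S, two-tailed p) for paired before/after scores."""
    diffs = [b - a for b, a in zip(before, after) if b != a]  # drop ties
    n = len(diffs)
    pos = sum(d > 0 for d in diffs)
    s = min(pos, n - pos)  # S = count of the less frequent sign
    # exact binomial probability of a result at least this extreme
    p = 2 * sum(comb(n, k) for k in range(s + 1)) / 2 ** n
    return s, min(p, 1.0)

before = [5, 7, 6, 8, 9, 6, 7, 5]
after  = [6, 8, 8, 9, 9, 8, 9, 7]
print(sign_test(before, after))  # (0, 0.015625): significant at 0.05
```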
40
What is a type 1 error
when a null hypothesis is rejected when it shouldn't have been - a false positive
41
What is a type 2 error?
- when a null hypothesis is accepted when it shouldn't have been - false negative
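The false-positive idea can be illustrated with a small simulation (assumed setup): when the null hypothesis is actually true, p-values are uniform on [0, 1], so a test at alpha = 0.05 still "rejects" about 5% of the time, and those rejections are all Type 1 errors:

```python
import random

# Minimal simulation sketch: Type 1 error rate under a true null hypothesis.

random.seed(1)  # fixed seed so the run is repeatable
alpha = 0.05
trials = 10_000
# Under a true null, each simulated p-value is uniform on [0, 1];
# p < alpha is a wrongful rejection (a Type 1 error).
type_1_errors = sum(random.random() < alpha for _ in range(trials))
print(type_1_errors / trials)  # close to 0.05
```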
42
what 3 questions would you ask yourself when you determine what statistical test you should use?
- difference or correlation?
- nominal, ordinal or interval data?
- experimental design (independent or related data?)
43
in what case would you use a sign test?
- Diff, Nominal, related (matched pairs or repeated measures)
44
In what case could you use a Wilcoxon test?
- diff, ordinal, related data
45
In what case would you use related t test?
- Diff, interval, related
46
In what case would you use a chi squared test?
- independent, nominal, diff
47
In what case would you use a Mann-Whitney test?
- independent, ordinal, diff
48
In what case would you use the unrelated t test?
- independent, interval, diff
49
in what case would you use a chi squared test when looking for the correlation?
- nominal, correlation, related data
50
In what case would you use a Spearman's rho?
- related, correlation, Ordinal
51
In what case would you use Pearson's r?
- Interval, correlation, related data.
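Cards 42-51 above amount to a lookup table; a minimal sketch, with the mapping written exactly as the cards state it:

```python
# Minimal lookup sketch for cards 42-51: (goal, data level, design) -> test.
# The dict mirrors the flashcards above and is purely illustrative.

CHOOSE_TEST = {
    ("difference", "nominal", "related"):    "sign test",
    ("difference", "ordinal", "related"):    "Wilcoxon",
    ("difference", "interval", "related"):   "related t-test",
    ("difference", "nominal", "unrelated"):  "chi-squared",
    ("difference", "ordinal", "unrelated"):  "Mann-Whitney",
    ("difference", "interval", "unrelated"): "unrelated t-test",
    ("correlation", "nominal", "related"):   "chi-squared",
    ("correlation", "ordinal", "related"):   "Spearman's rho",
    ("correlation", "interval", "related"):  "Pearson's r",
}

print(CHOOSE_TEST[("difference", "ordinal", "unrelated")])  # Mann-Whitney
```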
52
Nominal data
- categorical data; discrete.
53
Ordinal data
- data that can be ordered, eg scores on a test, on a scale, not equal intervals.
54
Interval data
- data with equal intervals, usually with units.
55
give the format of a psychological research report.
- abstract - intro - method - results - discussion - refs
56
Define reliability
- the extent to which a test produces consistent results.
57
Define internal reliability
- extent to which something is consistent within itself
58
Define external reliability
- extent to which a test measures consistently over time.
59
Define inter-observer reliability
- consistency between findings/observations of multiple observers in a study.
60
How do you improve observational reliability?
- use multiple observers
- train observers to know what to look for
- have clearly defined criteria
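One simple way to check agreement between multiple observers is percentage agreement; a minimal sketch with hypothetical interval-by-interval records:

```python
# Minimal sketch: percentage agreement between two observers' records,
# a simple check on inter-observer reliability (data is hypothetical).

def percent_agreement(obs1, obs2):
    """Share of intervals where both observers recorded the same category."""
    matches = sum(a == b for a, b in zip(obs1, obs2))
    return matches / len(obs1)

observer_a = ["on-task", "on-task", "off-task", "on-task", "off-task"]
observer_b = ["on-task", "off-task", "off-task", "on-task", "off-task"]
print(percent_agreement(observer_a, observer_b))  # 0.8
```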
61
Give a method assesing internal reliability using questionnaires
- split-half method: the same pps do both halves of the questionnaire
- assess the correlation between the answers to the 2 halves using Pearson's r
- if results are consistent, the questionnaire has good internal reliability
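The split-half correlation can be sketched with a plain Pearson's r implementation; the half-scores below are hypothetical:

```python
from math import sqrt

# Minimal split-half sketch: each pp's questionnaire is split into two
# halves (eg odd vs even items) and the two half-scores are correlated.
# The scores below are hypothetical.

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

odd_half  = [12, 15, 9, 18, 11]   # each pp's score on the odd items
even_half = [11, 16, 10, 17, 12]  # same pps' score on the even items
r = pearson_r(odd_half, even_half)
print(round(r, 2))  # 0.95: close to +1, so good internal reliability
```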
62
Give a method to assess external reliability using questionnaires
- test-retest method: give the pps the same test on 2 separate occasions and compare the results; consistent results indicate good external reliability.
63
Give 3 ways to improve reliability in self-report methods. (interviews)
- use precise questions (eg closed qs)
- use the same interviewer, or train the interviewer
- pilot the questionnaire beforehand to check the clarity of questions
64
Give 3 methods to improve reliability for controlled research
- use the same method for all pps
- use the same conditions
- when replicating, researchers need to use the same method each time
65
Define validity
- the extent to which something measures what it claims to.
66
Define internal validity
- whether the results are due to the IV being manipulated and not due to confounding variables.
67
Name the 3 types of external validity.
- Ecological - temporal - population
68
Define ecological validity
- whether findings from a controlled experiment can be generalised elsewhere.
69
Define temporal validity
- whether findings from a controlled experiment can be generalised beyond the period of time of the study.
70
Define population validity
- whether results can be generalised to other groups.
71
Define face validity
- when we use self report methods - quick eyeballing to see if the test is measuring what it's supposed to measure
72
Define concurrent validity
- a new (non-established) test is compared with an established one
- if the non-established test produces similar results to the established one, we can say the test has concurrent validity
73
Give 2 ways of improving validity when using questionnaires.
- assure pps that responses are anonymous
- revise the questions if the questionnaire is found to have low concurrent validity
74
give 3 ways of improving validity in experimental research.
- use a control group as a comparison, to check whether manipulation of the IV is what changes the DV
- standardise procedures
- use double-blind studies to reduce demand characteristics
75
give 1 way of improving validity when using observations
- make sure categories aren't too broad or overlapping.
76
Give 2 ways of improving validity when using observational techniques.
- use direct quotes and be coherent in reporting. - triangulation can be used - eg using different sources: personal diaries, family etc.
77
Describe what is meant by content analysis.
- researcher making their observations based on indirect methods, such as books, films, diaries etc.
78
give the 3 steps to go through content analysis.
- identify themes in the data
- repeatedly go through the data, eg listen to recordings again and re-read reports
- tally the themes
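The tallying step can be sketched with Python's Counter, assuming themes have already been coded from the material; the diary-entry themes below are hypothetical:

```python
from collections import Counter

# Minimal sketch of tallying themes in content analysis.
# Assumes each piece of material has already been coded with a theme.

coded_entries = ["anxiety", "support", "anxiety", "coping",
                 "support", "anxiety"]
tallies = Counter(coded_entries)
print(tallies.most_common())  # [('anxiety', 3), ('support', 2), ('coping', 1)]
```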
79
Define thematic analysis
- an analytical qualitative method for organising, describing and interpreting data.
80
What is a case study
An in-depth analysis of a single individual or group over time - often idiographic and individualistic
81
Describe one disadvantage of a case study
- findings cannot be generalised to other individuals
- because case studies are unique to one individual, they cannot be replicated, so they lack reliability
82
give 2 disadvantages, other than generalisability when using case studies
- researcher may develop bias toward subject as they get to know them very well. - Case studies are extremely specific to one individual or group, and thus cannot be replicated - so they lack reliability
83
Describe a method to carry out content analysis.
- create a checklist of categories
- count/tally the frequency of behaviours
- analyse the data using quantitative methods, eg representing tallies over the time of the study
84
What is the advantage of thematic analysis over content analysis?
- since themes are identified after the overview of content, it may prevent observer bias in this case.
85
Describe what is meant by investigator effects.
- when a researcher's potential biases can impact results of the study.
86
Give ways to prevent investigator effects.
- check inter-rater reliability: use another researcher and compare their results
- use a double-blind method
87
State and explain one advantage of using observation
- allows observation of real behaviour rather than spoken responses
- people may lie or struggle to remember when reporting their own behaviour
88
What are the stages of scientific theory.
- observation
- constructing a hypothesis
- collecting experimental data
- proposing a theory that may explain the results
89
Outline what is involved in self report technique
- when pps report their own thoughts/feelings
- this can be through questionnaires or surveys
- can also involve open/closed questions
90
define Paradigm
- a shared set of assumptions that is from a generally accepted scientific theory.
91
Describe how criteria may help refine observational techniques (4)
- may help provide quantitative data, so easier to analyse
- may help the researcher have a clear goal as to what they are looking for
- may improve reliability
- can allow tallying into pre-arranged groupings
92
How can you deal with socially sensitive research?
- be aware of the implications of the research if published
- make sure pps are aware that they have the right to withdraw
- ensure confidentiality of all pps involved
- assess the research question carefully - is it leaning towards anything?