PAPER 2- TOPIC 3 RESEARCH METHODS ✅ Flashcards

1
Q

define an aim

A

a general statement of what the researcher wants to investigate, and the purpose of it

e.g. to investigate whether…..has an effect on……

2
Q

define a hypothesis

A

a testable, predictive statement that states the relationship between the variables being investigated in the study

e.g. there will be a difference between…
- must be operationalised
- directional or non-directional

3
Q

define operationalisation

A

clearly defining variables in a way that they can be easily measured

4
Q

define extraneous variable

A

a nuisance variable that does not vary systematically with the IV

  • random error that doesn’t affect everyone in the same way
  • makes it harder to detect an effect, as it “muddies” the results
5
Q

define a confounding variable

A

a form of extraneous variable that varies systematically with the IV, as it impacts the entire data set

  • may confound all results, as this influence may explain results of DV
6
Q

recall the 8 features of science and the mnemonic

A

PROPH(F)ET

  • paradigms
  • replicability
  • objectivity
  • paradigm shift
  • hypothesis testing
  • falsifiability
  • empirical method
  • theory construction
7
Q

define objectivity

give example

A

ability to keep a critical distance from one's own thoughts and biases

  • forms the basis of the empirical method
  • lab studies with most control, tend to be most objective
  • —- e.g. Milgram, Asch
8
Q

define empirical method

give example

A

scientific process of gathering evidence through direct observation and sensory experience

  • e.g. experimental method, observational method
  • —-> Milgram ——-> Ainsworth SS
9
Q

define falsifiability

give example of an unfalsifiable theory

A

theories admit the possibility of being proven false, through research studies

  • despite not being “proven”, the strongest theories have survived attempts to falsify them
    e.g. Freud’s Oedipus complex is unfalsifiable
10
Q

define replicability

what does it help assess

example

A

extent to which the research procedures can be repeated in the exact same way, generating the same findings

  • helps assess validity, as research can be repeated across different cultures and situations to see the extent to which findings can be generalised
    (e.g. Ainsworth SS, behavioural categories, standardised procedure)
11
Q

define a theory

  • describe their construction
A
  • a set of general laws that explain certain behaviours
  • this will be constructed based on systematic gathering of evidence through empirical method, and can be strengthened by scientific hypothesis testing
12
Q

define hypothesis testing

•••example

A

statements, derived from scientific theories, that can be tested systematically and objectively

  • the only way a theory can be falsified (using a null hypothesis)

••• e.g. does STM have more than one store? —> led to the WMM

13
Q

define a paradigm

A

a paradigm is a set of shared beliefs and assumptions in science

  • psychology lacks a universally accepted paradigm
14
Q

define a paradigm shift

•••example

A
  • significant change in a dominant theory in a scientific division, causing a scientific revolution
  • —> as a result of contradictory research that questions the established paradigm
  • other researchers start to question the paradigm until there is too much evidence against it to ignore, leading to a new paradigm

•••idea of brain’s function as holistic —> idea of localisation of function

15
Q

define deduction

A

process of deriving new hypotheses from an existing theory

16
Q

define a case study

features of typical case study

A

a detailed, in-depth investigation and analysis of an individual, group or event

  • qualitative data
  • longitudinal
  • gather data from multiple sources (friends, family of individual also)
17
Q

pros and cons of case study

A

pros
• rich, in depth data
• can contribute to understanding of typical functioning (research on HM revealed the separate LTM & STM stores)
• a single contradictory case can generate hypotheses for further nomothetic research (whole theories may be revised)

cons
• cases rarely occur, so findings are hard to generalise
• ethical issues (e.g. patient HM was questioned almost daily for over 10 years but could not remember the sessions, so his ongoing consent is questionable)
• researcher interprets the qualitative data and selects which data to use (bias)
—> also, data from family and friends may be affected by memory decay

18
Q

define content analysis

and the aim

A

a type of observational research, where P’s behaviour is indirectly studied using communications they’ve produced

aim is to systematically summarise the P’s form of communication and split into coding units to be counted (quantitative) or analysed as themes (qualitative)

  • usually converts qualitative data into quantitative data
  • communications (e.g. texts, emails, TV, film)
19
Q

describe the steps of content analysis

A
  • gather and observe/read through the communication
  • the researcher identifies coding units, in order to categorise the information
  • the communication is analysed by applying the coding units to the text, and the number of times the coding unit appears is counted
  • data is then summarised quantitatively and so conclusions can be drawn
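
A minimal Python sketch of the counting step, using a made-up transcript and hypothetical coding units (none of these words or figures come from the card):

```python
from collections import Counter
import re

# Hypothetical coding units the researcher has identified in advance
coding_units = ["anxious", "avoid", "secure"]

# Hypothetical communication to analyse (e.g. an interview transcript)
transcript = ("She felt anxious at drop-off, tried to avoid the stranger, "
              "but seemed secure once mum returned. Still anxious later.")

words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter(w for w in words if w in coding_units)

# Quantitative summary: tally of each coding unit
for unit in coding_units:
    print(unit, counts[unit])
```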
20
Q

define thematic analysis

A

a form of content analysis that uses a qualitative method of analysing data: emergent themes within the communication are identified in order to summarise it

21
Q

describe steps of thematic analysis

A
  • form of content analysis
  • identify emergent themes (recurring ideas) from the communication
  • —-> themes are more descriptive than coding units
    (e.g. "stereotyping" is a theme; "woman told to go to the kitchen" is a coding unit)
  • these themes may be further developed into broader categories, to try and cover most of the aspects in the communication
  • a new set of communication may be used to test the validity of the themes
  • qualitative summary is then written up, using quotes from communication
22
Q

pros and cons of content analysis

A

pros
• high reliability, as it follows systematic procedures

  • material is often public so don’t need consent & cheap to use secondary data
  • flexible as can produce both quantitative and qualitative data

cons
• very time consuming, manually coding the data and identifying coding units or recurrent themes

  • P's are studied indirectly, so the communications they produce are analysed out of the context in which they occurred
  • content analysis suffers from a lack of objectivity, as researchers interpret the communication themselves —> risk of human error when interpreting more complex communications
23
Q

acronym to remember the second column (related column) in the table for choosing statistical tests

A

S
W
R

sign
wilcoxon
related T

24
Q

hint to remember all of the first column (unrelated data) from the table for choosing inferential tests for significance

A

all have U in them

chi sqUare
mann whitney U
Unrelated t

25
the three factors affecting which inferential test to use
- data? (level of measurement)
- difference? (testing for a difference or a correlation)
- design? (independent groups —> unrelated; matched pairs/repeated measures —> related)
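
A minimal Python sketch of the standard test-choice table as a lookup, based on the three factors above (the function name and keys are illustrative only):

```python
# Keys: (level of measurement, what is being tested, design)
TEST_TABLE = {
    ("nominal",  "difference",  "unrelated"): "Chi-square",
    ("nominal",  "difference",  "related"):   "Sign test",
    ("nominal",  "correlation", None):        "Chi-square",
    ("ordinal",  "difference",  "unrelated"): "Mann-Whitney U",
    ("ordinal",  "difference",  "related"):   "Wilcoxon",
    ("ordinal",  "correlation", None):        "Spearman's rho",
    ("interval", "difference",  "unrelated"): "Unrelated t-test",
    ("interval", "difference",  "related"):   "Related t-test",
    ("interval", "correlation", None):        "Pearson's r",
}

def choose_test(data, testing_for, design=None):
    """Return the inferential test for a given combination of the three factors."""
    key_design = design if testing_for == "difference" else None
    return TEST_TABLE[(data, testing_for, key_design)]

print(choose_test("ordinal", "difference", "related"))  # Wilcoxon
print(choose_test("nominal", "correlation"))            # Chi-square
```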
26
define a parametric test
a more robust test that may be able to detect significance that other tests can't

MUST HAVE:
- interval data
- P's drawn from a normally distributed population
- similar variance between the groups of P's
27
observed/ calculated value
is the value that is produced by the statistical test
28
critical value
the value looked up in the table of critical values for the specific test - the cut-off point between accepting and rejecting the null hypothesis
29
how do you know whether the observed value should be ≥ or ≤ the critical value, for test to be significant
"gReater rule" if test has an R in it, the observed/ calculated value should be GREATER than or equal to the critical value e. g. unRelated t - Related t - chi- squaRe - peaRsons R - spearman's Rho all should have an observed value ≥ critical value to be significant (sign test, wilcoxon and mann whitney u must have observed value ≤ critical value, to be significant)
30
define nominal data
- presented in the form of categories
- discrete and non-continuous
31
define ordinal data
- presented in order or ranked
- no equal intervals between units of data
- lacks precision, as what counts as a "4" is subjective
- data is converted into ranks (1st, 2nd, 3rd) for statistical tests, as the raw scores are not accurate enough
32
define interval data
- continuous data
- units of equal, precisely defined size (often public scales of measurement are used, e.g. time, temperature)
- the most sophisticated, precise data, hence its use in parametric tests
33
experimental design(s) of related data
- matched pairs
- repeated measures
34
experimental design(s) of unrelated data
independent groups
35
type 1 and 2 error
type 1 - false positive (said there was a significant effect when there wasn't 𝗼𝗻𝗲)
type 2 - false negative (𝘁𝗼𝗼 strict, missed a real effect)
36
steps to complete sign test
* find the difference between the two scores for each P (+, - or 0)
* select the lower count of + or - as the observed value 's' (same for the Wilcoxon test)
* calculate N (no. of participants minus any 0 differences)
* use the hypothesis (one- or two-tailed), probability level and N value to look up the critical value
* s must be ≤ the critical value to be significant (see the sketch below)
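
A minimal Python sketch of these steps, using made-up scores (the data and variable names are illustrative only):

```python
before = [5, 7, 6, 4, 8, 6, 5, 7]   # hypothetical condition A scores
after  = [7, 8, 6, 6, 9, 5, 7, 9]   # hypothetical condition B scores

signs = []
for a, b in zip(before, after):
    diff = b - a
    if diff > 0:
        signs.append("+")
    elif diff < 0:
        signs.append("-")
    # differences of 0 are dropped, so nothing is appended for them

plus, minus = signs.count("+"), signs.count("-")
s = min(plus, minus)   # observed value S = the less frequent sign
n = plus + minus       # N = participants minus any zero differences

print("S =", s, "N =", n)
# S must be <= the critical value from the sign-test table
# (for this N, one- or two-tailed, chosen significance level) to be significant
```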
37
perfect conclusion template for a statistical test, using the sign test as an example
* the observed value 's' of 1.4 was ≤ the critical value of 1.6, for an N value of 10, at a probability of 5%, for a one-tailed test * therefore, we can accept the alternative hypothesis, showing that 'the happiness score of toffees increases when Rafa is out, rather than when he is in'
38
what is the order of all sections of a scientific research report
abstract —> introduction —> method —> results —> discussion —> referencing
39
describe the abstract section of a scientific report
- a short summary of the study
- includes all major elements: aims, hypothesis, method, results, discussion
- written last, but appears at the start of the report
40
describe the introduction section of a scientific report
• large section of writing
- outlines relevant theories, concepts and other research, and how they relate to this study
- states the aims and hypotheses
41
describe the method section of a scientific report
✰ section explaining how the experiment was carried out, split into:
* design - experimental design (e.g. IG, MP, RM); experimental method (e.g. overt, naturalistic); IV & DV; validity and reliability issues
* participants - sampling technique, who is studied (biological and demographic details), how many P's, target population
* apparatus/materials needed
* procedure - step-by-step instructions of how it was carried out, including the briefing and debrief given to P's
* ethics - DRIPC, and how each issue was addressed
42
describe the results section of a scientific report
✰ summary of key findings, split into:
• descriptive statistics - tables, graphs and measures of central tendency & dispersion
• inferential statistics - test chosen, calculated and critical values, significance level, whether it was significant, which hypothesis was accepted
••••• if qualitative data was gathered, it is likely to be presented as categories or themes
43
describe the discussion section of a scientific report
✰ large piece of writing where the researcher summarises and interprets the findings verbally, and their implications

includes:
- relationship to the previous research in the introduction
- limitations of the research - consider methodology and suggestions for improvement
- wider implications of the research - real-world applications and the contribution of the research to current theories
- suggestions for future research
44
describe the referencing section of a scientific report
the full details of any source material mentioned in the report, are cited
45
describe how to do a book reference
surname, first initial (year published), title of book (italics), place of publication: publisher

e.g. Copland, S (1994), 𝘛𝘩𝘦 𝘤𝘩𝘳𝘰𝘯𝘪𝘤𝘭𝘦𝘴 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘴𝘶𝘴, California: Puffin Books
46
how to write a journal reference
author, date, article title, journal name (italics), volume (issue), page numbers

e.g. Copland, S (1994) Effects of being sus on your ball knowledge, 𝘛𝘩𝘦 𝘜𝘭𝘵𝘪𝘮𝘢𝘵𝘦 𝘉𝘢𝘭𝘭 𝘒𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘵𝘦𝘴𝘵, 11 (12), 231-237
47
brief description of an appendix (not on illuminate scientific report, but also in there)
- contains any raw data, questionnaires, debriefs, consent forms, calculations
- evidence that doesn't fit in the main body of the report
48
outline what's in a consent form
* aim
* what they will do, and for how long
* right to withdraw and confidentiality
* ask for questions
* place to sign & add date
49
outline what's in a debrief
* aims
* discuss what went on in all conditions, and any deception
* findings
* right to withdraw
* remind of confidentiality
* where they can find more info
* any questions?
50
outline what's in 'instructions'
• step by step of everything P has to do
51
define validity
whether the observed effect in a study is genuinely due to the manipulation of the IV (i.e. measuring what it set out to measure) and whether the findings can be accurately generalised beyond the research setting (e.g. to other situations and historical contexts)
52
all different types of validity
- internal
- external
- ecological
- concurrent
- face
- temporal
53
define concurrent validity
extent to which findings have a correlation with the results from well-recognised studies with established validity
54
define temporal validity
extent to which findings can be generalised to other historical contexts/eras
55
define ecological validity
extent to which findings can be generalised to real life, outside of the research setting
56
define face validity
extent to which, on the surface, a study looks like it measures what it set out to measure
57
define internal validity
extent to which a study measures what it set out to measure - is the observed effect on the DV due to manipulation of the IV?
58
define external validity
the extent to which the findings reflect the real world, in terms of the population (population validity), the environment (ecological validity) and the time era (temporal validity)
59
how to improve validity in - questionnaires - interviews - experiments - observations
• questionnaires
- incorporate filler/redundant questions to create a 'lie scale' (accounts for social desirability bias)
- anonymity
- remove ambiguous questions
• interviews and case studies
- a structured interview reduces investigator effects, but reduces rapport and so answers may be less accurate
- triangulate data
- gain respondent validity by checking you understood the P correctly, and use quotes in the findings (increases interpretive validity)
• experiments
- control group
- pilot study to expose extraneous variables
- change experimental design to reduce order effects or the effect of participant variables
- standardise the procedure
- counterbalancing, double blind, randomisation
• observations
- familiarise observers with the behavioural categories so nothing is missed
- operationalise the behavioural categories, so it is clear what is being looked for
- use covert or non-participant observation
60
define a pilot study
a small-scale trial run of the actual study, completed before the real full-scale research is carried out
61
why use pilot studies
- can identify extraneous variables, which can then be controlled in the real study
- can help improve reliability (test-retest)
- can modify any flaws with the procedure or design (reduces the cost of mistakes in the large-scale study)
- allows training of observers
- can adapt or remove ambiguous or confusing questions in a questionnaire or interview
- can identify areas where further randomisation, counterbalancing, standardisation etc. can be used, to limit any observed order effects, bias, investigator effects or demand characteristics
62
define peer review
assessment of research, done by other psychologists in a similar field, who provide an unbiased opinion of a study to ensure it is high enough quality for publication
63
describe the aims of peer review
- allocate research funding, as people (and funding organisations) may award funding for a research idea they support
- ensure only high-quality, useful studies are published
- suggest amendments, improvements or withdrawal before publication
64
process of peer review
- research is sent to an anonymous peer, who objectively reviews all aspects of the written investigation
- they look for:
• clear and professional methods & design
• validity
• originality (not copied) and the significance of the research in that field of psychology
• results - the statistics chosen and the conclusions drawn
65
weaknesses of peer review
• burying ground-breaking research
- may slow down the rate of change
- if research contradicts the paradigm or mainstream research, it may be buried or resisted
• publication bias
- editor preferences may give a false view of current psychology
- some publishers only want to publish positive news or headline-grabbing research to boost the popularity of their journal (valuable research may be ignored)
• anonymity
- the peers reviewing stay unidentified
- researchers competing for funding may be over-critical (some publishers now reveal who reviewed the work afterwards, to combat this)
- reviewers may also resist findings that challenge their previous research
66
define reliability
how consistent a study's data is, and the extent to which it would produce similar results if the study was repeated
67
two ways of assessing reliability
• inter-observer reliability (inter-rater for forms like content analysis) • test-retest reliability
68
define inter-observer reliability
the extent to which there is an agreement between two or more observers observing the same behaviour, using the same behavioural categories
69
define test-retest reliability
the extent to which there is a correlation between the scores of the same P’s on a test or questionnaire, completed on different occasions
70
describe how to carry out inter-observer reliability
- complete the observation with two or more observers watching the same behaviour and using the same behavioural categories
- compare the results of the different observers using a Spearman's rho test (+0.8 or above for a strong correlation) - see the sketch below
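
A minimal Python sketch of that comparison, assuming scipy is available (the tallies below are made up):

```python
from scipy.stats import spearmanr

observer_1 = [12, 5, 9, 3, 7, 11]   # tallies per behavioural category, observer 1
observer_2 = [11, 6, 9, 2, 8, 12]   # tallies for the same categories, observer 2

rho, p = spearmanr(observer_1, observer_2)
# +0.8 or above is taken here as a strong enough correlation to count as reliable
print(round(rho, 2), "reliable" if rho >= 0.8 else "not reliable")
```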
71
describe how to carry out test-retest reliability
- administer the same test or questionnaire to the same P's on different occasions
- not too soon - prevents recall of answers
- not too long - prevents the views or ability being tested from changing
- use a correlation to compare the results (+0.8 or above for a strong correlation)
72
how to improve reliability in - questionnaires - interviews - experiments - observations
• questionnaires
- closed questions
- clear, unambiguous questions
• interviews and case studies
- same researcher each time
- limit leading or ambiguous questions
- structured interviews
• experiments
- standardised procedure and instructions
• observations
- operationalise behavioural categories and familiarise/train observers with them
- use two or more observers and compare results
73
two ways of assessing validity
• an 'eyeball' test of the measure (or passing it to an expert) to assess face validity
AND
• comparing results with a well-recognised test of established validity to produce a correlation coefficient, measuring concurrent validity (close agreement if +0.8 or above)
74
define imposed etic
when we assume a technique that works in one cultural context will work in another
75
define a meta analysis
a combination of the findings from several studies on the same topic - each study's findings are weighted by its sample size (see the sketch below)
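
A minimal Python sketch of weighting by sample size; each (effect, n) pair is a hypothetical study's effect size and sample size:

```python
studies = [(0.40, 30), (0.55, 120), (0.20, 60)]  # (effect size, sample size) per study

# Weighted mean effect: larger studies pull the overall estimate more
weighted_effect = sum(effect * n for effect, n in studies) / sum(n for _, n in studies)
print(round(weighted_effect, 2))
```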
76
what is experimental method
aka types of experiment:
- lab
- field
- natural
- quasi
77
define a quasi experiment + example
IV is a naturally occurring difference that already exists between people (e.g. a 𝗯𝗶𝗼𝗹𝗼𝗴𝗶𝗰𝗮𝗹 difference)
- not manipulated or changed by the researcher
- measures the effect of the naturally occurring IV on the DV
- can be in a field or lab setting

e.g. Bahrick studied the duration of LTM of those within 15 years of graduation and those within 50 years of graduation
78
define a natural experiment + example
IV is a naturally occurring event, not manipulated - someone or something other than the researcher caused the IV to vary
- measures the effect of the naturally occurring IV on the DV
- can be in a field or lab setting

e.g. Rutter's Romanian orphans study
79
define a field experiment + example
carried out in a natural, everyday setting (the usual environment the P's are in)
- IV manipulation is under less control

e.g. Piliavin studied how people react differently when they see someone collapse on the train, due to drinking or due to injury
80
define a lab experiment + example
- carried out under controlled conditions
- researcher manipulates the IV to see the effect on the DV

e.g. Milgram
81
strengths of lab
- control of variables
--> easier to replicate
--> can establish cause and effect between IV and DV, as variables are precisely manipulated (high internal validity)
--> more accurate results
82
weaknesses of lab
- tasks may be artificial or trivial - low mundane realism
- P's aware they are being studied - demand characteristics (low internal validity)
- artificial environment - P's may behave differently, so findings are hard to generalise to the real world (low external validity)
83
strengths of field
- higher mundane realism in a natural environment
- P's unaware they are being studied (no DC)
- higher external validity
84
weaknesses of field
- ethical issues - P's can't consent, and their privacy is invaded
- less control over variables
- less valid, harder to replicate and harder to establish cause and effect between IV and DV
85
strengths of natural
- allows research that would not otherwise be possible, for practical or ethical reasons
- high external validity, as real-world scenarios and problems are studied
86
drawbacks of natural experiment
- IV isn't deliberately changed, so can't say that the IV caused the observed change in the DV (can't claim cause and effect)
- such events rarely occur, so findings are hard to generalise
- P's can't be randomly allocated to conditions —> confounding variables may be present
87
strengths of quasi
completed in a field or lab setting
- if in the field: P's unaware they are being studied (no DC), mundane realism
- if in the lab: control over variables, replication
88
drawback of quasi
- P's can't be randomly allocated to conditions —> confounding variables may be present (e.g. all those who have been in a car crash may have higher trauma levels than the control group)
- IV isn't deliberately changed, so can't say that the IV caused the observed change in the DV (can't claim cause and effect)
89
define mundane realism
the extent to which the study task mirrors everyday life, so results are representative of everyday behaviour
90
define experimental designs
different ways P's can be arranged across the experimental conditions
- repeated measures (RM)
- independent groups (IG)
- matched pairs (MP)
91
define repeated measures
one group of P's experience both conditions of the experiment - mean of condition A compared to mean of condition B
92
define independent groups
- two separate groups experience two different conditions of the experiment - performance of each group is then compared
93
define matched pairs
- there is a separate group of P's for each condition of the IV
- but each P is matched to one other P, based on shared characteristics relevant to the study
- e.g. P's complete IQ tests prior to the actual study, and the no.1 score is matched with the no.2 score
- each P does one condition, and their scores are compared directly against their partner's
94
strengths and weaknesses of repeated measures
pros
• no effect of individual (participant) variables
• more economical - only need one group of P's (less time/£ on recruitment)

cons
• order effects (boredom, fatigue, practice); also the 1st condition may affect the 2nd condition (e.g. effects of coffee in C1 may continue into C2 where they have water)
• P's may realise what the study is aiming to find, so DC may be present
95
strengths and weaknesses of independent groups
pros
• reduced order effects
• less chance of realising the aim of the study - fewer DC

cons
• individual (participant) variables
• less economical - twice as many P's needed
96
strengths and weaknesses of matched pairs
pros
• accounts for individual (participant) variables
• reduced order effects
• P's unlikely to realise the aim of the study

cons
• more expensive and time consuming to find suitable matched P's; may also need a pre-test
• although matched, some individual variables will still be present
97
define standardisation + what does it remove
- keeping the procedures in a research study the same, so all participants are treated the same (and have the same experience)
• removes experimenter bias
• also makes the study replicable and easy to repeat accurately
98
define counterbalancing
where half of the P's do the first condition followed by the second, and the other half do the second condition followed by the first - controls for order effects (see the sketch below)
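
A minimal Python sketch of assigning counterbalanced orders (the participant labels and condition names are made up):

```python
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

# Half do condition A then B, the other half B then A, so order effects cancel out
half = len(participants) // 2
orders = {p: ("A", "B") for p in participants[:half]}
orders.update({p: ("B", "A") for p in participants[half:]})

for p, order in orders.items():
    print(p, "->", order[0], "then", order[1])
```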
99
define random allocation
each participant has an equal chance of being in each group/condition - control for participant variables
100
define participant variables
individual characteristics that may influence how a participant behaves
101
define randomisation
- use of chance wherever possible to reduce bias or investigator effects (conscious or unconscious)
102
what variables do double blind and single blind procedures control
double blind: demand characteristics and experimenter bias single blind: demand characteristics
103
define random sample
- all members of the target population have an equal chance of being selected

METHOD
- assign every name in the sampling frame a number
- randomly generate the required quantity of numbers (e.g. by computer)
- the P's whose numbers are generated make up the sample (see the sketch below)
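
A minimal Python sketch of that selection step, with a made-up sampling frame:

```python
import random

# Hypothetical numbered sampling frame for the target population
sampling_frame = ["P" + str(i) for i in range(1, 101)]

# Every member has an equal chance of being selected
sample = random.sample(sampling_frame, 10)
print(sample)
```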
104
strengths and weaknesses of random sample
pros
• no bias, as everyone has an equal chance (confounding and extraneous variables are equally divided across groups)

cons
• may still be unrepresentative, as it is random
• hard to obtain a complete list of the target population
105
define systematic sample
every nth member of the population is selected

METHOD
- organise the sampling frame (e.g. alphabetically)
- begin from a randomly generated starting point and select every nth member after it, until the sample is complete
106
strengths and weaknesses of systematic sample
pros
• unbiased and objective - once n is selected, the investigator has no influence

cons
• may still be unrepresentative
• time consuming and unrealistic to obtain a full sampling frame
107
define stratified sample
the composition of the sample is weighted to reflect the proportions of people in certain subgroups (strata) in the target population (e.g. race, religion, which football team they support)

METHOD
- identify the strata in the population
- work out the % of the population each stratum contains
- use the same % representation in the sample, selecting randomly within each stratum (see the sketch below)
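
A minimal Python sketch of working out how many P's to draw from each stratum (the strata and figures are made up):

```python
population_strata = {"stratum A": 500, "stratum B": 300, "stratum C": 200}
sample_size = 50

total = sum(population_strata.values())
for stratum, count in population_strata.items():
    # Sample mirrors each stratum's share of the population
    n_from_stratum = round(sample_size * count / total)
    print(stratum, n_from_stratum)  # then select randomly within each stratum
```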
108
strengths and weaknesses of stratified sample
pros
• highly representative of the target population - can generalise findings
• no bias

cons
• very time consuming and costly
• hard to determine which variables to split the population up by
109
define opportunity sampling
selecting the first people who are willing and able to take part in the study - whoever is available and around at the time the study is carried out
110
strengths and weaknesses of opportunity sample
pros
• convenient - saves time and money (no full sampling frame needed)
• more likely to be people you know, so easier to conduct the study

cons
• researcher may only approach a certain type of person (only 'approachable' people) - researcher bias
• unrepresentative, as P's come from one specific area
111
define volunteer sampling
the researcher advertises the study (or even asks people to raise their hand) and P's select themselves to take part
112
strengths and weaknesses of volunteer sample
pros
• minimum input from the researcher - P's come to you
• all P's who put themselves forward will be willing to participate (and engaged)

cons
• attracts a certain personality type - those who want to help or please the investigator
• cost of advertising
• risk that not enough people are willing to take part
113
weakness of all sampling methods
- selected P's may refuse to take part, so it ends up more like an opportunity sample
114
define population + define sample
- population: the group of people the researcher is interested in studying
- sample: a small subset selected from the target population to take part in the study, through a particular sampling method (assumed to be representative of the target population)
115
define generalisation as an implication of sampling
- if sample is representative, the findings from the study can be applied to the wider target population
116
define bias as an implication of sampling
when certain subgroups of a target population are under or over represented within the sample
117
list observational techniques
- naturalistic / controlled
- covert / overt
- participant / non-participant
- structured / unstructured
118
define naturalistic and controlled observation
• naturalistic - completed in a field setting where the target behaviours would normally occur; the investigator doesn't interfere with the setting or variables
• controlled - completed in a lab setting under controlled conditions; the investigator controls extraneous and confounding variables, and manipulates variables to observe the effect
119
define covert and overt observation
covert ("covert operation" - under cover)
- participants are NOT aware they are being observed
- the behaviour must be public and happening anyway (to be ethical)

overt
- participants ARE aware they are being observed
120
define participant and non participant observation
participant
- the observer involves themselves in the group of people they are observing
- sometimes only possible if involved (e.g. to see how factory workers are treated, join them)

non-participant
- the observer remains separate from the group they are observing and doesn't interfere
121
strengths and weaknesses of overt and covert observations
OVERT
pros
• can obtain informed consent
cons
• more likely to produce DC
• susceptible to the Hawthorne effect (P's act differently as they know they're being watched)

COVERT
pros
• DC unlikely - increases internal validity
• less susceptible to the Hawthorne effect
cons
• can't obtain informed consent, so people are being observed without their knowledge
122
strengths and weaknesses of naturalistic and controlled observation
NATURALISTIC
pros
• higher mundane realism, external validity
• P's more likely to act naturally
cons
• hard to judge patterns of behaviour, as extraneous and confounding variables can't be controlled
• harder to replicate

CONTROLLED
pros
• easier to replicate
• more control over variables, so a higher chance of establishing cause and effect
cons
• DC
• artificial environment - less mundane realism
• Hawthorne effect
123
strengths and weaknesses of participant and non participant observation
PARTICIPANT
pros
• may be necessary (e.g. to see how factory workers are treated, join them)
• can understand the reasons behind behaviour, as the observer gains an idea of the emotions involved
• can build rapport with P's and gain more insight and detail
cons
• observer may "go native" (become too invested in the group and lose track of the aims)
• may miss important observations while involved
• may lose objectivity if they adopt the "local lifestyle" (identify too strongly with the P's)

NON-PARTICIPANT
pros
• may be necessary when studying certain social groups (e.g. a 50-year-old man can't blend in and join a group of 15-year-old boys)
• no risk of the observer going native
• more objective
cons
• can't gain an understanding of why people are behaving as they are, and may miss further insight
124
strengths and weakness of using observations, in general
pros
- give real insight into behaviour; people don't always act how they say they do

cons
- observer bias - depends on how the observer interprets the situation and behaviour
- can't demonstrate cause and effect, as variables aren't closely manipulated and there may be confounding variables
- can't understand why people are behaving as they are
125
define structured and unstructured observation
structured
- uses pre-determined behavioural categories to record the frequency with which these target behaviours occur
—> used in larger studies where there is too much going on to record everything

unstructured
- observe and record all relevant behaviour, with no standardised checklist of behavioural categories
—> used in small-scale observations with few P's
126
strengths and weaknesses of structured and unstructured observation
STRUCTURED
pros
• produces quantitative data - easier to analyse and draw conclusions from
• inter-observer reliability increased - all observers are looking for the same behaviours
cons
• less detailed insight
• may miss important behaviours

UNSTRUCTURED
pros
• rich in data
• helps understand the reasons behind behaviour
cons
• hard to replicate, as the observer's interpretations matter
• so much information - time consuming to analyse and draw conclusions from
• observer bias - e.g. may only record behaviours that stand out
127
define behavioural categories
standardised checklist of operationalised target behaviours, that have been broken up to be more measurable and observable - only in structured observations
128
strength and weakness of behavioural categories
pros
• replicable
• observers know what to look out for
• allow for inter-observer reliability
cons
• some categories may be wasted or left empty
• must be clear, operationalised and non-overlapping, otherwise open to interpretation
• some behaviours may be missed if there is no category for them
129
sampling methods of observations
* continuous recording - making a note of all target behaviours that occur (unstructured)
* event sampling - tallying the number of times a particular behaviour occurs
* time sampling - recording all target behaviour that occurs at set intervals, for a set amount of time
130
sampling methods of observation
* continuous recording - record all instances of target behaviour (unstructured) * event sampling - tallying the number of times particular target behaviours occur * time sampling - record behaviour that occurs within a pre-established time frame, at set intervals
131
strength and weakness of event sampling
pros
• easier to complete
• focuses on chosen behaviours, so nothing relevant is missed
cons
• if lots happens at the same time, it may be hard to keep up
• may have some wasted categories
132
strength and weakness of time sampling
pros
• time efficient
cons
• may miss behaviour that occurs in between the sampling intervals
133
define self report techniques
any method in which the P reports their own thoughts, feelings or experiences (e.g. questionnaires and interviews)
134
define interview
a live conversation where an interviewer asks the interviewee questions to assess their individual thoughts and experiences
135
describe 3 types of interview
• structured - standardised, pre-determined questions asked in a fixed order
• semi-structured - a list of questions prepared in advance, but follow-up questions can be asked
• unstructured - no set questions in any fixed order; only a general aim to discuss a certain topic; the interviewee is encouraged by the interviewer to elaborate on answers - like a conversation
136
strengths and weaknesses of structured interview
+ easier to analyse
+ replicable
+ interviewer requires less skill and training
- answers may be restricted by the questions
- increased social desirability bias, as P's don't have to justify their answers
137
strengths and weaknesses of unstructured interview
+ more detailed insight; can understand the reasons behind answers
+ responses tend to be more honest, as P's have to be able to justify them
- hard to analyse and draw general conclusions from, as there are lots of different questions and answers
- different questions may be interpreted differently by different P's
- interviewer bias may affect which questions are asked
138
define questionnaire
a set of written questions that a respondent answers to assess their individual thoughts, experiences and behaviour
139
common issues with questionnaires .......... what do they lead to
* overuse of jargon - using specialist topic vocabulary and assuming the respondent knows more about the topic than they actually do
* emotive language - the author of the question conveys their own feelings through the emotive words they use
* leading questions - the phrasing of the question leads the respondent to answer in a particular way
* double-barrelled questions - contain two questions in one
* double negatives - two forms of negative in the same question

.... all decrease clarity and increase confusion and misinterpretation
140
describe open questions closed questions
open questions are Qs that allow the respondent to answer however they wish, with no set answers to choose from

closed questions are Qs with a fixed number of responses
141
strengths of questionnaires weaknesses of questionnaires
+ cost effective
+ wide geographical reach
+ convenient (researcher doesn't need to be present)
+ can generate large volumes of data easily
+ easy to analyse and compare data
- may be subject to social desirability bias
- response bias (e.g. acquiescence bias, or responding without fully reading the question)
142
define ethical issues in psychological research
• the problems that arise from how P's in the study are treated
• these exist when there is conflict between the aim of the experiment to produce valid research data, and the rights or safety of P's
• a code of ethics is produced by the British Psychological Society (BPS)
143
describe all the ethical issues and acronym
DRIPC
Deception - deliberately withholding information from or misleading P's at any time
Right to withdraw - the right of P's to withdraw themselves and their data from the study at any time
Informed consent - making sure P's know the aims of the research, the procedure, their rights and how their data will be used
Protection from harm - the idea that P's emotional and physical health is the top priority and should be protected
Confidentiality - the right of P's to have their data kept from the public, or shared only anonymously to protect their identity
144
how to deal with informed consent ....if impractical to get consent form:
• consent form - a form that a P signs to confirm they know the aims of the experiment and what they will have to do

if impractical to get a consent form:
• retrospective consent - ask P's for consent after participation, during the debrief
• prior general consent - P's give their permission to take part in a number of studies, including one involving deception
• presumptive consent - ask people from a similar group to the P's if they would consent to the study
145
how to deal with deception and protection from harm
* debrief - after the study, explain the true aims of the experiment and any information that was withheld
* remind P's they can withdraw themselves and their data at any time
* provide support through counselling or therapy if P's need it
* reassure P's that their behaviour was normal/typical
146
how to deal with confidentiality
• usually no personal data is recorded, to maintain anonymity —> P's are often described using initials or numbers
• during the briefing and debrief, remind P's that their data is private and won't be shared
147
define quantitative data
numerical data that can be analysed statistically
- easy to analyse
- lacks detail
148
define qualitative data
non-numerical data expressed in words
- hard to analyse —> hard to identify patterns
- has to be interpreted —> bias
- rich in detail —> captures true feelings —> greater external validity
149
define primary data + examples + 1 strength
data gathered first-hand by the researcher, specifically for the research project
- gathered through observations, interviews, questionnaires
• strength: can be designed to target exactly the data needed for the investigation
150
define secondary data + examples + 1 weakness
data collected by someone other than the researcher, which already exists before the research project
- e.g. books, journals, websites, government statistics
• weakness: may be outdated
151
define a variable + independent + dependent
any element of an investigation that can change or vary
- independent variable (IV): the variable that is manipulated, so that its effect on the DV can be measured
- dependent variable (DV): the variable that is measured by the researcher
152
define demand characteristics
- a type of extraneous variable
- P's think they have guessed the aims of the research and therefore act differently
- may either help or hinder the experimenter in finding what they want
153
define investigator effects + define participant reactivity
* investigator effects: the unwanted influence of the investigator's actions on the behaviour of P's
* participant reactivity: changes in the behaviour of P's as they react to being part of the study environment
154
extraneous/confounding variables that result from participant reactivity
* demand characteristics - P's act differently as they think they know the aims of the study
* hawthorne effect - P's act differently as they know someone is watching
* social desirability bias - P's alter their behaviour to be seen as more socially acceptable
155
extraneous/confounding variables that result from investigator effects
• experimenter bias - the experimenter affects the results through interpretation, body language, facial expressions or gender bias (e.g. male researchers responding more favourably to female P's)
• interviewer bias - ways in which the interviewer influences the responses of the P —> nodding, leading questions, interpretations
• greenspoon effect - the interviewer affects the way the P responds in an interview by making affirmative noises
156
define situational variables + examples
aspects of the environment and situation a P is in that might affect their behaviour, e.g. light, temperature, noise
157
two examples of how psychological research has affected the economy
* Bowlby's theory of monotropy, and further research into the role of the father
* development of treatments for mental disorders
158
describe the financial implications on the economy of Bowlby's theory of monotropy, and further research into role of father
- Bowlby suggested only mothers can form the monotropic attachment bond with their baby
- this pressured mothers to stay at home (due to the law of accumulated separation and the law of continuity), and meant the father, who may be earning less, had to work
- more recent research into the role of the father (e.g. Frodi found fathers and mothers show the same physiological responses to clips of babies crying)
--> has allowed more flexible working arrangements (fathers can stay at home, and mothers can work if they earn more)
--> because it shows both parents are able to provide the necessary emotional support
- this can maximise household income and contribute positively to the economy
159
describe the financial implications on the economy on development of treatments for mental disorders
- creates a healthier workforce, contributing to labour
- workers can manage their symptoms and return to work efficiently
- the cost of mass-producing drugs is small compared with the £15 million cost of absence from work for the economy
160
ways to get consent in deception study
- presumptive consent ----> ask a similar group of people how they would feel about taking part; if they agree, it is assumed the real P's would too
- prior general consent ----> ask P's to give permission to take part in a number of different studies, including one that will involve deception