RESEARCH METHODS (1/3) Flashcards

1
Q

internal validity

A

whether results are due to manipulation of IV and not another factor
control over extraneous variables

2
Q

investigator effects

A

experimenter unconsciously conveys to participants how they should behave
experimenter bias

3
Q

demand characteristics

A

cues which convey to participant the purpose of the study
participants guess the aims of the research and adjust behaviour accordingly
changes results if participants change behaviour to conform to expectations

4
Q

cause and effect

A

change in IV is causing a change in DV

5
Q

external validity

A

can results be generalised?
is task realistic?
does it have mundane realism?

6
Q

ecological validity

A

participants should exhibit natural behaviour as if they were in a real-life setting
environment is important - natural or artificial
refers to whether results can be generalised to other real-life settings

7
Q

population validity

A

refers to whether we can extrapolate findings of research to population as a whole
sex, socioeconomic status, occupation, religious belief, background, age, culture

8
Q

temporal validity

A

whether findings and conclusions are relevant today
attitudes can change over time e.g. homosexuality was once defined as a mental illness
political context at time of research can impact findings

9
Q

participant variables

A

characteristics of individual that may influence outcome of a study (age, intelligence, personality type, gender, socio-economic status)

10
Q

situational variables

A

characteristics of environment that might influence outcome of a study (distractions, atmospherics)

11
Q

researcher variables

A

variation in characteristics of researcher conducting experiment (gender, mood, sociability)

12
Q

research methods

A

strategies, processes and techniques
collect data or evidence
uncover new information, better understand

13
Q

variables

A

anything that can vary or be manipulated
independent = manipulated
dependent = measured

14
Q

operationalisation

A

express variables in a form that can be measured
contains units
variables must be operationalised

15
Q

control of variables

A

only achieved when all variables other than the IV are held constant
control group provides a baseline measure

16
Q

extraneous variables

A

may affect results and dependent variable if not controlled
participant, situational, experimenter bias

17
Q

single blind procedure

A

participants don't know whether they are in the experimental or control group

18
Q

double blind procedure

A

neither participants nor researcher knows who is in the experimental or control group, to avoid unconscious bias

19
Q

confounding variables

A

any unmeasured variable that influences the dependent variable
if results are confounded, it is hard to draw causal conclusions

20
Q

reliability

A

consistency

21
Q

validity

A

accuracy

22
Q

lab experiments

A

in a lab
IV directly manipulated
effect on DV measured
EVs controlled as much as possible
standardised procedure
randomly allocate participants

23
Q

lab experiment strengths

A

isolation of IV on DV - cause and effect established
strict controls and procedures - easily replicated, check reliability
specialist equipment in research facility

24
Q

lab experiment weaknesses

A

artificial - not natural behaviour, reduced ecological validity
likely demand characteristics - adjust behaviour
can’t use when inappropriate to manipulate IV (impractical/unethical)

25
field experiments
same as lab but in real-life setting
26
field experiments strengths
high ecological validity - generalise findings to other settings
demand characteristics reduced - participants unaware of experiment, act more naturally
27
field experiments weaknesses
control reduced - more EVs - cause and effect not as easily established, reduces validity
participants unaware of taking part - could become distressed, difficult to inform, unethical
population validity reduced - no control over participants, may be biased
28
quasi experiments
similar to lab (similar strengths and weaknesses)
high degree of control over EVs
unable to freely manipulate IV
unable to randomly allocate participants (bias + confound results)
29
natural experiments
no manipulation or control of any variable
naturally occurring variables
practical and ethical reasons - only method
30
natural experiments strengths
can investigate situations that would be impractical or unethical with any other method
ecological validity is high - study 'real' problems
demand characteristics reduced - participants unaware, act naturally
31
natural experiments weaknesses
no random allocation of participants (bias + confound results)
no control over environment - reduces validity
ethical guidelines - informed consent, confidentiality, right to withdraw may be breached
natural events are rare - impossible to replicate for reliability
32
aims
identifies purpose of investigation
straightforward expression of what the researcher is trying to find out
33
hypotheses
an operationalised hypothesis is a precise, testable statement about the expected outcome of a piece of research, i.e. a prediction about a difference
researcher would write a directional / non-directional hypothesis and a null hypothesis
34
directional hypotheses
when researcher has a good idea about what will happen
predicts a specific outcome about the direction of the difference
e.g. participants will give more electric shocks to a stranger after playing an anti-social computer game than after playing a non-aggressive game
35
non-directional hypotheses
when researcher is less sure about what is going to happen
predicts that there will be a difference, but not which direction it will be in
e.g. there will be a significant difference in the number of electric shocks given to a stranger after playing an anti-social computer game and after playing a non-aggressive game
36
null hypotheses
when researcher is confident that the IV will have no effect on the DV
e.g. there will be no difference in the number of electric shocks given to a stranger after playing an anti-social computer game and after playing a non-aggressive game
37
random sampling
every person in target population has an equal chance of being selected
obtain a list of the target population + computerised random generator used to select required number of participants
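a minimal sketch (not part of the original cards) of how a computerised random generator could pick a random sample from a list; the population list and sample size are made up for illustration

```python
import random

# hypothetical list representing everyone in the target population
target_population = ["P" + str(i) for i in range(1, 101)]  # P1 ... P100
sample_size = 10

# every member has an equal chance of being selected
random_sample = random.sample(target_population, sample_size)
print(random_sample)
```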
38
target population
group of people who share a given set of characteristics about whom the researcher wishes to draw a conclusion
researcher obtains just a sample and intends to generalise findings from sample to target population - sample should be representative of entire population
39
random sampling strengths
sample likely to be representative
researcher has no control over who is selected - reduces chance of biased sample
improves population validity
40
random sampling weaknesses
can be difficult and time-consuming - random generator and list of participants required, not time efficient unless small sample
does not guarantee a representative sample - some groups may still be overrepresented or underrepresented, may be less representative than stratified sampling
41
opportunity sampling
selects anyone readily available and willing to take part
asks people who are most convenient
42
opportunity sampling strengths
sample easy to obtain and cost effective - uses most available people around them
sample does not need to be identified prior to research
43
opportunity sampling weaknesses
sample unlikely to be representative - uses most convenient people around them, participants likely to share similar characteristics and backgrounds, reducing population validity
ethical issues - researcher uses the first people they see and asks them to take part; students may feel pressure to take part if lecturers ask them, creating problems around consent and right to withdraw
44
volunteer sampling
participants put themselves forward for inclusion - self-select
researcher places advertisement in magazine/newspaper, radio, email, internet, notice board asking for volunteers
places questionnaires and asks people to return answers
45
volunteer sampling strengths
may be the only way to locate a particularly niche group of people who volunteer themselves to take part, e.g. people with rare medical conditions or people who have suffered child abuse
can advertise for a group otherwise difficult to identify
can save time in gathering a sample where niche groups are required
46
volunteer sampling weaknesses
may lack generalisation - volunteers likely to be co-operative and motivated (want to spend more time in the experiment, likely to give honest, genuine results) and have shared characteristics (may be people interested in psychology who know what to look for - demand characteristics); limits population validity as it fails to reflect the wide variety of members of the target population
may lack generalisation - relies upon people seeing the advertisement to put themselves forward, so volunteers share similar characteristics (same gym, app, magazine); limits population validity as it reduces the size and variability of the sample (similar backgrounds, readers of the same newspaper)
47
systematic sampling
every nth member of target population selected
sampling frame produced - list of people in target population organised in some way
sampling system nominated or determined randomly to reduce bias
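a minimal sketch (illustrative only, names invented) of selecting every nth member from a sampling frame, with a randomly chosen starting point to reduce bias

```python
import random

sampling_frame = ["P" + str(i) for i in range(1, 101)]  # ordered list of target population
n = 10                                                  # select every nth member
start = random.randrange(n)                             # random starting point reduces bias

systematic_sample = sampling_frame[start::n]
print(systematic_sample)
```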
48
systematic sampling strengths
avoids researcher bias - once system has been established, researcher has no influence over who is chosen
increases validity and should lead to more representative sample
49
systematic sampling weaknesses
does not guarantee a representative sample - even though randomised, some groups may still be over- or underrepresented, though less so than with other methods
time-consuming - sampling frame and list of target population have to be established before selection
50
stratified sampling
composition of sample reflects proportions of people in sub-groups / strata within target population
identifies different strata making up population
proportions calculated
participants selected through random sampling
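a minimal sketch (strata and proportions invented) of stratified sampling: each stratum contributes participants in proportion to its share of the population, chosen by random sampling

```python
import random

# hypothetical strata within the target population
strata = {
    "students": ["S" + str(i) for i in range(60)],   # 60% of population
    "employed": ["E" + str(i) for i in range(30)],   # 30%
    "retired":  ["R" + str(i) for i in range(10)],   # 10%
}
total = sum(len(members) for members in strata.values())
sample_size = 20

stratified_sample = []
for name, members in strata.items():
    # number selected from each stratum is proportional to its share of the population
    k = round(sample_size * len(members) / total)
    stratified_sample.extend(random.sample(members, k))

print(stratified_sample)
```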
51
stratified sampling strengths
avoids researcher bias - once subdivided into strata, random sampling method ensures all groups are represented and researcher has no influence over who is chosen
gives accurate reflection of target population, leading to higher population validity
52
stratified sampling weaknesses
time consuming - has to identify strata and proportions, then select randomly; requires knowing all participants and details of the sample
not completely representative - identified strata cannot reflect all possible sub-groups; most obvious strata likely to be considered but some less noticeable and more personal groups may be ignored
53
bias
when certain groups may be over or under represented within selected sample
54
generalisation
extent to which conclusions from particular investigation can be broadly applied to population
made possible if sample is representative
55
self-report techniques
questionnaires and interviews
gather info from large numbers of people
investigate attitudes or opinions on particular topic
qualitative or quantitative data
56
questionnaires
written format, less flexibility
no social interaction between researcher and participant
uses standardised procedure
pre-written questions
self-report data (asking people about feelings, attitudes or beliefs)
Likert scales
57
closed questions
gather quantitative data - easy to analyse
58
open questions
gather detailed qualitative data
59
questionnaires strengths
highly replicable - standardised procedure, easily redistributed to check findings for reliability
time and cost efficient - large sample reached quickly and easily, large amount of data gained and analysed + statistical analysis used
investigator effects / researcher bias reduced - researcher not present, cues less likely
60
questionnaires weaknesses
people may modify answers due to social desirability bias, reducing validity
sample biased towards more literate people - reduces validity and likely to be unrepresentative
researchers not always present, so participants cannot ask for help with unclear questions and may miss sections out - limited amount of info gathered
61
notes about questionnaires vs interviews
easy to repeat as researcher does not require specific training to distribute - data can be collected from large number of people - high in replicability
respondents may feel more able to reveal personal info (not face to face) - data more likely to be truthful and more valid
closed questions --> quantitative data --> easier to analyse and draw comparisons than open questions --> qualitative data --> difficult to analyse
only certain types of people do questionnaires (depending on where and how distributed) - may be sample bias, only people with similar characteristics may do them, decreasing representativeness
62
interviews
include social interaction
researchers require specific training
questions asked to participant and responses recorded or transcribed
gather self-report data
open and closed questions
63
structured interviews
fixed predetermined questions
large-scale interview-based surveys e.g. market research
64
semi-structured interviews
guidelines for questions to be asked
phrasing and timing left up to interviewer
questions may be open-ended
65
unstructured interviews
may contain a topic area
no fixed questions
researcher asks questions + further questions depending on answers given
interviewer helps participants and clarifies questions
66
interviews strengths
more appropriate for dealing with complex/sensitive issues - can gauge if participant is distressed or not, can stop research and offer additional support
researcher is present - interesting issues and misunderstandings can be followed up immediately - richer and more insightful data gathered, increasing validity
lots of rich qualitative data gathered (especially in unstructured interviews) compared to questionnaires, as there are fewer constraints in place
67
interviews weaknesses
more likely to elicit answers affected by social desirability as there is interaction
low inter-rater reliability between interviews (of same participant) as investigator effects are likely
extremely time consuming - preparation and conduct, spend lots of time with each participant, takes time to analyse and difficult to compare
should be conducted by trained psychologist - more costly
68
independent groups design
different participants placed in each group
two separate groups used to ensure results not influenced by order effects, to reduce chance of demand characteristics and when repeated measures cannot be used
69
independent groups design strengths
each participant takes part only once - only need one set of stimulus materials
order effects e.g. boredom, tiredness and learning are reduced because participants only experience one condition, increasing validity
reduces chance of demand characteristics - participants only take part once, so it is more difficult to identify differences between conditions and guess the aim; less likely to adapt behaviour, increasing internal validity
70
independent groups design weaknesses
different sets of participants compared - individual differences may confound results
more participants required as two groups are needed - more expensive with a larger sample
participant variables may affect findings as participants only take part in one condition - more variation between groups, less valid to draw meaningful conclusions
71
repeated measures design
same participants used in both conditions - each person takes part twice
used to reduce influence of individual differences
used where participants are difficult to obtain (fewer participants needed for large sample size)
introduces order effects - extraneous variables e.g. practice effects, fatigue, boredom
72
counterbalancing
order of conditions is mixed up
half of participants experience experimental condition and then control; other half do control first
doesn't eliminate order effects but means that they are equal across both conditions - negative effect reduced
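a minimal sketch (participant names invented) of counterbalancing in a repeated measures design: half the participants do the experimental condition first, the other half do the control condition first

```python
import random

participants = ["P" + str(i) for i in range(1, 13)]  # hypothetical sample of 12
random.shuffle(participants)

half = len(participants) // 2
orders = {}
for p in participants[:half]:
    orders[p] = ["experimental", "control"]   # condition A then B
for p in participants[half:]:
    orders[p] = ["control", "experimental"]   # condition B then A

for p, order in orders.items():
    print(p, order)
```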
73
repeated measures design strengths
results of each participant are compared across conditions - individual differences do not affect results
participant variables controlled - each person acts as their own control, special features of individuals will be cancelled out
fewer participants required as same sample used twice - design is economical
74
repeated measures design weaknesses
participants experience both conditions - order effects might confound results, affecting validity
at least two sets of stimulus materials required - can create confounding results associated with materials e.g. word lists differing in difficulty
increased chance of demand characteristics - may identify differences between conditions and adjust behaviour
75
matched pairs design
different participants used in each condition but matched on key variables to form pairs, to imitate repeated measures
used when important to control for individual differences but repeated measures cannot be used due to order effects and demand characteristics
match participants as closely as possible in terms of characteristics relevant to the study - form pairs
76
matched pairs design strengths
each participant only takes part once - only one set of stimulus materials needed, reducing chance of confounding results
order effects reduced - only experience one condition
participant variables reduced, though not totally - individual differences beyond matched characteristics may exist
77
matched pairs design weaknesses
matching process is difficult and time consuming - may be inaccurate, incomplete, invalid
participant variables never fully controlled
attrition may be an issue - loss of one participant means loss of two sets of data
78
Naturalistic observation
Studying spontaneous behaviour in natural surroundings
Record what they see
No intervention
Qualitative notes of human behaviour
Behavioural categories
79
Naturalistic observation strengths
High in ecological / external validity
Takes place in natural environments with natural tasks (mundane realism)
Behaviour likely to be natural (reduces demand characteristics and Hawthorne effect)
80
naturalistic observation weaknesses
ethical issues - participants may not be aware of observation in natural environment, issues with informed consent, confidentiality and debrief; participants should only be studied in environments where people know they are likely to be observed, limiting number of situations in which it can be used
low in reliability - natural environment, other factors not controlled, likely to confound results, lack of control
conducted on small scale - lacks representative sample (bias to age, gender, class, ethnicity), lacks generalisability
81
controlled observation
usually structured observation carried out in lab
standardised procedure - where, when, with whom, in what circumstances
behavioural categories
usually overt and non-participant
82
controlled observation strengths
high in reliability - controlled environment with standardised procedure and high levels of control, easily replicated
quick to conduct - many observations carried out (qualitative data), large sample obtained, findings representative and easily generalised
ethical issues reduced - participants debriefed and give informed consent, more likely to adhere to ethical guidelines and able to offer debrief
83
controlled observation weaknesses
low in ecological / external validity - takes place in unnatural environment, lacks mundane realism
behaviour unnatural, influenced by demand characteristics and Hawthorne effect
84
covert observation
undisclosed - participants don't know that they are being observed
must occur in public to be ethical - participants know they are visible to others
85
covert observation strengths
high external validity
participants not aware of observation - behaviour more natural, more valid
86
covert observation weaknesses
prone to ethical issues - not aware of observation, no informed consent, lack of protection from harm and privacy violated, may not have wanted to take part
practical difficulties - difficult to remain undetected, no recording equipment, crucial behaviours may be missed, reduces validity and accuracy of data
87
overt observation
participants are aware they are being observed
informed consent gathered
88
overt observation strengths
fewer ethical issues - informed consent gathered, participants agreed to take part, protection from harm
89
overt observation weaknesses
low external validity - participants aware of observation, behaviour likely to be unnatural, influenced by demand characteristics and Hawthorne effect
90
participant observation
researcher joins in and becomes part of the group they are studying to get a deeper insight
either covert: study carried out undercover, real identity and purpose concealed, false identity, pose as member of group
or overt: researcher reveals identity and purpose
91
participant observation weaknesses
practical difficulties - difficult to remain undercover, problematic to accurately note and record behaviour, reflections have to be written retrospectively, validity decreased
ethical issues - involves degree of deception, participants not aware researcher is studying behaviour, violates privacy
92
non-participant observation
observing participants from a distance without the researcher participating
93
non-participant observation strengths
fewer practical difficulties - behaviour recorded as it occurs, validity and reliability increased
94
inter-rater reliability
observation prone to bias if there is only one researcher
measure of consistency - different researchers compare results to check reliability
statistical measurement to determine how similar the data collected by different observers are
high reliability = strong positive correlation
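a minimal sketch (tallies made up, Python 3.10+ assumed) of checking inter-rater reliability as described above, by correlating two observers' tallies for the same behavioural categories

```python
from statistics import correlation  # Pearson's r, available in Python 3.10+

# hypothetical tallies from two observers watching the same session
observer_1 = [12, 7, 3, 9, 5]
observer_2 = [11, 8, 3, 10, 4]

r = correlation(observer_1, observer_2)
print(f"inter-rater reliability r = {r:.2f}")  # close to +1 = high consistency
```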
95
behavioural categories
list / tally of behaviours likely to occur during an observation - defines what will be recorded
quantitative
should be operationalised, observable, defined, unambiguous
count frequencies of behaviour seen and use totals to draw conclusions
should improve inter-rater reliability and intra-rater reliability (single observer's consistency) as it decreases subjectivity
96
event sampling
target behaviour established - researcher records every time it happens
useful when behaviour is infrequent and could be missed with time sampling
may miss other important events - limited in detail
doesn't explain why behaviour occurs - can't establish cause of behaviour
97
time sampling
researcher records behaviour in a fixed time frame e.g. every 60th second
reduces number of observations made
may be unrepresentative - risk missing other events
lots of behaviour to record - target behaviour not singled out
98
correlations
relationship between two co-variables
data analysed for relationship between two variables
indicates how accurately measurement of one variable can be used to predict the other
plot scatter graph - pattern of points indicates the correlation, summarised by the correlation coefficient
can't establish cause and effect - only a relationship
99
types of correlation
positive - both variables increase together
negative - one increases, other decreases
zero - no relationship
100
strength of correlation coefficient
ranges from -1 to +1 (perfect negative / perfect positive)
less than 0 = negative correlation
0.0 - 0.3 = weak
0.3 - 0.7 = moderate
0.7 - 1.0 = strong
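a minimal sketch (data made up, Python 3.10+ assumed) computing a correlation coefficient for two co-variables and classifying its strength using the bands above

```python
from statistics import correlation  # Pearson's r, available in Python 3.10+

hours_revised = [2, 4, 5, 7, 8, 10]
test_scores   = [35, 48, 55, 62, 70, 80]

r = correlation(hours_revised, test_scores)
strength = "weak" if abs(r) < 0.3 else "moderate" if abs(r) < 0.7 else "strong"
direction = "positive" if r > 0 else "negative" if r < 0 else "zero"
print(f"r = {r:.2f} ({strength} {direction} correlation)")
```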
101
correlations strengths
allows us to investigate otherwise unethical situations (e.g. where it would be unethical to manipulate sensitive variables such as child abuse, depression, illness) - just looking at relationship between co-variables
leads to new research - used as a starting point before committing to experimental studies
easy, time and cost-efficient - nothing needs to be set up, just pre-existing secondary data; researcher can readily access data without validity issues and practical considerations
controls for individual differences - both sets of data come from the same participants, natural control over participant effects
102
correlations weaknesses
do not infer causation - cannot establish cause and effect, only tell us whether a relationship exists; cannot tell us if one variable causes another - usefulness is limited
validity issues - another untested variable may impact the relationship (third variable problem), inaccurate conclusions are commonplace
validity issues in terms of data collection - methods may lack validity, often use self-report methods which lead to social desirability; flaws in data collection may invalidate the correlation
103
qualitative data
non-numerical language-based data
collected through interviews, open questions and content analysis
allows researchers to develop insight into the nature of subjective experiences, opinions and feelings
subjective, difficult to analyse, imprecise, rich in detail, low in reliability
used for attitudes, beliefs and opinions; collected in real life settings
open questions / interviews
104
quantitative data
numerical data that can be statistically analysed
gathered through experiments, observations, correlations and closed or rating questions from questionnaires
objective and easy to analyse
precise numerical data, limited detail
high in reliability (easy to repeat)
closed questions / questionnaires
105
qualitative / quantitative data scientific objectivity
quantitative is scientifically objective - numerical data can be interpreted using statistical analysis, which is based on principles of mathematics and allows researchers to objectively conclude whether statistically significant relationships or differences have been found; analysis is free from bias and interpretation, so high in objectivity
qualitative is highly subjective - involves non-numerical language-based data which cannot be easily compared or categorised, meaning analysis is open to bias and interpretation
106
qualitative / quantitative data replication
quantitative can be easily replicated - based on measured numerical values, such data requires minimal interpretation from researchers
consistent analysis by multiple researchers - highly replicable and reliable
107
qualitative / quantitative data depth of detail
qualitative is highly valid - based on non-numeric, detailed responses; in-depth and insightful, can provide unexpected responses; opportunity to capture rich, descriptive data about how people think and behave, which can lead to new insights
quantitative is less valid - based on quantifiable numeric data, which is narrow and lacks depth or detail; turning thoughts and feelings into numbers can be seen as superficial; when gathering quantitative data, respondents may be forced to select answers which do not reflect their real thoughts and feelings, leading to data which is superficial, lacks detail and therefore has lower validity
108
qualitative / quantitative data natural settings
qualitative is more valid - likely to have been gathered in more natural environments, e.g. a researcher carrying out a case study of the experience of mental illness would make use of a wide range of qualitative methods such as interviews and observations; increases likelihood of natural behaviour, more valid and credible
quantitative data is less valid - likely to have been gathered in artificial, controlled environments, which increases unnatural behaviour and demand characteristics; lacks validity and credibility
109
qualitative / quantitative data cost / time implications
quantitative is more time and cost effective - immediately produces numerical info from large sample sizes, easily compared and analysed, produces lots of data fairly quickly
qualitative is less time and cost effective - data has to be transformed before analysis can be carried out; transforming data into categories can be a lengthy and subjective process; methods are more difficult to run
110
content analysis
way of analysing qualitative data in a numerical way (qual --> quan)
analyses secondary source content e.g. adverts, films, diaries
categorise using top-down or bottom-up approach
111
top-down approach
pre-defined categories before research
112
bottom-up approach
allows categories to emerge from content
watch or read first to come up with categories
won't miss important themes - provides more detail
113
quantitative analysis
create coding system and tally each time a behavioural category occurs
categories should be pre-defined and clearly operationalised - less subjective, limits misinterpretation, clearer, increases accuracy and validity
statistical analysis then carried out - more scientific, reliable, valid; can look at significant differences or relationships
114
qualitative (thematic) analysis
familiarise with data
generate initial codes
search for initial emerging themes - lots of different codes to sort into themes
review themes - may collapse into each other or cross over
define and name themes
write up
115
quantitative analysis process
data collected
read / examine data to familiarise - if bottom-up, identify coding units
data analysed by applying coding units
tally each time a coding unit appears
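a minimal sketch (coding units and material invented) of the tallying step described above: each time a pre-defined coding unit appears in the material, its count goes up

```python
from collections import Counter

# hypothetical pre-defined, operationalised coding units
coding_units = ["aggressive", "helpful", "violent", "kind"]

# hypothetical secondary-source material to analyse
material = "the character was aggressive then kind then aggressive and violent"

tallies = Counter(word for word in material.split() if word in coding_units)
print(tallies)  # e.g. Counter({'aggressive': 2, 'kind': 1, 'violent': 1})
```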
116
content analysis strengths
highly reliable - easily replicated due to standardisation of coding units and pre-existing secondary data; same material can be coded more than once (intra-rater reliability) or by different researchers (inter-rater reliability) to check for consistency; BUT subjectivity may affect findings, researchers can define codes differently, decreasing consistency
likely to be highly ethical - material already in public domain (no privacy issues), does not involve direct use of participants; BUT researcher needs to ensure they have consent of stakeholders to analyse confidential records, which can be difficult
117
content analysis weaknesses
prone to subjective analysis - involves interpreting qualitative data from secondary sources alongside a coding system
affected by gender and cultural background of researcher - prone to researcher bias
118
case studies
focusing on one person / small group
gathers detailed data through a variety of techniques - triangulation (psychometric testing, interviews, observations), qual + quan
mostly longitudinal - over extended period of time
used when one person has gone through a unique situation which is uncommon and cannot be replicated
holistic view of human behaviour - look at everything about a person that can affect behaviour
preferred by psychodynamic and humanistic psychologists
119
triangulation
use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena
used to test validity through convergence of info from different sources
improves validity - more data gathered, gain holistic understanding of individual
120
case studies strengths
high internal validity - triangulation uses multiple techniques to gather lots of data (qual and quan) and produce rich, detailed data, each technique validating the others; rich data provides detailed insights and deeper analysis is possible, providing an accurate and exhaustive measure of aims
can stimulate new paths for research - detail collected on one case can lead to interesting findings that conflict with current theories e.g. Broca's area and speech production; often a catalyst for further experimental research
121
case studies weaknesses
low population validity - only involves one participant in a unique situation, unable to generalise data to wider population or replicate situation
low reliability - unusual situation that cannot be replicated (unethical), cannot test for reliability
validity issues - relationships may form between researcher and participant due to extensive and frequent contact during longitudinal studies; researcher bias and investigator effects, researcher becomes too invested, decreasing validity
122
pilot studies
small scale trial runs carried out before committing to full-scale main studies
help foresee any costly problems e.g. method/design, instructions, procedure, materials, measurements
problems can then be rectified or study scrapped without entire participant sample and set of stimulus materials being wasted - saves time and money
judge likelihood of significant results being found
not possible for natural experiments and case studies - events/participants so rare that it would be too wasteful to sacrifice a sample
123
pilot studies for interviews and questionnaires
questions may be too hard or too easy - results not varied enough for useful data to be gathered
questions changed before real study so results are more useful
don't waste time and money measuring something irrelevant; make sure questions are clear and make sense; check participants' reactions so questions do not induce emotions affecting responses
124
peer review
part of scientific process
after study, report submitted for peer review
helps to ensure integrity so research can be taken seriously by scientific community
125
peer review process
draft article submitted for publication
editor reads article to check suitability for journal
sent to experts (researchers' peers in same field) to check quality
quality and significance tested e.g. subject importance, methodology, interest, ethics, logical conclusions, original findings, appropriateness for journal
recommendation made to editor - approval or rejection; revision usually expected
editor makes final decision
typically high rejection rates - process can take several months or years
126
peer review purpose
allows for allocation of research funding - research paid for by government or charities, review helps determine where funding should go
ensures only high quality research is disseminated - scientific evidence becomes part of mainstream thinking and practice, so it is vital that conclusions are based on valid methods and accurate presentation; only true information should be shown to prevent the public believing wrong information e.g. MMR vaccine and autism; poor research would damage integrity of the field and discipline, high standards maintained
quality assurance - research leads to practical applications in people's lives, so necessary that recommendations are well founded and do not have negative consequences
gives work and journal higher authenticity and integrity - can be scrutinised, trusted, respected and taken seriously
checks for fraud and fabrication - ensures conclusions not based on opinion or personal bias, as researchers are unlikely to spot their own errors
127
types of peer review
single blind
double blind
open
128
single blind peer review
author doesn't know identity of reviewer
+ anonymity allows reviewer to be honest without fear of criticism
+ knowing author and affiliation allows use of previous knowledge
- knowing author may overshadow quality, leading to lack of scrutiny, especially if good track record
- potential for discrimination
129
double blind peer review
author and reviewer do not know each other's identity
+ research judged fairly without bias
+ both benefit from protection from criticism
- anonymity not guaranteed - may be discovered through area of research, references or writing style
- knowledge of identity helps come to informed judgement
130
open peer review
identity of reviewer and author known by all participants during and after the review process
+ transparency encourages accountability and civility, improving overall quality of review and article
+ reviewers more motivated to do a thorough job - names and comments part of published article
- some reviewers may refuse open system - concerns about being identified as source of a negative review
- could be reluctant to criticise more senior researchers (career may depend on them), significant in small research communities
131
peer review strengths
ensures validity and credibility - purpose is to promote and maintain high standards in research through scrutiny of procedures and conclusions; likely that data is trustworthy and only high quality research is disseminated
increases probability of weaknesses and errors being identified - process involves submitting to journal, review by experts, then return to editor; can take months or years before publication but more chance of errors being spotted; researchers are less objective about their own work, so review helps to promote objectivity
132
peer review weaknesses
contributes to file drawer effect - researchers more likely to submit positive results than negative or inconclusive results; findings challenging existing understanding may be overlooked; publication bias means some research is overlooked
can be subject to bias - if anonymity not maintained, experts with a conflict of interest may not approve research in order to further their own reputation / career; may lead to bias in the research that is published and disseminated in the field
133
primary data
collected first hand by researcher directly from group of participants for specific research purpose
collected through observation, psychometric test, interview etc.
qualitative or quantitative data
134
secondary data
data someone else has already collected for a different purpose
information stored on record for other researchers
re-analyse data for new purpose e.g. medical records, employee absence records
135
primary and secondary data evaluation practical issues
primary data can be time consuming and expensive to gather - have to conduct experiment or observation, gather participants, find a location and time; ethical guidelines need to be considered when directly interacting with participants, need approval of ethics board; more costly and demanding than accessing pre-existing data from secondary sources
secondary data is time and cost efficient - do not have to carry out own research, data is readily accessible; no ethics complications as there is no interaction with participants
136
primary and secondary data evaluation validity
secondary data is lower in validity - has been collected for a different purpose, may not be entirely relevant to the research question and may not fit the needs of the investigation
secondary data may lack temporal validity - may have been gathered a long time ago and may no longer be applicable to modern society if there has been a temporal shift or a shift in societal views that may influence behaviour or opinions; reduced validity, usefulness, applicability and relevance
137
primary and secondary data evaluation replicability
primary data is more reliable - data collected first hand, researcher can plan research and operationalise appropriately, well-documented procedures, controlled manner; replication possible, check for consistency to validate
not possible for secondary data - may not have a detailed enough standardised procedure, not able to replicate exactly or understand possible extraneous or confounding variables
138
primary and secondary data evaluation ethical considerations
primary data involves participants - ethics board consulted when studying sensitive issues, take care not to cause psychological or physical harm, gain informed consent
still ethical issues in use of secondary data - confidentiality, consent and safe storage; if data is in the public domain consent is implied; approval only needed if personal info could be used to identify participants or where access is restricted
139
meta-analysis
systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims / hypotheses, achieved by searching databases
quantitative research technique combining data from multiple studies to get one combined answer - data reviewed together
integrates results from all published studies on one topic to identify trends and relationships
sample size = number of studies --> large sample size
useful when evidence is weak or contradictory - gives a clearer whole picture
more generalisable, uses scientific approach
can be impacted by publication / researcher bias
140
measures of central tendency
ways of summarising the typical or average score in a data set
mean, median, mode
141
mean
interval / ratio data (can be converted into ordinal or nominal data)
adding all scores and dividing by number of scores
most useful when there is a fairly even distribution around the centre
142
mean weaknesses
can be skewed by anomalies - rogue scores can significantly increase or decrease the mean, making it unrepresentative
not always an actual score (e.g. 2.4 children) - not an accurate reflection of the data set
142
mean strengths
accurate and sensitive - takes all values into consideration, highly representative
is the numerical centre point of the actual values - used to calculate standard deviation
143
median
ordinal data (can be converted into nominal data)
middle score when data is placed in an ordered list (or the average of the two middle scores)
useful when there are extreme high or low scores
144
median strengths
unaffected by extreme scores - only concerned with middle scores - more accurate and representative
quick and easy to calculate
145
median weaknesses
may not be an actual score - not representative
not appropriate for small data sets or when there are large differences between scores
146
mode
nominal data (cannot be converted into ordinal or interval)
most common score
can be bi-modal or multi-modal if multiple common scores
least useful, especially when there are multiple modes
147
mode strengths
unaffected by extreme scores - more representative
always an actual score - accurate representation
148
mode weaknesses
sometimes doesn't have a mode or has many - limited usefulness
doesn't use all data - accuracy questioned
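a minimal sketch (data made up) computing the three measures of central tendency described in the cards above, using Python's statistics module

```python
from statistics import mean, median, mode

scores = [3, 5, 5, 6, 7, 8, 20]  # hypothetical data set with one extreme value

print("mean:",   mean(scores))    # pulled upwards by the extreme score 20
print("median:", median(scores))  # unaffected by the extreme score
print("mode:",   mode(scores))    # most common score
```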
149
measures of dispersion
how spread out scores are - provides fuller picture
analyse how far away scores are from average responses - spread / variability
normally large dispersion is due to individual differences or poor experimental control
range, standard deviation
150
range
ordinal data
difference between highest and lowest score
151
range strengths
easy and simple to calculate
takes into account extreme values
152
range weaknesses
ignores most of the data - doesn't reflect true distribution
easily distorted by extreme values (only looks at 2 values; the highest and lowest values are likely to be the extreme values, if any)
153
standard deviation
measures collectively how much individual scores deviate from the mean, presented as a single number --> how much the data is dispersed
interval / ratio data
indicates average distance of scores from the mean
takes every score into account
the larger the SD, the more spread out the scores are relative to the mean
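a minimal sketch (scores made up) showing the standard deviation as a single number summarising how far scores deviate from their mean

```python
from statistics import mean, pstdev

scores = [10, 12, 14, 15, 19]

print("mean:", mean(scores))                             # 14
print("standard deviation:", round(pstdev(scores), 2))   # population SD of the scores
```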
154
standard deviation strengths and weaknesses
strengths: precise as all values are accounted for - accurate representation of distribution, detailed conclusions made
allows for interpretation of individual scores in terms of how far they fall from the mean (130 IQ = 2 SDs from the mean)
weaknesses: complex to calculate, more difficult to understand - not quick or easy to calculate
less meaningful if data is not normally distributed
155
standard deviation commenting on the spread
large spread suggests inconsistencies in data, highlighting individual differences
the larger the SD, the more spread out the scores, the more variability
the smaller the SD, the more similar the scores
156
normal distributions
probability distribution symmetric about the mean
data near the mean is more frequent than data far away from the mean
appears as a bell curve
mean, median and mode appear at the same point with the same value - at the highest point in the middle
68% of scores fall within 1 SD of the mean, 95% within 2 SDs
statistical infrequency = how far a score is from the mean - can be used to define abnormality
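a minimal sketch checking the 68% / 95% figures quoted above, using the standard library's NormalDist (Python 3.8+)

```python
from statistics import NormalDist

d = NormalDist(mu=0, sigma=1)  # any normal distribution gives the same proportions

within_1_sd = d.cdf(1) - d.cdf(-1)
within_2_sd = d.cdf(2) - d.cdf(-2)
print(f"within 1 SD: {within_1_sd:.1%}")  # ~68.3%
print(f"within 2 SD: {within_2_sd:.1%}")  # ~95.4%
```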
157
skewed distributions
asymmetric distribution of scores
mean, median and mode have different values
most scores on one side with a long tail (skew) on the opposite side to the majority
158
positive skew
skew (tail) towards the positive end of the scale
more scores at the lower end, fewer high scores (e.g. test is too hard)
outliers at the higher end
159
negative skew
skew (tail) towards the negative end of the scale
more scores at the higher end of the graph, outliers at the lower end
lots of high scores, fewer low scores (e.g. test is too easy)
160
probability
refers to the likelihood of an event occurring
expressed as a number or percentage
161
significance
inferential statistical tests are necessary to determine whether results are significant or simply due to chance
shows which hypothesis to accept or reject
use probability of p ≤ 0.05 - the likelihood of the data (in terms of the difference or relationship found) being due to random chance is less than or equal to 5%
there is a 5% or smaller probability that the result would occur if the null hypothesis were true
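a minimal sketch (not from the flashcards, SciPy assumed to be installed, scores made up) showing a p-value from an inferential test being compared against the 0.05 significance level

```python
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11, 14]   # e.g. scores in the experimental condition
group_b = [9, 8, 11, 7, 10, 9, 12, 8]        # e.g. scores in the control condition

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value <= 0.05:
    print("significant: reject the null hypothesis")
else:
    print("not significant: retain the null hypothesis")
```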
162
type 1 error
false positive (claiming that there is a significant difference when there isn't)
claims support for the research hypothesis with a significant result when the result is caused by random variables and not really significant
level of significance not cautious enough e.g. p ≤ 0.10
163
type 2 error
false negative (claiming there is no significant difference when there is)
accepts null hypothesis, claiming there is no significance, when there is an effect beyond chance
level of significance is too stringent e.g. p ≤ 0.01