Research Methods Flashcards

1
Q

lab experiment

A

controlled conditions and ps know that they are taking part in an experiment
manipulates IV and measures DV
can control extraneous variables

2
Q

field experiment

A

occurs in natural conditions
manipulates IV and measures DV
ps act as they normally would

3
Q

quasi experiment

A

can be controlled or natural
takes advantage of a naturally occurring variable
IV may be a difference between people eg gender/depression
(IV not manipulated)
measures DV
no random allocation
tests: t test, Mann-Whitney, Wilcoxon, ANOVA

4
Q

natural experiment

A

takes advantage of variable manipulated by another individual/ organisation
something has happened and experimenter measures the effect on a person eg a flood
measures DV

5
Q

covert observation

A

observing people without their knowledge
fewer investigator effects as ps don’t know they’re being watched; fewer demand characteristics
ethical issues and ps should be debriefed

6
Q

overt observation

A

ps are aware they are being observed
more ethical as ps can give informed consent
investigator effects / demand characteristics

7
Q

participant observation

A

person conducting the observation takes part in the activity- can be covert or overt
can get lots of in depth data in close proximity to the ps
investigator effects as they can impact the ps behaviour

8
Q

non participant observation

A

person conducting doesn’t take part, just observes
less investigator effects as they can’t impact the behaviour of the people
researcher may miss some behaviours of interest as they are far away

9
Q

naturalistic observation

A

carried out in an everyday setting and the researcher does not interfere, observes behaviour as it would usually happen
high ecological validity
low reliability as it is hard to replicate because the observed events happen by chance

10
Q

controlled observation

A

under strict conditions eg in a lab where extraneous variables are controlled
can be replicated to check for reliability as they are standardised
low external validity due to high controls
ps behaviour may be altered due to controlled nature

11
Q

time sampling

A

observer records events at agreed time increments eg every 10 seconds
makes better use of time
may miss important behaviours which are relevant to the observation

12
Q

event sampling

A

observes the number of times a specific behaviour occurs
every target behaviour should be accounted for
but some may be missed if there is too much happening at one time

13
Q

questionnaires- open qs

A

allow ps to answer how they wish, no fixed answers
qualitative data collected
less researcher bias as the ps answer in their own words and their response isn’t affected by options given by the researcher
social desirability bias- ps may answer in a way that presents themselves in a certain light

14
Q

questionnaires- closed qs

A

restrict p’s answers to predetermined set of responses
quantitative data
eg checklist, rating scale, Likert scale
quantitative is easy to statistically analyse and compare to other groups
answers are limited so ps may choose an option that doesn’t actually reflect them but they have to pick one

15
Q

structured interviews

A

questions are decided in advance and every p is asked the same questions
gains quantitative data which is easy to statistically analyse
standardised so can be tested for reliability
investigator effects as they are asking the same qs over and over, and body language may change in response to some answers
interviewers have to be trained, which takes time and money

16
Q

unstructured interviews

A

conducted more like a conversation where lots of rich, in depth qualitative data is collected
higher validity due to decreased investigator effects
investigator is not determining where the interview will go so they will not affect the ps answers
time consuming and hard to analyse and compare data

17
Q

aim

A

research question that they are trying to answer
eg
to investigate whether (IV) affects/improves/hinders (DV)

18
Q

directional hypothesis

A

predicts the direction of difference of the variables
eg the results will be higher when…
allocate 5% risk of error to one side of the distribution
based on past research
one tailed
will have 1 critical region on a graph

19
Q

non directional

A

predicts that a difference will exist but doesn’t say the direction of the difference
eg there will be a difference…
normal way of testing H0
we reject H0 if the sample statistic reaches the CV in either tail- 2 critical regions on a graph
no past research
two tailed

20
Q

sampling

A

involves selecting ps from a target population
sample should be representative so that it can be generalised to whole population
bias occurs when one or more group is over represented in a sample
population- large group, whole or entire group
sample- small group selected from population, representative sample allows generalisation

21
Q

opportunity sampling

A

sample of people who are available at the time the study is carried out
convenient as is quick and easy
may be researcher bias as they may choose people with certain characteristics
bias as doesn’t represent whole population
ps may not want to complete the study and may drop out

22
Q

volunteer sampling

A

self selecting as ps have volunteered or responded to an advert to be part of the study
ps want to be in the study so will be engaged and won’t drop out
ps have given full consent to take part
may be bias as some people are more likely to volunteer than others so will have similar characteristics

23
Q

random sampling

A

every member of the target population has an equal chance of taking part
eg pulling names from hat/random number generator
sample is representative
eliminates researcher/ participant bias
not everyone who is chosen to take part will participate so sample may still not be representative

24
Q

stratified sampling

A

each stratum in population should be large enough so that selection can be done on random basis
should be perfect homogeneity among different units of stratum
the proportion of items selected from each stratum should match the proportion of the entire population that the stratum makes up
stratification should be well defined and clear cut

25
systematic sampling
when every nth person in the pop. is selected
removes participant and researcher bias
not all ps will want to participate
can be time consuming with large groups
26
pilot studies
small scale prototypes of a study carried out in advance to see if there are any problems with: experimental design, instructions for ps, and instruments for measurement
ensures time, effort and money aren't wasted on a study with flawed methodology
other peers/scientists can comment on the study/questionnaire
27
repeated measures (related)
same ps take part in each condition of the exp
data is compared for each p to see if there is a difference
fewer participant variables so the only thing affecting the DV is the IV (improves internal validity)
may be order effects as the ps learn what the aim is or get tired of the experiment so perform worse in the second condition
28
counterbalancing
used to counteract/reduce order effects in repeated measures design (within participants)
half of the sample complete condition 1 then 2
half of the sample complete condition 2 then 1
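a minimal sketch of the AB/BA split in Python (participant IDs and condition names are made up):
```python
import random

# hypothetical participant IDs
participants = [f"P{i}" for i in range(1, 21)]
random.shuffle(participants)

# first half: condition 1 then 2; second half: condition 2 then 1 (AB/BA)
half = len(participants) // 2
for p in participants[:half]:
    print(p, "condition 1 -> condition 2")
for p in participants[half:]:
    print(p, "condition 2 -> condition 1")
```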
29
independent groups (unrelated)
two separate groups take part in each condition of the experiment (randomly allocated)
decreases the likelihood of order effects
reduces demand characteristics as ps won't be able to guess the aim of the study
easier for the investigator as they can use the same material in both conditions- one isn't easier
increases the effects of participant variables
30
matched pairs
participants are matched on a characteristic eg age, personality type, IQ
may be matched using a test- highest scores matched, then next highest etc
one p from each pair is put into each condition randomly (similar to independent groups once matched)
reduces participant variables as they are matched
impossible to match on all characteristics; need more ps
better than repeated measures as there are fewer demand characteristics because ps only do one condition
31
extraneous variables
any variable other than the IV that may affect the DV
may have failed to take these into account when designing the study
eg time of day, age, gender
32
confounding variable
extraneous variables that cause a change in the DV
relate to both of the main variables we are interested in
33
randomisation
when trials are presented in a random order to avoid any bias
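a quick sketch, assuming a hypothetical list of trials:
```python
import random

# made-up trials; shuffling the presentation order avoids order bias
trials = ["word list A", "word list B", "word list C", "word list D"]
random.shuffle(trials)
print(trials)
```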
34
standardisation
situational variables are kept identical so that any changes to the DV can be attributed to the IV
35
demand characteristics
occur when ps try to guess the aims of the study and change behaviour in order to support it
36
investigator effects
when a researcher acts in a way to support their prediction (can be conscious or unconscious)
influences the behaviour of the ps as they know the aims of the study
can be reduced by using a double blind, ensuring the method/investigation is standardised, and using open ended qs
37
ethical issues
take into consideration the welfare of the ps, the integrity of the research and the use of data
deception
right to withdraw
informed consent
confidentiality
protection from harm
38
peer review
assessment process by psychologists in a similar field which takes place before research is published
checks validity of research
assesses work for originality
allocates research funding
allows errors to be identified
looks at significance of research in a wider context
VOFES
39
implications on economy
eg development of treatments for depression/OCD means that people are able to work more and take less time off work, which doesn't cost the company
NHS saves money if treatments are successful
40
quantitative data
numerical data that can be statistically analysed and converted to graphical format
easy to analyse statistically to check for significance etc
lacks representativeness as it usually comes from closed questions
41
qualitative data
non numerical, language based data expressed in words
data collected is rich in detail (usually from unstructured interviews)
data is subjective and can be interpreted differently between people so may be subject to bias
42
primary data
data collected for a specific reason and reported by the original researcher
good authenticity as it has been collected specifically for the research- data will fit the aims of the research
primary data can take a long time to collect
43
secondary data
data which already exists and is used for another study
less time consuming than primary data to collect
concerns with accuracy as the data wasn't collected to meet the aims of the research
44
meta analysis
process where investigators combine findings from multiple studies and make an overall analysis of trends and patterns across the research
based on a large sample so more likely to be generalisable
may be biased as the researcher may only choose studies which show significant results
45
descriptive statistics
summarising data numerically, allowing researchers to view the data as a whole
measures of central tendency + dispersion
46
mean (interval ordinal)
mathematical average (includes anomalies)
most sensitive measure
easy to calculate, uses all values, gives central point of distribution
affected by extreme scores, can't be used with nominal data
47
median (interval ordinal)
central score
not affected by extreme scores
difficult if there is lots of data
doesn't reflect all values
can't be used with nominal data
48
mode (nominal)
most frequent value
not affected by extreme scores
useful when scores cluster around a non central value
may not be representative
not useful in small sets of data
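a quick Python sketch of the three measures on a made-up data set:
```python
from statistics import mean, median, mode

scores = [12, 15, 15, 16, 18, 21, 45]  # made-up scores; 45 is an extreme value

print(mean(scores))    # ~20.3 -- pulled up by the extreme score
print(median(scores))  # 16 -- central score, unaffected by the extreme value
print(mode(scores))    # 15 -- most frequent value
```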
49
measures of dispersion
define the spread of data around the mean
50
range
subtract the lowest value from the highest and add 1
easy to calculate
doesn't show the distribution pattern
51
standard deviation
a large SD shows data is very dispersed around the mean; a small SD shows values are concentrated around the mean
precise as all values are included
extreme values can distort the measurement
used when the mean is a good measure of the average
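a sketch of the range (using the +1 convention above) and SD on the same made-up scores:
```python
from statistics import stdev

scores = [12, 15, 15, 16, 18, 21, 45]  # made-up scores

# range with the flashcard's +1 convention
print(max(scores) - min(scores) + 1)   # 34

# sample standard deviation (n - 1 denominator)
print(stdev(scores))                   # inflated by the extreme value 45
```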
52
normal distribution
bell shaped curve
mean, median and mode all in the middle, on the same line
53
negatively skewed
curve shifted to the right (test too easy)
mode at the peak, median in the middle, mean at the tail end (less than the median)
54
positively skewed
curve shifted to the left (test too hard)
mode at the peak, median in the middle, mean at the tail end
mode remains at the highest point as it isn't affected by extreme values
mean greater than median
55
content analysis
a type of observational technique that involves studying people through qualitative data
data can be placed into categories and counted (quantitative) or can be analysed in themes (qualitative)
a coding system is used- categories for the data to be classified into
high ecological validity
findings may be subjective as they may be interpreted wrongly
56
thematic analysis
helps to identify themes throughout qualitative data
will produce more refined qualitative data
high ecological validity as observations come from real life behaviour
57
features of a science
paradigm
concepts are falsifiable
use of empirical methods
theory constructed from which hypotheses are derived + tested
58
paradigm
an agreed upon set of theoretical assumptions about a subject and its method of enquiry
Kuhn- pre science is when no paradigm exists due to disagreements between various approaches
paradigm shift- when scientists challenge an existing paradigm and many opinions change
59
theory
a set of general laws that have the ability to explain a particular behaviour
in order to test a theory an experiment must be devised; a hypothesis is made about what they think will happen and this must be objective and measurable
60
falsifiability
scientific theories must always be stated in a way that predictions derived from them could be shown to be false
even if you consistently find the same results (replicability) you must be able to prove it wrong
61
empiricism
knowledge should be gained from direct experience in an objective, systematic and controlled manner
must be objective- free from bias and subjectivity
62
replicability
repeating a study and gaining the same results (not due to chance)
use of standardised procedures and control of variables
replication is harder with humans due to confounding variables eg mood
63
reliability
measure of consistency
eg a person gets the same result each time on an introvert test
64
test retest reliability
the same person/group is asked to undertake the research measure on different occasions
results are correlated with the original results
use Spearman's to see if the correlation is significant
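a sketch of the correlation step with scipy, using hypothetical test and retest scores:
```python
from scipy.stats import spearmanr

# made-up scores from the same ps on two occasions
first  = [24, 30, 18, 27, 22, 35, 29, 20]
second = [26, 29, 17, 28, 21, 36, 30, 19]

rho, p = spearmanr(first, second)
print(f"rho = {rho:.2f}, p = {p:.3f}")  # a high, significant rho suggests reliability
```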
65
inter observer reliability
extent to which two or more observers are observing + recording in a consistent way
observers should discuss and agree behavioural categories, ensuring they are trained
should observe the same people at the same time, recording observations independently
results should be correlated using a stats test (Spearman's or Pearson's)
66
validity
whether the test measures what it is set out to measure
67
internal validity
whether the results (DV) are solely affected by changes in the IV
68
external validity
whether the data can be generalised to other situations beyond the context of the research situation
69
face validity
on the surface, does the test appear to measure what it should be measuring/set out to measure
70
concurrent validity
comparing a new procedure with a similar procedure that has been done before, where validity has already been established
test scores on the new measure are correlated with those from the established test
a correlation of +0.80 or higher indicates concurrent validity
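a sketch with hypothetical scores on a new and an established measure:
```python
from scipy.stats import pearsonr

# made-up scores on a new measure and an established one
new_test = [10, 14, 18, 22, 25, 30, 33]
old_test = [12, 15, 17, 24, 26, 29, 35]

r, p = pearsonr(new_test, old_test)
print(f"r = {r:.2f}")  # +0.80 or higher suggests concurrent validity
```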
71
temporal validity
do findings endure over time or are they era dependent
72
ecological validity
can we apply our findings to real life situations outside of our research setting
73
abstract
key details of the study: aims, hypothesis, methods + results
74
introduction
reviews background info and previous research
ends with aims and hypotheses
75
method
split into subsections:
design
sample
apparatus/materials
procedure
ethics
a detailed method section allows for accurate replication
76
results
summary of data- descriptive statistics + inferential statistics
77
discussion
findings of the research are interpreted/discussed in relation to relevant research
evaluation of the investigation
wider implications + suggestions for future research
significance, at what level eg p=0.05, and what it means
78
referencing
all sources must be referenced to avoid plagiarism and give credit to appropriate sources
book: author (year) title, location, publisher (AYTLP)
journal: author (year) article title, journal title, volume number, (issue number), page numbers
79
appendices
includes raw data
80
type I errors- alpha
eg p = 0.1 (10%)- too lenient
rejecting the null hypothesis and accepting the alternative hypothesis when it should be the other way round (H0 is actually true)
assuming the results were due to the IV when they were due to chance
81
type II errors- beta
eg p = 0.01 (1%)- too strict
accepting the null hypothesis and rejecting the alternative hypothesis when it should be the other way round (H0 is not true)
assuming the results were due to chance when they were due to the IV
where we attempt to reject the null hypothesis by comparing our z score to a critical value, we are conducting a z test
82
why do we use p=0.05 5%
to balance type 1 and type 2 errors
83
questionnaire strengths
quick and easy to carry out
cost effective
lots of people can be reached easily as they are all given the same thing
84
random allocation
reduces researcher bias
more likely that the manipulation of the IV causes the change in the DV/the results
85
designing an observation
type of obs- covert, overt etc
where the observers will stand
operationalised behavioural categories
train observers + decide how many
how they will record eg table/tally, video recording + content analysis
inter observer reliability
(use the bullet points given)
86
designing an experiment
use bullet points
aim = statement
identification of variables- incl IV, DV, extraneous variables, confounding variables
control EVs- use random allocation, standardised procedure
87
evaluation of peer review
easier to criticise from an outside point of view as authors/researchers can't spot every mistake
prevents publication of irrelevant findings or personal views
ensures research is taken more seriously as it has been scrutinised by fellow researchers
avoids publication bias eg only publishing results that are significant so that they grab the attention of the headlines- research should be published even if the results aren't significant, as they reflect the current state of the area, which is important
88
sign test
calculate S = the number of times the less frequent sign (+ or -) occurs
calculate N = how many ps changed (ignore =)
calculate if it is significant using the critical values table
two tailed = non directional (there will be a difference)
one tailed = directional (there will be an improvement)
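a sketch of the calculation, using a binomial test in place of the critical values table (before/after scores are made up):
```python
from scipy.stats import binomtest

# made-up before/after scores for 10 ps
before = [5, 7, 6, 8, 4, 9, 6, 7, 5, 8]
after  = [7, 8, 6, 9, 6, 9, 8, 8, 7, 9]

diffs = [a - b for a, b in zip(after, before) if a != b]  # ignore ties (=)
n = len(diffs)                        # N = how many ps changed
s = min(sum(d > 0 for d in diffs),    # S = frequency of the less common sign
        sum(d < 0 for d in diffs))

# two-sided binomial test stands in for the critical values table
print(f"S = {s}, N = {n}, p = {binomtest(s, n).pvalue:.3f}")
```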
89
research methods
experimental method eg lab, field etc
observational techniques
self report techniques eg questionnaires, interviews
correlations eg relationship between co variables
90
interval scale
allows us to put scores in order of magnitude, with equal intervals between adjacent points on the scale
91
ratio scale
have features of interval but have an absolute zero point
92
nominal scale
attributes are only names
93
ordinal scale
attributes are ordered eg rating scales
94
correlational design
measure the variables of interest and how each variable changes in relation to changes in the other variables
relationships/measures of association, not causal relationships between variables
tests: Pearson's, chi squared, Spearman's, multiple and linear regression
95
experimental design
manipulating the independent variable to see the effect it has on the dependent variable
looking for differences between conditions/treatments of the IV
hypothesis is a prediction of how the variables may be related to one another
random allocation of ps to conditions
96
structure of experimental design
1. observations made in relation to the dependent variable- may be two or more (pre and post test)
2. experimental treatment- manipulation of the IV
3. no experimental treatment = control group
4. T = timing of observations made in relation to the dependent variable
differences between each group pre and post are analysed to see if there was a difference
97
between participants: strengths and weaknesses
independent/unrelated design
different groups of ps in different conditions of the IV
strengths- less likely to have order/demand effects
weaknesses- need more ps; lose some control over confounding variables; individual characteristics may affect results
98
within participants: strengths and weaknesses
repeated measures/related design
same ps in every condition of the IV; each p performs under all conditions of the study
strengths- can control inter individual confounding variables; fewer ps needed so lower cost
weaknesses- order effects, so differences may be due to practice, fatigue or boredom; demand effects- ps know the purpose of the exp
99
cross sectional design
social survey design
collects data from a number of different individuals at one time in connection with two or more variables
interested in variation- can be established when more than one case is examined
allows examination of relationships between variables- associations not causes
comprises data on a series of variables at a single point in time
100
questionnaires + surveys
mostly used for collection of large data sets
study attitudes, values, beliefs and motives
reduced bias
open and closed questions
used mostly for descriptive research
large sample; easy to enter data + analyse
101
pre coding surveys/questionnaires
set format
closed responses make pre coding possible
numbers can be entered or scanned into a pre coded SPSS data sheet
102
self completion questionnaire: guidance for how to design
easy to follow design
clear instructions about how to respond
avoid leading questions
don't presuppose information
vary questions to avoid boredom + keep qs short
test the questionnaire in person on trial respondents
103
SCQ strengths and weaknesses
strengths- cheaper + quicker to administer; no interviewer effect/interviewer variability; convenient for respondents
weaknesses- can't prompt or probe; can't ask too many qs; cannot collect additional data; greater risk of missing data
104
continuous and categorical variables
continuous- measured on a linear scale
categorical- made up of categories
continuous (ratio/interval) + ordinal categorical variables- calculate mean or median
nominal categorical variables- calculate mode; don't calculate mean or median
105
comparing variables
comparing means- eg mean distances
comparing medians- eg median satisfaction on a rating scale
comparing proportions or percentages
for the same factor you can compare means, medians or proportions
106
choosing best analysis
think of the variables- ratio, interval, ordinal or nominal, and whether they are continuous or categorical
all data categorical = contingency table
grouping/continuous = compare means and/or medians
mean- with normally shaped data
median- with skewed data
107
measures of spread - percentiles
the value at or below which a specified percentage of the scores in the distribution fall
allows us to say where a value lies on a distribution of data
eg 60th percentile = 60% of scores lie at or below the value and 40% lie above
108
calculating %tiles
value of percentile = the (percentile/100) x (n+1)th observation
n = total number of observations
25th percentile = lower quartile
50th percentile = median
75th percentile = upper quartile
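a sketch of the (n+1) formula with linear interpolation between observations (data are made up):
```python
def percentile(data, pct):
    """The (percentile/100) x (n+1)th observation, interpolating between values."""
    ordered = sorted(data)
    pos = (pct / 100) * (len(ordered) + 1)
    lower = int(pos)
    if lower < 1:
        return ordered[0]
    if lower >= len(ordered):
        return ordered[-1]
    frac = pos - lower
    return ordered[lower - 1] + frac * (ordered[lower] - ordered[lower - 1])

scores = [3, 7, 8, 5, 12, 14, 21, 13, 18]  # made-up data
print(percentile(scores, 25))  # lower quartile
print(percentile(scores, 50))  # median
print(percentile(scores, 75))  # upper quartile
```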
109
quintiles (20%)
lowest quintile- multiply (n+1) by 0.2
highest quintile- multiply (n+1) by 0.8
110
deciles (10%)
lowest decile- multiply (n+1) by 0.1
highest decile- multiply (n+1) by 0.9
111
tertiles
lowest tertile- bottom 1/3- multiply (n+1) by 0.33
highest tertile- multiply (n+1) by 0.67
112
measures of spread
range- difference between highest and lowest values
interquartile range- difference between upper + lower quartiles
SD- measures average deviation from the mean
variance- SD squared
113
producing good tables- guidelines
clear title, well labelled columns, include units of measurement, consistent use of decimal places, include the source of the data
114
% tables
convert counts into %
calculate % by dividing the number in the group by the total number
rules: think whether you need row or column % to answer the research q
always make it clear which totals add up to 100%
115
why use graphs
generally easier to visualise patterns and trends than numbers in a table
effective in presentations as there is only limited time for the audience to digest info
effective way of highlighting a particular aspect of your findings
116
graphs - categorical
data that adds to a whole, measured on a nominal or ordinal scale
bar charts- bars shouldn't touch
stacked bar charts- categories within categories, can display counts or %
pie charts- need to convert % to degrees (divide by 100 then multiply by 360)
117
limitations of pie charts
can be good for a small number of categories but not a large number
can be confusing when comparing outcomes of different experiments
119
continuous data- graphs
data measured on a ratio or interval scale with standard units
stem and leaf plots- show the frequency distribution, are more visual and give an idea of the shape of the distribution
can identify whether data are concentrated around the middle or skewed in any direction
120
histograms - continuous
only show the count or percent falling within a category, not all the data
because categories are continuous, bars should be joined
bar height represents frequency or %
bar width represents the width of a category, so equal categories have equal widths
121
time series
best to use a line graph for data on a variable over time
time should be measured on the horizontal axis
make sure scales are not misleading- start the y axis at 0
scales should be the same when comparing more than one graph
inclusion of error bars- SE (showing confidence in the mean) or SD (describing the data)
SE will get smaller with greater sample size but SD will not
122
scatterplots
two continuous variables
show the relationship between the two variables
scales should be clear and appropriate
can distinguish groups with different colours or markers
123
misuse of graphs
good graphical displays should reveal what the data convey
improper use of vertical and horizontal axes leads to distortions
use of infographics (useful for general info, not statistical data)
124
theory of sampling
3 things to consider:
statistical estimation- point estimate or interval estimate
testing hypotheses- accept or reject the null hypothesis
statistical inference- general statements about the population
125
limitations of sampling
less accuracy
changeability of units
misleading conclusions
need for specialised knowledge
sampling not always possible
126
characteristics of ideal sample
representative
independence
adequacy
homogeneity
127
methods of sampling: probability + non probability
probability (random)- simple random sampling, stratified sampling, systematic sampling, multistage sampling
non probability (non random)- judgement/purposive/deliberate sampling, convenience sampling, snowball sampling, quota sampling
128
probability sampling | strengths + weaknesses
probability sampling requires detailed info about the population to be effective
strengths- provides estimates which can be measured precisely; inherently unbiased; the relative effectiveness of various sample designs can be evaluated when probability sampling is used
weaknesses- requires a high degree of skill and expertise; requires a lot of time to plan and determine the sample; costs involved are higher than non probability sampling
129
simple random sampling
individual units are selected at random
each individual/everybody in the population has an equal chance of being chosen
can allocate everybody or every location a number and choose using a random number generator
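a minimal sketch using Python's random number generator (the population is hypothetical):
```python
import random

population = list(range(1, 101))        # made-up target population of 100 people
sample = random.sample(population, 10)  # every member has an equal chance
print(sample)
```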
130
simple sampling | strengths + weaknesses
strengths- quite simple as it follows mathematical procedures; free from bias and prejudice; more representative as everyone has an equal chance of being selected; errors are easily detected
weaknesses- not always possible to be completely random; lack of control over the research
131
stratified s+w
strengths- greater control for the investigator; easier to achieve a representative character; replacement of units is possible
weaknesses- possibility of bias; difficult to achieve proportionality; difficulty in making the sample representative
132
multistage sampling
uses a form of random sampling in each of the sampling stages, where there are more than 2 stages
draws a sample from each sample, getting smaller each time
control of non sampling errors is difficult and costly
strengths- a complete list of the population isn't required
weaknesses- errors are likely to be larger than in simple random sampling
133
non probability sampling
methods that don't give every item an equal chance of participation
selection process is partially subjective
non random sampling is the process of sample selection without the use of randomisation
includes: judgement, convenience, snowball, quota
134
judgement sampling
the researcher's judgement is used to choose units for the study
strengths- works with a small number of sampling units in the population; allows for the inclusion of important units; allows for a more representative sample when looking into unknown traits of the population; practical method
weaknesses- no objective way of evaluating the reliability of sample results; risk of using units that conform to the researcher's preconceived ideas
135
convenience (opportunistic)
used when the population is not well defined, the sampling unit is not clear, or a complete source list is not available
136
snowball sampling
similar to convenience sampling
researcher makes contact with a small number of people, then uses these people to establish new contacts
weaknesses- sample will be unrepresentative
137
quota sampling
non random form of stratified sampling
1. population is classified
2. proportion of the population falling into each type is determined
3. quotas are set for each interviewer, who selects respondents so that the sample has the same proportions as the population
strengths- reduces the cost of preparing the sample + field work; introduces a stratification effect
weaknesses- introduces investigator bias; errors of the method cannot be estimated by statistical procedures
138
factors determining reliability of a sample
size of sample
representativeness of sample
parallel sampling
homogeneity of the sample
unbiased selection
139
types of sampling errors
sampling variability- different samples from the same population do not always produce the same mean and SD
sampling error- the mean of a sample will not be the same as the mean of the population
non sampling error- errors not connected with the sample method eg leading questions, measurements taken poorly, errors made in coding and recording data
140
sampling distribution of means
means of different samples are known as sample means
the distribution of sample means from a population will be normally distributed
with a reasonable number of samples, the mean of the sample means will approximately equal the population mean
141
distribution of sample means
to the left on the curve, sample means are smaller than the population mean
to the right of the curve, sample means are greater than the population mean
sample means form a normal distribution curve, but it is different as each sample has a mix of scores + high scores are cancelled out by low ones
the SD around the sample means is smaller than that around the population mean
142
sample distribution properties
SD of sample means is smaller than the SD of individual population values
larger samples have less variability; a new formula for the SD takes this into account (the standard error)
SE = SD / root n
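a one-line check of the formula with made-up numbers:
```python
import math

sd, n = 15, 36            # made-up population SD and sample size
print(sd / math.sqrt(n))  # SE = SD / root n = 2.5
```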
143
using sample means | z scores
because of the properties of sample means and SE we can use z scores
1 z score = 1 SE
z score = (obs - mean) / SE
with one individual: z score = (obs - mean) / SD
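a sketch of the calculation with hypothetical population parameters:
```python
import math

pop_mean, pop_sd, n = 100, 15, 36  # made-up population parameters
sample_mean = 106

se = pop_sd / math.sqrt(n)         # SE = SD / root n = 2.5
z = (sample_mean - pop_mean) / se  # z = (obs - mean) / SE
print(z)                           # 2.4 -- beyond 1.96, so in the rarest 5% (two tailed)
```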
144
null hypothesis (h0)
H0 = in the real world, nothing is going on
set a threshold- normally 1% or 5%
if the sample mean is in the rarest 5% (or 1%) we conclude that the null hypothesis is not true
we reject the null hypothesis and accept the alternative hypothesis (H1)
H1- the mean of the subgroup represented by the sample is different to the mean of the population
145
sample proportion
can take a sample proportion and consider whether it differs significantly from the population proportion
on a graph: to the left of the population proportion, sample proportions < PP; to the right of the PP, SP > PP
146
distribution of sampling proportions
can calculate the standard error to estimate how the theoretical sample proportions cluster around the population proportion
the distribution of the theoretical sample proportions is affected by sample size- the larger the sample, the closer the sample proportions cluster around the population proportion
more likely to be lots of sample proportions around the population proportion
147
estimating SE of mean
with means, we use the population SD and sample size to calculate the standard error: SE = SD / root n
with proportions, we use the population proportion and sample size to calculate the standard error (equation in notes)
148
point estimates
an estimate of the population mean from a sample = a point estimate
the sample mean based on a random sample will not be exactly equal to the population mean, but it is the best estimate we have
to ensure the mean is a good estimate we can calculate an interval estimate known as a confidence interval
149
confidence interval
an interval for which we can say, with a certain level of confidence, that the value we are trying to estimate lies between the two values
researchers normally provide a 95% (or 99%) CI
150
rules of confidence intervals
the population SD should be used to calculate CIs, but we don't usually have this data
the sample SD can be used as a reasonable estimate of the population SD if the data are normally distributed or the sample size is greater than 30
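a sketch of a 95% CI using 1.96 standard errors either side of the mean (made-up data, n > 30):
```python
import math

sample_mean, sample_sd, n = 106, 15, 36  # made-up data, n > 30

se = sample_sd / math.sqrt(n)
margin = 1.96 * se  # 1.96 SEs either side gives 95% confidence
print(f"95% CI: {sample_mean - margin:.2f} to {sample_mean + margin:.2f}")
```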
151
Z scores
can be used as a measure of distance between the sample statistic and the population parameter
can consult normal tables to decide the exact probability that the subgroup represented by the sample has the same mean as the population
z scores for proportions perform the same function
152
z scores for means
would ideally use the population SD, but if we don't have this we can use the sample SD (if the sample is over 30)
if we don't have either we can use:
one sample t test if data are normally distributed
one sample sign test if data are not normally distributed
can decide if the distribution is normal by looking at a histogram or checking SPSS output
153
non normal distributions
kurtosis- measure of how peaked or flat the distribution is
flat distribution = platykurtic
very peaked (high peak in the middle) = leptokurtic
154
skewness and kurtosis using SPSS
use zskew and zkurt to calculate z tests
tests 2 null hypotheses:
1. H0: the distribution is not skewed
2. H0: the distribution does not exhibit kurtosis
we would hope not to reject H0 and to carry out a test based on the normal distribution (parametric)
always a two tailed test (alpha 5%)
if data are skewed we compare sample medians (and proportions) rather than means
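scipy provides z tests of these two hypotheses (skewtest and kurtosistest); a sketch on simulated, roughly normal data:
```python
import numpy as np
from scipy.stats import skewtest, kurtosistest

rng = np.random.default_rng(1)
data = rng.normal(50, 10, size=200)  # simulated, roughly normal data

z_skew, p_skew = skewtest(data)      # H0: distribution is not skewed
z_kurt, p_kurt = kurtosistest(data)  # H0: no excess kurtosis
print(f"skew: z = {z_skew:.2f}, p = {p_skew:.3f}")
print(f"kurtosis: z = {z_kurt:.2f}, p = {p_kurt:.3f}")
# p > 0.05 for both -> do not reject H0; parametric tests are reasonable
```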
155
purpose of research
information- acquire new knowledge, find answers + solutions to problems
facilitating change- disrupt and enhance practice + find new ways of doing things
ethical issues- prevent harm, improve life conditions, social change
academic mission- interest, profile, contributions + enterprise
156
qualitative research
describing or understanding the meaning of something
words/images- meaning
emphasis on individuals' experiences and feelings
sees the world as changing
no hypotheses
inductive as it builds theory
157
quantitative
measuring/measured by the quantity of something
influenced by paradigms
numbers
general statements about people as groups
looks to prove causal relationships
sets hypotheses
deductive- testing theories
158
positivism
believes there is one single truth to be found
believes in facts and an objective view + conducts research to find out the truth
159
constructivism
belief that there is no fixed truth- there are multiple truths which depend on experience
characteristics:
context- the social world can only be understood from the standpoint of the individuals who are participating in it
rich detail- constructivist research makes room for + values all of the complexity that positivist research cannot accommodate; open ended and exploratory
researcher role- the researcher is an intrinsic part of the research; they are the main instrument so will bring their biases with them
160
asking qualitative questions
qualitative research does not seek to test a hypothesis but instead is guided by research questions
research questions allow for the uncovering of rich detail and thick description that captures the subjectivities of human experience
161
qualitative methods
different research questions have different implications for the ways you collect data
need to know:
the type of data needed to understand the topic in question
the type of ps you're researching with
logistical issues
162
interviews
a conversation with a purpose
can be semi structured (standardised, control over topic) or unstructured (less control over topics)
in depth data
discovers how individuals think/feel, explores their context and why they hold certain opinions
private/confidential conversation; person centred approach
163
focus groups
require structure and purpose
questions around a core aim, but with flexibility to embrace ideas put forward by participants
room for creativity and can involve tasks to aid engagement
good exploratory tool for under researched areas as they do not require any prior research or knowledge
164
observations
researcher can take an outsider view or can be more involved with a participation style
good for understanding real life by exploring context
valuable when the research is interested in practice
may be most effective when used in combination with other approaches
165
sampling in qual research
making informed + strategic choices about which people, places, settings, events and times are best for gaining the data you need to address the research question
purposive sampling- researcher attempts to recruit people to gain as much info as possible
theoretical sampling- constructs a theoretically meaningful sample to help develop or test a theory
166
inclusion criteria
researchers choose individuals who they believe will provide information rich cases; can be homogeneous or people from different contexts
recruitment strategies:
criterion based sampling- predetermined criteria
snowball sampling- ps direct the researcher to other ps
total population sampling- everyone who is involved with the topic of study
167
thematic analysis
common approach to analysing qualitative data
organises and describes the data set meaningfully in relation to the research question
used for identifying, analysing and reporting patterns in the data
involves identifying and describing explicit and implicit ideas within the data
requires involvement and interpretation from the researcher
168
thematic analysis models- Braun and Clarke
1. familiarisation with the data
2. generating initial codes
3. searching for themes
4. reviewing the themes
5. defining and naming themes
6. writing the report
169
sparkes and smith (TA)
phase 1- immersion in the data- reading transcripts etc
phase 2- generate initial codes
phase 3- identify themes through the data, sort themes, create a visual map of themes
phase 4- review themes, examine coherence, tell the story of the research through these
170
thematic analysis
stage 1- transcription- converting speech into text for analysis; should be verbatim (not paraphrased) and accurate- a true representation of the conversation
stage 2- coding data- pulling apart the data
code- a description attached to a piece of data which can later be related to a theme; helps to systematically identify patterns in the data + reduce data sets into manageable chunks
stage 3- identify themes- patterns across data sets that are important; categorising codes that fit together; commonly recurring topics and/or codes
stages 4+5- reviewing and labelling
constant comparison- the process of repeatedly going through the data in order to make sense of it; involves comparing codes to generate common themes
thick description- providing enough description to enable the reader to understand the context
visual mapping
stage 6- interpreting and writing up
171
coding data (TA)
how to code: working through texts and marking them up; cutting and sorting; using post it notes; writing notes in the margins of transcripts
types of coding:
semantic code- data derived
latent code- researcher driven (preconceived ideas)
172
visual mapping
useful when reviewing themes
condenses large amounts of data; makes sense of/tests the relationships of codes and themes; start to see links and patterns
facilitates the writing process + understanding for the reader
173
rigor
helps to make good research
term used to describe trustworthiness in research
can be judged in many ways and can helpfully be associated with methodological and theoretical robustness and the use of a systematic approach
174
box plot
to draw one you need to calculate:
median
lower quartile + upper quartile
interquartile range
lower fence = LQ - (1.5 x IQR)
upper fence = UQ + (1.5 x IQR)
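a sketch of the fence calculations, using the (n+1) quartile method from the percentile cards (data are made up):
```python
scores = sorted([3, 7, 8, 5, 12, 14, 21, 13, 18, 40])  # made-up data

def quartile(data, frac):
    # (n + 1) position method with linear interpolation
    pos = frac * (len(data) + 1)
    lo = int(pos)
    return data[lo - 1] + (pos - lo) * (data[lo] - data[lo - 1])

lq, uq = quartile(scores, 0.25), quartile(scores, 0.75)
iqr = uq - lq
print("lower fence =", lq - 1.5 * iqr)  # values below are outliers
print("upper fence =", uq + 1.5 * iqr)  # here 40 lies above the upper fence
```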