research methods Flashcards

1
Q

define aim

A

description of what you are researching and why

2
Q

define hypothesis

A

states the relationship between the variables and predicts the results

3
Q

define directional hypothesis

A

states the expected direction of the difference or correlation, usually based on previous research

4
Q

define non directional hypothesis

A

predicts there will be a difference in results but does not state its direction, as there is no previous research to suggest one

5
Q

define null hypothesis

A

predicts there will be no difference or relationship between the variables

6
Q

define IV

A

variable we change

7
Q

define DV

A

variable we measure

8
Q

define operationalisation of variables and how to do it

A

clearly defining the variables and stating how they will be measured by adding values and units

9
Q

define extraneous variables

A

variables other than the IV that may affect the DV if not controlled; they do not vary systematically with the IV

10
Q

define demand characteristics

A

clues that allow participants to guess the aim, leading them to change their behaviour to help or sabotage the experiment

11
Q

define social desirability

A

when participants try to please the researcher or to make themselves look better

12
Q

define the hawthorne effect

A

when participants know they are being studied and take an interest, they respond more positively, which leads to artificially high results

13
Q

define investigator/experimenter effects

A

experimenter unconsciously conveys to participant how they should behave

14
Q

examples of investigator effects

A

tone, accent, body language, leading questions

15
Q

define situational variables

A

aspects of the environment that may affect the participant's behaviour

16
Q

examples of situational variables

A

temperature, noise, authenticity of experiment

17
Q

define participant variables

A

the ways in which individual participants differ from one another and how this affects their results

18
Q

examples of participant variables

A

trauma, mood, intelligence, anxiety, gender, culture

19
Q

how can you control extraneous variables

A

single blind design
double blind design
experimental realism
randomisation
standardisation
controls

20
Q

define single blind design

A

participant is not aware of the research aims

21
Q

define double blind design

A

participant and experimenter are unaware of aim and hypothesis

22
Q

define experimental realism

A

the researcher makes the task so engaging that the participant forgets they are being observed

23
Q

define randomisation

A

randomly allocating tasks and roles to avoid bias

24
Q

define standardisation

A

the experience of the experiment is kept as near identical as possible for every participant

25
define confounding variable
variable other than the IV that has a direct effect on the DV and is related to the IV
26
define pilot studies
small-scale practice investigations that help identify potential problems before running the real experiment, saving time and money and helping to avoid floor and ceiling effects
27
define validity
the extent to which a study measures what it intends to measure
28
define internal validity
whether the effects observed are due to the IV and not another factor
29
define mundane realism
how realistic the task is
30
define external validity
how well the findings can be generalised to other people, places and times
31
define ecological validity
the extent to which the results reflect real life
32
define population validity
how well the sample can be generalised to represent the population as a whole
33
define temporal validity
the extent to which the findings remain valid across different periods of time
34
define face validity
the test/questionnaire looks like it measures what it intends to
35
define concurrent validity
whether the results correlate with those of an existing, well-established test that measures the same thing
36
how to improve validity
- **control group**: compare results with the experimental group to see whether changes are due to the IV
- **covert observations**: the participant doesn't know they are being watched, so behaviour is natural
- **questionnaires**: keep them anonymous so answers are more truthful
- **qualitative methods**: interviews have high ecological validity as they represent people more accurately
- standardise procedures and instructions
- use single blind or double blind designs
- assure participants that results are anonymous so they are truthful
- incorporate a lie scale to assess the consistency of responses
- triangulation: use of different sources of evidence
37
define experimental design
how the researcher decides to allocate participants to the conditions of the experiment
38
define a repeated measures experiment
same group of participants takes part in all conditions
39
advantages of repeated measures experiment
- no participant variables - fewer participants so more economical
40
disadvantages of repeated measures experiment
- order effects - demand characteristics
41
define order effects and what are the different types
effects of doing the same task twice: boredom, fatigue, practice
42
solutions to repeated measures experiment
counterbalancing and randomisation
43
define counterbalancing
when two groups do the tasks in a different order to cancel out order effects
44
define independent group design
different groups perform only one condition
45
advantages of independent group design
- no practice effects - reduces demand characteristics
46
disadvantages of independent group design
- needs more participants - participant variables between groups
47
solutions to independent group design
random allocation: each participant has an equal chance of being in either group, which helps avoid an imbalance of participant variables between the groups
48
define matched pairs design
pair up participants on a certain quality that is believed to affect the performance on the DV and their results are compared
49
advantages of matched pairs design
- participant variables reduced - no order effects
50
disadvantages of matched pairs design
- larger number of participants needed - difficult to match on characteristics like personality - difficult to know which variables are relevant
51
solutions to matched pair design
pilot study to help choose which variables are most important to match on
52
define ceiling effect
task is too easy so all the scores are high
53
define floor effect
task is too difficult so all scores are low
54
define construct validity
the extent to which a test captures a specific construct or trait; it overlaps with other aspects of validity
55
define experiment
a method in which an IV is changed so that its effect on the DV can be observed, aiming to establish a cause and effect relationship
56
define laboratory experiment
takes place in a carefully controlled lab where the IV is manipulated by the experimenter so the DV can be measured
57
pros of laboratory experiment
- extraneous variables are closely controlled so increases internal validity - easily repeated as it is controlled so increases reliability - shows cause and effect relationship
58
cons of laboratory experiment
- artificial nature so lacks ecological validity - know they are tested so may lead to demand characteristics - lacks mundane realism
59
define field experiment
conducted in natural setting where the IV is still manipulated so the DV can be measured
60
pros of field experiment
- higher mundane realism - naturalistic so high ecological validity - demand characteristics are less likely
61
cons of field experiment
- harder to control extraneous variables - ethical issues as the participants don't know they are being studied - harder to replicate - IV may be operationalised in a way that lacks mundane realism
62
define natural experiment
IV naturally occurs, and would take place even without the research taking place, and DV is then measured
63
pros of natural experiment
- high external validity - provides opportunities for research that would otherwise be impossible to carry out - reduced demand characteristics
64
cons of natural experiment
- less control over extraneous variables - very unlikely to be able to replicate - random allocation of participant not possible so there may be bias and lead to participant variables
65
define quasi experiment
participants are automatically assigned to a condition depending on their characteristics or features that don’t change
66
pros of quasi experiment
- controlled experiments so can be replicated - high ecological validity as you can compare to real life
67
cons of quasi experiment
- cannot randomly allocate so more chance of extraneous variables - demand characteristics as participants may become more aware - DV may be artificial, reducing ecological validity
68
define sampling
choosing a group of people to represent the target population
69
define target population
the population to which the researcher would like to generalise their results
70
define opportunity sample
using people who are available at the time of testing
71
define random sample
each member of the population has an equal chance of being selected like names in a hat or random generator
72
define stratified sample
subgroups are identified and participants are chosen at random from each group in proportion to target population
73
define volunteer sample
participants put themselves forward to take part
74
define systematic sample
using a system to select participants, e.g. every nth person
75
pros of opportunity sampling
- quick and cheap - can have face to face ethical debriefings
76
cons of opportunity sampling
- researcher bias as they choose who they want - depends on who is available; various factors limit who is free
77
pros of random sampling
- avoids bias - aims to be fair and representative
78
cons of random sampling
- impossible to have all names of the target population - doesn't guarantee full representation - time consuming
79
pros of stratified sampling
- highly representative so has population validity
80
cons of stratified sampling
- time consuming and difficult to gather
81
pros of volunteer sampling
- give their informed consent - will be interested and less likely to withdraw - large number may apply so it gives more accurate results and in depth analysis - helpful to find people who can be seen as atypical
82
cons of volunteer sampling
- biased as it is not representative of the whole population - hawthorne effect - demand characteristics
83
pros of systematic sampling
- normally representative
84
cons of systematic sampling
- may not be able to identify all members of the population - unexpected bias if the list has a pattern - choice of starting point and list type may be biased
85
what are the ethical issues
- informed consent - deception - right to withdraw - protection from harm - privacy and confidentiality
86
features of informed consent
- making participants aware of the aims of the research, procedures, risks, rights and what their data will be used for
- use consent forms
- under 16s need parental consent
- consent cannot be given by those under the influence
87
define presumptive consent
a similar group of people are told the details of the study and asked if it is acceptable; their answer is taken to stand for the answer of the actual participants
88
define prior general consent
participants give their permission to be deceived without knowing how
89
define retrospective consent
asking participants after they have taken part if their data can be used
90
limitations of informed consent
- may invalidate the purpose of the study - participants do not fully know what they are getting into - demand characteristics
91
features of deception
- BPS only allows when there is scientific justification and no alternative procedure - full debrief after to discuss concerns - cost-benefit analysis
92
limitations of deception
- cost-benefit decisions are flawed - debriefing can't turn back time - creates distrust in psychologists
93
features of right to withdraw
- enticed by financial incentives - fully informed consent so they know what they are doing and less likely to withdraw - volunteer samples as people are more eager and won’t withdraw
94
limitations of right to withdraw
- time consuming - participants may feel guilty about withdrawing - economic pressure because they are getting paid
95
features of protection from harm
- harm may be physical or psychological
- participants should be in the same state after the experiment as they were before
- no greater harm than they would experience in everyday life
- offer therapy and counselling at the end
- stop the study immediately if the participant is being harmed too much
96
limitations of protection from harm
- harm may not be apparent or obvious yet - don’t always know what will be harmful beforehand
97
features of privacy and confidentiality
- protected under the Data Protection Act - using code names and anonymity - deleting unnecessary data
98
limitations of privacy and confidentiality
participants may still be identified from a limited amount of information
99
define naturalistic observation
behaviour in natural situation or environment without any intervention
100
advantage of naturalistic observation
high external and ecological validity
101
disadvantages of naturalistic observation
- less control over extraneous variables - replication is difficult
102
define controlled observations
some variables are controlled by the researcher and participants are likely aware that they are being studied
103
advantages of controlled observation
- more control over extraneous variables - easy replication
104
disadvantage of controlled observation
- low ecological validity and mundane realism - demand characteristics
105
define overt observation
participants are aware they are being observed, but observers may try to be as unobtrusive as possible
106
advantages of overt observation
can inform participants and ask for consent
107
disadvantage of overt observation
- demand characteristics - social desirability bias
108
define covert observation
participants are unaware they are being observed
109
advantages of covert observation
- higher ecological validity - natural behaviour so high internal validity
110
disadvantages of covert observation
ethical concerns because they cannot give consent
111
define non participant observation
observer merely watches or listens to the behaviour of others without interacting with them
112
advantage of non participant observation
observer effects less likely
113
disadvantage of non participant observation
less insightful
114
define participant observation
observer is part of the group being observed
115
advantage of participant observation
more insightful
116
disadvantage of participant observation
- demand characteristics - lose objectivity
117
advantages of observations
- high validity - captures spontaneous and unexpected behaviour
118
disadvantages of observation
-observer bias - only observable behaviour can be recorded - hard to replicate
119
define inter-observer reliability
the extent to which several observers coding the same behaviour agree with each other
120
features of inter observer reliability
- observers should agree beforehand on the behavioural categories and their interpretations of them - carry out observations at the same time but in different places - reliability = total number of agreements / total number of observations
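The agreement formula on this card can be written as a few lines of code. A minimal Python sketch, not part of the original cards; the behavioural categories and scores are hypothetical:

```python
# Inter-observer reliability = total agreements / total observations.
def inter_observer_reliability(observer_a, observer_b):
    """Proportion of observations on which both observers agree."""
    agreements = sum(1 for a, b in zip(observer_a, observer_b) if a == b)
    return agreements / len(observer_a)

obs_a = ["play", "aggression", "play", "rest", "play", "rest", "aggression", "play"]
obs_b = ["play", "aggression", "rest", "rest", "play", "rest", "aggression", "play"]
print(inter_observer_reliability(obs_a, obs_b))  # 0.875 (7 agreements out of 8 observations)
```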
121
define unstructured observation
all relevant behaviour is recorded and no system is used
122
evaluation of unstructured observation
+ greater insight - observer bias, unnecessary behaviours noted (time wasting)
123
define event sampling
counting the number of times a certain behaviour occurs
124
evaluate event sampling
+ focuses on an event, can find averages - can't note abnormal behaviours, no indication of when it happened
125
define time sampling
recording behaviour at preset intervals of time
126
evaluate time sampling
+ frequencies within observation, more objective - may miss something, demand characteristics, observer bias, social desirability bias
127
define structured observations
use various systems to record behaviour
128
evaluate structured observations
+ smaller risk of observer bias - less insight as observers are noting frequencies of behaviour, so interesting behaviours go unrecorded
129
define behavioural categories and criteria of them
target behaviour is operationalised so it is more reliable and measurable
- objective: no inferences have to be made
- cover all possible behaviours, with no 'waste basket' category
- criteria shouldn't overlap
130
define self report techniques and why they are useful
participants give information about themselves, including their experiences, beliefs and feelings
131
types of closed questions
- likert scale: indicate agreement from strongly agree to strongly disagree
- ranked scale: rate from 1 to 10
- semantic differential scale: indicate where they fall between two extremes
- multiple choice: choose from a set of options
132
advantages of closed questions
easy to analyse
133
disadvantages of closed questions
- forced to pick an option that doesn't represent them - waste baskets
134
advantages of open questions
- more detail and can expand on answers - allow for unexpected answers
135
disadvantages of open questions
- worry about confidentiality - quantitative data not produced, so answers are harder to analyse
136
advantages of questionnaires
- can be distributed to large numbers cheaply and quickly - may be more willing to participate
137
disadvantages of questionnaires
- not accessible to all (requires literacy)
- social desirability bias
- leading questions, so response bias
- takes a long time to design
- participant bias
- sample not representative
- acquiescence bias (tendency to agree with statements)
138
criteria for questionnaire design
- easily analysed, so mostly closed questions
- free from bias and leading questions
- should be clear and avoid double negatives
- make the language understandable for all
- contain filler questions to reduce demand characteristics
- sequence questions sensibly
- avoid double-barrelled questions that ask more than one thing
- run a pilot study
139
define correlation
the strength and direction of the relationship between two co-variables
140
define positive correlation
as one co-variable increases, the other increases too
141
define negative correlation
as one co-variable increases, the other decreases
142
define no correlation
no relationship between the two variables
143
define intervening variables
another variable that has not been studied but may explain the relationship between the co-variables
144
define continuous variable
variable that can take on any value within a certain range and not categorised
145
explain correlation coefficients
- values fall between -1 and +1 - show the strength of the relationship between the co-variables - coefficients above 0.8 indicate a strong correlation and suggest the measure is reliable and valid
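To make the -1 to +1 range concrete, here is a minimal, illustrative Python sketch computing a Pearson coefficient by hand; the "hours of revision vs test score" co-variables are hypothetical, not from the cards:

```python
# Pearson's r: covariance of the deviations divided by the product of the spreads.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

hours = [1, 2, 3, 4, 5]        # hypothetical hours of revision
score = [50, 55, 62, 64, 70]   # hypothetical test scores
print(round(pearson_r(hours, score), 2))  # about 0.99 - a strong positive correlation
```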
146
define quantitative data
data in the form of numbers
147
strengths of quantitative data
- reliable - can be analysed statistically - easy to compare and analyse
148
weaknesses of quantitative data
- lacks detail - may oversimplify
149
define qualitative data
data in the form of words
150
strengths of qualitative data
-detailed
151
weaknesses of qualitative data
- subjective - unreliable - hard to compare - time consuming - researcher bias
152
define triangulation
use of a mixture of qualitative and quantitative data
153
define primary data
information observed and collected directly from first-hand experience, including designing and carrying out the study
154
strengths of primary data
- more reliable - can cater to your research
155
weaknesses of primary data
- time consuming and costly - may not have access to the groups and data you need - ethical considerations
156
define secondary data
information that was collected from other studies
157
benefits of secondary data
- use data from bigger samples - access to information you wouldn’t be able to reach - meta- analysis - quicker - objective and detached
158
weaknesses of secondary data
- may not be exactly what you are researching - may not understand research in detail - takes time to analyse - outdated - unreliable
159
define meta analysis
analysing results from many different studies to come up with general conclusions
160
benefits of meta analysis
- help to identify trends - increase sample size and reliability of findings
161
weaknesses of meta analysis
- publication bias like file drawer problem where researcher intentionally does not publish all the data - some research may contradict each other
162
what are the measures of central tendency and how do you work them out
- mean: add all the numbers together and divide by how many values there are
- median: put the numbers in order and find the middle number
- mode: the most common number
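A minimal sketch of the three measures using Python's standard library; the scores are hypothetical and include one extreme value to show why the mean can be skewed:

```python
from statistics import mean, median, mode

scores = [3, 5, 5, 6, 7, 9, 21]  # hypothetical data with one extreme value (21)

print(mean(scores))    # 8 - uses all values, but is pulled up by the extreme 21
print(median(scores))  # 6 - middle value once the scores are in order
print(mode(scores))    # 5 - most common value
```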
163
evaluation of mean
+considers all data, used for further calculations - can be skewed by extreme values and make it unrepresentative, can give unrealistically precise values that don’t work for discrete data
164
evaluation of median
+ will not be affected by extreme values - may not be representative, little further use
165
evaluation of mode
+ will not be affected by extreme values, makes more sense when presenting discrete values, easy to use - does not use all data, may have more than one mode, little further use
166
what are the dispersion techniques and how to work them out
- range: highest value minus lowest value
- standard deviation: the spread of scores around the mean
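A matching illustrative sketch of the two dispersion measures, reusing the same hypothetical scores as the central-tendency example above:

```python
from statistics import pstdev

scores = [3, 5, 5, 6, 7, 9, 21]

value_range = max(scores) - min(scores)  # highest minus lowest
print(value_range)                       # 18
print(round(pstdev(scores), 2))          # about 5.58 - spread of scores around the mean
```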
167
evaluation of range
+ can see consistency, easy to calculate - affected by extreme values, fails to reflect the distribution, does not take account of values in the middle
168
evaluation of standard deviation
+ precise measure where all values are taken into account - difficult to calculate, affected by extreme values
169
define longitudinal studies
studies conducted over a long period of time to observe long-term effects within the same individuals
170
evaluation of longitudinal studies
+ in depth, reduces participant variables - extraneous variables, people might drop out
171
define cross sectional studies
group of participants are compared to another group at the same point in time
172
evaluation of cross sectional studies
+ efficient, more control over experiment - participant variables
173
define cross cultural studies
compare behaviours in different cultures
174
how to display quantitative data
- table - line graph - histogram - bar chart - scattergram - pie chart
175
features of tables
- clearly present data and show any patterns - raw data tables show scores before analysis
176
features of line graphs
- can show more than one set of data - continuous data in list form - independent on x and dependent on y - join each point up - see trends over time
177
features of histograms
- continuous scale - uses class intervals - columns touch each other - frequencies of scores
178
features of bar charts
- non- continuous data - columns do not touch
179
features of scattergrams
- show relationship and correlation between two variables - continuous data on both axes - draw a line of best fit
180
features of pie chart
- sectors of a circle to show proportion
181
define content analysis
quantifying qualitative data through the use of coding units
182
features of content analysis
- indirect form of observation as you analyse artefacts people have produced - put into categories or typologies, quotations and summaries
183
what sampling method in content analysis
a systematic approach: analysing every nth item of content
184
how to carry out coding in content analysis
1. watch/read the sample and identify potential categories which have emerged
2. compare categories/coding units with another psychologist and use the ones they have agreed upon
3. give examples of the categories they would be looking for and operationalise them
4. carry out the content analysis separately, counting the number of examples that fall into each category
5. compare examples to look for agreement
185
define thematic analysis
recurring themes identified during coding and are described further in greater detail, perhaps by conducting further analysis
186
define test-retest reliability
conduct the content analysis and then recode them at a later date and compare the two sets of data
187
evaluation of content analysis
+ easy to perform, non-invasive and ethical, high ecological validity, easily repeated and reliable - observer bias, subjective, non-descriptive, cultural bias, may not have ecological validity compared to real life, choice of content can be biased
188
define case study
in depth investigations of a single person, group of people or event
189
features of a case study
- represents thoughts, emotions, experiences and abilities - longitudinal- follow over an extended period of time - qualitative data like interviews, observations and questionnaires - examples like HM and KF
190
evaluation of case studies
+ rich detail, help construct theories, help study the unusual - hard to generalise, ethical issues like confidentiality and psychological harm, objectivity, past records may be biased or incomplete, hard to establish cause and effect
191
how to assess validity
face validity or concurrent validity
192
define reliability
measure of consistency
193
how to assess reliability
- test-retest reliability: administering the same test or questionnaire to the same person on different occasions, with a 2-week time frame - inter-observer reliability: assess observations by comparing observers' codings
194
how to improve reliability
- questionnaires can be rewritten so they aren’t ambiguous - use same interviewer - properly trained interviewers - no leading or ambiguous questions - structured interviews - operationalised behavioural categories that do not overlap
195
define a normal distribution curve
symmetrical pattern of data that creates a bell-shaped curve; all measures of central tendency are the same or similar and sit in the middle
196
what is a positively skewed distribution
data is concentrated to the left with a long tail to the right (floor effect) and the mean is higher than the median and mode
197
what is a negatively skewed distribution
data is concentrated to the right with a long tail to the left (ceiling effect) and the mean is lower than the median and mode
198
what is nominal data
data is in separate categories
199
what is ordinal data
data is ordered in some way but the scores do not use standardised scales
200
what is interval data
data that is measured using equal intervals and can go into minuses
201
what is ratio data
data measured with equal intervals but cannot go into minuses
202
define statistical testing
provides a way of determining whether a hypothesis should be accepted or rejected
203
factors affecting choice of statistical test
- level of measurement: nominal, ordinal, interval or ratio
- type of test: difference or correlation
- design: related (repeated measures/matched pairs) or unrelated (independent groups)
204
test for unrelated, nominal data
chi squared
205
test for related, nominal data
sign test
206
test for unrelated, ordinal data
mann whitney
207
test for related, ordinal data
wilcoxon
208
test for correlated, ordinal data
spearmans
209
test for unrelated, interval data
unrelated t test
210
test for related, interval data
related t test
211
test for correlated, interval
pearson’s
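The eight test-choice cards above form a small lookup table. A minimal Python sketch of that decision; the keys and labels are this deck's own wording, not any statistics library's API:

```python
# Statistical test lookup keyed by (level of measurement, design).
STAT_TESTS = {
    ("nominal", "unrelated"):    "chi squared",
    ("nominal", "related"):      "sign test",
    ("ordinal", "unrelated"):    "mann whitney",
    ("ordinal", "related"):      "wilcoxon",
    ("ordinal", "correlation"):  "spearman's",
    ("interval", "unrelated"):   "unrelated t test",
    ("interval", "related"):     "related t test",
    ("interval", "correlation"): "pearson's",
}

print(STAT_TESTS[("ordinal", "related")])  # wilcoxon
```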
212
how to conduct a sign test
1. state the hypothesis
2. find out if each participant's score increased, decreased or stayed the same
3. find the S value: the number of participants with the less frequent sign
4. find the N value: the number of participants who showed a change in either direction (ignore those with no change)
5. check the result in the statistical table to find the critical value
6. state a conclusion
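A minimal illustrative sketch of those steps in Python; the before/after scores are hypothetical:

```python
before = [12, 15, 9, 14, 10, 11, 16, 13]
after  = [14, 18, 9, 17, 12, 10, 19, 15]

signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")
    elif a < b:
        signs.append("-")
    # participants whose score stayed the same are ignored

n = len(signs)                               # number of participants whose score changed
s = min(signs.count("+"), signs.count("-"))  # frequency of the less common sign

print(n, s)  # 7 1 - compare s with the critical value for this n from the statistical table
```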
213
how do you know if your results are significant or not
the S value must be equal to or less than the critical value for the result to be significant, in which case you reject the null hypothesis; otherwise it is not significant and you accept the null
214
what is peer review
a way of assessing the scientific credibility of a research paper by other psychologists who work in a similar field
215
ways of conducting a peer review
- single blind: names of reviewers are not revealed
- double blind: both reviewers and researchers are anonymous
- open: both reviewers and researchers are known to each other
216
purpose of peer review
- allocation of research funding - publication of research in journals and books - assess the research rating of a university
217
evaluation of peer review strengths
+ check validity of research and determine how important it is + anonymity means reviewers can be honest
218
evaluation of peer review weaknesses
- appropriate experts may not conduct the review
- may be biased towards prestigious researchers
- anonymity may mean some reviewers are too harsh or critical
- potential for research to be stolen, affecting social relationships
- publication bias, where only positive results are published
- can be misleading: once published it is in the public domain even if it is wrong, like the MMR link to autism
- prevents progress as radical new ideas are often overlooked
- takes a long time, so research may become out of date
219
what does p<=0.05 mean
5% likelihood of the results occurring by chance, so you can be 95% certain your results are significant
220
what is a type 1 error
when the experimental hypothesis is accepted but the results were actually chance findings, so the null should have been accepted (usually due to too lenient a significance level)
221
what is type 2 error
when the null hypothesis is accepted but there was a real difference (usually due to too strict a significance level)
222
what to remember when completing other statistical tests
- independent groups designs have two N values
- N is the number of participants
- for correlation tests, ignore the sign of the coefficient and focus on the magnitude
- for t-tests and Pearson's, degrees of freedom (df) = N - 2
- for chi-squared, degrees of freedom (df) = (number of rows - 1) x (number of columns - 1)
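A minimal worked example of the degrees-of-freedom rules above; the participant count and table size are hypothetical:

```python
n_participants = 20
df_t_or_pearson = n_participants - 2      # 18
rows, cols = 2, 3                         # e.g. a 2 x 3 chi-squared contingency table
df_chi_squared = (rows - 1) * (cols - 1)  # 2
print(df_t_or_pearson, df_chi_squared)    # 18 2
```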
223
how are the different methods of central tendency and dispersion useful when designing a study
- mean- cannot be used with nominal data - median- appropriate for ordinal data - mode- only method that can be used for nominal data - range- useful for ordinal data - SD- best used with mean to describe interval/ratio data that is normally distributed
224
tips when designing a study
- stick to what they ask you to do - use bullet points as subheadings - correct terminology - justify why you have chosen to do what you suggest - suggestions must be well thought out, practical and ethical - link everything to study - plan
225
sections of a psychology report
- title - abstract - introduction - method - results - discussion - references - appendices
226
what is a title in a psychology report
short clear description of study
227
what is in the abstract of a report
- summary of study including aims, hypothesis, method and results - allows reader to determine if the report is worth reading
228
what is in the introduction of a report
- review of previous research that is relevant - start general and become more specific (funnel) - end by stating the aims and hypothesis
229
what’s in the method of a report
- detailed description of what the researcher did - in enough detail that it could be replicated - design, participants, materials, procedures, ethics
230
what is in the results of a study
- what the researcher found - descriptive statistics: graphs and measures of central tendency and dispersion - inferential statistics with significance levels and a statement of whether the hypothesis is rejected or accepted
231
what is in the discussion of a study
- interpret the results - relationship to previous research - strengths and weaknesses of the methodology - implications for theories and real world application - suggestions for future research - contribution to research in the current field
232
how to write references for a journal
last name, first initial. middle initial. (year). title of journal article. name of journal, volume number (issue number), page numbers.
233
how to references for a book printed
last name, first initial. middle initial. (year). book title. place of publication: publisher.
234
how to write reference for book online
last name, first initial. middle initial. (year). book title. retrieved from URL
235
how to write reference for website
last name, first initial. middle initial. (year, month date published). article title. retrieved from URL
236
what is in the appendix of a study
supporting material like raw data
237
features of science
- empirical method: gaining knowledge through direct observation or testing rather than unfounded beliefs
- objectivity: empirical data should not be affected by bias
- replicability: the ability to repeat research to check the validity of results
- theory construction: explanations or theories must be constructed to make sense of facts
- hypothesis testing: hypotheses are tested to see if results are significant
- falsifiability: it must be possible to prove a hypothesis wrong
238
define theory
collection of general principles that explain observations and facts
239
define inductive research
begins with research question which helps form hypotheses and theory
240
define deductive research
research is theory driven which guides data collection
241
define a paradigm and paradigm shift
- shared set of assumptions about the subject matter of a discipline - a paradigm shift is when a new minority idea becomes accepted - Kuhn argues something is a science only if it has a paradigm