Final Exam Study Guide Flashcards

(191 cards)

1
Q

what is a population

A

group who you are interested in learning something about

2
Q

group who you are interested in learning something about

A

Population

3
Q

what is a sample

A

small group of people from the target population (TP) included in the actual study (study participants)

4
Q

small group of people from the target population (TP) included in the actual study (study participants)

A

sample

5
Q

where can you find the sample information

A

methods section

6
Q

what is the sample size and what is an appropriate size

A

how many people are being studied; red flag: < 20 people
A small sample leads to inconclusive results, or random chance dominates the outcome

7
Q

what is a representative sample

A

reflects characteristics of TP
Provides generalizability
Similar to the TP

8
Q

reflects characteristics of TP
Provides generalizability
Similar to the TP

A

representative sample

9
Q

what is a power analysis

A

a calculation of the sample size needed to detect an effect; evidence that the authors considered sample size when designing their study

10
Q

evidence that the authors of a research manuscript considered sample size when designing their study

A

power analysis

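As a concrete illustration of what a power analysis does (not from the cards): given an expected effect size, an alpha level, and a desired power, it returns the sample size needed. Below is a minimal sketch, assuming Python with the statsmodels library and a simple two-group t-test design; the numeric values are illustrative placeholders.

```python
# Minimal power-analysis sketch (illustrative values): solve for the
# per-group sample size needed to detect a medium effect with an
# independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed effect size (Cohen's d)
    alpha=0.05,        # significance level
    power=0.80,        # desired probability of detecting the effect
)
print(f"required sample size per group: {n_per_group:.0f}")
```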
11
Q

what is generalizability influenced by

A

sampling bias & inclusion/exclusion criteria

12
Q

what has an impact on generalizability

A

sampling bias, sampling error, small sample size

13
Q

what is generalizability closely tied to

A

external validity

14
Q

what is external validity

A

extent to which the findings can be applied to broader populations

15
Q

what is a non-representative sample

A

findings are only relevant to that sample & not the TP
Also known as biased samples; over- and under-estimate certain population attributes; impacts overall validity

16
Q

findings are only relevant to that sample & not the TP
Also known as biased samples; over- and under-estimate certain population attributes; impacts overall validity

A

non representative sample

17
Q

How to know if sample represents TP

A

the sample has a profile similar to that of the TP

18
Q

what is sampling

A

choosing sub-group to represent TP
Process used to pool the sample from the TP
How the sample was chosen

19
Q

what is sampling error

A

difference between the makeup of the study sample and that of the population of interest
Happens naturally

20
Q

what is sampling error influenced by

A

sample size & variability in TP

21
Q

what reduces sampling error

A

larger sample

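Why a larger sample helps (a standard statistical result, not stated on the cards): the expected sampling error of a sample mean is its standard error, which shrinks with the square root of the sample size and grows with the variability in the target population.

\[ SE_{\bar{x}} = \frac{\sigma}{\sqrt{n}} \]

Here \(\sigma\) is the variability in the target population and \(n\) is the sample size, so quadrupling \(n\) roughly halves the sampling error.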
22
Q

what is sampling bias

A

certain members are more/less likely to be included in a sample than others

Meaningful differences between non-participants and participants

23
Q

Meaningful differences between non-participants and participants

A

sampling bias

24
Q

how to determine sampling bias

A

Look at how participants were chosen, whether the demographics are broad, and whether any groups were explicitly excluded

25
what is a CONSORT diagram
insight into how many potential participants actually enrolled & completed the study
26
insight into how many potential participants actually enrolled & completed the study
CONSORT diagram
27
Used to record the study outcomes; indicate whether the intervention is effective or not; must be reliable & valid
measurements
28
what are measurement scales
quantify, categorize, analyze & report data
29
what are the measurement scales
ratio, interval, ordinal, nominal
30
describe ratio
cannot have negative numbers, true zero, equal intervals between units, allows for math (+, -, x, /); ex: duration, SPL, grip strength, height, weight
31
describe interval
can have negative numbers, no true zero, equal intervals between units, allows for math; ex: range of motion, bone-conduction (BC) thresholds, temperature
32
describe ordinal
inherent ranking/ordering; the numbers indicate order only, not magnitude; ex: order of finishing a race, agreement on a satisfaction survey, Likert scale
33
describe nominal
categorize/label variables; the numbers have no value and no order; ex: 1 = female, 2 = male, or other category labels such as dead/alive
34
describe binary
only allows for two values; ex: below 50 vs. above 50, or yes/no
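The scale hierarchy is easiest to see with a quick conversion example (a sketch with made-up grip-strength values and cutoffs, not from the cards): start with ratio data, collapse it to ordinal categories, then to binary.

```python
# Sketch: converting a ratio-scale measure (grip strength in kg) down to an
# ordinal scale and then a binary scale. The 20 kg / 35 kg cutoffs and the
# data are made up for illustration.
grip_kg = [12.5, 28.0, 41.3, 55.7]        # ratio: true zero, math allowed

def to_ordinal(kg):
    # 1 = weak, 2 = normal, 3 = strong -- numbers now carry rank only
    if kg < 20:
        return 1
    elif kg < 35:
        return 2
    return 3

ordinal = [to_ordinal(kg) for kg in grip_kg]

# binary: only two possible values (e.g., cleared vs. not cleared)
binary = ["cleared" if score >= 2 else "not cleared" for score in ordinal]

print(ordinal)   # [1, 2, 3, 3]
print(binary)    # ['not cleared', 'cleared', 'cleared', 'cleared']
```

Each conversion throws information away, which is why ratio is the most informative scale and nominal the least.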
35
what are the 3 sources of measurement error, & how do you fix them
the instrument (fix: calibration), the person/rater (fix: training; error arises when unfamiliar with the instrument), the variable itself (inherent instability - think BP variability; fix: take multiple measures)
36
what are the types of errors
systematic & random
37
describe systematic errors
occur in one direction & with constant magnitude; graphs as a straight-line offset
38
describe random errors
unpredictable, due to chance; graph is random
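A toy simulation makes the distinction concrete (made-up numbers, not from the cards): a systematic error shifts every reading in the same direction by the same amount, while random error scatters unpredictably around the true value.

```python
# Sketch: systematic vs. random measurement error around a true value of 100.
import random

true_value = 100.0
systematic = [true_value + 10 for _ in range(5)]                    # constant +10 bias
random.seed(0)
random_err = [true_value + random.gauss(0, 10) for _ in range(5)]   # chance scatter

print(systematic)   # every reading is high by exactly 10
print(random_err)   # readings bounce above and below the true value
```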
39
concerned with degree of random error in measurement process
reliability
40
When assessing reliability, you are asking if measures are
dependable, reproducible, & whether measuring more than once gives the same result
41
what are the types of reliability
test-retest; rater (intra- & inter-rater); internal consistency
42
describe test retest reliability
measure repeatedly & compare results; reflects error due to the instrument
43
what are the rater reliabilities
inter and intra
44
describe intra-rater reliability
one rater measures the same construct multiple times; do they get the same answer?
45
describe inter-rater reliability
multiple raters, calibrated to measure the same thing; do they get the same result?
46
describe internal consistency
relates to questionnaires; the reliability among a set of items measuring the same concept (depression, pain, anxiety, happiness, etc.)
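Internal consistency is usually reported as Cronbach's alpha. Below is a minimal sketch of the standard calculation with a tiny made-up questionnaire (4 respondents x 3 items); the data are purely illustrative and not from the cards.

```python
# Sketch: Cronbach's alpha for internal consistency (made-up item scores).
import numpy as np

items = np.array([   # rows = respondents, columns = items on the same concept
    [3, 4, 3],
    [2, 2, 1],
    [5, 4, 5],
    [1, 2, 2],
])

k = items.shape[1]                               # number of items
sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```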
47
what are good and bad reliability coefficients
< 0.5 poor; 0.5-0.75 moderate; > 0.75-1.0 good
Cannot be negative (below 0)
Acceptable values are context specific (psychological, motion/ROM, physiological, chemical measures) - look at previous literature
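A rough sketch of where a reliability coefficient comes from, using made-up test-retest data. A simple Pearson correlation is used here just to show the idea (in practice an ICC is often reported); the cutoff bands above come from the card.

```python
# Sketch: test-retest reliability as a correlation between two trials
# (hypothetical data).
from scipy.stats import pearsonr

trial_1 = [22, 30, 28, 35, 40, 25]   # e.g., grip strength on day 1
trial_2 = [23, 29, 27, 36, 41, 24]   # same subjects remeasured on day 2

r, p = pearsonr(trial_1, trial_2)
print(f"reliability coefficient r = {r:.2f}")   # interpret with the bands above
```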
48
A measure with perfect agreement will never demonstrate a perfect correlation
False - a measure with perfect agreement will always demonstrate a perfect correlation
49
A perfectly reliable instrument means the observed measurement is the true measurement
true
50
what is validity
is the instrument measuring what it is intended to measure
51
what are the types of validity
face, content, construct, criterion
52
what is face validity
the instrument appears to test what it is supposed to; the weakest form of validity
53
the instrument appears to test what it is supposed to; the weakest form of validity
face validity
54
what is content validity
the items making up the instrument adequately sample the universe of content that defines the variable being measured; useful with questionnaires & inventories. Experts subjectively determine if the instrument captures all domains of the construct.
55
the items making up the instrument adequately sample the universe of content that defines the variable being measured; useful with questionnaires & inventories. Experts subjectively determine if the instrument captures all domains of the construct.
content validity
56
what is construct validity
establishes the ability of an instrument to measure an abstract construct & the degree to which it reflects the theoretical components of that construct (QOL, depression, burnout, etc.)
57
establishes the ability of an instrument to measure an abstract construct & the degree to which it reflects the theoretical components of that construct (QOL, depression, burnout, etc.)
construct validity
58
types of construct
convergent & discriminant
59
describe convergent construct validity
tests intended to measure the same thing actually do so. Compare your outcome measure to an established outcome measure of the same construct - results should be similar. One test is already established & you compare your new one to it.
60
tests intended to measure the same thing actually do so. Compare your outcome measure to an established outcome measure of the same construct - results should be similar. One test is already established & you compare your new one to it.
convergent construct validity
61
describe discriminant construct validity
tests measuring different things actually do so (e.g., separating depression from anxiety). Take an already established outcome measure that assesses another, related construct and compare your outcomes to it. Scores should differ because they are measuring different constructs (discriminates between the two).
62
tests measuring different things actually do so (e.g., separating depression from anxiety). Take an already established outcome measure that assesses another, related construct and compare your outcomes to it. Scores should differ because they are measuring different constructs (discriminates between the two).
discriminant construct validity
63
what is criterion validity
assesses the ability of one test to predict the results obtained from another (criterion) test; the most practical & objective form. Indicates the outcome of one instrument (the target test) can be used as a substitute measure for an established gold-standard criterion test.
64
assesses the ability of one test to predict the results obtained from another (criterion) test; the most practical & objective form. Indicates the outcome of one instrument (the target test) can be used as a substitute measure for an established gold-standard criterion test.
criterion validity
65
what are the types of criterion validity
concurrent & predictive
66
describe concurrent criterion validity
compares a less established/new instrument to the gold standard measuring the same thing, with both instruments administered at the same time
67
describe predictive criterion validity
correlates a measure with a future outcome. Ex: GRE test scores predicting success in grad school, or science GPA predicting future performance
68
correlates a measure with a future outcome. Ex: GRE test scores predicting success in grad school, or science GPA predicting future performance
predictive criterion validity
69
compares a less established/new instrument to the gold standard measuring the same thing, with both instruments administered at the same time
concurrent criterion validity
70
Questions to ask in Instrumentation or Outcomes section
What are the DVs of the study?
How is each variable operationalized?
What is the potential range of values/scores for each instrument & how are they interpreted?
Is there evidence of instrument reliability & validity?
71
what is the target population
group to whom you can generalize your results
72
how do you operationalize the target population
Inclusion - characteristics of the population you are interested in studying; characteristics participants must have to qualify for the study
Exclusion - identifies attributes that would confound study findings
73
what is the inclusion criteria
characteristics of the population you are interested in studying; characteristics they have to qualify for the study
74
what is the exclusion criteria
identify attributes that confound study findings
75
what is the independent variable
the variable being manipulated; differs between groups; typically identified by the levels of the independent variable; always described as a construct (treatment or intervention)
76
how to operationalize the IV
describe enough detail to distinguish between the levels of the IV & to replicate the same procedures in another study
77
what is the dependent variable
the outcome of the study; the variable in which you expect to see a difference between groups
78
how do you operationalize the DV
describe how it is being measured: what measurement tool will be used to measure the variable, and describe the measurement procedures in enough detail to replicate the measurement
79
what is internal validity
Are the results caused by the thing you are testing, or is something else influencing them? The accuracy of the study: did it show that the IV, and only the IV, caused the results?
80
what is external validity
the degree to which the results of a study can be generalized to the larger target population; application to other situations; how you can relate the study to the outside world. Can we generalize results beyond the sample? Tends to decrease as internal validity increases.
81
Which of the following can directly influence the generalizability of a study? Sampling bias Scales of measurements Inclusion and exclusion criteria All of the above Sampling bias and inclusion/exclusion criteria
Sampling bias and inclusion/exclusion criteria
82
When recruiting subjects, you should consider whether non-participants (those who decline to be included in the study) differ from participants (those who agree to be included in the study) in any meaningful way. If there is a meaningful difference between non-participants and participants, this may indicate: Sampling bias A strict exclusion criterion The need to stratify your sample None of the above
Sampling bias
83
The study sample will be described in the _________ section of a manuscript. Discussion Abstract Introduction Methods
Methods
84
The generalizability of a study is closely tied to: Internal validity Statistical validity The p-value Operationalization of the dependent variable External validity
External validity
85
A representative sample is a sample that reflects: Characteristics of the target population The researchers' ability to access the target population Sample size All of the above
Characteristics of the target population
86
Sampling error refers to: A respondent's adoption of a meaningless response pattern, such as "a,b,c. . .a,b,c. . a,b,c." The number of groups in the study Critique provided by an expert panel The difference between the makeup of the study sample and that of the population of interest Within group variability
The difference between the makeup of the study sample and that of the population of interest
87
In order to determine if the authors of a research manuscript considered sample size when designing their study, you should look for evidence of: Table 1 A power analysis A statistically significant effect Effect size calculations
a power analysis
88
A ___________ will provide insight into how many potential participants were actually enrolled in, and completed the study. Copyright issues Table 1 Power analysis Significant result CONSORT diagram
CONSORT diagram
89
A subset of individuals from the larger group of individuals we want to learn something about is called the Sampling Sample Population Independent variable
Sample
90
In most cases, it is impractical to study an entire population because (check all that apply): It would be cost prohibitive It would be time intensive It would be too efficient It would be difficult to identify everyone in a given population
It would be cost prohibitive It would be time intensive It would be difficult to identify everyone in a given population
91
A sample that is not representative of the population (check all that apply): Can underestimate certain population attributes being studied Can overestimate certain population attributes being studied Is also known as a biased sample Will impact the overall validity of a study
Can underestimate certain population attributes being studied Can overestimate certain population attributes being studied Is also known as a biased sample Will impact the overall validity of a study
92
Generalizability is dependent upon A representative sample The independent variable A sample that reflects the relevant variables of a population A sample that reflects the relevant characteristics of a population The dependent variable
A representative sample A sample that reflects the relevant variables of a population A sample that reflects the relevant characteristics of a population
93
Which of the following can impact the generalizability of study findings? Sampling error Sampling bias Small sample size All of the above
all
94
Measurements are an essential component of any research study.
t
95
If you cannot trust the study measurements, you cannot trust the study findings.
t
96
When critically reviewing measurements from a study, you are primarily scrutinizing the measurement's: Validity Randomization Sampling methods Reliability
validity reliability
97
Which of the following measurement scales would always allow you to implement mathematical operations? Ordinal Ratio Nominal Interval
ratio interval
98
Which of the following correctly ranks the measurement scales from most informative to least informative? Ratio, interval, ordinal, nominal Interval, nominal, ordinal, ratio Nominal, ordinal, interval, ratio Ratio, ordinal, interval, nominal
Ratio, interval, ordinal, nominal
99
I measured my patients’ handgrip strength using a dynamometer. I record the measurements in kilograms then classify each patient as 1=weak, 2=normal, 3=strong. I have converted my data from a(n) _____________ scale to a(n) ______________ scale. Interval, ordinal Ratio, interval Interval, nominal Ratio, ordinal
ratio ordinal
100
I measured my patients’ handgrip strength using a dynamometer. I record the measurements as 1=weak, 2=normal, 3=strong, then classify patients as 1=cleared for participation, 2=not cleared. I have converted my data from a(n) _____________ scale to a(n) ______________ scale. Interval, nominal Nominal, ordinal Ordinal, nominal Ratio, ordinal
ordinal nominal
101
Primary sources of error include: The variable The instrument The rater All of the above
all
102
A _______ error will ALWAYS be in the same direction and in the same magnitude. Random Rater Response variable Systematic
systematic
103
When determining the reliability of a measure, you are likely asking: How is the dependent variable operationalized? Are the measurements reproducible? Are the measurements dependable? If the variable is measured more than once, will you get the same result? How is the independent variable operationalized?
Are the measurements reproducible? Are the measurements dependable? If the variable is measured more than once, will you get the same result?
104
Acceptable intra- AND inter-rater reliability should always be established prior to data collection.
t
105
____________ reliability is concerned with the stability of data recorded by one rater across two or more trials.
intra
106
Raters and response variables are not considered during _______________ reliability. Internal consistency Intra-rater Inter-rater Test-retest
test retest
107
Which of the following is true? A measure with perfect agreement will always demonstrate a perfect correlation A perfectly correlated measure will always demonstrate perfect agreement A perfectly correlated measure will never demonstrate perfect agreement A measure with perfect agreement will never demonstrate a perfect correlation
A measure with perfect agreement will always demonstrate a perfect correlation
108
When an instrument is perfectly reliable: The true measurement is random error plus systematic error The observed measurement is the true measurement The observed measurement is the true measurement plus random error The true measurement is the observed measurement plus systematic error
The observed measurement is the true measurement
109
If you, as a clinician, review an instrument and make a subjective judgment call that the instrument measures what it intends to measure, you are determining: Concurrent validity Face validity Convergent validity Content validity
face
110
If, during the development phase of an instrument, a researcher asks experts (eg, researchers, clinicians, patient) to subjectively determine if an instrument captures all domains of a construct, the researcher is aiming to establish: Concurrent validity Face validity Convergent validity Content validity
content
111
Which of the following are classified as types of construct validity? Convergent Concurrent Predictive Discriminant
convergent & discriminant
112
Which of the following are classified as types of criterion validity? Convergent Concurrent Predictive Discriminant
concurrent predictive
113
If I measured my subjects' activities of daily living using a scale in which the total score consisted of the sum of their endorsements of 20 items, but chose to report the percentage of subjects who scored below 50 and those who scored above 50, I would be converting my data from a(n) _____ scale to a(n) _____ scale. nominal, ordinal ratio, binary interval, ratio ordinal, ratio interval, ordinal
ratio binary
114
Authors of a research manuscript conventionally provide evidence of a scale's reliability or validity by citing a(n): Paper in which the scale was used previously Validation article Personal communication from the scale's developer Analysis conducted during the current study
validation article
115
You are interested in studying the effects of prolonged exposure to loud noises on hearing ability. At the end of your study, you realize that your audiogram underestimated hearing threshold by exactly 10 decibels for all study subjects. This means that your hearing ability measures are: Reliable and valid Not reliable but valid Reliable but not valid Neither reliable nor valid
reliable but not valid
116
You are interested in studying the effects of caffeine (low dose vs. high dose) on heart rate. For your study, you have operationalized heart rate as 1=below normal, 2=normal, 3=above normal. Which of the following mathematical operation(s) are permissible on your dependent variable? Multiplication Addition Counting None of the above
counting
117
You are interested in studying the effects of caffeine (low dose vs. high dose) on heart rate. For your study, you have operationalized heart rate as average number of beats per minute during a 5-minute session. Which of the following mathematical operation(s) are permissible on your dependent variable? Counting Multiplication Average Division All of these operations
all
118
I read a manuscript that reported developing a scale I want to use. The authors reported that the scale is valid. How do I determine if it is reliable? Find another development article for the scale Assume that the scale is reliable because it has been shown to be valid Find another article on the scale’s reliability Find another scale
Assume that the scale is reliable because it has been shown to be valid
119
The developers of the GRE argue that it is capable of predicting performance in graduate school, but the administration at Harvard claims that it does not because it is biased. They are arguing about: Face validity Content validity Reliability Criterion validity Discriminant validity
criterion
120
One version of the SF-36 quality-of-life scale has been validated for acute stroke patients. The authors of this scale noted that the population they used for validation had suffered a stroke within the last 60 days, and were more than 65 years of age. It is permissible to use this scale for: Patients who suffered a stroke within the last 60 days Patients who are more than 65 years of age Patients who suffered a stroke within the last 60 days, and are more than 65 years of age Stoke patients Patients over 65 years of age
Patients who suffered a stroke within the last 60 days, and are more than 65 years of age
121
The quality of a manuscript is best evaluated: Either awful or great On a continuum Black or white Either yes or no Blue or red
continuum
122
The authors of a research manuscript used the Neurobehavioural Functional Inventory, but report no validation information for this scale. You are interested in potentially using the results of their study to modify your practice. What should you do? Search for a validation article for the Neurobehavioural Functional Inventory Ignore the authors’ study because they were too stupid to include validation information Assume that the Neurobehavioural Functional Inventory is valid, and go forward Validate the Neurobehavioural Functional Inventory yourself
Search for a validation article for the Neurobehavioural Functional Inventory
123
Which of the following are primary components to evidence-based practice? Clinical expertise Patient values Reliable measurements Best available evidence
Clinical expertise Patient values Best available evidence
124
Primary research articles collect data from: The participants Published research articles The peer-reviewers Textbooks
participants
125
Secondary research articles collect data from: The participants Published research articles The peer-reviewers Textbooks
published research articles
126
Systematic reviews and meta-analyses can be very useful for busy clinicians because the authors do all of the hard work for you including Completing a comprehensive search of the literature to find relevant articles Critically reviewing all of the relevant articles Incorporating the evidence into your patient care Offering a concise summary of the findings from the articles
Completing a comprehensive search of the literature to find relevant articles Critically reviewing all of the relevant articles Offering a concise summary of the findings from the articles
127
The basic structure and sections of the methodology of systematic reviews and meta-analyses include the: Search strategy Selection criteria Assessment of study quality Synthesis of the evidence
Search strategy Selection criteria Assessment of study quality Synthesis of the evidence
128
All systematic reviews and meta-analyses are of good quality.
f
129
how certain are we that manipulation of the IV caused the change in the DV
(risk of) bias
130
if the authors can convince you that manipulation of the IV resulted in the change in the DV, the study gets low risk-of-bias scores in most categories
true
131
Primary components to evidence-based practice
Clinical expertise, patient values, & best available evidence
132
Primary research methodology (randomized controlled trials, cohort studies, etc.)
Study design, participants, procedures, study/main outcomes; data is collected from the participants
133
Secondary research articles methodology (systematic & meta-analysis)
Search strategy, selection criteria, assessment of study quality, synthesis of evidence
134
Review of databases & sources used to identify relevant articles
search strategy
135
Review of the inclusion & exclusion criteria of the articles
selection criteria
136
Ensures high-quality studies (e.g., well designed) are weighted more in synthesis of findings
assessment of study quality
137
Stage at which investigators determine if a meta-analysis is appropriate
synthesis of evidence
138
A meta-analysis is pursued if the included studies are
similar enough
139
Can also use _________ to determine if a meta-analysis is appropriate
statistical measures of heterogeneity
140
If I-squared (I²) is _____, you can consider a meta-analysis
< 0.6 (less than 60% heterogeneity)
141
Uses quantitative methods to summarize current status of evidence by pooling results of similar studies asking the same question
meta-analysis
142
what is the purpose of i squared
describes the % of variation across studies in a meta-analysis that can be attributed to heterogeneity beyond chance
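For reference, the usual definition of I² (a standard formula, not spelled out on the card), where Q is Cochran's heterogeneity statistic and k is the number of pooled studies:

\[ I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\% \]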
143
what is a forest plot
graphical representation displaying the individual and pooled results of the included studies
144
what is the 95% CI
Estimates what would likely happen if the entire population were included in the study; a narrower interval (smaller line on the forest plot) is better
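The usual construction of a 95% CI for a mean (standard formula, assuming approximate normality; not from the card) shows why larger samples give the narrower interval the card prefers:

\[ 95\%\ \mathrm{CI} = \bar{x} \pm 1.96 \times SE_{\bar{x}}, \qquad SE_{\bar{x}} = \frac{s}{\sqrt{n}} \]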
145
triangle/marker size (reflecting sample size) in a forest plot
the contribution of that study to the overall results
146
weight of forest plots
How much each study contributes to the overall effect; it is a function of the sample size and how wide the standard deviation was (smaller SD = more weight), which determines how the weights are calculated
147
overall large triangle at bottom of forest plots
Pooled effect size of all included studies
148
Solid vertical line in forest plot
the null line, or line of no effect
No difference between the intervention & control groups
Divides results favoring one group from results favoring the other (for a ratio outcome the null line sits at 1)
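To tie the forest-plot pieces together, here is a minimal sketch of how a fixed-effect meta-analysis pools study results with inverse-variance weights (made-up effect sizes and standard errors; the weighting scheme is the standard one, not taken from the cards). Studies with smaller standard errors get more weight, and the pooled estimate with its 95% CI is what the overall diamond/triangle at the bottom of the plot represents.

```python
# Sketch: inverse-variance (fixed-effect) pooling of study results.
import math

effects = [0.30, 0.45, 0.10, 0.25]    # hypothetical per-study effect sizes
ses     = [0.20, 0.10, 0.30, 0.15]    # their standard errors

weights = [1 / se**2 for se in ses]   # smaller SE -> larger weight
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# If this CI crosses the null line (0 for a difference measure),
# the pooled result is not statistically significant.
```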
149
Each of the following would be included in the methodology of a systematic review EXCEPT: Synthesis of evidence Statistical analysis Search strategy Study quality assessment All of the above would be included in a systematic review
Statistical analysis
150
The fundamental methodological difference between primary research and secondary research is their __________________ . Outcome variables Target population Unit of analysis Intervention of interest
Unit of analysis
151
The overall quality of a meta-analysis will depend upon __________________ . The number of articles included in the analysis Its common effect The reported effect size across all studies Its methodology
methodology
152
An I-squared value of .90 would suggest that: The effect of the pooled data significantly favors the intervention group The effect of the pooled data significantly favors the control group A meta-analysis should not be performed Data from multiple studies can be pooled to derive one overall estimate
A meta-analysis should not be performed
153
The methodology of a systematic review should be explicit and structured to ensure that readers can: A. Reproduce the study B. Assess the overall quality of the study C. Determine whether the proper statistical analyses were utilized for the study D. Both A and B E. All of the above
D. Both A and B
154
In a forest plot, if the diamond for “overall effect” crosses the vertical line at “0” this indicates: The effect of the pooled data significantly favors the intervention group The effect of the pooled data significantly favors the control group The difference between the pooled groups is not statistically significant Significant heterogeneity
The difference between the pooled groups is not statistically significant
155
Which of the following would you tend to trust most? Meta-analysis of cohort studies Meta-analysis of randomized controlled trials Meta-analysis of cross-sectional studies Systematic review of case studies Systematic review of case-control studies
Meta-analysis of randomized controlled trials
156
A systematic review’s “target population” is operationalized by the Objective Search strategy Selection criteria Bias evaluation I-squared value
selection criteria
157
Which type of research is primarily focused on numerical data and statistical analysis? Qualitative research Quantitative research Both Neither
quant
158
Qualitative research methods include surveys with closed-ended questions. True False
false
159
Which approach is more likely to use participant observation and interviews for data collection? Qualitative research Quantitative research Both Neither
qual
160
Cross-verifying data through multiple sources or methods
triangulation
161
what is triangulation
use of two or more strategies to collect, interpret or analyze information, e.g., combining data from multiple sources: interviews, observations or multiple raters
162
Enhances the credibility of the findings Validating findings with study participants
member checking
163
what is member checking
Investigators verify their interpretations about the data with the subjects who provided the information
164
Systematically maintained set of documentation
audit trail
165
what is the purpose of audit trail
To ensure the research process was systematic and transparent, To check for the confirmability of the study outcomes , and To verify the consistency and dependability of the findings
166
accuracy and truthfulness of the findings
credibility
167
applicability of the study's findings to other contexts
transferability
168
stability and consistency of the data over time
dependability
169
degree to which the findings are shaped by the participants and not researcher bias
confirmability
170
similar to objectivity
confirmability
171
similar to reliability
dependability
172
similar to external validity
transferability
173
similar to internal validity
credibility
174
Transferability in qualitative research is similar to which concept in quantitative research? Internal validity Validity Reliability Generalizability
Generalizability
175
What does credibility in qualitative research refer to? The statistical significance of the results The size of the participant sample The potential to replicate the study The accuracy and truthfulness of the findings
The accuracy and truthfulness of the findings
176
Which of the following best describes member checking's contribution to research? Increases numerical data accuracy Reduces the need for data analysis Enhances the credibility of the findings Decreases the study's duration
Enhances the credibility of the findings
177
Credibility in qualitative research is similar to which concept in quantitative research? Generalizability Validity Reliability Internal validity
Internal validity
178
Triangulation in qualitative research refers to: Using three data collection methods only Limiting research to three participants for depth Focusing solely on triangular relationships Cross-verifying data through multiple sources or methods
Cross-verifying data through multiple sources or methods
179
What is the primary focus of qualitative research? Statistical analysis Understanding human experiences Predicting future trends Generalizing findings to large populations
Understanding human experiences
180
Which is NOT a purpose of conducting an audit in qualitative research? To ensure the research process was systematic and transparent To verify the consistency and dependability of the findings To assess the financial expenditures of the study To check for the confirmability of the study outcomes
To assess the financial expenditures of the study
181
Dependability in qualitative research is similar to which concept in quantitative research? Reliability Internal validity Validity Generalizability
Reliability
182
Member checking involves: Checking members' credentials in a research team Peer review of the research methodology Membership in relevant research organizations Validating findings with study participants
Validating findings with study participants
183
Which data collection method is commonly used in qualitative research? Surveys with closed-ended questions In-depth interviews Numerical data analysis Large-scale experiments
In-depth interviews
184
for qualitative studies, which of the following study features would likely indicate a good quality study? A control group External auditor Member checking Random assignment Triangulation
External auditor, member checking, triangulation
185
Which is of the following questions would be associated with the critical review of the study sample? **Check all that apply** Is the study sample large and diverse enough to generalize study findings to the target population? Does the target population look like your patient population? Are the inclusion / exclusion criteria too strict or narrow? Did the investigators provide enough details of the intervention for a reader or researcher to replicate? Were participants recruited from multiple locations to create a diverse study sample?
Is the study sample large and diverse enough to generalize study findings to the target population? Does the target population look like your patient population? Are the inclusion / exclusion criteria too strict or narrow? Were participants recruited from multiple locations to create a diverse study sample?
186
Which is of the following questions would be associated with the critical review of the independent variable? Did the investigators provide enough details of the intervention for a reader or researcher to replicate? Is the study sample large and diverse enough to generalize study findings to the target population? Were participants recruited from multiple locations to create a diverse study sample? Did the investigators provide enough details for the reader to distinguish between the levels of the independent variable? Are the inclusion / exclusion criteria too strict or narrow?
Did the investigators provide enough details of the intervention for a reader or researcher to replicate? Did the investigators provide enough details for the reader to distinguish between the levels of the independent variable?
187
In critically reviewing a description of the target population, what are you looking for?
Inclusion/exclusion criteria
External validity (can the results be generalized, and do these subjects look like my patients?)
How were the subjects recruited (e.g., single center or multiple centers)?
188
Critically reviewing the study sample
Is the study sample large and diverse enough to generalize study findings to the target population?
Does the target population look like your patient population?
Are the inclusion/exclusion criteria too strict or narrow?
Were participants recruited from multiple locations to create a diverse study sample?
189
Critical review of IV
Did the investigators provide enough details of the intervention for a reader or researcher to replicate?
Did the investigators provide enough details for the reader to distinguish between the levels of the independent variable?
190
Critical Review of DV
Are the instruments/tools chosen to measure the outcome valid and reliable?
191
If a measure is valid, it is also reliable; but if a measure is reliable, it is not necessarily valid
T