Research Design Flashcards

(115 cards)

1
Q

What is a theory?

A

A general principle or body of principles offered to explain a phenomenon.

Like Dalton’s atomic theory or Einstein’s theory of relativity

2
Q

Range of nursing theories

A

Grand theories
Broadest scope, most abstract
Apply to all nursing activities

Mid-range theories
Narrower in scope
Bridge between grand theories & practice

Practice theories
Most narrow scope & least abstract

3
Q

Jean Watson’s Caring Science Theory

A

Examines what caring consists of and its role in nursing

  • read on if you desire

Caring can be effectively demonstrated & practiced only interpersonally.
Caring consists of carative factors that result in the satisfaction of certain human needs.
Effective caring promotes health & individual or family growth.
Caring responses accept a person not only as he or she is now but as what he or she may become.
A caring environment is one that offers the development of potential while allowing the person to choose the best action for himself or herself at a given point in time.
Caring is more “healthogenic” than is curing. A science of caring is complementary to the science of curing.
The practice of caring is central to nursing.

4
Q

Conceptual Models

A

Represent a less formal attempt to explain phenomena than theories
Deal with abstractions, assembled in a coherent scheme

5
Q

Just understand that implicitly or explicitly, studies should have a ____________ or ______________ framework.

A

theoretical; conceptual

6
Q

What is the caveat with nursing theories?

A

Nursing “Grand Theories” evolved from efforts to establish nursing as a profession, separate from medicine.
Difficult to empirically test the aspirational, abstract grand theories, so less relevance to evidence-based practice.

7
Q

A portion of a population is selected to represent the entire population .... what is this called?

A

Sampling

8
Q

Eligibility criteria include

A

Inclusion and exclusion criteria - the specific characteristics that define the population

9
Q

What are strata?

A

Subpopulations of a population - such as male and female

10
Q

What is the target population?

A

The entire population of interest

11
Q

What is a representative sample?

A

A sample whose key characteristics closely approximate those of the target population—a sampling goal in quantitative research

12
Q

Representative samples are more easily achieved with …..

A

Probability sampling
Homogeneous populations
Larger samples

13
Q

What is sampling bias?

A

The systematic over- or under-representation of segments of the population on key variables when the sample is not representative

14
Q

What is a sampling error?

A

Differences between sample values and population values

E.g. population mean age = 65.6 yrs, sample mean age = 59.2 yrs
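The example above can be sketched numerically: drawing a random sample from a synthetic population (hypothetical ages, invented for illustration) shows how a sample value drifts from the population value.

```python
import random

random.seed(42)

# Hypothetical population of ages (invented for illustration only)
population = [random.gauss(65.6, 10) for _ in range(10_000)]
pop_mean = sum(population) / len(population)

# One random sample of 50 people
sample = random.sample(population, 50)
sample_mean = sum(sample) / len(sample)

# Sampling error = difference between the sample value and the population value
sampling_error = sample_mean - pop_mean
print(f"population mean: {pop_mean:.1f}")
print(f"sample mean:     {sample_mean:.1f}")
print(f"sampling error:  {sampling_error:+.1f}")
```

Rerunning with a different seed gives a different sampling error each time; larger samples tend to shrink it.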

15
Q

Difference between probability sampling and nonprobability sampling .....

A

Probability sampling involves random selection of elements, with each element having an equal, independent chance of being selected

Nonprobability sampling does not involve random selection of elements

16
Q

Types of nonprobability sampling

A

Convenience sampling
Snowball (network) sampling
Quota sampling
Purposive sampling

17
Q

Convenience sampling involves

A

Sampling whoever or whatever is most accessible and conveniently available

Most widely used approach by quantitative researchers
Most vulnerable to sampling biases

18
Q

Snowball Sampling

A

Referrals from other people already in a sample

Used to identify people with distinctive characteristics
Used by both quantitative and qualitative researchers; more common in qualitative

19
Q

Quota Sampling

A

Convenience sampling within specified strata of the population
Enhances representativeness of sample
Infrequently used, despite being a fairly easy method of enhancing representativeness

20
Q

Consecutive sampling involves ….

A

COME, LET’S GO, EVERYONE INSIDE - everyone who is here!!!!

Involves taking all of the people from an accessible population who meet the eligibility criteria over a specific time interval, or for a specified sample size
A strong nonprobability approach for “rolling enrollment” type accessible populations
Risk of bias low unless there are seasonal or temporal fluctuations

21
Q

Purposive (Judgmental) Sampling

A

Sample members are hand-picked by researcher to achieve certain goals

Used more often by qualitative than quantitative researchers
Can be used in quantitative studies to select experts or to achieve other goals

22
Q

Types of Probability Sampling

A

Simple random sampling
Stratified random sampling
Cluster (multistage) sampling
Systematic sampling

23
Q

Simple Random Sampling

A

Uses a sampling frame – a list of all population elements
Involves random selection of elements from the sampling frame

Example - a list of all households in Montgomery County, from which 500 households are randomly selected
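The Montgomery County example above can be sketched in a few lines, assuming the sampling frame is simply a list of household IDs (the IDs are hypothetical):

```python
import random

random.seed(1)

# Sampling frame: a list of ALL population elements
# (hypothetical household IDs standing in for the county-wide list)
sampling_frame = [f"household-{i}" for i in range(1, 20_001)]

# Simple random sampling: every element has an equal,
# independent chance of being selected
selected = random.sample(sampling_frame, k=500)

print(len(selected))       # 500 households drawn
print(len(set(selected)))  # no duplicates: sampling is without replacement
```

`random.sample` draws without replacement, which matches the idea that each household is selected at most once.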

24
Q

Stratified Sampling

A

Population is first divided into strata, then random selection is done from the stratified sampling frames
Enhances representativeness
Can sample proportionately or disproportionately from the strata
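A sketch of proportionate stratified sampling, using a toy population stratified by sex (the counts are invented for illustration):

```python
import random

random.seed(7)

# Toy population: 6,000 females and 4,000 males (invented proportions)
population = [("F", i) for i in range(6000)] + [("M", i) for i in range(4000)]

# 1) Divide the population into strata
strata = {}
for sex, pid in population:
    strata.setdefault(sex, []).append((sex, pid))

# 2) Randomly select from each stratum, proportionate to its size
total_n = 500
sample = []
for sex, members in strata.items():
    n = round(total_n * len(members) / len(population))
    sample.extend(random.sample(members, n))

counts = {sex: sum(1 for s, _ in sample if s == sex) for sex in strata}
print(counts)  # proportions mirror the population: {'F': 300, 'M': 200}
```

Disproportionate sampling would simply replace the proportional `n` with a fixed quota per stratum (e.g., 250 each).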

25
Q

Cluster (Multistage) Sampling

A

Successive random sampling of units from larger to smaller units (e.g., states, then zip codes, then households)

Widely used in national surveys
Larger sampling error than in simple random sampling, but more efficient

26
Q

Sample size adequacy is a key determinant of ___________ in quantitative research. Sample size needs can and should be estimated through ________ for studies seeking causal inference.

A

sample quality; power analysis
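A power analysis can be sketched with the standard normal-approximation formula for comparing two group means: n per group ≈ 2(z_alpha/2 + z_beta)^2 / d^2, where d is the standardized effect size. The values below use the conventional two-tailed alpha = .05 and power = .80.

```python
from math import ceil

# z-values for the usual conventions
Z_ALPHA_2 = 1.96  # two-tailed alpha = .05
Z_BETA = 0.84     # power = .80

def n_per_group(effect_size_d: float) -> int:
    """Approximate sample size per group for a two-group mean comparison."""
    return ceil(2 * (Z_ALPHA_2 + Z_BETA) ** 2 / effect_size_d ** 2)

# Smaller expected effects demand larger samples
print(n_per_group(0.8))  # large effect  -> 25 per group
print(n_per_group(0.5))  # medium effect -> 63 per group
print(n_per_group(0.2))  # small effect  -> ~392 per group
```

This is the back-of-the-envelope version; real studies would use exact noncentral-t calculations, but the pattern (power depends on effect size and n) is the point of the flashcard.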
27
Q

The big question of data collection?

A

Do I collect new data specifically for research purposes, or do I collect existing data (historical data, records, an existing data set)?

28
Q

Major types of data collection methods?

A

Self-report; observation; biophysiologic measures

29
Q

Major considerations in choosing the data collection approach ....

A

Do you want more open-ended data or more objective, quantifiable data?
How obtrusive is the method?

30
Q

Structured self-reports can be either

A

Interview schedule
Questions are prespecified but asked orally, either face-to-face or by telephone

Questionnaire
Questions prespecified in written form, to be self-administered by respondents

31
Q

Advantages of Questionnaires (compared with interviews)

A

Lower costs
Possibility of anonymity, greater privacy
Lack of interviewer bias

32
Q

Advantages of Interviews (compared with questionnaires)

A

Higher response rates
Appropriate for more diverse audiences
Opportunities to clarify questions or to determine comprehension
Opportunity to collect supplementary data through observation

33
Q

What are scales used for?

A

Used to make fine quantitative discriminations among people with different attitudes, perceptions, or traits

The Likert scale is an example:
Consists of several declarative statements (items) expressing viewpoints
Responses are on an agree/disagree continuum (usually 5 or 7 response options)
Responses to items are summed to compute a total scale score

Semantic differential scales:
Require ratings of various concepts
Rating scales involve bipolar adjective pairs, with 7-point ratings
Ratings for each dimension are summed to compute a total score for each concept
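Likert scoring as described above, sketched with a hypothetical 5-item, 5-point scale; negatively worded items are reverse-scored before summing (a common convention, assumed here):

```python
# Hypothetical 5-item Likert scale: 1 = strongly disagree ... 5 = strongly agree
responses = [4, 5, 2, 4, 3]

# Item 3 is assumed to be negatively worded, so it is reverse-scored
REVERSED = {2}  # zero-based indexes of reverse-scored items
MAX_OPTION = 5

scored = [
    (MAX_OPTION + 1 - r) if i in REVERSED else r
    for i, r in enumerate(responses)
]

# Responses to items are summed to compute a total scale score
total = sum(scored)
print(scored)  # [4, 5, 4, 4, 3]
print(total)   # 20
```

Reverse-scoring maps 1→5, 2→4, and so on, so that a higher score always means more of the attitude being measured.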
34
Q

Visual analog scale does what?

A

Measures subjective experiences (pain, nausea) on a straight line measuring 100 mm

35
Q

Response set biases

A

Biases reflecting the tendency of some people to respond to items in characteristic ways, independently of item content

36
Q

Observational Rating Scales

A

Ratings are on a descriptive continuum, typically bipolar

Ratings can occur:
at specific intervals
upon the occurrence of certain events
after an observational session (global ratings)

37
Q

Evaluation of Observational Methods

A

Excellent method for capturing many clinical phenomena and behaviors
Potential problem of reactivity when people are aware that they are being observed
Risk of observational biases - factors that can interfere with objective observation

38
Q

Evaluation of Self-Report Methods

A

Strong on directness
Allows access to information otherwise not available to researchers
But can we be sure participants actually feel or act the way they say they do?

39
Q

Difference between in vivo and in vitro biophysiologic measurements

A

In vivo measurements occur on or within the organism’s body (blood pressure)
In vitro measurements are performed outside the organism’s body

40
Q

Evaluation of biophysiologic measures

A

Strong on accuracy, objectivity, validity, and precision
May or may not be cost-effective for nurse researchers
Advanced skills may be needed for interpretation

41
Q

What is a psychometric assessment? What are the key criteria?

A

An evaluation of the quality of a measuring instrument

Key criteria in a psychometric assessment:
Reliability
Validity

42
Q

An experimental research design contains what?

A

Intervention
Randomization
Control
43
Q

Quasi-Experimental

A

Intervention, but missing randomization and control

44
Q

Nonexperimental

A

No intervention
Observational or descriptive
May have random sampling - but this is not the same as random assignment

45
Q

Within-subjects design - what is it?

A

The same people in the experiment are compared at different times or under different conditions

46
Q

Between-subjects design

A

Different people are compared
Group A subjects take the actual study drug
Group B subjects take a placebo

47
Q

What type of comparisons will be made to illuminate relationships? Isn't that the question ....

A

Within subjects
Between subjects

48
Q

Single blind and double blind

A

Single blind - subjects don't know which group they are in
Double blind - neither researchers nor subjects know who is in which group

49
Q

Prospective and Retrospective Data Collection

A

Prospective - looking forward
Retrospective - looking backward

50
Q

Three key criteria for making causal inferences

A

The cause must precede the effect in time
There must be a demonstrated relationship between the cause and effect
The relationship between the presumed cause and effect cannot be explained by a third variable
51
Q

Biologic plausibility

A

Another criterion for causality - the causal relationship should be consistent with evidence from basic physiologic studies

52
Q

Coherence

A

Another criterion for causality - multiple sources should be involved when establishing the existence of a relationship between cause and effect

53
Q

What type of designs offer the strongest evidence of whether a cause results in an effect?

A

Experimental designs

54
Q

Characteristics of a true experiment

A

Manipulation
Control
Randomization

55
Q

Crossover design

A

Subjects are exposed to 2+ conditions in random order
Subjects "serve as their own control"

56
Q

Factorial

A

More than one independent variable is experimentally manipulated

57
Q

What is treatment fidelity?

A

Also called intervention fidelity

Whether the treatment as planned was actually delivered and received

58
Q

Quasi-experiments involve an intervention but lack ......

A

randomization or a control group

59
Q

If there is no intervention, this is called .............

A

observational research
nonexperimental research

60
Q

What are the two main categories of quasi-experiments?

A

Within-subject designs - one group is studied before and after the intervention
Nonequivalent control group designs - those getting the intervention are compared with a nonrandomized comparison group
61
Q

Cause-probing questions for which manipulation is not possible are typically addressed with a .............

A

correlational design

There are prospective and retrospective correlational designs

62
Q

Is all research cause-probing?

A

No
Some research is descriptive (like ascertaining the prevalence of a health problem)
Others are descriptive correlational - the purpose is to describe whether variables are related, without ascribing a cause-and-effect connection

63
Q

Cross-sectional design

A

Data are collected at a single point in time across different strata or groups, such as ages

64
Q

Longitudinal design

A

Data are collected two or more times during an extended period

65
Q

Ways of controlling confounding variables

A

Achieving constancy of conditions
Control over environment, setting, and time
Control over the intervention via a formal protocol

66
Q

A more ____________ sample may minimize confounders, but limits the ability to generalize outside the study

A

homogeneous

67
Q

Inclusion and exclusion criteria work to exclude what?

A

Confounding variables

68
Q

What are intrinsic factors?

A

Subject characteristics; control over them is achieved through inclusion and exclusion criteria

69
Q

Different methods of controlling intrinsic factors?

A

Randomization
Subjects as own controls (crossover design)
Homogeneity (restricting the sample)
Matching
Statistical control (e.g., analysis of covariance)
70
Q

What is internal validity?

A

The extent to which it can be inferred that the independent variable caused or influenced the dependent variable

71
Q

What is external validity?

A

The generalizability of the observed relationships to the target population

72
Q

What is statistical conclusion validity?

A

The ability to detect true relationships statistically

73
Q

Threats to internal validity

A

Temporal ambiguity - unclear whether the presumed cause occurred before the outcome
Selection threat (the single biggest threat to studies that do not use an experimental design)

74
Q

What are history threat, maturation threat, and mortality threat?

A

These are all threats to internal validity

History threat - something else occurring at the same time as the causal factor
Maturation threat - processes that result simply from the passage of time
Mortality threat - loss of participants for whatever reason

75
Q

Threats to external validity

A

Selection bias - the sample selected for the study does not accurately represent the target population
Expectancy effect (Hawthorne effect) - makes effects observed in a study unlikely to be replicated in real life

76
Q

Threats to Statistical Conclusion Validity

A

Low statistical power (e.g., sample too small)
TIP: If researchers show no difference in an outcome measure (DV) between experimental & control groups, the sample size may have been too small to detect a difference!
Weakly defined "cause" - independent variable not powerful
Unreliable implementation of a treatment - low intervention fidelity
77
Q

What is reliability?

A

The degree to which an instrument accurately and consistently measures the target attribute

78
Q

Reliability coefficients range from ________ and are considered good/acceptable at 0.________ or more

A

0.00-1.0; 0.70

79
Q

What are the 3 aspects of reliability that can be evaluated?

A

Stability
Internal consistency
Equivalence

80
Q

Stability involves

A

the test-retest reliability

It is the extent to which scores are similar on two separate administrations of an instrument
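Test-retest reliability is typically quantified as the correlation between the two administrations; a Pearson r sketch with invented scores:

```python
from math import sqrt

# Invented scores for 6 people on two administrations of the same scale
time1 = [10, 14, 18, 22, 25, 30]
time2 = [11, 13, 19, 21, 26, 29]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(time1, time2)
print(round(r, 2))  # close to 1.0 -> scores are stable across administrations
```

A test-retest r of 0.70 or more is conventionally taken as acceptable stability, matching the reliability threshold in card 78.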
81
Q

Internal consistency is assessed by computing

A

the coefficient alpha (0.70 or more is desirable)

This is the most widely used approach to assessing reliability

82
Q

What is internal consistency?

A

The extent to which all the items on an instrument are measuring the same unitary attribute

An anxiety questionnaire, for example, should consist entirely of questions aimed at assessing anxiety levels
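Coefficient (Cronbach's) alpha can be computed directly from item scores: alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores). A sketch with invented responses:

```python
def cronbach_alpha(items):
    """items: one inner list of responses per item, all the same length."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Invented responses: 3 items answered by 5 people
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.70 or more is desirable
```

High alpha here reflects that the three invented items rank the same people the same way, i.e., they appear to measure one unitary attribute.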
83
Q

Equivalence is most relevant for ______________

A

structured observations

Assessed by comparing agreement between observations or ratings of two or more observers

Equivalence is the degree of similarity between alternative forms of an instrument or between multiple raters/observers using an instrument

84
Q

Reliability is ______ in homogeneous than heterogeneous subject samples.

A

lower

85
Q

Reliability is ____________ in shorter than longer multi-item scales.

A

lower

86
Q

Reliability is necessary (but not sufficient) for validity.
True or false?

A

True

87
Q

An instrument can be _____________ but not __________________ but it can't be valid if it lacks _______________

A

reliable; valid; reliability

88
Q

An instrument can be valid if it lacks reliability. True or false?

A

False.

An instrument can be reliable but not valid

89
Q

What is validity?

A

The degree to which an instrument measures what it is supposed to measure

90
Q

Four aspects of validity

A

Face validity
Content validity
Criterion-related validity
Construct validity

91
Q

Face validity

A

Refers to whether the instrument looks as though it is an appropriate measure of the construct

Based on judgment; no objective criteria for assessment
92
Q

Content validity is evaluated by _________

A

expert evaluation, often via the content validity index (CVI)

93
Q

What is criterion-related validity?

A

The degree to which the instrument is related to an external criterion

94
Q

Validity coefficient acceptable score

A

The validity coefficient is calculated by analyzing the relationship between scores on the instrument and the criterion (.70 or higher is acceptable)

95
Q

Predictive validity

A

The instrument’s ability to distinguish people whose performance differs on a future criterion (e.g., SAT is predictive of college GPA)

96
Q

Concurrent validity

A

The instrument’s ability to distinguish individuals who differ on a present criterion (e.g., SAT & current GPA are positively correlated >.7)

97
Q

Construct validity - what is it concerned with?

A

What is this instrument really measuring?

Does it adequately measure the construct of interest?

98
Q

What are two ways of assessing construct validity?

A

Known-groups technique
Testing relationships based on theoretical predictions
E.g., a tool for fatigue scores high for patients receiving radiation therapy, low for healthy persons

Factor analysis
A statistical test to determine whether items load on a single construct

99
Q

Criteria for Assessing/Screening Diagnostic Instruments

A

Sensitivity: the instrument’s ability to correctly identify a "case" - i.e., to diagnose a condition

Specificity: the instrument’s ability to correctly identify noncases, that is, to screen out those without the condition
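Both criteria reduce to simple proportions over a two-by-two table of instrument results versus true status; a sketch with invented screening counts:

```python
# Invented screening results, judged against a gold-standard diagnosis
true_positive = 45   # cases the instrument correctly flagged
false_negative = 5   # cases it missed
true_negative = 90   # noncases correctly screened out
false_positive = 10  # noncases incorrectly flagged

# Sensitivity: ability to correctly identify cases
sensitivity = true_positive / (true_positive + false_negative)

# Specificity: ability to correctly identify noncases
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity: {sensitivity:.2f}")  # 0.90
print(f"specificity: {specificity:.2f}")  # 0.90
```

Note the trade-off: loosening the cutoff that defines a "case" usually raises sensitivity at the cost of specificity, and vice versa.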
100
Q

What are some studies that involve an intervention?

A

Mixed method research
Clinical trials
Evaluation research
Nursing intervention research

101
Q

Studies that do not involve an intervention

A

Outcomes research
Surveys
Secondary analyses
Methodologic research

102
Q

Mixed Method Research

A

Research that integrates qualitative and quantitative data and strategies in a single study or coordinated set of studies

103
Q

What are clinical trials?

A

Studies that develop clinical interventions and test their efficacy and effectiveness

May be conducted in four phases

104
Q

What is Phase I of a clinical trial?

A

Finalizes the intervention (includes efforts to determine dose, assess safety, and strengthen the intervention)

105
Q

Phase II of a clinical trial

A

Seeks preliminary evidence of effectiveness - a pilot test; may use a quasi-experimental design

106
Q

Phase III of a clinical trial

A

Fully tests the efficacy of the treatment via a randomized clinical trial (RCT), often in multiple sites; sometimes called an efficacy study

107
Q

Phase IV of a clinical trial

A

Focuses on long-term consequences of the intervention and on generalizability; sometimes called an effectiveness study

108
Q

What does evaluation research do?

A

Examines how well a specific program, practice, procedure, or policy is working

109
Q

What does outcome analysis do?

A

Seeks preliminary evidence about program success

110
Q

Outcomes research

A

Designed to document the quality and effectiveness of health care and nursing services

Key concepts:
Structure of care (e.g., nursing skill mix)
Processes (e.g., clinical decision-making)
Outcomes (end results of patient care)

111
Q

Survey research obtains information via

A

self-reports: face-to-face interviews, telephone interviews, or self-administered questionnaires

112
Q

Survey research is better for an ___________ rather than an _________________ inquiry

A

extensive; intensive

113
Q

What does a secondary analysis do?

A

A study that uses previously gathered data to address new questions
Can be undertaken with qualitative or quantitative data
Cost-effective; data collection is expensive and time-consuming
The secondary analyst may not be aware of data quality problems and typically faces "if only" issues (e.g., if only there were a measure of X in the dataset)

114
Q

What does methodologic research do?

A

Studies that focus on the ways of obtaining, organizing, and analyzing data
Can involve qualitative or quantitative data

Examples:
Developing and testing a new data-collection instrument
Testing the effectiveness of stipends in facilitating recruitment
115
Q

How confident are you going into this exam?

A

I'm very confident!!!!!!! I will be victorious