280 Flashcards

1
Q

scientific inquiry

A

a method of inquiry

a way of learning and knowing things that can guide the decisions made in social work practice

can offer protection against human error and the flaws in other ways of knowing

2
Q

utility of scientific inquiry in social work

A

1 - many practitioners use methods that are untested or inadequately tested

2 - need more evidence to guide practice decisions

3 - make practice more effective

3
Q

explicit norm of science

A

science is an open-ended enterprise and conclusions are constantly being modified; the scientist may have to overcome a great deal of initial resistance and disparagement

4
Q

need to critique research quality

A

publication does not guarantee quality; separating the wheat from the chaff; and answering critics of social work

5
Q

flaws in unscientific sources

A

other ways of knowing play important roles in society, but knowledge produced socially is particularly subject to common errors and fallacies

1 - inaccurate observation
2 - overgeneralization
3 - selective observation
4 - ex post facto hypothesizing
5 - ego involvement in understanding
6 - pseudoscience

the scientific method was developed as an alternative to reduce these problems

6
Q

inaccurate observations

A

result from human error, which includes failing to observe things right in front of us and mistakenly observing things that are not so

7
Q

overgeneralization

A

the assumption that a few similar events are evidence of a general pattern

8
Q

selective observation

A

the tendency to pay attention to future events and situations that correspond to, or confirm, a pattern perceived to be true

9
Q

ex post facto hypothesizing

A

proposing a new argument to explain findings AFTER the research has been conducted; this is not a problem unless the new hypothesis goes untested in another study

10
Q

ego involvement in understanding

A

occurs when personal involvement or investment in a particular result or finding clouds objectivity; a common case is when a developer tests his or her own intervention

11
Q

pseudoscience

A

social workers should know enough about research to distinguish between strong and weak studies; a study is pseudoscience if it violates one or more scientific principles or contains fallacies

12
Q

rejected practices

A

ineffective or even harmful interventions have been used by practitioners (for example: insight-oriented therapy for schizophrenia, coercive restraint therapies for attachment disorders, critical incident stress debriefing for trauma)

13
Q

social work research

A

aims to solve practical problems

14
Q

reviews of social work effectiveness

A

practices cannot be assumed to be effective; although research knowledge is growing, a great deal remains unknown about ‘what works’

15
Q

main reason for research

A

compassion for clients by providing the most effective services; ethical responsibility requires that social workers remain current with relevant research

16
Q

other ways of knowing

A
tradition
authority
common sense
popular media
experience

(these are important but cannot always be relied upon because we do make mistakes)

17
Q

tradition

A

shared meaning and understanding that is often considered obvious

18
Q

authority

A

knowledge accepted based on the status or power of the messenger

19
Q

common sense

A

reasoning or commonly held beliefs

20
Q

social work ways of knowing

A

agreement reality, experiential reality, science

21
Q

other forms of illogical reasoning

A

1 - ad hominem attack (discrediting the person)
2 - newness (touting something because it is novel)
3 - bandwagon (everyone else is doing it argument)
4 - straw person argument (distorting an argument in order to attack it)

22
Q

experience

A

making observations and seeking patterns of regularities in what we observe

23
Q

principles of the scientific method

A

everything is open to question; knowledge is provisional and always subject to change; empirical evidence is based on specified and systematic observation - not authority, tradition, or common sense; studies should be replicable to avoid overgeneralization

24
Q

good research is… (principles of the scientific method)

A

based on a sample of observations that are large and diverse; specified well enough so that it may be accurately replicated by others; honest about potential biases and actively minimizes their effect

25
Q

scientific observation

A

the scientific method should be based on scientific observations, which should be specific, systematic, comprehensive, and as objective as possible

26
Q

contradictory evidence

A

researchers commit themselves to upholding the evidence revealed, even when it contradicts their expected conclusions

27
Q

safeguards against illogical reasoning

A

being careful and deliberate; public nature of science

28
Q

research hierarchy for EBP

A

higher-level studies are well designed (systematic reviews and meta-analyses of RCTs, etc.)

1 - reviews
2 - quasi-experimental designs
3 - single case evaluation designs
4 - uncontrolled pretest-posttest
5 - correlational studies
6 - anecdotal case reports and expert opinions
29
Q

evidence-based practice (EBP)

A

a process in which practitioners make decisions based on the best evidence available; evaluates outcomes; a practice model based primarily on the scientific method and evidence

30
Q

steps of the EBP process

A

1 - formulating a question
2 - searching for evidence
3 - critically appraising the studies you find
4 - determining which evidence-based intervention is most appropriate
5 - applying the intervention; and evaluating progress and providing feedback

31
Q

bottom-up strategy

A

search the literature and critically appraise the evidence yourself to identify the best course of action

32
Q

top-down strategy

A

use the results of someone else’s search, usually available in books; not starting from scratch

33
Q

challenges to EBP

A

insufficient resources

34
Q

questions should

A

target practice decisions and consider variations in client characteristics

35
Q

CIAO

A

if one or more interventions are specified in advance:

C - client characteristics
I - intervention being considered
A - alternative intervention
O - outcome

36
Q

search for the evidence

A

computerized library searches, searches of professional literature databases, internet search engines, search terms

37
Q

critically appraise the relevant studies

A

was treatment outcome measured in a reliable, valid, and unbiased manner? was the research design strong enough to support conclusive causal inferences?

38
Q

determine which evidence-based intervention is most appropriate

A

consider quality of the evidence, client characteristics and context, and values and expectations of clients

39
Q

evaluation and feedback

A

communicate findings with relevant colleagues; ongoing discussion of evaluation and outcomes

40
Q

apply the evidence-based intervention

A

obtain training in the intervention and readings on how to implement the intervention; arrange for consultation or supervision; formulate measurable treatment goals with client to aid in evaluation

41
Q

controversies and misconceptions of EBP

A

1 - EBP is based on studies of clients unlike those typically encountered
2 - EBP is an overly restrictive cookbook that denigrates professional expertise
3 - EBP hinders the therapeutic alliance
4 - EBP is merely a cost-cutting tool
5 - evidence is in short supply
6 - real-world problems prevent implementation of EBP

42
Q

positivism

A

this paradigm emphasizes objectivity, precision, and generalizability in research; for example: is the new policy effective in reducing poverty?

43
Q

interpretivism

A

this paradigm emphasizes gaining an empathic understanding of how people feel inside, how they interpret their everyday experiences, and what reasons they may have for their behaviors; for example: how do welfare recipients experience their lives changing under the new policy?

44
Q

critical social science

A

this paradigm emphasizes oppression and uses research procedures that empower oppressed groups; for example: does the new policy really help the poor or does it keep them oppressed?

45
Q

feminist

A

what impact does the new policy have on poor women?

46
Q

deductive methods

A

the researcher begins with a theory and then derives one or more hypotheses from it for testing; quantitative

47
Q

inductive methods

A

the researcher begins with observed data and develops a hypothesis to explain the specific observations; qualitative

48
Q

the wheel of science

A

science is a process that involves an alternating use of deduction and induction (picture)

49
Q

ideology

A

closed system of beliefs and values that shape the understanding and behavior of those in it

50
Q

paradigm

A

a fundamental model or scheme that organizes our view of something

51
Q

objectivity

A

being completely objective is impossible; it is an important element of scientific inquiry, but not all scholars agree on how best to attain it (blind observers, self-report scales completed outside the researcher’s presence, existing information)

52
Q

theory

A

a systematic set of interrelated statements intended to explain some aspect of social life or enrich our sense of how people conduct and find meaning in their lives

53
Q

empirical support

A

when our observations are consistent with what we would expect to experience if a theory is correct

54
Q

hypothesis

A

a tentative and testable statement about how changes in one thing are expected to explain changes in something else; predicts relationships

55
Q

independent variables

A

explains or causes something

56
Q

dependent variables

A

the variable being explained or caused

57
Q

variables and attributes

A

variable: a concept that can take on different values; attributes: the categories or values that make up a variable

58
Q

credible theories

A

depend on (1) empirical support of observations and (2) systematic and logical components to help us better understand the world

59
Q

idiographic model of explanation

A

aims to explain through the enumeration of the many and perhaps unique considerations that lie behind a given action (ex: why has a particular young man become delinquent?)

60
Q

probabilistic knowledge

A

we speak in terms of probability, not certainty

61
Q

nomothetic model

A

aims to partially explain a general phenomenon using a few key factors (ex: what factors are most important for explaining delinquency among young people?)

62
Q

quantitative research

A

attempts to produce findings that are precise and generalizable; more appropriate for nomothetic aims

63
Q

qualitative research

A

emphasizes depth of understanding, attempts to subjectively tap the deeper meanings of human experience, and is intended to generate theoretically rich observations

64
Q

3 main threats to culturally competent measurement

A

1 - using interviewers who offend or intimidate minority respondents
2 - not using the appropriate language
3 - cultural bias

65
Q

measurement equivalence

A

the degree to which a measurement procedure developed in one culture has the same value and meaning when used in another culture

66
Q

linguistic equivalence

A

when an instrument has been translated and back-translated successfully

67
Q

conceptual equivalence

A

instruments and observed behaviors have the same meanings across cultures

68
Q

metric equivalence

A

scores on a measure are comparable across cultures

69
Q

complications of culturally competent research

A

1 - who qualifies as a member of the culture
2 - labeling and classifying
3 - cultures are not homogenous

70
Q

conceptualization

A

the process through which we specify precisely what we will mean when we use particular terms

71
Q

operationalization

A

an extension of the conceptualization process; begins in study design and continues throughout the research project, including the analysis of data

72
Q

problem formulation

A

social work research and EBP begin with problem formulation

73
Q

measurement error

A

occurs when data do not accurately portray the concept we attempt to measure; two types: systematic error and random error (examples of systematic error: acquiescent response set, social desirability bias, cultural bias)

74
Q

random error

A

random errors have no consistent pattern of effects and do not bias the measure (ex: complex, boring measurement procedures or professional jargon that respondents are not familiar with)

75
Q

systematic error

A

when the information we collect consistently reflects a false picture of the concept we seek to measure, either because of the way we collect the data or the dynamics of those who are providing the data

76
Q

“so what?” test

A

a good research topic should pass this test

77
Q

overview of the research process

A
phase 1 - problem formulation
phase 2 - designing the study
phase 3 - data collection
phase 4 - data processing
phase 5 - data analysis
phase 6 - interpreting the findings
phase 7 - writing the research report
78
Q

“good” research topics

A

1 - specific
2 - capable of being answered by observable evidence
3 - feasible to study
4 - open to doubt and answerable in more than one way
5 - addresses the decision-making needs of agencies or practical problems
6 - has clear significance for guiding social welfare policy or practice

79
Q

narrowing a research topic

A

the narrowing of the topic is impacted by the researcher’s personal interest, the agency’s information needs, feasibility, policy and practice relevancy, and the findings of the literature review

80
Q

feasibility

A

whether or not a study can be done practically and successfully; not always synonymous with methodological rigor or inferential capacity; researchers must consider scope, time, fiscal cost, ethical issues, and cooperation from research partners and study participants

81
Q

collaboration in the problem formulation process

A

involving agencies in the process of problem formulation and research design planning helps overcome resistance to research; obtaining critical feedback from colleagues can improve study utility, clarify ideas, uncover alternate approaches to the problem, and identify potential pragmatic or ethical obstacles

82
Q

literature review

A

a step in the problem-formulation process; a good grounding in the literature should start prior to selecting a topic and be an ongoing process; it helps determine if the question has already been answered, identify obstacles, and build on existing research

83
Q

anticipating issues

A

a step in the problem-formulation process; when planning a study consider time constraints, fiscal costs, lack of cooperation, and ethical dilemmas

84
Q

purposes of research

A

exploratory, descriptive, explanatory, evaluation, construct measurement instruments

85
Q

exploration

A

a purpose of research; the attempt to develop an initial rough understanding of some phenomenon

86
Q

description

A

a purpose of research; the precise measurement and reporting of the characteristics of some population or phenomenon under study

87
Q

explanation

A

a purpose of research; the discovery and reporting of relationships among different aspects of the phenomenon under study

88
Q

evaluation

A

a type of study that can be conducted with exploratory, descriptive, and explanatory purposes

89
Q

cross-sectional studies

A

based on observations made at one time

90
Q

longitudinal studies

A

based on observations made at many times

91
Q

trend studies

A

samples drawn from general populations

92
Q

cohort studies

A

samples drawn from more specific subpopulations

93
Q

panel studies

A

samples of the same people each time

94
Q

units of analysis

A

the people or things whose characteristics social researchers observe, describe, and explain; individuals, groups, or social artifacts

95
Q

ecological fallacy

A

occurs when a researcher erroneously draws conclusions about individuals based on the examination of groups or other aggregations

96
Q

control variables

A

also known as extraneous variables, can be examined to see if the observed relationship is misleading

97
Q

spurious causal relationship

A

one that no longer exists when a third variable is controlled
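a minimal sketch of how a spurious relationship shows up in numbers, using invented toy data (the variable names, values, and the plain Pearson helper below are illustrative assumptions, not from the text): x and y look correlated when pooled, but within each level of a third variable z the relationship vanishes.

```python
# Hypothetical toy data: z takes two levels; within each level, x and y are
# unrelated, yet pooled together they look strongly correlated -- the
# hallmark of a spurious relationship explained by a third variable.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

group0 = [(0, 0), (0, 1), (1, 0), (1, 1)]   # cases with z = 0
group1 = [(2, 2), (2, 3), (3, 2), (3, 3)]   # cases with z = 1

pooled = group0 + group1
xs = [p[0] for p in pooled]
ys = [p[1] for p in pooled]

print(pearson_r(xs, ys))                                          # pooled: r = 0.8
print(pearson_r([p[0] for p in group0], [p[1] for p in group0]))  # within z = 0: r = 0.0
```

controlling for z (computing the correlation within each level) removes the apparent relationship, which is exactly what the definition above describes.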

98
Q

reductionism

A

an overly strict limitation on the kinds of concepts and variables to be considered relevant to the phenomenon under study

99
Q

research proposal

A

a standard proposal includes the following elements that detail research plan and design: research problem or objective, conceptual framework, literature review, sampling procedures, measurement, design and data collection methods, data analysis plan, schedule, budget

100
Q

types of hypothetical relationships between variables

A

positive ( / ); negative ( \ ); curvilinear

101
Q

mediating variables

A

intervening mechanisms by which independent variables affect dependent variables

102
Q

moderating variables

A

influence the strength or direction of relationships between independent and dependent variables

103
Q

concepts

A

mental images that symbolize ideas, objects, events, behavior, people, etc; it is possible to measure these concepts

104
Q

operational definitions

A

translate variables into observable terms before a study is implemented; how we choose to operationally define a variable can greatly influence our research findings

105
Q

indicators

A

the end product of the conceptualization process (e.g. visiting a children’s hospital as an indicator of the concept of compassion)

106
Q

dimensions

A

a specifiable aspect or facet of a concept (e.g. ‘economic dimension’ and ‘civil rights dimension’ of social justice)

107
Q

existing self-report scales

A

a popular way to operationally define many social work variables, mainly because they have been used successfully by others and provide cost advantages in terms of time and money; they must be selected carefully and are not always the best way to operationally define a variable; there are other options: direct behavioral observation, interviews, and available records

108
Q

qualitative studies

A

there is an anticipated set of meanings that are refined during data collection and interpretation, rather than predetermined specific and objective variables to measure

109
Q

avoiding measurement error

A

use unbiased wording, carefully train interviewers, use unobtrusive observations to minimize social desirability bias, understand how existing records are kept, triangulation

110
Q

alternative forms of measurement

A

1 - written self-reports
2 - interviews
3 - direct behavioral observation
4 - examining available records

111
Q

principle of triangulation

A

using several different research methods to determine if they collect the same information

112
Q

reliability

A

a particular measurement technique, when applied repeatedly to the same object, would yield the same result each time; the more reliable the measure, the less random error

113
Q

types of reliability

A

interobserver reliability or interrater reliability, test-retest reliability, parallel-forms reliability, and internal consistency reliability
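as a rough sketch of interobserver (interrater) reliability, the simplest index is the proportion of cases on which two independent raters agree; the ratings below are invented for illustration (more refined indices, such as chance-corrected agreement, exist but are not shown here).

```python
# Hypothetical ratings: two observers independently code the same 10 cases.
# Percent agreement is a simple (illustrative) index of interobserver
# reliability: the share of cases on which the two raters match.

def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

print(percent_agreement(rater_a, rater_b))   # 0.8 (8 of 10 cases match)
```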

114
Q

internal consistency reliability

A

assesses whether the items of a measure are internally consistent; methods to assess internal consistency: the split-halves method, parallel-forms reliability, and coefficient alpha
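coefficient alpha (Cronbach's alpha) can be sketched in a few lines; the respondent data below are invented for illustration, and the function is a bare-bones version of the standard formula (alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)).

```python
from statistics import variance  # sample variance (n - 1 denominator)

def coefficient_alpha(scores):
    """Cronbach's coefficient alpha; rows = respondents, columns = items."""
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # column-wise item responses
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses: 4 respondents x 3 items that move in lockstep,
# so internal consistency is perfect.
consistent = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(coefficient_alpha(consistent))   # 1.0
```

when items disagree with each other, the sum of item variances grows relative to the variance of the totals and alpha drops.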

115
Q

validity

A

the extent to which a specific measurement provides data that relate to commonly accepted meanings of a particular concept

116
Q

determining validity

A

1 - face validity
2 - content validity
3 - criterion-related validity
4 - construct validity

1 & 2 are based on expert judgments and 3 & 4 are empirical forms of validity

117
Q

content validity

A

the degree to which a measure covers the range of meanings included within the concept

118
Q

predictive validity

A

the measure is being tested according to its ability to predict a criterion that will occur in the future

119
Q

concurrent validity

A

the measure is being tested for its correspondence to a criterion that is known concurrently

120
Q

known groups validity

A

assesses whether an instrument accurately differentiates between groups known to differ in respect to the variable being measured

121
Q

sensitivity of an instrument

A

the ability to detect subtle differences between groups or subtle changes over time within a group

122
Q

construct validity

A

testing whether a measure fits theoretical expectations; involves testing the measure’s convergent validity and discriminant validity

123
Q

convergent validity

A

when a measure’s results correspond to the results of other methods of measuring the same construct

124
Q

discriminant validity

A

when a measure’s results correspond more highly with other measures of the same construct than with measures of alternative constructs

125
Q

factorial validity

A

how many different constructs a scale measures and whether the number of constructs and the items making up those constructs are what the researcher intends

126
Q

validity of qualitative research

A

researchers disagree about definitions and criteria for reliability and validity, and some argue these concepts are not applicable; the disagreement is connected to differing assumptions about the nature of reality and objectivity

127
Q

questionnaires

A

provide a method of collecting data by asking people questions OR asking them to agree or disagree with statements that represent differing points of view

128
Q

types of questions

A

open-ended (respondents supply their own answers) or close-ended (they select from a list of answers provided them)

129
Q

best question type in a questionnaire

A

short items; guidelines: keep the questionnaire spread out and uncluttered, avoid double-barreled questions, and make sure respondents are willing and competent to answer

130
Q

things to avoid in questionnaires

A

negative items and terms because they may confuse respondents

131
Q

bias

A

in questionnaires, this is the quality that encourages respondents to answer in a particular way to avoid or support a particular point of view

132
Q

acquiescent response set

A

agreeing or disagreeing with most or all statements regardless of their content

133
Q

contingency questions

A

questions that should be answered only by people giving a particular response to some preceding question; this format is highly useful because it avoids asking people questions that have no meaning for them

134
Q

matrix questions

A

a standardized set of closed-ended response categories is used in answering several questionnaire items; this format can facilitate the presentation and completion of items

135
Q

ordering questions

A

question order can affect the answers; be sensitive to the problem; pretest different forms to measure ordering effect; in interviews, start with generic non-threatening questions

136
Q

single indicators

A

may not have sufficiently clear validity to warrant their use

137
Q

composite measures

A

include several indicators of a variable in one summary measure

138
Q

face validity

A

an indicator seems at face value to provide some measure of the variable; this is the first criterion for the selection of indicators to be included in a composite measure

139
Q

index or scales

A

once an index or scale has been constructed, it is essential that it be validated, especially across cultures

140
Q

likert scales

A

a measurement technique based on the use of standardized response categories for several questionnaire items (strongly agree to strongly disagree); this format is popular and extremely useful
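scoring a Likert scale can be sketched as summing the item responses after reverse-coding any negatively worded items; the item data and function name below are invented for illustration.

```python
# Hypothetical 5-point Likert items (1 = strongly disagree ... 5 = strongly agree).
# Negatively worded items must be reverse-coded before summing so that a
# high total always means "more of" the construct being measured.

def score_likert(responses, reverse_items, points=5):
    """Sum item responses, reverse-coding the item positions in reverse_items."""
    total = 0
    for i, answer in enumerate(responses):
        if i in reverse_items:
            answer = (points + 1) - answer   # 1 <-> 5, 2 <-> 4, 3 stays 3
        total += answer
    return total

# A respondent's answers on a 4-item scale; items 1 and 3 are negatively worded.
answers = [5, 1, 4, 2]
print(score_likert(answers, reverse_items={1, 3}))   # 5 + 5 + 4 + 4 = 18
```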

141
Q

quantitative measures

A

always highly structured, tend to use closed-ended questions, and may be administered in either an interview or questionnaire format

142
Q

qualitative measures

A

rely on interviews that are often unstructured and mainly contain open-ended questions and in-depth probes

143
Q

inference

A

a conclusion that can be logically drawn in light of our research design and our findings

144
Q

causal inference

A

derived from a research design and findings that logically imply that the independent variable really has a causal impact on the dependent variable

145
Q

research design

A

all the decisions made in planning and conducting research, including decisions about measurement, sampling, how to collect data, and logical arrangement designed to permit certain kinds of inferences

146
Q

three basic criteria for the determination of causation in scientific research

A

1 - the independent (cause) and dependent (effect) variables must be empirically related to each other

2 - the independent variable must occur earlier in time than the dependent variable

3 - the observed relationship between these two variables cannot be explained away as being due to the influence of some third variable that causes both of them

147
Q

internal validity

A

the confidence we have that the results of a study accurately depict whether one variable is or is not a cause of another

148
Q

common threats to internal validity

A
1 - history
2 - maturation
3 - testing
4 - instrumentation changes
5 - statistical regression
6 - selection bias
7 - causal time order
149
Q

three forms of pre-experiments

A

one-shot case study, the one-group pretest-posttest design, and the posttest-only design with nonequivalent groups

150
Q

experiments

A

an excellent vehicle for the controlled testing of causal processes; the classical experiment tests the effect of an experimental stimulus on some dependent variable through the pretesting and posttesting of experimental and control groups

151
Q

solomon four-group design & posttest-only control group design

A

a variation on the classical experiment that attempts to safeguard against problems associated with testing effects

152
Q

randomization

A

the generally preferred method for achieving comparability in the experimental and control groups; it is more important that experimental and control groups be similar to one another than that a group of experimental subjects be representative of some larger population
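random assignment can be sketched as shuffling the participant pool and splitting it into experimental and control groups; the participant IDs and helper below are invented for illustration.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign participants to experimental and control groups
    of (near-)equal size by shuffling the pool and splitting it."""
    rng = random.Random(seed)   # seed only to make the sketch reproducible
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]   # (experimental, control)

# Hypothetical participant IDs 1..20
experimental, control = randomize(range(1, 21), seed=42)
print(len(experimental), len(control))   # 10 10
```

because assignment is random, the two groups should be comparable on both measured and unmeasured characteristics, which is the point of the definition above.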

153
Q

control group

A

in social work settings, control groups do not need to be denied services; they can receive alternate, routine services OR be put on a waiting list to receive the experimental intervention

154
Q

attrition

A

techniques to minimize attrition include:

1 - reimbursing participants for their participation
2 - avoiding intervention or research procedures that disappoint or frustrate them
3 - tracking participants

155
Q

blind raters

A

a measurement procedure that can control for researcher or practitioner bias toward perceiving results that would confirm the hypothesis

156
Q

obtrusive observation

A

occurs when the participant is keenly aware of being observed and thus may be predisposed to behave in ways that meet experimenter expectancies

157
Q

unobtrusive observation

A

means that the participant does not notice the observation

158
Q

external validity

A

the extent to which we can generalize the findings of a study to settings and populations beyond the study conditions

159
Q

nonequivalent comparison groups design

A

the experimental group is compared to an existing group that appears similar to it, which is necessary when random assignment to experimental and control groups isn’t possible

160
Q

internal validity of nonequivalent comparison group designs

A

ways to strengthen internal validity include:

1 - selecting a comparison group as similar as possible to the experimental group
2 - administering multiple pretests
3 - switching replications

161
Q

time-series designs

A

can be used as an alternative to the nonequivalent comparison groups design; attempts to attain internal validity through the use of repeated measures before and after the introduction of an intervention

162
Q

case-control design

A

compares groups of cases that have had contrasting outcomes and then collects retrospective data about past differences that might explain the difference in outcomes

163
Q

recall bias

A

when a person’s current recollections of the quality and value of past experiences are tainted by knowing that things did or did not work out for them in later life; a common limitation in case-control designs

164
Q

effects of pitfalls in implementation of experiments in service-oriented agencies

A

may compromise the fidelity of the interventions being evaluated, contaminate the control condition or the case assignment protocol, or hinder client recruitment and retention

165
Q

detect and alleviate practical pitfalls

A

conduct a pilot study of the experiment or quasi-experiment before implementing it in full

166
Q

assessing intervention fidelity

A

videotape several randomly selected treatment sessions from each of your practitioners; have two experts in the intervention independently view each taped session and then complete a rating scale assessing their judgment of the degree to which the intervention in the session was implemented appropriately

167
Q

baselines

A

control phases of repeated measures taken before an intervention is introduced; should be extended until a stable trend in the data is evident

168
Q

single-case designs

A

can be used by practitioners to monitor client progress or their own effectiveness more scientifically and systematically; they have special measurement problems so triangulation is recommended

169
Q

AB designs

A

have the weakest control for history but they are the most feasible designs, can provide useful information, and are an excellent way to implement the final stage of the evidence-based process

170
Q

weakness of single-case designs

A

they have limited external validity because it’s a sample of one; this problem can be alleviated through replication

171
Q

purpose of program evaluation

A

applies various research methods and designs in order to assess and improve the:

1 - conceptualization
2 - design
3 - planning
4 - administration
5 - implementation
6 - effectiveness
7 - efficiency
8 - utility of social interventions and human service programs
172
Q

age of accountability

A

systematic and scientific approaches to program evaluation burgeoned during the latter half of the 20th century as social welfare spending increased

173
Q

advent of managed care

A

has intensified pressures on service providers to evaluate outcomes of their services and vested interests in obtaining findings that make their services look effective

174
Q

highly political atmosphere

A

program evaluation can impact funding decisions, thus stakeholders may impede free scientific inquiry; political and ideological forces can influence methodology, interpretation of evaluative research, and how its findings are used

175
Q

steps to alleviate potential problems in studies

A

1 - learning as much as possible about the stakeholders and their vested interests in the evaluation
2 - involving them in a meaningful way in all phases of planning and performing the evaluation
3 - maintaining ongoing mutual feedback between stakeholders and the evaluator
4 - tailoring the evaluation and its reportage to their needs and preferences as much as possible without sacrificing scientific objectivity

176
Q

complications of evaluating program outcome attainment

A

ambiguities in…

1 - determining the specific outcome indicators implied by organizational goals
2 - the intentionally grandiose nature of mission statements
3 - the displacement of official goals by unofficial ones (programs were never implemented as intended)

177
Q

supplementing outcome evaluations

A

outcome evaluations should be supplemented by evaluations that monitor program implementation, which can help resolve problems early on, keep agencies accountable, and identify the best ways to implement and maintain programs

178
Q

five approaches to needs assessment

A

(ideally a needs assessment will combine more than one approach)

1 - surveying key informants
2 - holding a community forum
3 - examining rates under treatment
4 - analyzing social indicators
5 - conducting a direct survey of the community or target group
179
Q

logic models

A

a graphic picture depicting essential program components, their linkage to short-term process objectives, indicators of success in achieving short-term objectives, how those short-term objectives lead to long-term program outcomes, and indicators of success in achieving long-term outcomes

180
Q

focus groups

A

a qualitative method that can be useful in program evaluation

181
Q

sample

A

a special subset of a population that is observed for purposes of making inferences about the nature of the total population

182
Q

purposive sample

A

also called a judgmental sample, a type of nonprobability sampling method in which the researcher uses his or her own judgment in selecting sample members (ex: handpick community leaders or experts known for their expertise on target population)

183
Q

informants

A

should be selected in such a fashion as to provide a broad, diverse view of the group under study

184
Q

nonprobability sampling methods

A

in general, these are regarded as less reliable than probability sampling methods, but they are often easier and cheaper to use; 4 types: reliance on available subjects, purposive or judgmental sampling, quota sampling, and snowball sampling

185
Q

quality of a sample

A

the main criterion of a sample's quality is the degree to which it is representative

186
Q

probability sampling methods

A

the chief principle is that every member of the total population must have the same known, nonzero probability of being selected into the sample; one excellent way to select samples that will be quite representative

187
Q

reliance on available subjects

A

sampling from subjects who are available (ex: how much an agency’s services help a particular client or group of clients)

188
Q

snowball sampling

A

process of accumulation as each located subject suggests other participants

189
Q

quota sampling

A

a relative proportion of the total population is assigned for each of the target population's characteristics

190
Q

sampling error

A

even the most carefully selected sample will almost never perfectly represent the population; there will always be some degree of sampling error

191
Q

sampling frame

A

a list or quasi-list of the members of a population, it is the resource used in the selection of a sample; a sample’s representativeness depends directly on the accuracy of the sampling frame

192
Q

simple random sampling

A

each element in the sampling frame is assigned a number, and elements are then selected at random; the most fundamental technique in probability sampling
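
A minimal Python sketch of the procedure described above (the function name and the numeric frame are illustrative assumptions):

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n elements from the sampling frame, each combination being
    equally likely; random.sample selects without replacement."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

frame = list(range(1, 101))  # a sampling frame of 100 numbered elements
chosen = simple_random_sample(frame, 10, seed=1)
print(chosen)
```

In practice the "numbering" of elements is exactly what lets a random-number generator stand in for drawing names from a hat.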

193
Q

systematic sampling

A

the selection of every kth member from a sampling frame; functionally equivalent to simple random sampling, except that elements are chosen at a fixed interval, with the first element selected at random to avoid bias; it is important to examine the nature of the list and whether a particular ordering of its elements will bias the sample selected
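
The every-kth-element rule with a random start can be sketched as follows (a simplified illustration assuming the frame length is a multiple of the interval):

```python
import random

def systematic_sample(frame, n, seed=None):
    """Select every kth element of the frame after a random start
    chosen within the first interval (the random start avoids bias)."""
    k = len(frame) // n               # the sampling interval
    rng = random.Random(seed)
    start = rng.randrange(k)          # random position in first interval
    return [frame[start + i * k] for i in range(n)]

frame = list(range(100))
print(systematic_sample(frame, 10, seed=7))  # 10 elements, spaced 10 apart
```

If the list were ordered in a cycle that matched the interval k, every selected element would share that cyclical property; that is the ordering bias the card warns about.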

194
Q

stratified sampling

A

the process of grouping the members of a population into relatively homogenous strata before sampling; this practice improves the representativeness of a sample by reducing the degree of sampling error
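
The group-then-sample process can be sketched in Python; this is an illustrative proportionate design (the function names and the sampling fraction are assumptions):

```python
import random

def stratified_sample(frame, stratum_of, fraction, seed=None):
    """Group the frame into homogeneous strata, then draw the same
    fraction from each stratum, so every stratum is represented in
    proportion to its size."""
    rng = random.Random(seed)
    strata = {}
    for element in frame:
        strata.setdefault(stratum_of(element), []).append(element)
    sample = []
    for members in strata.values():
        n = round(len(members) * fraction)
        sample.extend(rng.sample(members, n))
    return sample

# 100 elements split into two equal strata by parity; 10% from each
sample = stratified_sample(list(range(100)), lambda x: x % 2, 0.1, seed=3)
print(len(sample))  # prints: 10
```

Because each stratum contributes exactly its share, the stratum proportions in the sample match the population, which is how stratification reduces sampling error.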

195
Q

multistage cluster sampling

A

a more complex sampling technique that is frequently used when a list of all the members of a population does not exist; an initial sample of groups of members (clusters) is selected first, then all members of the selected clusters are listed, and finally the listed members are subsampled, which provides the final sample of members

196
Q

probability proportionate to size (PPS)

A

a special, efficient method for multistage cluster sampling

197
Q

unequal probabilities

A

bias should be avoided in sampling; when observations have unequal probabilities of selection, assign weights to the different observations in order to provide a representative picture of the total population
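
The weighting idea can be sketched as an inverse-probability-weighted mean (a minimal illustration; the function name and figures are assumptions, not from the text):

```python
def weighted_mean(values, selection_probs):
    """Weight each observation by the inverse of its selection
    probability, so cases that were oversampled (high probability)
    count for less and undersampled cases count for more."""
    weights = [1.0 / p for p in selection_probs]
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# One case sampled with probability 0.5, another with 0.25:
# the second case represents twice as many population members.
estimate = weighted_mean([10, 20], [0.5, 0.25])
print(round(estimate, 2))  # prints: 16.67
```

An unweighted mean of the same two values would be 15, understating the group that was harder to reach; the weights correct that distortion.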

198
Q

survey research

A

the administration of questionnaires to a sample of respondents selected from some population; a popular social research method; appropriate for making descriptive studies of large populations

199
Q

ways of administering questionnaires

A

1 - self-administered
2 - face-to-face encounters
3 - telephone surveys

200
Q

nonresponse bias

A

occurs when a substantial number of people in a randomly selected sample choose not to participate

201
Q

follow-up mailings

A

should be sent to potential respondents who fail to respond to the initial appeal of a self-administered questionnaire; properly monitor returns to determine when this is appropriate

202
Q

interviewers

A

they must be neutral and carefully trained to be familiar with the questionnaire

203
Q

probe

A

a neutral, nondirective question that is designed to elicit an elaboration on an incomplete or ambiguous response given in an interview to an open-ended question; EX: “How is that?” or “In what ways?”

204
Q

self-administered questionnaire

A

advantages: inexpensive, speed, lack of interviewer bias, and the possibility of anonymity and privacy to encourage more candid responses on sensitive issues

205
Q

telephone surveys

A

have become more common and more effective

206
Q

unobtrusive measures

A

ways of studying social behavior without affecting it in the process

207
Q

secondary analysis

A

a form of research in which the data collected and processed in one study are re-analyzed in a subsequent study (often by a different researcher and for a different purpose)

208
Q

institutional review board (irb)

A

studies involving human subjects need to obtain approval from the IRB, which became widespread in the 1970s; most research runs some risk of harm, even embarrassment and other psychological harm, and that harm must be minimized

209
Q

voluntary participation

A

research ethics: no one should be forced to participate

210
Q

informed consent

A

all participants must be aware that they are participating in a study, be informed of all the consequences, and consent to participate; this can conflict with the scientific need for generalizability

211
Q

ethical issues in social work research

A

deception is unethical; negative findings should be reported

212
Q

benefits and costs

A

IRBs determine whether the long-term benefits of a study are thought to outweigh the violations of certain ethical norms, a difficult and highly subjective process

213
Q

NASW code of ethics

A

social workers can violate our ethical responsibilities when we conduct research or when we refrain from using it to guide our practice

214
Q

politics of social work

A

ideological priorities can restrict inquiry and lead to incomplete or distorted knowledge building

215
Q

culturally competent researchers

A

will attempt to include a sufficient and representative number of research participants from minority and oppressed groups; will use measurement procedures that have been shown to be reliable and valid for the minority groups