Chapter 10 & Readings Flashcards

1
Q

What are some critical issues related to data collection?
A. Language of participants
B. Literacy level of participants
C. Use of a dominant or colonizing language
D. All of the above

A

D

2
Q

Mertens and Wilson recommend that when you use mixed methods data collection, you be aware of the implications of your ontological and epistemological beliefs.
A. True
B. False

A

True

3
Q

Reliability can be influenced by how the instrument is administered.
A. True
B. False

A

True

4
Q

Reliability and validity are commonly used terms to describe the quality of quantitative data collection. What does validity mean in this situation?
A. Does the instrument measure cultural competence?
B. Does the instrument (as used with the participants) really measure what it is supposed to measure?
C. Does the instrument measure what it is supposed to measure consistently?
D. Does the instrument reliably measure what it is supposed to measure over time?

A

B

5
Q

An evaluator created a test. In order to test reliability, he had the participants take the test and he analyzed the results to examine the consistency of their responses. What is this an example of?
A. Repeated measures reliability
B. Intraparticipant reliability
C. Internal-consistency reliability
D. Multi-dimensional reliability

A

C

6
Q

In multiple regression, when we say that we control for the effects of some variable(s) we are:
A. statistically adjusting or subtracting the effects of a variable to see what a relationship would have been without it
B. actually removing a variable from a model so that it does not interact with the effects of other variables
C. changing the mediating capabilities of an endogenous variable
D. changing the mediating capabilities of an exogenous variable

A

A
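
A minimal sketch of option A, using numpy and simulated (made-up) data rather than anything from the textbook: regressing y on x alone lets x's coefficient absorb part of the confounder z's effect, while adding z to the model statistically subtracts that effect out.

import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                       # variable we want to control for
x = 0.8 * z + rng.normal(size=n)             # predictor correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # outcome influenced by both

# Simple regression of y on x alone: x's coefficient absorbs part of z's effect.
X_simple = np.column_stack([np.ones(n), x])
b_simple, *_ = np.linalg.lstsq(X_simple, y, rcond=None)

# Multiple regression of y on x AND z: z's effect is statistically "subtracted
# out," so x's coefficient is close to its true value of 2.0.
X_multi = np.column_stack([np.ones(n), x, z])
b_multi, *_ = np.linalg.lstsq(X_multi, y, rcond=None)

print("x coefficient, z ignored:        ", round(b_simple[1], 2))
print("x coefficient, controlling for z:", round(b_multi[1], 2))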

7
Q

What are some important things to notice when you are conducting observations, according to Michael Patton (2002b) as discussed in your textbook?
A. Observing what does not happen, program setting, and native language used
B. Social setting, program activities and behaviors, and nonverbal communications
C. Informal interactions and unplanned activities
D. All of the above

A

D. All of the above

8
Q

What is the validity/credibility evidence strategy used when evaluators share data with participants to obtain feedback on perceived accuracy and quality?
A. Multiple data sources
B. Member checks
C. Persistent observations
D. Progressive subjectivity

A

B

9
Q

What is INTRArater reliability?
A. It is used to determine whether a single rater or observer is consistent over time.
B. It compares the data of two raters or observers to see whether they are rating the same behavior consistently.
C. It is used to compare two kinds of data collection to see whether they are describing the same event.
D. It is used to compare when different raters are administering similar instruments

A

A

10
Q

What are some forms of evidence used to support validity/credibility in quantitative data collection?
A. Construct validity and criterion-related validity
B. Peer debriefing
C. Member checks
D. Persistent observations

A

A

11
Q

What does the depiction of an evaluand include?

A

Specification of outputs
outcomes
knowledge
impacts

12
Q

What are the levels at which outcomes and impacts need to be measured?

A

individual client level
program or system level
broader community level
organizational level

13
Q

___________ is a critical issue that permeates decisions about data collection.

A

Language

Instruments can be translated and then back-translated into the original language to address language differences.

14
Q

_________ means does the instrument really measure what it is supposed to measure?

A

Validity

15
Q

______________ is consistency in measurement. Does the instrument measure what it is supposed to measure consistently?

A

Reliability

16
Q

Evaluators in the Values Branch developed parallel criteria for the quality of qualitative evaluations: ________ instead of reliability and __________ instead of validity.

A

dependability; credibility

17
Q

A reliability coefficient can be interpreted in two ways: _______ and ____________.

A

Coefficient of stability and alternate-form coefficient

18
Q

__________ is when the evaluator administers the same instrument twice, separated by a short period of time. Results are compared using a statistic such as a correlation coefficient; this is also called test-retest reliability.

A

Coefficient of stability
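
A minimal sketch of a coefficient of stability, using numpy and made-up scores (not textbook data): the same instrument is administered twice to the same people and the two sets of scores are correlated (test-retest reliability).

import numpy as np

time1 = np.array([12, 15, 9, 20, 18, 14, 11, 16])   # first administration
time2 = np.array([13, 14, 10, 19, 17, 15, 10, 17])  # same people, two weeks later

r = np.corrcoef(time1, time2)[0, 1]  # Pearson correlation between administrations
print(f"Coefficient of stability (test-retest r) = {r:.2f}")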

19
Q

___________ is when the evaluator administers two equivalent versions of the same instrument (parallel forms) to the same group of people. Results are compared using a statistic such as a correlation coefficient.

A

Alternate-form coefficient

20
Q

____________ is when participants take one instrument and their scores are subjected to an analysis to reveal the consistency of their responses within the instrument.

A

Reliability/precision
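
One common statistic for this kind of internal-consistency analysis is Cronbach's alpha. A minimal sketch with a small made-up item-response matrix (rows = participants, columns = items on the instrument); the data are illustrative only.

import numpy as np

scores = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")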

21
Q

_______________ is when two observers' data are compared to see whether they are consistently recording the same behaviors when they view the same events.

A

Interrater reliability
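
A minimal sketch of interrater reliability with made-up ratings (not textbook data): two observers code the same ten events, and agreement is summarized with raw percent agreement and Cohen's kappa, which corrects for chance agreement.

import numpy as np

rater_a = np.array(["on-task", "off-task", "on-task", "on-task", "off-task",
                    "on-task", "on-task", "off-task", "on-task", "on-task"])
rater_b = np.array(["on-task", "off-task", "on-task", "off-task", "off-task",
                    "on-task", "on-task", "off-task", "on-task", "on-task"])

p_observed = np.mean(rater_a == rater_b)  # raw percent agreement

# Chance agreement expected from each rater's marginal proportions.
categories = np.union1d(rater_a, rater_b)
p_expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")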

22
Q

______________is used to determine whether a single observer is consistently recording data over a period of time.

A

intrarater reliability

23
Q

___________ can be influenced by how the instrument is administered.

A

Reliability

24
Q

Psychologists recommend the use of different types of evidence to support ________ claims.

A

validity

25
Q

_________ is considered the unitary concept of validity: the degree to which all accumulated evidence supports the intended interpretation of scores for the proposed purpose.

A

Construct Validity

26
Q

_________ refers to items on the test that represent content covered in the program.

A

Content-related evidence

27
Q

__________ indicates that the measure actually reflects current or future behaviors or dispositions.

A

Criterion-related evidence

28
Q

Evaluators need to be aware of the consequences of using data, especially with regard to the potential to worsen inequities.

A

Consequential evidence

29
Q

Evaluators need to stay on site for sufficient time

A

Prolonged and substantial engagement

Strategy to enhance credibility

30
Q

Observations need to be conducted at a variety of times of the day, week, and year.

A

Persistent observations

strategy to enhance credibility

31
Q

An evaluator should find a peer with whom to discuss the study at different stages

A

Peer debriefing

strategy to enhance credibility

32
Q

Evaluators need to be aware of their assumptions, hypotheses, and understandings, and of how these change over the period of the study.

A

Progressive subjectivity

strategy to enhance credibility

33
Q

Evaluators can share their data with participants to obtain feedback on the perceived accuracy and quality of their work.

A

Member Checks

strategy to enhance credibility

34
Q

Qualitative evaluators recommend the use of _______.

A

Multiple Data Sources

strategy to enhance credibility

35
Q

Triangulation

A

The use of multiple data sources and different data collection strategies to strengthen the credibility of an evaluation's findings

36
Q

___________ evaluators want to assure their stakeholders that they have measured what they say they are measuring.

A

Methods Branch

37
Q

____________ evaluators want to provide evidence of the believability of their findings.

A

Values Branch

38
Q

___________ evaluators begin data collection by acknowledging power differences between themselves and study participants as well as the need to establish a trusting relationship with community members.

A

Social Justice Branch

39
Q

What are examples of Quantitative Data collection methods?

A

Tests, performance and portfolio assessments, surveys, goal attainment scaling, and analysis of secondary data sources

40
Q

What are the types of tests?

A

standardized, locally developed, objective, nonobjective, norm-referenced, and criterion-referenced
*Pilot testing is important

41
Q

Strong survey research utilizes what kind of samples?

A

random

42
Q

survey data collected from the research participants during a single, relatively brief time period

A

cross-sectional study

43
Q

Describe longitudinal, panel, and trend studies.

A

longitudinal: data are collected at more than one point over time
panel study: a type of longitudinal study that follows the same participants over time
trend study: samples are taken over time with the same questions asked, but different people are selected each time

44
Q

A technique used to measure the meaning participants attach to various attitudinal objects or concepts:

A

semantic differential technique

45
Q

A term whose meaning is debated by philosophers, but in everyday language implies that manipulation of one event produces another event

A

Causation

46
Q

The difference between what would have happened and what did happen when a treatment is administered

A

Effect