Session 6 Flashcards Preview

Flashcards in Session 6 Deck (98)
1
Q

List 4 examples of measurement devices.

A
  1. Test
  2. Questionnaire
  3. Interview schedule/protocol
  4. Personality scale
2
Q

List 2 factors of validity

A
  1. the extent to which a measure/instrument measures what it is designed to measure
  2. the extent to which it accurately performs the function(s) it is purported to perform
3
Q

List 3 KEY points about validity

A
  1. validity is relative to the purpose of testing
  2. validity is a matter of degree
  3. no measure/instrument is perfectly valid
4
Q

What is a ‘construct’?

4 features

A
  1. an abstract concept used in a particular theoretical manner to relate different behaviors according to their underlying features or causes
  2. used to describe, organize, summarize and communicate our interpretations of behavior
  3. abstract term used to summarize and describe behaviors that share certain attributes
  4. collection of related behaviors that are associated in a meaningful way
5
Q

Why is validity important in quantitative research?

A

researchers reduce constructs to numerical scores

6
Q

Why is validity important in qualitative research?

A

researchers must describe results in enough detail so that readers can picture the meanings that have been attached to a construct

7
Q

List 3 types of validity.

A
  1. judgmental
  2. empirical
  3. judgmental-empirical
8
Q

List and define 2 types of judgmental validity.

A
  1. Content: expert judgment
  2. Face: participant judgment

9
Q

List and define 4 types of Empirical validity.

A
  1. criterion-predictive: correlation
  2. criterion-concurrent: correlation
  3. Convergent: correlation
  4. Divergent: correlation
10
Q

Judgmental-Empirical Validity is what type?

A

Construct validity

11
Q

Judgmental-Empirical construct validity is established by what 2 things?

A
  1. hypothesize about the relationship
  2. test the hypothesis

12
Q

Judgmental validity is an approach to establishing validity that uses ______________, usually of _____________ and therefore is only as good as the ____________. (6, 10)

A
  1. judgments
  2. experts
  3. judges
13
Q

Content Validity is a type of _______________ validity.

A

judgmental

14
Q

Content validity is ______________.

A

the degree to which measurements actually reflect the variable of interest

15
Q

What two questions does content validity answer?

A
  1. Are we tapping the appropriate content with the measure?
  2. Does the instrument cover all the areas that need to be observed, AND does it cover them equally or in proportion to their importance?
16
Q

Three principles for writing tests with high content validity

A
  1. Broad content coverage
  2. Focus weighted to reflect importance
  3. Appropriate level of language (vocabulary, sentence length) for the audience
17
Q

Face Validity is a type of _____________ validity.

A

judgmental

18
Q

Face Validity is __________.

A

the degree to which an instrument appears to be valid on the face of it

19
Q

The _______________ Test does not have very good face validity.

A

Rorschach

20
Q

What is the question that Face validity answers?

A

On superficial inspection, does the instrument appear to measure what it purports to measure?

21
Q

The Rorschach Test is designed to measure ____________.

A

psychopathology

22
Q

Who are the experts for the Rorschach Test?

A

the person taking the test

23
Q

Making the measurement tool LOOK like it’s measuring what it claims to be measuring is important to ___________ Validity.

A

Face

24
Q

When is low Face Validity desirable?

A

when researchers want to disguise the true purpose of the research from respondents, because participants might otherwise answer inaccurately to meet socially acceptable expectations

25
Q

What is Empirical Validity?

A

an approach to establishing validity that relies on, or is based on, observation or planned data collection rather than theory or subjective judgment

26
Q

Empirical validity is usually reported as a ____________ ____________.

A

Validity Coefficient

27
Q

What is the Validity Coefficient?

A

a correlation coefficient used to express validity

28
Q

A correlation coefficient can range from ______ to _____ to ______.

A

-1 to 0 to +1
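
A minimal sketch of how a validity coefficient is computed in Python; the participant scores and criterion values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical data: scores on a new instrument and on a criterion
# measure (an already-established test) for eight participants.
test_scores = np.array([12, 15, 9, 20, 17, 11, 14, 18])
criterion = np.array([30, 34, 25, 41, 38, 27, 31, 39])

# The validity coefficient is the Pearson correlation between the
# instrument's scores and the criterion; it ranges from -1 to +1.
validity_coefficient = np.corrcoef(test_scores, criterion)[0, 1]
print(f"Validity coefficient: {validity_coefficient:.2f}")
```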

29
Q

Validity coefficients are typically low because _________ and _________.

A
  1. performance on many criteria is complex, involving many traits
  2. criterion measures themselves may not be highly valid
30
Q

A correlation coefficient closer to zero means there is _________ correlation.

A

low

31
Q

The closer a correlation coefficient is to -1 or +1, the _________ valid the measurement.

A

MORE

32
Q

Validate the measurement against some kind of criterion, such as (3) __________, _________, ___________.

A

rule
standard
already existing test

33
Q

Criterion Validity is a type of ______________ Validity.

A

Empirical

34
Q

Criterion Validity is ____________.

A

the extent to which the scores obtained from a procedure correlate with an observable behavior

35
Q

What is a criterion?

A
  1. a rule or standard for making a judgment
  2. the standard by which the test is being judged

36
Q

The two types of Criterion Validity are __________ and ________.

A
  1. Predictive
  2. Concurrent

37
Q

Predictive (criterion validity) is ________

A

the extent to which a procedure allows for accurate predictions about a participant’s future behavior

38
Q

Concurrent (criterion validity) is ____________.

A

the extent to which a procedure correlates with the present behavior of participants

39
Q

Convergent Validity is _______

A

correlation with an already-established, valid instrument, used to establish that a new instrument is equally valid

40
Q

Divergent Validity is __________

A

correlation with a known, valid measurement of a variable that is the opposite of the construct of interest

41
Q

What is Judgmental-Empirical Validity?

A

an approach to establishing validity that relies on subjective judgments and data based on observation
*combo: expert and observation

42
Q

Construct Validity is a type of ______________ Validity.

A

Judgmental-empirical

43
Q

Construct validity is _____.

A

the extent to which a measurement reflects the hypothetical construct of interest
** not observable

44
Q

What is a construct?

A
  1. an abstract concept used in a particular theoretical manner to relate different behaviors according to their underlying features or causes
  2. used to describe, organize, summarize and communicate our interpretations of behavior
  3. term used to summarize and describe behaviors that share certain attributes
  4. a collection of related behaviors that are associated in a meaningful way
45
Q

A ___________ does not have a physical being outside of its indicators.

A

construct

46
Q

Researchers infer the existence of a construct by observing the ____________ of related indicators.

A

collection

47
Q

What is the collection of indicators in a construct?

A
  1. historical facts: family, medical, social
  2. symptoms: behaviors, family reports
  3. Clinical judgment and observation
48
Q

Two factors in determining construct validity.

A
  1. Judgment about the nature of the relationship: hypothesize about how the construct, in the form of the instrument designed to measure it, should affect or relate to other variables
  2. Empirical evidence: test the hypothesis using empirical methods
49
Q

The method for determining construct validity offers only ____________ evidence regarding the validity of a measure.

A

indirect

50
Q

Often construct validity is found through ___________ evidence.

A

indirect

51
Q

Because the evidence for construct validity is indirect, researchers should be very cautious about declaring a measure to be valid on the basis of a ____________ study.

A

single

52
Q

Construct validity is _________ secure

A

less

53
Q

In construct validity researchers usually test a number of ___________ about the construct before determining construct validity.

A

hypotheses

54
Q

A synonym for Reliability is ______.

A

consistency

55
Q

____________ is more reliable than subjective.

A

objective

56
Q

Reliability is __________.

A

the degree to which measurements are consistent

57
Q

Types of Reliability errors are ___________.

A
  1. Random
  2. Chance
  3. Unsystematic
    ** interchangeable terms
58
Q

Two important facts about reliability errors

A
  1. since such errors are in principle random and unbiased, they tend to cancel each other out.
  2. the sum of chance errors, when a sufficiently large number of cases is considered, approaches zero
59
Q

The more concerning type of Reliability Error is ___________.

A
  1. Systematic Error
  2. Constant Error

60
Q

Definition of systematic error

A

an error produced by some factor that affects ALL observations similarly so that the errors are always in one direction and do not cancel each other out

61
Q

A systematic error is usually a constant error and can be detected and _________________ for during statistical analysis.

A

corrected

62
Q

What is the relationship between Reliability and Validity?

A

reliability is a precursor of validity

63
Q

A test cannot be valid if it is not first ____________.

A

reliable

64
Q

Reliability comes ________, before it can be ________.

A

first

valid

65
Q

______ before ______

A

R before V

66
Q

High reliability means ________ random error.

A

little

67
Q

High validity correlates with _______ true score

A

HIGH

68
Q

Low reliability means ___________ random error

A

High

69
Q

Can you have low reliability and high validity?

A

No, because you MUST have high reliability BEFORE validity can be considered

70
Q

Two factors in the classic model for measuring reliability.

A
  1. measure twice
  2. check that the scores are consistent with each other, usually with a correlation coefficient known as a reliability coefficient (see the sketch below)
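
A minimal sketch of this measure-twice-and-correlate model in Python; the test–retest scores and variable names below are invented purely for illustration, and the .80 check uses the guideline for individuals given later in the deck:

```python
import numpy as np

# Hypothetical test-retest data: the same six participants measured
# on two occasions with the same instrument.
time_1 = np.array([22, 30, 27, 35, 19, 25])
time_2 = np.array([24, 29, 28, 34, 21, 26])

# The reliability coefficient is the correlation between the two
# administrations; scores that "move together" indicate consistency.
reliability = np.corrcoef(time_1, time_2)[0, 1]
print(f"Reliability coefficient: {reliability:.2f}")
print("Meets the .80 guideline for individuals:", reliability >= 0.80)
```
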
71
Q

What is the range of reliability?

A

-1 to 0 to +1

72
Q

What are the three ways of measuring Reliability?

A
  1. Inter observer or Inter-rater
  2. Test-retest
  3. Parallel forms
73
Q

Describe an inter observer or inter-rater method.

A

the extent to which raters agree on the scores they assign to a participant’s behavior

74
Q

Describe Test-retest method.

A

the consistency with which participants obtain the same overall score when tested at different times

75
Q

Describe Parallel forms method.

A

the consistency with which participants obtain the same overall score when given two forms of the same test, spaced slightly apart in time

76
Q

How high should the reliability coefficient be?

A

.80 for individuals

.50 for groups of 25 or more

77
Q

Why can the reliability coefficient for groups be lower than for individuals?

A
  1. reliability coefficients indicate the reliability for individuals’ scores
  2. Group scores are averages
    * statistical theory indicates that averages are more reliable than the scores that underlie them (individual scores) because, when computing an average, the negative errors tend to cancel out the positive errors (see the simulation sketch below)
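
A small simulation sketch of that claim, assuming only random (unsystematic) error around a fixed true score; all numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A true score of 50 measured with purely random (unsystematic) error
# for groups of 25 individuals, repeated 10,000 times.
true_score = 50.0
errors = rng.normal(loc=0.0, scale=5.0, size=(10_000, 25))
observed = true_score + errors

# Individual scores spread with the full error SD (about 5), but in the
# group average the positive and negative errors largely cancel out.
print("SD of individual scores:", round(observed.std(), 2))
print("SD of group averages:   ", round(observed.mean(axis=1).std(), 2))
```
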
78
Q

What is internal consistency/reliability?

A

use the scores from a single administration of a test to examine the consistency of test scores
*examines the consistency within the test itself

79
Q

List two methods for establishing internal consistency/reliability.

A
  1. split-half
  2. Cronbach's Alpha (preferred)

80
Q

What is the Split-half method of establishing internal consistency/reliability?

A

correlate scores on one half of the test with scores on the other half of the test
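
A minimal sketch of a split-half check in Python; the item-level data (six participants, eight right/wrong items) and the odd/even split are assumptions made up for illustration:

```python
import numpy as np

# Hypothetical item-level scores: 6 participants x 8 test items (0/1).
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0, 1, 0, 0],
])

# Split the test into two halves (here: odd- vs. even-numbered items)
# and correlate participants' scores on one half with the other half.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half correlation: {split_half_r:.2f}")
```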

81
Q

What is the Cronbach’s alpha method of establishing internal consistency/reliability?

A

mathematical procedure used to obtain the equivalent of the average of all possible split-half reliability coefficients
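
A minimal sketch of that computation using the standard alpha formula; the participant-by-item matrix is invented, and `cronbach_alpha` is a hypothetical helper written here, not a library function:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 participants answering 4 Likert items (1-5).
scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```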

82
Q

A larger number of items leads to a _________ result.

A

better

83
Q

Cronbach’s alpha is a formula used frequently in the social sciences because it measures one particular _____________.

A

attribute

84
Q

High internal consistency/reliability is desirable when a researcher has developed a test designed to measure a __________ unitary variable

A

single

85
Q

Alphas should be ______ or more.

A

.80

86
Q

In a test that measures several attributes, you can still segment out each attribute’s questions and perform a ____________ on those questions for each attribute

A

Cronbach’s

87
Q

List three types of norm- and criterion-referenced tests.

A
  1. Norm-referenced
  2. Standardized
  3. Criterion-referenced
88
Q

What is a norm-referenced test?

A

tests designed to facilitate a comparison of an individual’s performance with that of a norm group

89
Q

What is a standardized test?

A

tests that come with standard directions for administration and interpretation

90
Q

What is a criterion-referenced test?

A

tests designed to measure the extent to which individual examinees have met performance standards (i.e., specific criteria)

91
Q

List 3 attributes of Achievement Tests

A
  1. measures knowledge and skills individuals have already acquired
  2. Reliability: dependent on objectivity of scoring
  3. Validity: dependent on comprehensiveness of coverage of stated knowledge or skill domain
92
Q

What is an achievement test?

A

a measure of optimal performance

93
Q

What is an Aptitude Test?

A

a measure of potential performance

94
Q

List 4 attributes of Aptitude Tests

A
  1. predict some specific type of achievement
  2. measure the likelihood that an individual will be able to acquire knowledge and skills in a particular area
  3. Reliability: r = .80 or higher for published tests
  4. Validity: determined by correlating scores with a measure of achievement obtained at a later time (r = .20 - .60 for published tests)
95
Q

List 4 attributes of Intelligence Tests

A
  1. predict achievement in general, not any one specific type
  2. measure the likelihood that an individual will be able to acquire knowledge and skills in general
  3. Reliability: no information provided
  4. Validity: published tests have low to modest validity for predicting achievement in school
96
Q

List 4 criticisms of Intelligence Tests.

A
  1. tapping into culturally bound knowledge and skills rather than innate (inborn) intelligence
  2. Slanted towards dominant racial or ethnic groups
  3. measure knowledge and skills that are acquired with instruction/formal schooling
  4. don’t measure all important aspects of intelligence
97
Q

What is a Likert-Type Scale?

A
  1. 5-point scale ranging from 1 to 5
  2. use verbal anchors for each number
  3. reduce response bias by providing positive and negative statements (see the scoring sketch below)
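
A small scoring sketch for such a scale; the items, the responses, and the reverse-coding of the negatively worded statement (6 minus the raw response) are assumptions for illustration, since the card itself does not describe scoring:

```python
# Verbal anchors for each point on a hypothetical 5-point Likert-type item.
ANCHORS = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
           4: "Agree", 5: "Strongly agree"}

# One positively and one negatively worded statement (invented examples).
responses = {
    "I enjoy my coursework": 4,    # positively worded item
    "I dread going to class": 2,   # negatively worded item
}
negatively_worded = {"I dread going to class"}

def scored(item: str, raw: int) -> int:
    # Reverse-code negative statements so a higher score always means a
    # more favorable attitude; mixing positive and negative statements
    # and reverse-coding them is what helps counter response bias.
    return 6 - raw if item in negatively_worded else raw

raw = responses["I enjoy my coursework"]
print(f"'I enjoy my coursework' -> {raw} ({ANCHORS[raw]})")

total = sum(scored(item, value) for item, value in responses.items())
print("Scale total:", total)  # 4 + (6 - 2) = 8
```
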
98
Q

A Likert scale is an __________-level scale.

A

interval