test 2 Flashcards

(107 cards)

1
Q

How do we describe personality

A

Personality traits, e.g., aggression, extroversion, introversion

2
Q

Traits

A

A long-term way in which people differ; traits are changeable only before emerging adulthood. They are also psychological constructs.

3
Q

States

A

Distinguish people momentarily (e.g., emotions).

4
Q

constructs

A

A scientific concept developed to describe or explain behavior.

5
Q

How does the environment/situation affect traits?

A

It controls the way a trait is expressed, but it does not change whether the trait is present.

6
Q

Can states and traits be quantified and measured?

A

yes

7
Q

How are definitions of traits formed/modified?

A

Trait definitions are modified based on a person's theoretical orientation; different test developers may define and measure a construct in different ways.

8
Q

when assessing a trait

A

we must understand the construct

9
Q

Test-related behaviors should

A

predict real-life behavior.

10
Q

a competent test giver

A

Should understand and appreciate the weaknesses of the test and how to compensate for them.

11
Q

T/F: sources of error are part of the assessment

A

True

12
Q

error

A

A factor other than what is being tested that affects the results of the test; the component of a score that is unrelated to the ability or trait being measured.

13
Q

test taker error

A

previous knowledge, not taking medication, distraction

14
Q

test giver error

A

scoring errors, lack of standardization

15
Q

error variance

A

The part of a test score attributed to sources other than the trait or ability being measured.

16
Q

who is responsible for error variance

A

the assessee and the assessor

17
Q

Assumption about psychological testing

A
1. Psychological traits and states exist
2. Traits and states can be quantified and measured
3. Test-related behavior should reflect real-life behavior
4. Tests have strengths and weaknesses
5. Various sources of error are part of assessment
6. Tests and assessments can be conducted in a fair manner
7. Testing and assessment benefit society
18
Q

how to be fair

A

follow guidelines

use the test with its intended population

19
Q

reliability

A

How consistent the scores for a particular test are; the proportion of total variance attributed to true variance.

20
Q

validity

A

Is the test measuring what it is supposed to measure?

21
Q

a good test should be

A

reliable and valid; it should benefit the test taker or society

22
Q

norm-referenced testing and assessment

A

A method of evaluating test scores by comparing an individual test taker's score with the scores of a group of test takers.

23
Q

Norms

A

The test performance data of a particular group, used as a reference for evaluating and interpreting individual test scores.

24
Q

normative sample

A

The reference group to which test takers are compared.

25
standardization
The process of administering a test, under specific guidelines, to a representative sample in order to establish norms.
26
stratified sampling
Samples that include the different subgroups of the population.
27
stratified random sampling
The population is divided into strata, and members are randomly selected from each stratum.
28
Purposive sampling
Arbitrarily selecting a sample believed to be representative of the population.
29
incidental/ convenience sample
An easily available sample; not always representative.
30
developing norms
After obtaining a sample, developers: administer the test with standard instructions; recommend a setting; summarize the data using descriptive statistics; provide a detailed description of the data and its implications.
31
types of norms
percentile, age, grade, subgroup, etc.
32
percentile
The percentage of people who fall at or below a specific score. A popular method because it is easy to calculate. One problem: with normally distributed raw scores, real differences appear minimized at the ends of the distribution and exaggerated in the middle.
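The calculation behind a percentile rank can be sketched in a few lines of Python (the raw scores below are made up for illustration; real percentile conventions vary slightly in how ties and rounding are handled):

```python
# Minimal sketch of a percentile-rank calculation on made-up raw scores.
def percentile_rank(scores, score):
    """Percentage of scores that fall at or below the given score."""
    at_or_below = sum(1 for s in scores if s <= score)
    return 100.0 * at_or_below / len(scores)

scores = [55, 60, 62, 65, 70, 70, 74, 80, 85, 90]
print(percentile_rank(scores, 70))  # 6 of 10 scores are <= 70, so 60.0
```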
33
Fixed reference group scoring system
The distribution of scores from one group of test takers is used as the basis for calculating scores on future administrations of the test (e.g., the SAT).
34
norm-referenced vs. criterion-referenced interpretation
Norm-referenced interpretation compares an individual with a norm group; criterion-referenced interpretation evaluates test takers on whether they meet a set standard score.
35
Culture and testing
Use the test with the appropriate population; when interpreting data, be aware of and account for culture-based ideals and traits; conduct culturally informed testing.
36
reliability coefficient
An index of reliability; the ratio of true score variance to total variance on a test.
37
observed score
true score plus error
38
variance
The standard deviation squared; total variance = true variance + error variance.
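The last three cards (observed score = true score + error; reliability = true variance / total variance) can be sketched numerically. The normal distributions and their parameters below are assumptions chosen only for illustration:

```python
# Sketch of the classical test theory decomposition with simulated numbers:
# observed = true + error, and reliability = true variance / total variance.
import random
from statistics import pvariance

random.seed(0)
true_scores = [random.gauss(100, 15) for _ in range(10_000)]  # assumed trait distribution
errors      = [random.gauss(0, 5)    for _ in range(10_000)]  # assumed random error
observed    = [t + e for t, e in zip(true_scores, errors)]

total_var = pvariance(observed)
true_var  = pvariance(true_scores)
error_var = pvariance(errors)

# With independent error, total variance is close to true + error variance,
# and the reliability coefficient is their ratio (about 225/250 = 0.9 here).
reliability = true_var / total_var
print(round(reliability, 2))
```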
39
measurement error
Looking to measure one variable but instead also measuring variables that are not accounted for.
40
type of error
random and systematic
41
random error
unforeseen and uncontrolled influence on a variable
42
systematic error
A typical, expected influence on a variable that can be accounted for.
43
sources of error variance
test construction, test administration, test scoring, and others
44
error in test construction
Variation within items and between tests.
45
error in administration
The testing environment; variation in test takers (stress, lack of sleep, drugs, knowledge); the examiner's role (appearance and demeanor).
46
test scoring and interpretation
Interpretation is subjective; computer scoring helps with standardization but still requires expert interpretation.
47
other sources of variance
sampling error and methodological error
48
sampling error
the sample is not reflective of the whole
49
methodological error
the test giver is not trained in administration of test
50
social desirability
a person being tested answering a question in the manner that is the most socially acceptable
51
test-retest reliability
When the same person is given the same test at two different times; most appropriate for stable variables; estimates decrease as the interval between administrations grows.
52
coefficient of stability
found with test retest reliability and measures trait stability
53
coefficient of equivalence
the degree of relationship between various forms of the test
54
parallel forms
For each form of the test, the means and variances of observed test scores are equal.
55
alternative forms
Different versions of a test that should be parallel but do not meet the strict criteria; usually the content and difficulty level are the same.
56
how is reliability checked
By administering two forms of a test to a sample group; this estimate is greatly affected by error.
57
split-half reliability
Correlating two pairs of scores obtained from equivalent halves of a single test administered once.
58
3 steps of split half reliability
1) Divide the test into two equal halves; 2) calculate a Pearson r between scores on the two halves; 3) adjust the half-test reliability using the Spearman-Brown formula.
59
Spearman-Brown formula
Allows developers to estimate internal-consistency reliability from the correlation of two halves of the same test.
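The three split-half steps and the Spearman-Brown correction can be sketched in Python. The half-test scores below are made up for illustration; the doubled-length form of the formula, r = 2r_half / (1 + r_half), is assumed:

```python
# Sketch of split-half reliability with the Spearman-Brown correction.
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs)
    sy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(sx * sy)

# Made-up scores on the odd-item and even-item halves of one test.
odd_half  = [10, 12, 9, 14, 11, 13, 8, 15]
even_half = [11, 13, 10, 13, 12, 14, 9, 14]

r_half = pearson_r(odd_half, even_half)   # step 2: correlate the halves
r_full = (2 * r_half) / (1 + r_half)      # step 3: Spearman-Brown adjustment
print(round(r_half, 2), round(r_full, 2))
```

The corrected value is always at least as large as the half-test correlation, reflecting that a longer test is more reliable.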
60
inter item consistency (iic)
relatedness of items on a test, gauges homogeneity
61
homogeneity of a test
How similar the items on the test are (the degree to which the test measures a single factor).
62
Kuder-Richardson formula 20
The statistic of choice for determining the inter-item consistency of dichotomous items.
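KR-20 can be sketched on a made-up matrix of dichotomous (0/1) item scores. The usual form KR-20 = (k/(k-1))·(1 - Σpq/σ²) is assumed; this sketch uses population variance, and texts that use sample variance will get slightly different values:

```python
# Sketch of Kuder-Richardson formula 20 on made-up 0/1 item scores.
from statistics import pvariance

scores = [  # rows = people, columns = items scored 0 (wrong) or 1 (right)
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]
k = len(scores[0])
totals = [sum(row) for row in scores]
total_var = pvariance(totals)

# p = proportion passing each item, q = 1 - p; sum of p*q over items.
pq_sum = 0.0
for j in range(k):
    p = sum(row[j] for row in scores) / len(scores)
    pq_sum += p * (1 - p)

kr20 = (k / (k - 1)) * (1 - pq_sum / total_var)
print(round(kr20, 2))
```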
63
coefficient alpha
The mean of all possible split-half correlations, corrected by the Spearman-Brown formula; the most popular index of internal consistency; typically ranges from 0 to 1.
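In practice coefficient (Cronbach's) alpha is usually computed from item and total-score variances, alpha = (k/(k-1))·(1 - Σ item variances / total variance). A minimal sketch on made-up Likert-style data, using population variance:

```python
# Sketch of coefficient alpha via the variance formula on made-up data.
from statistics import pvariance

scores = [  # rows = people, columns = items on a 1-5 scale
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
]
k = len(scores[0])
totals = [sum(row) for row in scores]

item_vars = [pvariance([row[j] for row in scores]) for j in range(k)]
alpha = (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))
print(round(alpha, 2))
```

For dichotomous 0/1 items this formula reduces to KR-20.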
64
average proportional distance (APD)
A measure of internal consistency focused on the degree of difference between scores on a test's items; the distances between item scores are averaged.
65
inter-scorer reliability
The degree of agreement or consistency between two or more scorers with regard to a particular measure; prone to methodological error; best suited for behavioral measures.
66
coefficient of inter-scorer reliability
The scores of different raters are correlated with each other.
67
purpose of reliability estimate
Varies depending on the variable being studied.
68
consideration for reliability metrics
1) homogeneous or heterogeneous items; 2) static or dynamic characteristic; 3) range of test scores restricted or not; 4) test type: speed or power; 5) criterion-referenced or not
69
classical test theory(CTT)
Refers to the true score model; the most widely used because it is the simplest.
70
true score
According to classical test theory, the value that truly represents a person's real ability on the trait measured by a particular test.
71
ctt assumptions
Are more readily met than those of IRT (item response theory).
72
problematic assumption of CTT
has to do with equivalence of items on a test
73
domain sampling theory
Estimation of the extent to which specific sources of variation, under specific conditions, affect test scores.
74
generalizability theory
test scores vary because of changes in testing situations
75
item response theory (IRT)
The probability that a person with X ability will perform at level Y; takes item difficulty into account.
76
discrimination
How an item's scores differ between test takers of different ability levels.
77
standard error of measurement (SEM)
A measure of precision in observed test scores; the amount of inherent error. The higher the reliability, the lower the SEM.
78
confidence intervals
range of scores that is likely to contain true score
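The SEM and confidence-interval cards can be sketched together. The standard formula SEM = SD·√(1 - r) is assumed, and the SD, reliability, and observed score below are made-up values:

```python
# Sketch: standard error of measurement and a 95% confidence interval
# around an observed score (made-up values: SD = 15, reliability = .91).
from math import sqrt

sd, reliability = 15.0, 0.91
sem = sd * sqrt(1 - reliability)  # 15 * sqrt(0.09) = 4.5

observed = 110
lower = observed - 1.96 * sem     # 95% interval uses z = 1.96
upper = observed + 1.96 * sem
print(round(sem, 2), round(lower, 1), round(upper, 1))
```

Note how a higher reliability shrinks the SEM and therefore narrows the interval around the observed score.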
79
standard error of the difference
Aids test users in determining how large a difference between scores must be before it is considered significant.
80
questions answered by the standard error of the difference
How does a score on test 1 compare with a score on test 2? How does person A's score on test 1 compare with person B's score on test 1? How does an individual's performance on a test compare with everyone else's?
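A common formula for the standard error of the difference when both tests are on the same scale is SD·√(2 - r1 - r2); the SD and reliabilities below are made-up values for illustration:

```python
# Sketch: standard error of the difference between two scores
# (made-up values: both tests have SD = 15, reliabilities .90 and .84).
from math import sqrt

sd, r1, r2 = 15.0, 0.90, 0.84
sed = sd * sqrt(2 - r1 - r2)  # 15 * sqrt(0.26), roughly 7.65

# A difference of at least 1.96 * sed (about 15 points here) would be
# considered significant at the .05 level.
print(round(sed, 2), round(1.96 * sed, 1))
```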
81
validation
the process of gathering and evaluating evidence about validity
82
local validation
Validating a test with a specific (local) group of test takers.
83
concepts of validity
content validity,criterion-related validity, construct validity
84
content validity
A measure of validity based on what the test covers; a judgment that the test's sample of items is representative of the universe of items it is supposed to represent.
85
criterion related validity
A measure of validity obtained by evaluating the relationship between scores obtained on the test and scores on other tests (criterion measures).
86
construct validity
Established through a comprehensive analysis that includes content and criterion-related validity; the ability of a test to measure a construct. A test with high construct validity is a good predictor.
87
face validity
How relevant an item appears to be; face validity does not mean a test is valid. Low face validity leads to a lack of confidence in what the test is measuring.
88
test blueprint
A plan detailing the types of information to be covered, the number of items, and the organization of the items.
89
culture and relativity of content validity
The appropriate content of a test varies across cultures.
90
criterion related validity
A judgment of how adequately a test score can be used to infer a person's standing on a particular measure of interest (the criterion).
91
concurrent validity
a test score related to some criterion measure obtained at the same time
92
predictive validity
How well test scores predict scores on a criterion measured in the future.
93
the validity coefficient
A correlation coefficient that provides a measure of the relationship between test scores and criterion scores; affected by truncated (restricted) or inflated range.
94
incremental validity
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.
95
expectancy table
The percentage of people within specified test-score intervals who were subsequently placed in various categories of the criterion.
96
evidence of homogeneity
How uniformly a test measures a single construct.
97
evidence of change with age
Some constructs should change over time.
98
evidence of pretest/posttest changes
Scores change between a pretest and a posttest because of an intervening experience.
99
evidence from a particular group
Scores differ depending on membership in a specific group.
100
convergent evidence
correlation between older test and newer test measuring the same construct
101
discriminant evidence
Showing little relationship between scores on two tests that should not correlate, and a strong relationship between scores that should correlate.
102
factor analysis
A new test should share common factors with tests measuring the same construct.
103
bias
Factors that systematically prevent accurate measurement; to prevent bias, make clear, non-subjective tests.
104
rating error
A judgment resulting from the intentional or unintentional misuse of a rating scale.
105
halo effect
Giving a person a higher rating than deserved because of likability.
106
fairness
The extent to which a test is used in a just and equitable way.
107
utility
The usefulness or practical value of a test; how testing improves efficiency.