PSYC4022 Testing and Assessment, Week One: Gathering Information



Trait

Any distinguishable, relatively enduring way in which one individual varies from another.



State

Any distinguishable way in which one individual varies from another, but less enduring than a trait.


Sensation Seeking

the need for varied, novel, and complex sensations and experiences and the willingness to take physical and social risks for the sake of such experiences.


Cumulative Scoring

Inherent in cumulative scoring is the assumption that the more the testtaker responds in a particular direction as keyed by the test manual as correct or consistent with a particular trait, the higher that testtaker is presumed to be on the targeted ability or trait.



Error

Long-standing assumption that factors other than what a test attempts to measure will influence performance on a test.


Error Variance

The component of a test score attributable to sources other than the trait or ability measured.


Classical Test Theory (CTT) or true score theory

The assumption is made that each testtaker has a true score on a test that would be obtained but for the action of measurement error.


Psychometric Soundness

Reliability and Validity.



Reliability

The consistency of the measuring tool.



Validity

A test is considered valid for a particular purpose if it does, in fact, measure what it purports to measure.



Norms

Also referred to as normative data, norms provide a standard with which the results of measurement can be compared.



Norm

Behaviour that is usual, average, normal, standard, expected, or typical.


Normative Sample

The group of people whose performance on a particular test is analyzed for reference in evaluating the performance of individual testtakers.


to norm or norming

The process of deriving norms.


Standardisation or Test Standardisation

The process of administering a test to a representative sample of testtakers for the purpose of establishing norms.



Sampling

The process of selecting the portion of the universe deemed to be representative of the whole population.



Sample

A portion of the universe of people deemed to be representative of the whole population.


Stratified Sampling

The process of sampling so that every subgroup (stratum) of the population is represented, e.g. all religions, races, etc. in the Manhattan area.


Stratified Random Sampling

Stratified sampling in which every member of each stratum has the same chance of being included in the sample.
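As a minimal illustration of the idea (the strata names and sizes are invented), stratified random sampling draws randomly *within* each stratum, here proportionally to each stratum's share of the population:

```python
import random

random.seed(0)

# Hypothetical population divided into strata (e.g. by religion or race)
strata = {
    "group_a": list(range(100)),        # 100 members
    "group_b": list(range(100, 160)),   # 60 members
    "group_c": list(range(160, 200)),   # 40 members
}

# Draw a 10% random sample from each stratum so every subgroup
# is represented and every member had a chance of selection
sample = []
for members in strata.values():
    sample.extend(random.sample(members, max(1, len(members) // 10)))

print(len(sample))  # 20 (10 + 6 + 4)
```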


Standard Error of measurement

A statistic used to estimate the extent to which an observed score deviates from a true score.


Standard Error of Estimate

In regression, an estimate of the degree of error involved in predicting the value of one variable from another


Standard Error of the mean

A measure of Sampling Error
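A quick numeric sketch (the SD and sample size are illustrative): the standard error of the mean is the standard deviation divided by the square root of the sample size, so it shrinks as samples grow.

```python
import math

def standard_error_of_mean(sd: float, n: int) -> float:
    """Standard error of the mean: SD / sqrt(n), an index of how much
    sample means are expected to vary around the population mean."""
    return sd / math.sqrt(n)

# e.g. SD = 15, sample of 100 testtakers
print(standard_error_of_mean(15, 100))  # 1.5
```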


Standard Error of Difference

A statistic used to estimate how large a difference between two scores should be before the difference is considered statistically significant.
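One common formula for this statistic (assuming both scores are expressed on scales with the same standard deviation) is SE_diff = SD × √(2 − r₁ − r₂), where r₁ and r₂ are the two tests' reliability coefficients. A minimal Python sketch with made-up values:

```python
import math

def standard_error_of_difference(sd: float, r1: float, r2: float) -> float:
    """SE of the difference between two scores on a common scale:
    SD * sqrt(2 - r1 - r2), with r1 and r2 the reliability
    coefficients of the two tests."""
    return sd * math.sqrt(2 - r1 - r2)

# SD = 10, both reliabilities .84 (hypothetical)
print(round(standard_error_of_difference(10, 0.84, 0.84), 2))  # 5.66
```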


Purposive Sampling

A sample arbitrarily selected because it is believed to be representative of the population.


Incidental Sampling/ Convenience Sampling

A sample selected purely on the basis of convenience or availability.



Percentile

The percentage of people in the normative sample who fall below a particular raw score.


Percentage Correct

A transformation of the raw score: the number of items answered correctly, multiplied by 100 and divided by the total number of items.
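As a quick sketch of the arithmetic (the item counts are hypothetical):

```python
def percentage_correct(num_correct: int, total_items: int) -> float:
    """Number of items answered correctly, multiplied by 100
    and divided by the total number of items."""
    return num_correct * 100 / total_items

# 42 of 60 items answered correctly
print(percentage_correct(42, 60))  # 70.0
```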


Age Norms/ Age-Equivalent Scores

Indicate the average performance of different samples of testtakers who were at various ages at the time the test was administered


Grade Norms

Are developed by administering the test to representative samples of children over a range of consecutive grade levels.


Developmental Norms

Age norms or grade norms; more broadly, norms based on any characteristic presumed to develop, deteriorate, or otherwise be affected by chronological age, school grade, or stage of life.


National Norms

Derived from a normative sample that was nationally representative of the population at the time the norming study was conducted.


National Anchor Norms

Norms that allow the scores on one test to be anchored against (equated with) the scores on another test.


Equipercentile Method

the equivalency of scores on different tests is calculated with reference to corresponding percentile scores.


Subgroup Norms

Segmentation of the normative sample.


Local Norms

Provide normative information with respect to the local population's performance on some test.


Fixed Reference Group Scoring System

The distribution of scores from one group of testtakers (the fixed reference group) is used as the basis for the calculation of test scores on future administrations of the test.


Norm Referenced

When you compare scores on a test to other scores on that test.


Criterion Referenced Evaluation

When you compare scores to some other criterion.


Psychological Testing

Involves measuring psychological variables by means of devices or procedures designed to obtain a sample of behaviour.


Psychological Assessment

The gathering and integration of psychological data, via psychological tests, interviews, case studies, and behavioural observation, for the purpose of making a psychological evaluation.



Testing

The administration of a psychological test; an integral part of assessment.



Assessment

Involves much more than testing.



Furthest From



Closest To


Reliability Coefficient

The proportion that indicates the ratio of true score variance on a test to the total variance.
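A toy illustration of the ratio (the variance figures are invented):

```python
def reliability_coefficient(true_variance: float, total_variance: float) -> float:
    """Ratio of true score variance to total observed variance."""
    return true_variance / total_variance

# If 80 of 100 units of observed variance reflect true differences:
print(reliability_coefficient(80, 100))  # 0.8
```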



Variance

A useful statistic for describing test score variability (the SD squared).


True Variance

Variance from true differences; the reliable component of test scores.


Error Variance

Variance from irrelevant, random sources.


Random Error

is a source of error in measuring a targeted variable caused by unpredictable fluctuations and inconsistencies of other variables in the measurement process. Sometimes referred to as "noise"


Systematic Error

Error that is typically constant or proportionate to what is presumed to be the true value being measured.


Test-Retest Reliability

Test-Retest Reliability is an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test.
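A minimal sketch in Python, using hypothetical scores from five people tested twice; the estimate is simply the Pearson correlation between the two sets of scores:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores from the same five people on two administrations
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]
print(round(pearson_r(time1, time2), 3))  # 0.926
```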


Coefficient of Stability

When the interval between testings is > 6 months in a test-retest design, the estimate of test-retest reliability is often referred to as the coefficient of stability.


Coefficient of Equivalence

The degree of the relationship between various forms of a test can be evaluated by means of an alternate-forms or parallel-forms coefficient of reliability, often termed the coefficient of equivalence.


Split-Half Reliability

An estimate of split-half reliability is obtained by correlating pairs of scores obtained from equivalent halves of a single test administered once.


Spearman Brown Formula

Allows a test developer or user to estimate internal consistency reliability from a correlation of two halves of a test.
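The formula is r_full = 2r_half / (1 + r_half), where r_half is the correlation between the two halves. A minimal sketch (the half-test correlation of .60 is invented):

```python
def spearman_brown(r_half: float) -> float:
    """Estimate full-test reliability from the correlation between
    two halves of a test: r_sb = 2r / (1 + r)."""
    return 2 * r_half / (1 + r_half)

# A half-test correlation of .60 implies full-test reliability of .75
print(round(spearman_brown(0.6), 2))  # 0.75
```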


SEM Formula

SEM = SD × √(1 − r), where r is the reliability coefficient of the test.
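A quick numeric check in Python (the SD of 15 and reliability of .91 are illustrative):

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r), with r the reliability coefficient."""
    return sd * math.sqrt(1 - reliability)

# SD = 15, reliability = .91
print(round(standard_error_of_measurement(15, 0.91), 2))  # 4.5
```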


Z-T Score Conversion

z = (X − M) / SD; T = (z × 10) + 50
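A minimal sketch of the two-step conversion (the raw score, mean, and SD are invented): first standardise the raw score to z, then rescale to the T distribution (mean 50, SD 10):

```python
def z_score(x: float, mean: float, sd: float) -> float:
    """Standardise a raw score: z = (X - M) / SD."""
    return (x - mean) / sd

def t_score(z: float) -> float:
    """Rescale z to the T scale (mean 50, SD 10): T = 50 + 10z."""
    return 50 + 10 * z

z = z_score(65, 50, 10)
print(z)           # 1.5
print(t_score(z))  # 65.0
```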


SD Formula

SD = √(Σ(X − M)² / N)
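A minimal Python sketch of the population standard deviation, with made-up scores:

```python
import math

def standard_deviation(scores):
    """Population SD: square root of the mean squared deviation
    of scores from their mean."""
    n = len(scores)
    mean = sum(scores) / n
    return math.sqrt(sum((x - mean) ** 2 for x in scores) / n)

print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0
```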



What is the Objective of Testing?

To obtain some gauge, usually numerical in nature, with regard to an ability or attribute.


What is the Objective of Assessment?

Typically, to answer a referral question, solve a problem, or arrive at a decision through the use of tools of evaluation.


What is the process of Testing?

Testing may be individual or group in nature. After test administration, the tester will typically add up the number of correct answers or the number of certain types of responses, with little if any regard for how the testtaker arrived at those responses.


What is the Process of Assessment?

Assessment is typically individualised. In contrast to testing, assessment more typically focuses on how an individual processes rather than simply the result of that processing.


What is the Role of Evaluator in Testing?

The tester is not key to the process; practically speaking, one tester may be substituted for another tester without appreciably affecting the evaluation.


What is the Role of the Evaluator in Assessment?

The assessor is key to the process of selecting tests and/or other tools of evaluation as well as in drawing conclusions from the entire evaluation.


What is the Skill of the Evaluator in Testing?

Testing typically requires technician-like skills in terms of administering and scoring a test as well as interpreting a test result.


What is the Skill of the Evaluator in Assessment?

Assessment typically requires an educated selection of tools of evaluation, skill in evaluation, and thoughtful organisation and integration of data.


What is the Outcome of Testing?

Typically, testing yields a test score or series of test scores


What is the Outcome of Assessment?

Typically, assessment entails a logical problem-solving approach that brings to bear many sources of data designed to shed light on a referral question.


What are the 5 Micro-Skills of Interviewing?

Interview Micro-Skills
1. Squarely face the client.
2. Open posture.
3. Lean toward the client.
4. Eye contact.
5. Relax.


What are the things to avoid in an interview?

Interview Micro-Skills - Things to Avoid
1. Non-Listening
2. Partial Listening
3. Tape-recorder Listening
4. Rehearsing
5. Interruptions
6. Question threat


Name 8 Tools of Assessment

Tools of Assessment
1. Tests
2. Portfolio Assessment
3. Performance-based assessment
4. the case history
5. behavioural observation
6. role-play tests
7. computerised assessment
8. assessment using simulations or video


Name 16 sources of information for an Assessment

1.   Referral
2.   Consent/ Limitations
3.   Procedures/ Documents
4.   Mental Status Examination
5.   Psychosocial History
6.  Mental Health History
7.   History of Present Problem
8.   Past Intervention/ Responses
9.   Response Style/ Psychometric Testing
10.   Psychological Formulation
11.   Diagnosis
12.   Client Goals/ Proposals
13.  Risk
14.   Recommendations/ Intervention Plans
15.   Report & Technical Addendum
16.   Informing Interview


Give a brief History of the Clinical Interview

1. Snyder (1945) - non-directive approach encouraged self-exploration
2. Strupp (1958) - importance of interviewer experience
3. Rogers (1961) - therapeutic alliance and client-centred approaches
4. 1960s - fracturing of approaches
5. 1980s - greater granularity in disorders paved the way for very specific diagnostic criteria
6. 1980s - hybrid of structured and non-structured interviews
7. 1990s - managed health care's impact on practice
8. 1990s - computer-assisted interviewing
9. 1994 - single session therapy
10. 1990s - repressed memories
11. 2000s - cultural awareness


Name 4 Biases for Assessment

1. Halo
2. Confirmatory
3. Physical Attractiveness (Gilmore et al., 1986)
4. Interviewee distortions


Mehrabian (1972) broke down information received into verbal and non-verbal information. What % of information is gathered through facial expressions, tone and content of what is being said?

1. 55% facial expression.
2. 38% tone
3. 7% content of what is being said


What are the phases of clinical assessment? (According to Maloney & Ward, 1976)

1.     Phase 1 – Initial Data Collection.
2.     Phase 2 – Development of Inferences
3.     Phase 3 – Reject, Modify or Accept Inferences
4.     Phase 4 – Develop and Integrate Hypothesis
5.     Phase 5 – Dynamic Model of the Person
6.     Phase 6 – Situational Variables
7.     Phase 7 – Prediction of Behaviour


What are 3 sources of error variance?

Sources of Error Variance
1. Assessees are sources of error variance.
2. Assessors are also sources of error variance.
3. Measuring instruments are sources of error variance.


There are 11 cues from which you can take information for an assessment. What are they?

1. Personal Information Cues
2. Medical Cues
3. Immediacy Cues
4. Speech Cues
5. Language Cues
6. Physical Cues
7. Cognitive Cues
8. Risk Assessment Cues
9. Collateral Information Cues
10. Overt Behavioural Cues
11. Personal History Cues


Give some examples of a Personal Information Cue

Gender, Occupation, race, religious affiliations, socioeconomic status, appearance, lifestyle factors.


Give some examples of a Medical Cue

Medication prescribed, compliance, blood serology, previous diagnosis, current diagnosis, family history of diagnosis.


Give some examples of an Immediacy Cue

Engagement, affect, communication style, facial expressions, emotional expression, personality traits/ temperament, transferences.


Give an example of a Speech Cue

Tone, flow, perseveration, slurring, volume, pitch, pace.


Give an example of a Language Cue

Descriptors, words used, developmentally appropriate, use of humour


Give an example of a Physical Cue

Breathing, eye contact, voice, body movements.


Give an example of a Cognitive Cue

Attention, memory, intelligence, intellectual disability, judgement, decision making, perceptions.


Give an example of a Risk Assessment Cue

Risk of Harm to self, others, intent, means and plan.


Give an example of a Collateral Information Cue

Congruency between verbal and non-verbal, consistency between collateral, psychometrics and narrative. Referral Source and Question.


Give an example of an Overt Behavioural Cue

Behaviour in Waiting Room, occupation of space in therapeutic environment, feedback from client.


Give an example of a Personal History Cue

Psychosocial History, relationship status, conflicts, support networks.


What are 10 things you want from an interview?

1. Standardisation (see Groth-Marnat, 2009)
2. Intake Interview
3. Planning
4. Rapport-Building
5. Open-ended questions (TED)
6. Active Listening/ Attending behaviours (beware negative attending)
7. Open mind-set - avoid biases like?
8. Accurate (and discrete) note taking.
9. How would you approach note taking with a client?
"I'm going to jot down a few notes to make sure I'm remembering everything correctly. Is that alright with you?" (Shea, 1998)
10. Empathy -> Active listening, paraphrasing.


What are the 5 P's of a Clinical Interview?

1. Presenting
2. Precipitating
3. Perpetuating
4. Pre-morbid
5. Protective


Describe a Presenting Issue

Exactly what are the thoughts, behaviours, feelings associated with their concerns.


Describe a Precipitating Issue

(Distal) "If these experiences started 6 months ago, tell me about the month or so before that." Also (Proximal) "So these experiences come on in waves; talk me through the last time it happened, starting 10 minutes before you noticed the feelings."


Describe a Perpetuating Issue

What makes things worse for you? What things cause these feelings to continue happening (for example, panic attacks)?


Describe a Pre-morbid Issue

Previous physical and mental health status, as well as risk factors (e.g. homelessness, history of abuse). You wouldn't just plough in with "Have you ever been abused?"; that would need to come a long way down the track, once true trust and rapport have developed.


Describe a Protective Issue

What helps this person continue to function (e.g. they are working, have a close family, have a good education).