Final Flashcards

(113 cards)

1
Q

Important information on groups is collected by which type of test?

A

Surveys

2
Q

Outcomes or attributes are measured by what type of test?

A

Psychological test

3
Q

What tests focus on individual outcomes?

A

Psychological tests

4
Q

Results of a psychological test (individual) are reported at what level?

A

Test level, with overall scoring (high/low scores)

5
Q

Results of surveys are reported at what level?

A

Question level, showing percentages per answered question

6
Q

What are the 4 main steps to constructing, administering and using survey data?

A
  1. Preparing
  2. Pre-testing
  3. Administering
  4. Collecting and coding
7
Q

What is a systematic examination of published and unpublished reports, articles, and studies on a topic?

A

A literature review

8
Q

What are some important things to define when preparing for survey development?

A

Defining objectives, questions and plans.

9
Q

Surveys and psychological tests must provide 3 sets of instructions for whom?

A
  1. The one being tested
  2. The administrator
  3. The scorer
10
Q

When developing administrator instructions, what must we take into account about each individual environment?

A

Testing environment

11
Q

When developing instructions for the test takers, what should the developer ensure about the test?

A

That participants understand how to respond, that questions are clear and concise, and that honest responding is encouraged

12
Q

What instructions ensure that each person who scores the test will follow the same process?

A

The scoring instructions

13
Q

Why should we write more questions than needed in the preparation phase of survey development?

A

You are likely to remove questions when pre-testing.

14
Q

What type of language must you always avoid when developing questionnaires and surveys?

A

Slang and colloquial language

15
Q

When preparing a survey, what face details are important?

A

Format, instructions and layout

16
Q

When pre-testing a survey or psychological test, we should watch out for what type of measurement errors? Hint: errors associated with the design and administration of the survey

A

Nonsampling measurement errors

17
Q

When examining the number of times a question was not answered, what are researchers looking at?

A

Item non-response rate.

18
Q

When administering a survey, a representative subset of the population is known as a _____?

A

A sample (of the population)

19
Q

What type of sampling uses statistics to ensure that a sample is representative of a population?

A

Probability sampling

20
Q

What type of sampling does not ensure an equal chance of being selected from the population?

A

Non-probability sampling
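The contrast can be made concrete with a minimal Python sketch (the sampling frame and sample sizes are invented for illustration, not from the course): simple random sampling is one form of probability sampling, while a convenience sample is a common non-probability approach.

```python
import random

# Hypothetical sampling frame: 1,000 student IDs (invented for illustration).
population = list(range(1, 1001))

# Probability sampling (simple random sampling): every member of the
# population has a known, equal chance of being selected.
random.seed(42)
probability_sample = random.sample(population, k=100)

# Non-probability sampling (convenience sampling): take whoever is easiest
# to reach, e.g., the first 100 IDs; selection chances are not equal.
convenience_sample = population[:100]

print(len(probability_sample), len(convenience_sample))  # 100 100
```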

21
Q

How can we code survey question responses, such as Likert scales?

A

We can assign numerical labels (values) to response choices.
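As an illustration, here is a minimal Python sketch of coding Likert-type responses with numerical labels; the 5-point scale labels and the values assigned to them are assumptions for the example, not taken from any particular survey.

```python
# Hypothetical 5-point Likert coding scheme (labels and values are illustrative).
likert_codes = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Neutral", "Strongly agree"]  # example raw responses
coded = [likert_codes[r] for r in responses]        # -> [4, 3, 5]
print(coded)
```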

22
Q

Which form of presenting findings involves the use of research results, including dissemination, transfer, exchange, and co-creation or co-production by researchers and knowledge users?

A

Knowledge mobilization

23
Q

Defining the testing universe, audience, and purpose; developing a test plan; composing the test items; and writing the administration instructions are all part of ____ development?

A

Test

24
Q

Why should we be developing new tests if old tests already exist?

A

Needs are constantly evolving: behaviours change, and some tests are no longer accurate or no longer properly evaluate what they are intended to measure.

25
What helps define the testing universe of a survey or test?
Working definition
26
What identifiable measures help us most accurately define a target audience for a survey or psychological test?
Characteristics of a population
27
What a test will measure, and how the test users will use the test scores defines the ____?
Test purpose
28
An _____ test question has one response that is designated as “correct” or that provides evidence of a specific construct.
Objective test
29
This test format does not have a single response that is designated as “correct.”
Subjective test
30
What is the most common model of scoring that determines an individual’s final test score? Eg. One point per “correct” answer
Cumulative model of scoring
31
What model of scoring places test takers in a particular **group or class** by looking for their pattern of scores (e.g., pattern of certain symptoms = a diagnosis)?
Categorical model of scoring
32
What model of scoring scales items to distribute **points**, summing to a specific total?
Ipsative model of scoring
33
Conducting the pilot test and _____ are integral parts of the test development process.
Analyzing its data
34
What is the name for the scientific evaluation of the test’s performance?
A pilot test
35
Why do we call test items "items" rather than "questions"?
Items are not always questions; they can be statements, pictures, or incomplete sentences.
36
What are 2 types of objective test items?
Multiple choice and forced choice
37
What very common objective test items are used for a variety of purposes, including pre-employment tests, standardized achievement tests, and classroom tests? These may include stems and distractors.
Multiple choice questions
38
What objective test item presents the test taker with little room for response variety? E.g., "this is more like me" or "this is less like me"
Forced choice questions
39
Name 3 subjective test items
Essay questions, interview questions, and projective techniques
40
Which subjective testing items are often lengthy and written?
Essay questions
41
Which subjective testing item involves verbal conversation and allows for follow-up questions and exploration of additional relevant topics?
Interview questions
42
What techniques involve using a highly ambiguous stimulus to elicit an unstructured response? E.g., show a child a picture and ask them to describe it.
Projective techniques
43
What refers to the tendency of test takers to respond inaccurately to questions?
Response bias
44
What patterns of responding can result in false or misleading information?
Response sets
45
What is the term for the tendency of some test takers to provide or choose answers that are socially acceptable or that present themselves in a favourable light?
Social desirability bias
46
What is the name for the tendency to agree with any ideas or behaviours presented on test items?
Acquiescence
47
What is the name for responding to items in a random way by marking answers, without reading or considering them?
Random responding (random response patterns)
48
What is the act of answering items in a way that will cause a desired outcome or diagnosis?
Faking
49
What are 4 elements that may contribute to response bias?
Social desirability bias, acquiescence, random responding, and faking
50
What is the term for evidence based on content?
Validity
51
What is the term for logically examining and evaluating the content of a test (including the test questions, format, wording, and tasks required of test takers) to (1) determine the extent to which the content is representative of the concepts that the test is designed to measure ... (2) without including elements that are irrelevant to their measurement?
Evidence based on test content
52
Why must we use concise and exact language when writing tests and test items?
Brevity reduces errors.
53
When writing effective test items, you should ensure the following 8 things:
Brevity; complete sentences; relevant and realistic time periods; accessible language (avoid jargon); no leading questions; no double-barrelled questions; no double negatives; no assumptive questions
54
What is the name for a question where biased language has the effect of pushing a test taker toward a particular answer option?
Leading questions
55
What is the name for a question that assumes that the taker is already familiar with something and disregards the possibility that the test taker may not be familiar with the concept?
Assumptive questions
56
How can developers evaluate the performance of each test item? This is important during pilot testing
Item analysis
57
How can we calculate item difficulty?
By dividing the number of persons who answered correctly by the total number of persons who responded to the question
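A minimal Python sketch of this calculation, assuming responses have already been scored 1 (correct) or 0 (incorrect); the data are invented for illustration.

```python
def item_difficulty(scored_responses):
    """Proportion of respondents who answered the item correctly (the p value)."""
    return sum(scored_responses) / len(scored_responses)

# Hypothetical scored responses for one item (1 = correct, 0 = incorrect).
item = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
print(item_difficulty(item))  # 0.7 -> 70% of test takers answered correctly
```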
58
If an item's difficulty (p value) is .50, what does this mean?
There is a lot of variation in responses (about half of the test takers answered correctly).
59
This can be calculated even for tests of personality, and it shows the percentage of test takers who respond correctly (or who endorse the item).
Item difficulty
60
What index compares the performance of those who obtained very high test scores (the upper group) with the performance of those who obtained very low test scores (the lower group) on each item?
Discrimination index
61
The discrimination index will range from ____?
-1.0 to +1.0
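One common way to compute the index is to take the difference between the item's p values in the upper- and lower-scoring groups (often the top and bottom portion of total test scores); the Python sketch below assumes that approach, and the scores are invented for illustration.

```python
def discrimination_index(item_scores, total_scores, fraction=0.27):
    """D = p(upper group) - p(lower group) for one item; ranges from -1.0 to +1.0."""
    n = len(total_scores)
    k = max(1, int(n * fraction))
    # Rank test takers by total test score, then take the bottom and top k.
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower, upper = order[:k], order[-k:]
    p_upper = sum(item_scores[i] for i in upper) / k
    p_lower = sum(item_scores[i] for i in lower) / k
    return p_upper - p_lower

# Hypothetical data: one item's scores (0/1) and total test scores for 10 test takers.
item_scores  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
total_scores = [38, 12, 30, 35, 15, 40, 10, 33, 29, 18]
print(round(discrimination_index(item_scores, total_scores), 2))  # 1.0
```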
62
True or false: As a general rule, the more positive the discrimination index, the better the quality of the question.
True
63
Which of the following two discrimination index examples indicates a better item? A DI of 0.6 or -0.1?
0.6
64
What is a measure of the strength and direction of the relation between the way test takers responded to one item and the way they responded to all of the items as a whole?
Item total correlation
65
What would a low item-total correlation signify?
We should probably drop the question
66
What matrix displays the correlation of each item with every other item?
Inter-item correlation matrix
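A minimal numpy sketch (responses invented for illustration) showing both statistics: each item's correlation with the total score, and the inter-item correlation matrix.

```python
import numpy as np

# Hypothetical response matrix: rows = test takers, columns = items (0/1 scoring).
responses = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 0],
])

totals = responses.sum(axis=1)

# Item-total correlation: how each item relates to the overall score.
item_total_r = [np.corrcoef(responses[:, j], totals)[0, 1]
                for j in range(responses.shape[1])]

# Inter-item correlation matrix: correlation of each item with every other item.
inter_item = np.corrcoef(responses, rowvar=False)

print(np.round(item_total_r, 2))
print(np.round(inter_item, 2))
```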
67
What measures the correlation of item responses with a criterion measure?
Item criterion correlation
68
True or false: Criterion validity asks: is the test related to other tests?
True
69
What is the name for a measure of the relationship between individuals' performances on one test item and the test takers' levels of performance on the overall measure of the construct the test is measuring?
Item response theory
70
What is the name of the theory that lets us relate how a test taker did on each individual item to a statistical estimate of the test taker’s ability on the construct being measured?
Item response theory
71
How can qualitative item analysis be done at an individual level?
Test takers are given a survey/questionnaire about the actual items
72
How can qualitative item analysis be done at a group level?
An expert panel is convened to provide feedback on the items.
73
Why is it important to revise the test you just came up with?
You want to ensure that items are good by scoring well on the Item Statistics Matrix
74
What is a validation study?
A study conducted to establish evidence of validity based on test content.
75
What is the purpose of a validation study?
To make sure the test can provide meaningful results.
76
What is it called to administer the same test to another sample of test takers from the target audience?
Replication
77
What is the replication crisis in psychology?
We are finding that not a lot of studies actually replicate... This means many theories and findings (and measures!) we believe are “valid” may not be
78
What is the statistical measure that expresses the extent to which two variables are related? Relationships are often measured in a linear way, meaning the variables change together at a constant rate.
Correlation
79
"What does the one item contribute to the overall test" is a great example of what theory?
Item response theory
80
What are the 5 components of quantitative item analysis?
Item difficulty, item discrimination, item-total correlation, inter-item correlation, and item-criterion correlation
81
What is the name for an interview collection method that involves a rigid set of questions, where the interviewer cannot deviate from the questions or ask follow-ups? Often delivered in a standardized way
Structured interviews
82
What is the name for an interview collection method that involves informal, free-flowing questioning? The interviewer may have a general guide but is free to go in any direction
Unstructured Interviews
83
What is the name for an interview collection method that is more open than structured interviews? Interviewer may ask follow-up questions
Semi-Structured Interviews
84
What is a probing question?
A follow-up question in which you seek more detail
85
What is a prompt in semi-structured interviews?
A follow-up in which you give the interviewee more information to help them answer
86
Who mainly oversees ethics of psychological testing?
Psychological associations
87
Why is the Canadian Code of Ethics for Psychologists meant as a guide?
Ethical decisions are complicated
88
What are the 4 principles of the Canadian Code of Ethics for Psychologists?
1. Respect for the Dignity of Persons and Peoples. 2. Responsible Caring. 3. Integrity in Relationships. 4. Responsibility to Society
89
What is the name for a professional credential individuals earn by demonstrating that they have met predetermined qualifications?
Certification
90
What is the name for a mandatory credential individuals must obtain to practice within their professions?
Licensure
91
How do governing bodies enforce their ethical codes of conduct?
Licensure can be revoked or suspended.
92
What are some responsibilities of test publishers?
Ensuring professionalism and ethics; attention to distribution; providing user manuals
93
What is the name for a person who responds to test questions or whose behavior is measured or observed?
Test taker
94
What are some responsibilities of test takers?
Must understand the consequences of their decision to take the test (or not take it), must ask questions if anything is unclear, must protect test security.
95
What are 4 test taker rights?
1. Right to Privacy. 2. Right to Informed Consent. 3. Right to Know and Understand Results. 4. Right to Protection From Stigma
96
How can groups be further minoritized by testing? E.g., if the test doesn't take into account unique cultural conditions that may impact results
Marginalization
97
What is the job of a psychometrist?
To administer, score, and analyze psychological tests.
98
To truly be consent, it must be:
Free, informed and ongoing
99
What do we call it when prospective participants are recruited by individuals in a position of authority or otherwise pressured to participate?
Undue influence
100
What is a more extreme form of undue influence, involving a threat of harm or punishment for failure to participate or remain in the project?
Coercion
101
What is the name for anything offered to participants, monetary or otherwise, for participation in research?
Incentives
102
What could be a consequence of breaking confidentiality in research?
Institutional consequences (e.g., losing job at uOttawa)
103
What could be a consequence of breaking confidentiality in clinical contexts?
Losing/suspending licensure
104
What should we do when there is a breach of proper consent or confidentiality procedures?
We must follow pre-determined steps, often under supervision of a superior or in consultation with a colleague
105
What is the main goal of Research Ethics Boards?
To review the ethical acceptability of all research involving humans
106
What do REBs review?
Research conducted by faculty, staff or students, or members of the institution (e.g., hospital, university)
107
Who is on a REB?
Two members must have relevant expertise; one member is knowledgeable in ethics; one member is knowledgeable in the relevant law; and one community member has no affiliation with the institution.
108
How many people should be on a REB to ensure competent, independent review?
REB must consist of at least 5 members
109
What is the name of forms that explain study purpose, risks, benefits, how confidentiality will be protected, how data will be conserved, and any compensation?
Consent forms
110
How does application review work for a study?
An REB will consider the possible level of risk associated with possible ethical issues and then choose an appropriate level of review
111
How do REBs ensure that researchers will stick to the plan that has been approved?
Modifications require reapplication and review
112
What is open science?
A movement to make research transparent, accessible, and verifiable by anyone
113
True or false: Open access articles tend to get cited less.
False. They are cited more.