Quiz 2 Flashcards

(57 cards)

1
Q

What is an operational definition?

A

One that defines a thing in such a way that it can be measured

2
Q

If you want to examine the impact of amount of education on voting behaviors, which one is the independent variable and which one is the dependent variable?

A

Amount of education is the IV; voting behavior is the DV

3
Q

Describe what an extraneous variable is (could be mediating, moderating, or confounding)

A

One that is not the focus of your study but that does have relationships with both the independent and dependent variables

4
Q

Give an operational definition for driving under the influence

A

Driving with a blood alcohol content of .08 or greater

5
Q

Give an operational definition of “sleep deprived”

A

Many answers are possible. Any one answer is right as long as it relates to the construct of interest, is rational, and CAN BE MEASURED

6
Q

What is the difference between systematic error and random error? Which is worse?

A

Random measurement error occurs unpredictably. Systematic measurement error occurs the same way every time, either for all participants completing a measure or for some sub-group of the participants.
Systematic error is worse because it consistently biases estimation

7
Q

Explain what reliability is

A

Given that what is being measured has not actually changed, the measure of that construct is close to the same every time the construct is measured

8
Q

What are three types of reliability, and what do they mean?

A
  1. Test-retest: you test a thing two times (close together in time) and get similar answers
  2. Inter-rater reliability: two different raters using the same instrument rate a phenomenon similarly
  3. Internal consistency reliability: a Cronbach's alpha statistic indicates that all the items in an instrument relate to the same construct
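The internal-consistency idea can be made concrete. A minimal sketch of Cronbach's alpha, assuming scores are laid out as one row per participant and one column per item (the function name and layout are illustrative, not from any particular package):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Assumes `scores` is a list of rows: one row per participant, one column per item.
def cronbach_alpha(scores):
    k = len(scores[0])                       # number of items
    def var(xs):                             # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Values near 1 indicate the items vary together, i.e. they appear to tap the same construct; .7 is a commonly cited floor.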

9
Q

what is validity

A

confidence that you are truly measuring the construct that you intend to measure

10
Q

what is face validity

A

seems reasonable “on the face of it”

11
Q

what is content validity

A

experts agree that the instrument captures the content that is necessary to measure and does not include content that does not relate to the construct

12
Q

what is criterion-related validity

A

aligns with an external measure (criterion) of the construct. Could include predictive validity, concurrent validity, or known-groups validity

13
Q

what is construct validity

A

a way to empirically assess validity by using other instruments that measure the construct being tapped (convergent) or instruments that measure closely related but separate constructs (discriminant)

14
Q

What are three criteria necessary to infer causality

A
  1. the cause precedes the effect (the IV is established before the DV)
  2. the IV and the DV are correlated
  3. nothing other than the IV could have caused the change in the DV
15
Q

What is internal validity

A

confidence that you can infer causality

16
Q

What are some threats to internal validity?

A

maturation, history, selection effect, regression to the mean, testing effect, instrumentation changes, ambiguous temporal order

17
Q

What are the characteristics of a true experimental design (randomized controlled trial)?

A

1) random assignment to at least 2 groups
2) measurement of the construct of interest before and after delivery of an intervention
3) the intervention is delivered to only one of the two groups

18
Q

Explain why true experimental design allows us to rule out threats to internal validity

A

the design allows us to measure change that would have taken place in the absence of the intervention as well as change that is associated with the intervention

19
Q

What is a quasi-experimental design? How is it different from experimental design?

A

Quasi-experimental design: utilizes at least 2 groups, only one of which gets the desired intervention. However, the two groups are not randomly assigned. Other quasi-experimental designs include multiple-measures/time-series designs

20
Q

Why does quasi-experimental design not allow us to infer causality as strongly as experimental design does?

A

Because the two groups were not randomized, they may not be closely comparable. The groups may be non-comparable on both measured and unmeasured characteristics. If this is so, those characteristics (rather than the independent variable) might have influenced the change in the dependent variable

21
Q

What can you do to strengthen your quasi-experimental design?

A

1) ensure that the two groups are comparable on as many measures as possible
2) add more pretest/posttest measures

22
Q

What kind of design is this? What is its strength at inferring causality and why? O X O

A

Pre-experimental, pretest/posttest design. Not strong, because it does not include a comparison or control group; therefore we cannot rule out factors, such as maturation and history, which might have been responsible for the change in the DV

23
Q

What would you do to the following design to strengthen the study you are carrying out?

A

1) add a control group, or at least a comparison group matched closely to the intervention group, so you can see how similar people performed over time in the absence of the intervention
2) add a pretest so you can determine how the intervention group was functioning before and after the intervention took place

24
Q

What is wrong with the design of this question:

Do you think it is untrue that higher education does not teach students critical thinking?

A

it includes a double negative (“untrue” and “does not”), which confuses respondents

25
What is wrong with the design of this question? Please rate your agreement with the following statement: graduate study should focus on teaching students to solve problems rather than focusing on teaching them specific skills.
1 = disagree
2 = somewhat agree
3 = agree
4 = strongly agree
The Likert scale is unbalanced and does not include the same range between each rating level
26
What is social desirability bias? What is acquiescence bias?
Social desirability: unwillingness to answer in a way that is not considered socially acceptable. Acquiescence: answering in a way that the participant believes aligns with what the experimenter wants
27
What is the Hawthorne effect
The participant's awareness that he/she is being observed causes him/her to behave differently.
28
How does the topic of measurement relate to qualitative research?
Constructs are not operationally defined. Formal scales or instruments are not used. However, concerns about validity (trustworthiness) and reliability (consistency) are still relevant
29
______ is the process through which we specify precisely what we will mean when we use particular terms
conceptualization
30
What is conceptual order
conceptualization --> nominal definition --> operational definition --> measurements in the real world
31
the extent to which we combine attributes in fairly gross categories
range of variation
32
Problems in operationally defining variables
- We may not know in advance what all the most salient variables are
- Limited understanding of the variables may keep us from anticipating the best way to operationally define those variables
- Even the best operational definitions are necessarily superficial
33
What is measurement error
Data does not accurately portray the concept that we attempt to measure
34
What is systematic error
when the information we collect consistently reflects a false picture (it BIASES the measure)
35
Ways to avoid measurement error
- Use unbiased wording
- Carefully train interviewers
- Use unobtrusive observations to minimize the social desirability bias
- Understand how existing records are kept
- Triangulation
36
the more reliable, the less...
random error
37
acceptable test-retest reliability?
.7 or above
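Test-retest reliability is usually quantified as the correlation between the two administrations. A minimal sketch (the function name is illustrative) computing Pearson's r between the same participants' test and retest scores:

```python
# Pearson's r between test scores and retest scores for the same
# participants; r of .7 or above is commonly treated as acceptable.
def pearson_r(test, retest):
    n = len(test)
    mx, my = sum(test) / n, sum(retest) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(test, retest))
    sx = sum((a - mx) ** 2 for a in test) ** 0.5
    sy = sum((b - my) ** 2 for b in retest) ** 0.5
    return cov / (sx * sy)
```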
38
Predictive validity
measure can predict a criterion that will occur in the future
39
what is concurrent validity
measure that corresponds to a criterion that is known concurrently
40
what is factorial validity
refers to whether the number of constructs and the items that make up those constructs measure what the researcher intends
41
How do reliability and validity work with qualitative research
Qualitative researchers study and describe things from multiple perspectives and meanings. There is less emphasis on whether one particular measure is really measuring what it’s intended to measure
42
How to evaluate validity in qualitative research (4)
- The interpretation of parts of the text should be consistent
- The interpretation should be complete, taking all of the evidence into account
- The interpretation should be the most compelling one given the evidence within the text
- The interpretation should make sense of the text and extend our understanding of it
43
Do pilot studies have a lot of internal validity?
no - they are just meant to generate information about an intervention that has little existing research
44
3 common pre-experimental designs
- one-shot case study
- one-group pretest-posttest design
- posttest-only design with nonequivalent groups
45
problem with X O
one shot case study fails to control for any threats to internal validity
46
O1 X O2
the one-group pretest-posttest design establishes correlation and time order but does not account for factors other than the IV that might cause the change in the DV
47
X O | O
the posttest-only design with nonequivalent groups cannot infer that any difference between the two groups was caused by the intervention
48
what does randomization control for
selection bias in experimental designs
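A minimal sketch of what randomization does, using a hypothetical helper that splits a participant list into two groups by shuffle alone, so no participant characteristic can drive group membership:

```python
import random

def random_assignment(participants, seed=None):
    # Shuffle, then split in half: chance alone (not any participant
    # characteristic) determines who gets the intervention.
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]   # (treatment group, control group)
```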
49
what is research reactivity and what are some examples of it
Refers to changes in outcome data that are caused by researchers or research procedures rather than the independent variable. Examples:
- Measurement bias
- Experimental demand characteristics
- Experimenter expectancies
- Obtrusive observation
- Novelty and disruption effects
- Placebo effect
50
What is attrition and how to avoid it
A threat to the validity of an experiment that occurs when participants drop out of an experiment before it is completed. Strategies to minimize attrition:
- Reimbursement
- Avoid intervention or research procedures that disappoint participants
- Utilize tracking methods
51
What is a nonequivalent comparison groups design
- 2 existing groups that appear to be similar are identified or created
- the dependent variable is assessed before and after an intervention is introduced to one of the groups
- the comparison group does not receive the intervention
52
how to strengthen the validity of nonequivalent comparison groups design
- use multiple pretests
- use switching replication
53
what is a simple time series design
a simple interrupted time series design attempts to develop causal inferences based on a comparison of trends over multiple measurements before and after an intervention is introduced, and requires no comparison group
54
what is a multiple time series design
both an experimental group and a nonequivalent comparison group are measured at multiple points in time before and after an intervention is introduced to the experimental group
55
pitfalls in carrying out experiments and quasi experiments in social science agencies
Social work experiments tend to take place in agency settings where administrators are not researchers, may not understand the requisites of experimental and quasi-experimental designs, and may even resent and attempt to undermine the demands of the research design
56
What is a case control study
Compares groups of cases with contrasting outcomes and collects retrospective data that may explain the differences in the outcome. Popular because of feasibility, as data can be collected at one point in time. Inference and generalizability are limited
57
4 pitfalls commonly encountered when implementing research in social service agencies
- Fidelity of the intervention
- Contamination of the control condition
- Resistance to the case assignment protocol
- Client recruitment and retention