Final Flashcards

1
Q

Control Groups and Treatments of Questionable Efficacy

A

Intervention Research Issues

In research on the effectiveness of psychotherapies, there is often a group that is not expected to improve. This situation raises ethical issues:

  1. Over the course of the study, people in a control group may stay the same or get worse.
  2. Participants in the group that does not receive an effective treatment may be discouraged from seeking psychotherapy in the future.

Unfortunately, research using control conditions that raise ethical issues is “fundamental to progress in understanding treatment.” To find out whether or not a treatment works, a no-treatment or waiting-list control group is necessary. To find out why a treatment works, a nonspecific treatment control group is necessary.

Some of the ethical issues can be addressed by providing treatment to participants in the control group after the study is over.

2
Q

Obtrusive measures

A

When subjects are aware their performance is being assessed

3
Q

Longitudinal studies

A

Make comparisons over an extended period of time, often spanning several years.

A study with pre-, post-, and follow-up assessments is longitudinal.

4
Q

Quasi-experiment

A

When the researcher cannot control who is in each group.

Often occurs when doing research in schools or hospitals (because classes or wards already exist).

Sometimes the researcher is able to randomly assign participants to some groups but not all.

5
Q

Major qualitative research methods

A
Interviews
Focus Groups
Direct observations
Statements of personal experience
Documents
Photographs
Audio or video recordings
Films
6
Q

Test Bias

A

A test is biased when it does not predict as accurately for one group as it does for another. Bias occurs when the data for two groups have different slopes or different intercepts. When two groups get different mean scores on a test, that alone does not mean that the test is biased.
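The slope/intercept idea can be illustrated with a short sketch: fit a separate regression line (criterion predicted from test score) for each group and compare the fitted lines. All data, coefficients, and group labels below are simulated assumptions for illustration, not values from the text.

```python
import numpy as np

# Hypothetical data: same slope for both groups but different intercepts,
# so the test predicts less accurately for one group (intercept bias).
rng = np.random.default_rng(0)
x_a = rng.uniform(0, 100, 200)
y_a = 0.5 * x_a + 10 + rng.normal(0, 5, 200)
x_b = rng.uniform(0, 100, 200)
y_b = 0.5 * x_b + 20 + rng.normal(0, 5, 200)

# Fit one regression line per group.
slope_a, intercept_a = np.polyfit(x_a, y_a, 1)
slope_b, intercept_b = np.polyfit(x_b, y_b, 1)

# Bias is suggested when the lines differ in slope or intercept;
# a mean difference on the test alone is not evidence of bias.
print(slope_a, intercept_a)
print(slope_b, intercept_b)
```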

7
Q

4 conditions for mediator

A
  1. “The intervention (e.g. exercise) leads to change on [an] outcome measure (e.g. depression).
  2. The intervention alters the proposed mediator (e.g. …stress level).
  3. The mediator is related to [the] outcome (stress level is related to symptoms).
  4. Outcome effects (changes in depression) are not evident or substantially less evident if the proposed mediator (stress in this example) did not change.”
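The four conditions can be checked with simple regressions. A hedged sketch on simulated data follows (the variable names, exercise/stress/depression labels, and effect sizes are invented for illustration):

```python
import numpy as np

# Hypothetical data: exercise -> stress -> depression.
rng = np.random.default_rng(1)
n = 500
exercise = rng.normal(0, 1, n)                    # intervention (IV)
stress = -0.6 * exercise + rng.normal(0, 1, n)    # proposed mediator
depression = 0.7 * stress + rng.normal(0, 1, n)   # outcome (DV)

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

c = slope(exercise, depression)   # condition 1: IV relates to outcome
a = slope(exercise, stress)       # condition 2: IV alters the mediator
b = slope(stress, depression)     # condition 3: mediator relates to outcome

# Condition 4: the IV's effect on the outcome shrinks once the mediator
# is controlled (partial slope of depression on exercise, given stress).
X = np.column_stack([np.ones(n), exercise, stress])
c_prime = np.linalg.lstsq(X, depression, rcond=None)[0][1]

print(c, a, b, c_prime)  # |c_prime| should be much smaller than |c|
```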
8
Q

Reactive measures

A

If awareness of assessment leads persons to respond differently from how they would usually respond

9
Q

Concurrent validity

A

correlation of a measure with performance on another measure or criterion at the same point in time

10
Q

Sample Characteristics

A

External Validity Threat

The extent to which the results can be extended to subjects or clients whose characteristics may differ from those included in the investigation

11
Q

Internal Consistency

A

Degree of consistency or homogeneity of the items within a scale. Different reliability measures are used toward this end, such as split-half, the Kuder-Richardson 20 formula, and coefficient alpha
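A split-half estimate, one of the measures named above, can be sketched as follows. The item responses are simulated (a common latent trait plus noise), and the Spearman-Brown formula steps the half-length correlation up to the full test length:

```python
import numpy as np

# Hypothetical 10-item scale answered by 300 respondents; items correlate
# because they share a common latent trait.
rng = np.random.default_rng(2)
trait = rng.normal(0, 1, (300, 1))
items = trait + rng.normal(0, 1, (300, 10))

# Split-half reliability: correlate odd-item and even-item half scores,
# then apply the Spearman-Brown correction for the full-length scale.
odd = items[:, 0::2].sum(axis=1)
even = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

print(round(split_half, 2))
```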

12
Q

Steps in Scale Development

A
  1. Determine what you want to measure, including level of specificity.
  2. Generate an item pool.
    Redundancy is fine.
    Begin with more items than you want in the final scale. Internal consistency
    reliability is related to the number of items.
  3. Determine the format of the questions. Decide on the number of response categories,
    whether you want an odd or even number of responses, and type of response format.
    Possible formats:
    Likert scale
    Semantic differential
    Visual analog
    Binary options
  4. Have initial item pool reviewed by experts.
  5. Consider inclusion of validation items.
  6. Administer items to a development sample. It should be large and representative of
    the population for which the scale will be used.
  7. Evaluate the items—including computing coefficient alpha (or Cronbach’s alpha).
    Usually, .70 is considered acceptable. Alpha depends on the number of items and on
    the average inter-item correlation.
  8. Optimize scale length—longer scales are more reliable.
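Step 7's coefficient alpha can be computed directly from an item-response matrix. A minimal sketch on simulated data (the respondents and items are hypothetical):

```python
import numpy as np

# Hypothetical item-response matrix: rows = respondents, columns = items.
rng = np.random.default_rng(3)
trait = rng.normal(0, 1, (400, 1))
items = trait + rng.normal(0, 1, (400, 8))

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # .70 or above is usually considered acceptable
```

As the card notes, alpha rises with more items and with higher average inter-item correlation.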
13
Q

Discriminant validity

A

correlation between measures that are expected not to relate to each other or to assess dissimilar and unrelated constructs

14
Q

Narrow Stimulus Sampling

A

External Validity Threat

The extent to which the results might be limited to the restricted range of sampling materials (stimuli) or other features the experimenters used in the experiment

Stimulus characteristics include the experimenters, setting, interviewers or other factors

Most commonly occurs when there is one experimenter, one therapist, one setting, one taped story, etc.

15
Q

Sensitivity of the measure

A

“The measure ought to be sensitive enough to reflect the type and magnitude of change or group differences that the investigator is expecting.”

Desirable characteristics of a measure of the DV:

a. It should allow a large range of scores so that it can pick up differences between
groups or conditions.

b. It should allow for bi-directional changes (increases or decreases) and should not be
subject to ceiling or floor effects.

c. Looking at the measure’s items should show that the measure could pick up changes
or group differences. Sometimes existing literature shows that the measure can pick
up changes.

When the measure of the DV used is not sensitive, that can lead to finding no change or no difference between groups.

16
Q

Translational research

A

applies findings from basic research (e.g. laboratory research) to people in real life (applied research)

17
Q

Moderated mediation

A

occurs when strength (or direction) of the relation of the mediator to outcome depends on the level of some other variable

mediator that doesn’t work for everyone

18
Q

Predictive validity

A

correlation of a measure at one point in time with performance on another measure or criterion at some point in the future

19
Q

Nonspecific Treatment or Attention-Placebo Control Group

A

Participants get everything except the actual treatment (including attention)

Difficult to have expectations for improvement without using an intervention

Issues with APCG can be avoided by using Treatment as Usual (TAU)

20
Q

Stratified Sampling

A

Members of groups in the population are selected in proportion to their representation in the population
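Proportional allocation can be sketched in a few lines (the population, group sizes, and sample size below are made up for illustration):

```python
import random

# Hypothetical population with a group label; sample each group in
# proportion to its share of the population.
random.seed(4)
population = [("A", i) for i in range(700)] + [("B", i) for i in range(300)]
sample_size = 100

by_group = {}
for person in population:
    by_group.setdefault(person[0], []).append(person)

sample = []
for group, members in by_group.items():
    # Proportional allocation: group share of population * sample size.
    n = round(sample_size * len(members) / len(population))
    sample.extend(random.sample(members, n))

print(len(sample), sum(1 for g, _ in sample if g == "A"))  # 100 70
```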

21
Q

Fairness in Treatment During the Testing Process

A

“Regardless of the purpose of testing, the goal of fairness is to maximize, to the extent possible, the opportunity for test takers to demonstrate their standing on the construct(s) the test is intended to measure” (p. 51).

Accordingly, procedures “for the standardized administration of a test should be carefully documented by the test developer and followed carefully by the test administrator” (p. 51).
22
Q

Multiple Measures

A

Multiple Measures should be used to assess each construct being measured in a study. Usually, one measure alone cannot capture all facets of a construct. Also, when only one measure is used, the results are restricted “to the construct as assessed by a particular” measure. There are exceptions, e.g. when the DV is survival.

“Evidence for a particular hypothesis obtained on more than one measure increases the confidence that the construct of interest has been assessed.”

23
Q

Confidentiality

A

Source of Protection

“means that the information will not be disclosed to a third party without the awareness and consent of the participant.” There are exceptions, e.g. if child abuse is found.

24
Q

Cultural Influences on Data Analysis

A

The major issue is cultural response sets—a tendency to respond in a certain way on tests or scales.

When two cultures’ means differ, we do not know whether there is a difference between the groups in levels of a construct or whether the two groups use a scale differently.

People from collectivistic cultures may not use the ends of bipolar scales.

People from one culture may check off more items on a list than people from another culture.

25
Q

Guidelines for Research with Human Participants

A
  1. Institutional Approval—provide accurate information and conduct the research as described
  2. Informed Consent to Research
  3. Informed Consent for Recording Voices and Images in Research—usually required prior to the recording
  4. Client/Patient, Student, and Subordinate Research Participants—potential participants are not penalized for deciding not to participate or withdrawing from a study. “When research participation is a course requirement or an opportunity for extra credit, the prospective participant is given the choice of equitable alternative activities.”
  5. Dispensing with Informed Consent for Research—possible when participation is viewed as harmless and certain other conditions exist (e.g. use of anonymous questionnaires that would not put participants at risk).
  6. Offering Inducements for Research Participation—payment should not be so large as to constitute coercion.
  7. Deception in Research
  8. Debriefing

“Subjects must be guaranteed that all information they provide will be anonymous and confidential and told how these conditions will be achieved.”

26
Q

Interrelations among Validities

A

Trade-off between controlling a situation in a study and being able to generalize it

27
Q

Informing Clients about Treatment

A

Intervention Research Issues

Participants should be told whether the treatment has been shown to be effective.

Interestingly, telling participants that a treatment is experimental and has not been shown to be effective may actually reduce the effectiveness of the treatment because hope and expectancy of change are not aroused in the client.

Participants should be told that a number of treatments are being given and that assignment to a particular treatment is random (when that is actually true). Only those who agree to these conditions will participate in the study, which may affect external validity. That is, those who agree may not be a representative group.

28
Q

Standardized Path Coefficients

A

Numbers in arrows on path diagrams that reflect the strength of the causal relationship between the latent variable and each item

29
Q

In opportunity to learn

A

Threats to Fair and Valid Interpretations of Test Scores

Especially for achievement tests, this term refers to “the extent to which individuals have had exposure to instruction or knowledge that affords them the opportunity to learn the content and skills targeted by the test” (p. 56). People who have attended schools that do not have adequate resources may not have learned the material necessary for the test. In these cases, “the validity of inferences about student ability drawn from achievement test scores may be compromised” (pp. 56-57).
30
Q

Debriefing

A

“If there is any deception in the experiment or if crucial information is withheld, the experimenter should describe the true nature of the experiment after the subject is run.” The purpose of the research should be explained.

Purposes of debriefing are:

  1. To eliminate or minimize any negative effects of participation in the study.
  2. To show the participant why the research is valuable.

The elements of debriefing are provided in writing.

There are rare situations where debriefing can be omitted.

31
Q

Systematic Sampling

A

take every Kth person from your list

problem can occur if there is a cyclical pattern in the list that coincides with the sampling interval
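A minimal sketch of taking every kth person (the list size and interval are invented; a random start within the first interval is a common refinement):

```python
import random

# Hypothetical list of 1,000 people; take every kth person after a
# random start within the first interval.
random.seed(5)
people = list(range(1000))
k = 20
start = random.randrange(k)   # random start avoids always beginning at person 0
sample = people[start::k]

print(len(sample))  # 1000 / 20 = 50
```

Note this sketch does nothing to detect a cyclical pattern in the list; that must be checked against the sampling interval beforehand.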

32
Q

Retrospective Case-Control Design

A

compares groups that differ on the particular characteristic being studied on other variables that occurred in the past

33
Q

Qualitative research - mixed methods

A

combine quantitative and qualitative research.

34
Q

True Experiment

A

allow the most control over the IV

randomly assign each person to a condition

35
Q

Variations of Information

A

Type of Manipulation

Here the question is: did the participants receive, attend to, and believe the information?

How to check this: give a questionnaire. The groups should differ on the information that they received.
36
Q

Treatment integrity or treatment fidelity

A

Checking on whether a treatment was delivered as intended

37
Q

Pretest-Posttest Control Group Design

A

Type of true experiment design

R = random assignment A = Assessment or Observation X = Intervention

Group 1: R A1 X A2
Group 2: R A1 A2

38
Q

History

A

Internal Validity Threat

Any event inside the experiment (other than the IV/intervention) or outside it that may account for the results; to count as a threat, it must be a plausible explanation of the results

Control group can help mitigate

39
Q

Threats to External Validity

A
Multiple-Treatment Interference 
Sample Characteristics
Narrow Stimulus Sampling
Reactivity of Experimental Arrangements 
Reactivity of Assessment
Test Sensitization 
Novelty Effects
Generality across Measures, Settings, and Time

Mnemonic: M. SNRRTNG

40
Q

Testing

A

Internal Validity Threat

Effects of repeated assessment

“practice effect”

group that receives pre and post without intervention can help rule this out

41
Q

Direct Observations of Behavior

A

Modality of Assessment

—“look at what the client actually does.” Samples of behavior.

Not totally simple because researchers need to develop codes and define the behavior to be observed, e.g. child getting out of seat in class.

Need to demonstrate reliability of the observations.

Can be done in natural or laboratory settings.

Issues:

  1. “decisions regarding what to observe could restrict interpretation and generality of the measure.” For example, what is emotional abuse?
  2. Observations are obtained at a particular time, and may not represent what the client does at other times. People may behave differently when they are aware that they are being observed. This concern can be reduced by the use of mobile devices to collect data.
  3. How a person behaves in a laboratory may be different from how they behave normally.
  4. Validity must be shown.
42
Q

Multiple Comparisons and Error Rates

A

Data-Evaluation Validity

When multiple statistical tests are completed the likelihood of chance finding is increased

i.e. an alpha of .05 applies to a single test; the overall alpha rises above .05 when multiple tests are run, a.k.a. the “experiment-wise error rate”
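For independent tests, the experiment-wise (family-wise) error rate grows as 1 − (1 − α)^k. A quick illustration, plus the common Bonferroni fix (the choice of k values is arbitrary):

```python
# Probability of at least one chance "significant" result across
# k independent tests, each run at alpha = .05.
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(k, round(familywise, 3))

# A common (conservative) correction is Bonferroni: run each of the
# k comparisons at alpha / k instead of alpha.
```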

43
Q

Cross-validation

A

is when you split the sample in half, develop the scale on one half, and then assess the validity of the scale on the other half.
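The split itself can be sketched in a few lines (the sample size and participant IDs are hypothetical):

```python
import random

# Sketch: randomly split a sample in half; build the scale on one half
# and assess its validity only on the held-out half.
random.seed(6)
participant_ids = list(range(200))
random.shuffle(participant_ids)

development_half = participant_ids[:100]   # used to develop the scale
validation_half = participant_ids[100:]    # used only to assess validity

print(len(development_half), len(validation_half),
      set(development_half) & set(validation_half))  # halves do not overlap
```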

44
Q

Data-Evaluation Validity

A

Facets of the evaluation that influence the conclusions we reach about the experimental condition and its effects

45
Q

Criteria for Inferring a Causal Relationship between Variables

A
  1. A strong relationship between the independent variable and the dependent variable.
  2. Consistency or replication, although some inconsistency can occur when there is a moderator variable.
  3. The cause comes before the effect.
  4. A “dose-response relation”: more of the IV is associated with greater change in the DV.
  5. A reasonable process that explains how the IV leads to the DV.
  6. Experiment: when the IV is altered, a change in the DV occurs.
  7. Existence of similar findings in other areas.
46
Q

Quality of Life

A

Measures to Evaluate the Clinical Significance of Change in Intervention Studies

refers to the client’s evaluation of how she is doing in multiple spheres (e.g. work, friendships).

47
Q

Alternative-Form Reliability

A

Correlation between forms of the same measure when the items of the two forms are considered to represent the same population of items

48
Q

Different measures

A

Using multiple measures may lead to finding different results for different measures. This is OK. Three reasons why different measures may yield different results:

  1. Using multiple informants, e.g. parent, child, and teacher may respond differently.
    Getting different responses makes sense because different informants may see
    different behavior (e.g. behavior at home vs. at school).
  2. Many constructs have several facets. For example, when assessing depression, one
    might look at affect, behavior, and biological symptoms. So, “the lack of
    correspondence between measures is to be expected.”
  3. The different measures may or may not correspond, depending on the client’s level of
    the construct, e.g. the “magnitude of the client’s anxiety.”
49
Q

Null hypothesis

A

specifies that there is no difference between groups

50
Q

Instrumentation

A

Internal Validity Threat

Changes in how the DV is measured over time

Can occur when any of the following is not constant:

measuring instruments
observers, raters, or interviewers
remarks or directions from the experimenter
test conditions

The most common occurrence is when raters change the criteria they use over time

51
Q

Qualitative Research

A

Qualitative research studies people’s experience in depth.

Qualitative research is inductive while quantitative research is deductive.

Qualitative research tends to involve open-ended responses to questions, while
quantitative research usually involves selecting from possible answers given.

While quantitative research tests theory, qualitative research generates theory.

Qualitative research can assess reliability and validity.

While quantitative research employs statistical analysis to analyze data, data analysis in
qualitative research often involves identifying themes and categories.

52
Q

Qualitative research - grounded theory

A

theory that is developed from observation and analysis. The theory comes out of the data.

53
Q

Brief Measures and Short Forms

A

Brief measures can be used, but their reliability and validity must be demonstrated.

Short forms of longer scales can be used if their reliability and validity are OK, e.g.
Symptom Checklist has 90 questions, but shorter forms can be used. Sometimes use of a short form leads to having a possible range of scores that is restricted; thus it is harder to find group differences. Also, short forms “may not be appropriate if there are multiple subscales or characteristics in a measure.”

54
Q

Randomized controlled trials (RCTs)

A

When the IV involves an intervention, a true experiment becomes an RCT

55
Q

Cause

A

One variable influences the appearance of the outcome, either directly or through other variables

56
Q

Deception

A

ranges from intentionally misleading the participant to not giving the participant certain information. Some deceptions are not harmful to participants, but others can be (e.g. Milgram experiment).

The decision of whether deception is justified in a particular experiment weighs the possible risks to participants against the potential knowledge the study may produce. Generally, the risks to participants are weighed against the benefits to society. “The safest way to proceed is to minimize or eliminate risk to the subject by not using active forms of deception.”

Deception in clinical psychology research generally does not involve actually misleading participants, but rather, withholding information from them. Often giving the participant in a psychology experiment all information (e.g. hypotheses) could change the results, raising validity issues.

If a researcher wants to deliberately mislead participants, he must show that this deception is necessary to achieve the goals of the research, specifically:

  1. that the deception is justified because the experiment will yield important information.
  2. that less deceptive methods would not produce the information.
  3. the aversiveness of the deception is justified.

Since individual rights are to be protected in research, a potential research participant must be given enough information about the study to be able to make an informed decision about whether to participate. If possible, it is better not to use deception.

57
Q

Anonymity

A

Source of Protection

“ensuring that the identity of the subjects and their individual performance are not revealed.” Accomplished by the researcher not getting identifying information (e.g. name) or keeping data coded and separate from names. [But in some cases, the researcher has a list of codes and corresponding names.]

58
Q

Subjective Evaluation

A

Measures to Evaluate the Clinical Significance of Change in Intervention Studies

Subjective evidence of clinical significance is provided if there is a large improvement. Can be assessed with the Reliable Change Index. Evaluation is by the client or those in the client’s life (e.g. family).

59
Q

Effect size (ES)

A

magnitude of the difference between two (or more) conditions or groups

ES = (M1 − M2) / SD

The smaller the variability (the more we minimize error), the larger the effect size, because SD is the denominator
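A pooled-SD version of this formula (Cohen's d) can be sketched as follows; the two groups' scores are invented for illustration:

```python
import statistics

# Effect size as a standardized mean difference: (M1 - M2) / pooled SD.
group1 = [12, 14, 15, 13, 16, 14, 15]   # hypothetical treatment scores
group2 = [10, 11, 12, 10, 13, 11, 12]   # hypothetical control scores

def cohens_d(a, b):
    """Pooled-SD d; smaller variability in the groups yields a larger d."""
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.variance(a), statistics.variance(b)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

print(round(cohens_d(group1, group2), 2))
```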

60
Q

Nonrandomly Assigned or Nonequivalent Control Group

A

—“help rule out specific rival hypotheses and decrease the plausibility of specific threats to internal validity,” like history, maturation, or testing. “Such a group may be used when a no-treatment control group cannot be formed through random assignment.”

61
Q

Cross-Sectional Case-Control Design

A

compares groups that differ on the particular characteristic being studied on other variables that exist currently.

62
Q

Competence (Informed Consent)

A

Does the individual understand the information given, and is he or she able to make a decision? Some groups are not considered competent to give consent—e.g. young children, people with cognitive deficits.

63
Q

Posttest-Only Control Group Design

A

Type of True Experimental Design

R = random assignment A = Assessment or Observation X = Intervention

Group 1: R X A1
Group 2: R A1
64
Q

Convergent validity

A

extent to which two measures that assess similar or related constructs correlate with each other

65
Q

Reactivity of Experimental Arrangements

A

External Validity Threat

Issue of how participants’ knowledge that they are being studied (or are in a special program, or that a relationship is being examined between specific variables) changes their behavior

66
Q

Clinical Significance or Practical Importance of Changes

A

did the treatment make a real difference in the client’s everyday life?

67
Q

Falling within Normative Levels of Functioning

A

Measure to Evaluate the Clinical Significance of Change in Intervention Studies

evidence of clinical significance is provided if clients move from outside the normative range before treatment to within the normative range after treatment.

68
Q

Choices in Using a Measure

A
  1. Using a Standardized Measure—does it really measure what the researcher wants to
    measure?
  2. Varying the Use or Contents of an Existing Measure—reliability and validity need to be demonstrated for the modified measure.
  3. Developing a New Measure—reliability and validity must be demonstrated.
69
Q

Construct Validity

A

Extent to which the measure reflects the construct (concept, domain) of interest

70
Q

Checking manipulation of IV

A

The researcher needs to assess the impact of the experimental manipulation. This is done to make sure that the IV was manipulated as intended, and it is done differently depending on the type of manipulation (variation of information, variation in subject tasks and experience, or variation of intervention conditions).

Checking on the manipulation can be helpful in interpreting the results.

A manipulation check may, with certain types of manipulations, increase the reactivity of the assessment. This problem can be avoided by doing a manipulation check in a pilot study.

One has to be careful when removing subjects who were not affected by the manipulation as intended.

71
Q

Qualitative Assessment

A

Measures to Evaluate the Clinical Significance of Change in Intervention Studies

open-ended questions to provide an in-depth evaluation. Can include ways therapy did and did not help the client.

72
Q

Efficacy research

A

conducting treatment in highly controlled conditions

73
Q

Cohort Designs

A

Type of observational design

here the researcher follows and studies a group or groups of people over time. So, this is a prospective, longitudinal design. A cohort “is a group of people who share a particular characteristic such as being born” in a particular year.

74
Q

Theory

A

conceptualization of the phenomenon of interest

provides a tentative explanation of how variables are related

To be a scientific theory it must generate testable hypotheses

organizes existing research in a way that guides further studies

Can explain the basis for change and give us an idea of which moderators to investigate

75
Q

Criterion validity

A

correlation of a measure with some other criterion (can encompass concurrent or predictive validity)

76
Q

Variation of Intervention Conditions

A

Type of Manipulation

e.g. participants are given different types of therapy

Here the question is: how well was the treatment delivered?
If a treatment and control group do not differ at the end of treatment, it could be
because the treatment was not properly given.

How to check: record sessions or segments of sessions and have these scored to  
see whether the procedures and tasks that define the type of therapy that was 
supposed to be given actually appear in the tapes.
77
Q

Oversample

A

take more participants from one group to be able to compare (ex: oversampling Native Americans to be able to compare them to other groups)

78
Q

Yoked Control Group

A

used to make the groups equal on some variable, like number of sessions, so that variable will not be a confounding variable

79
Q

Quota Sample

A

Accidental sample in which you try to fill quotas for certain groups proportionate to the population

Ex: interviewing x number of men and women

80
Q

Recovery

A

Measures to Evaluate the Clinical Significance of Change in Intervention Studies

functioning well in many areas, including health, home, purpose, and community.

81
Q

Test theory

A

A test or scale assesses a construct. A person’s responses to items on a scale are seen as being caused by an underlying construct or latent variable.

82
Q

Cultural Considerations for Selecting Measures

A

Also very important is that the measure(s) used in research are appropriate, reliable, and valid for the population being studied, or, when the sample is diverse, for all groups included in the research.

83
Q

Incremental Validity

A

Whether a new measure or measure of a new construct adds to an existing measure or set of measures with regard to some outcome

84
Q

Multiple-Treatment Interference

A

External Validity Threat

Do not know whether a particular treatment would have been effective by itself if it is administered along with another treatment or after another treatment

Also applies when one task is given after being preceded by other tasks

Can refer to therapy or to different conditions in an experiment

85
Q

Low Statistical Power

A

Data-Evaluation Validity

Major threat to DEV

Power is the likelihood of detecting an effect or group difference when a true effect actually exists

Low power can occur when samples are too small or when there is too much variability (error)

86
Q

Treatment as Usual (TAU)

A

routine or standard treatment that is usually provided at a clinic

Advantages:

  1. People seeking treatment get it, avoiding an ethical issue.
  2. Likely to be less attrition than in a group not receiving treatment.
  3. Generally controls for common factors.
  4. More acceptable to therapists.
87
Q

Weaknesses of Case-Control Design

A

Weaknesses of case-control designs:

  1. When two variables are related, it may be unclear which one came first.
  2. Causation cannot be demonstrated.
  3. There may be sampling bias in how participants are selected. For example, when studying women who have been abused by their spouses, if one found their participants at domestic violence shelters, they would be leaving out the majority of women victims of domestic violence who do not go to shelters. Clearly, how participants are found is very important in this kind of study.
88
Q

In test context

A

Threats to Fair and Valid Interpretations of Test Scores

For example, lack of clarity in test instructions.

Also, interaction of examiner and test taker can lead to construct-irrelevant variance when the two differ in race, ethnicity, gender, or cultural background.
89
Q

Fairness in Access to the Construct(s) Measured

A

“Accessibility can best be understood by contrasting the knowledge, skills, and
abilities that reflect the construct(s) the test is intended to measure with the
knowledge, skills, and abilities that are not the target of the test but are required
to respond to the test tasks or test items. …For example, a test taker with impaired
vision may not be able to access the printed text of a personality test. If the test
were provided in large print, the test questions could be more accessible to the
test taker and would be more likely to lead to a valid measurement of the test
taker’s personality characteristics” (p. 52).

90
Q

Test Sensitization

A

External Validity Threat

Participants may respond differently to an intervention because the pretest shows them what the focus of assessment is

91
Q

Restricted Range of the Measures

A

Data-Evaluation Validity

A measure may have a very limited range (total score from high to low) and that may interfere with showing group differences.

Not enough variability in scores to show differences between groups

92
Q

Observed score

A

actual score on the test

used to estimate true score

93
Q

Solomon Four-Group Design

A

Type of true experimental design

Group 1: R  A1  X  A2   (Design 1: pretest-posttest)
Group 2: R  A3      A4   (Design 1)
Group 3: R      X   A5   (Design 2: posttest-only)
Group 4: R          A6   (Design 2)

Pros:

  1. Controls for the usual threats to internal validity.
  2. Can measure the effect of pre-testing (A4 vs. A6) because they differ only in that Group 2 got the pretest and Group 4 did not.
  3. Can measure the interaction effect of pre-testing and the intervention (A2 vs. A5).
94
Q

Latent Variable

A

Underlying construct that a scale attempts to quantify

95
Q

Pretest-Posttest Design (aka nonequivalent control group design)

A

Type of quasi

Group 1: nonR A1 X A2
Group 2: nonR A1 A2

96
Q

Confounds

A

factors that varied with the intervention and therefore could explain (be responsible for) the results

Confounds are threats to construct validity

97
Q

ABAB Design

A

Single-Case Experimental Research Design

there is a baseline condition (A), when there is no intervention; second, an intervention condition (B); and then A and B are repeated. The effects of the intervention are clear if performance improves (e.g. less violent behavior) during the first B, reverts to close to original levels during the second A, and improves again with the second B. If there were only the first A and B, the design would be pre-experimental, because factors other than the intervention could not be ruled out as causing any change in behavior.

Problems:

a. The return to baseline during the second A shows that the improvement was not lasting.
b. There are ethical questions about removing a treatment that is helpful and thereby making the client worse.

98
Q

Assent (Informed Consent)

A

For children participating in research, consent is usually given by a parent or guardian.
Assent, or willingness to participate in research, is also sought from children who are old enough to understand what is expected of them in the research, or from adolescents. There would be an assent form in addition to an informed consent form. Child assent may be waived.

99
Q

Response set

A

Participants also may adopt a response set when filling out a measure. A response set is “a systematic way of answering questions or responding to the measure that is separate from the construct of interest.” Four possible response sets are:

  1. Acquiescence—tendency to say “Yes” or “True” when responding to items.
  2. Naysaying—tendency to disagree or deny characteristics.
  3. Socially Desirable Responding—tendency to answer so as to make oneself look
    good.
  4. End Aversion Bias—tendency to not give extreme responses (e.g. not use the
    ends of a 1-7 scale).
100
Q

Effectiveness research

A

evaluates treatments in clinical settings with “real” patients and under conditions more routinely seen in clinical practice

101
Q

Participant Heterogeneity

A

Data-Evaluation Validity

when characteristics related to how participants respond to the treatment are highly varied, which adds variability to the results

Can be addressed with a factorial design or by using a more homogeneous sample

102
Q

Threats to Internal Validity

A
Statistical Regression
Maturation
Instrumentation 
Testing 
History
Selection Biases

Attrition
Diffusion or Imitation Treatment
Special Treatment or Reactions of Controls

Mnemonic: SMITHS ADS

103
Q

Invasion of Privacy

A

“seeking or obtaining information of a personal nature that intrudes upon what individuals view as private.” For example, health, political views, tests that assess personality or psychopathology.

Threats to privacy have increased with the use of electronic records (e.g. in hospitals) and data collection over the Web (data can be hacked).

Invasion of privacy applies not only to individuals, but to communities and cultural and ethnic groups as well. Publication or other dissemination of results can “violate the privacy of a large easily identified group.” Example: the Inupiat community in Barrow, Alaska was studied to evaluate alcohol abuse. The researcher found that alcohol use and abuse were common there. “The report was criticized as denigrating, culturally imperialistic, and insensitive to the values of American Indian and Alaskan native culture.” “Reports of the findings were viewed by the community as highly objectionable and invasive.” Researchers need to consider how the information they report might be used.

Members of a particular group need to define what is invasive. Involvement of members of the cultural group being studied is necessary in all stages of research.

104
Q

Types of Validity (4)

A

Internal
External
Construct
Data-Evaluation

105
Q

Selection Biases

A

Internal Validity Threat

differences between groups before the intervention or experimental manipulation because of selection or assignment of groups

One form is using different selection methods for two groups

Often a problem when using groups that already exist (i.e. classes or hospital wards)

106
Q

Cluster Sampling

A

first sample groupings or clusters, then sample individuals from these

Can be random

used because it is difficult and expensive to get random or stratified random samples

Large-scale surveys often use this method

Aka multistage sampling because of the different stages involved
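The two-stage procedure described above can be sketched in a few lines; the dictionary structure, the function name, and the school example are assumptions made for illustration.

```python
import random

def cluster_sample(population_by_cluster, n_clusters, n_per_cluster, seed=None):
    """Two-stage (multistage) sampling: first randomly sample clusters,
    then randomly sample individuals within each chosen cluster."""
    rng = random.Random(seed)
    chosen_clusters = rng.sample(sorted(population_by_cluster), n_clusters)
    sample = []
    for cluster in chosen_clusters:
        members = population_by_cluster[cluster]
        sample.extend(rng.sample(members, min(n_per_cluster, len(members))))
    return sample

# e.g. sample 2 of 4 schools, then 10 students within each sampled school
schools = {s: [f"{s}-student{i}" for i in range(30)]
           for s in ["north", "south", "east", "west"]}
chosen = cluster_sample(schools, n_clusters=2, n_per_cluster=10, seed=0)
```

Both stages here are random, which is what makes the overall procedure a probability sample despite never enumerating the full population at once.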

107
Q

Observational Designs

A

Researcher does not create the IV

the variable of interest is studied by selecting subjects who vary in the characteristic or experience of interest

people come to the study with their level of IV (e.g. gender, ethnicity, etc.)

many variables that we study can only be researched in observational designs because it is not possible and/or not ethical to assign people to levels of the IV (e.g. studying the effects of parental divorce on children)

108
Q

Unobtrusive Measures

A

Modality of Assessment

are those where the person who is being assessed is not aware of it. They avoid reactivity, which can occur in “almost all types of measures” where people know their performance is being assessed.

These include observation (which is not known to the subject), archival records (which existed before the research, e.g. hospital records), and physical traces (e.g. existing graffiti).

“Unobtrusive measures need to be corroborated with other measures in the usual way that assessment devices are validated.”

There can be an ethical problem with some unobtrusive measures because people have not given their consent to be assessed.

“Selecting several different measures, each with different sorts of problems, increases confidence that the …(construct) of interest in fact is being assessed.”

109
Q

No-Contact Control Group

A

no contact with the research project and do not know they are in a treatment

Assessments appear to participants as a routine part of some other activity

Difficult because it violates informed consent

pretty rare

110
Q

Knowledge (Informed Consent)

A

is the potential participant given adequate information about the study and its potential risks and benefits? Is the information given in understandable language?

111
Q

Objective Measures

A

Modality of Assessment

like questionnaires and scales that have a fixed set of possible responses (e.g. 1-7, yes/no) and scoring keys. These are the most common type of measures in clinical psychology. “These measures require clients to report on aspects of their own personality, emotions, cognitions, or behavior.”

Issues:

  1. Responses “can be greatly influenced by the wording, format, and order of appearance of the items.” Beyond that, responses can vary by culture and ethnicity. Our results and conclusions are affected by the structure of the measure.
  2. People may not answer honestly. They may, for instance, try to present themselves in a good light. Social desirability “has been shown to be extremely pervasive on self-report measures.”
  3. People may adopt response sets, e.g. acquiescence.
  4. The hello-goodbye effect: before therapy, clients may exaggerate their symptoms to make sure they get treatment or get it quickly; after treatment, they may respond in a more socially desirable manner. Thus, people may seem to have improved when they have not.

Still, self-report measures have been well validated.

112
Q

Withholding the Intervention

A

Intervention Research Issues

ethical issue is raised when some people are assigned to no-treatment or waiting-list control conditions. The client’s condition may deteriorate, or even if it does not, the client must experience the discomfort that led him to seek treatment for a longer time.

A “flagrant ethical violation based on withholding treatment” is the Tuskegee Syphilis Study, conducted by the U.S. Public Health Service from the 1930s to 1970s. Its purpose was to explore the course of the disease over time. The participants were 399 African American men who were “denied effective treatment of syphilis so that researchers could study the progression of the infection.”

Because waiting lists are common at clinics, being in a waiting-list control group may not actually delay someone’s receiving treatment.

Giving another treatment or treatment as usual is an alternative to a waiting-list control group.

113
Q

Types of construct validity threats

A

Single Operations and Narrow Stimulus Sampling
Experimenter Expectancies
Demand Characteristics
Attention and Contact Accorded the Client

Mnemonic: Single Experimenters Demand Attention
or SADE

114
Q

Psychobiological Measures

A

Modality of Assessment

“refer to assessment techniques designed to examine biological substrates and correlates of affect, cognition, and behavior and the links between biological processes and psychological constructs.” Examples: heart rate, muscle tension, cortisol level (to assess stress), neuroimaging (like fMRI).

Imaging techniques can be used to “identify and distinguish different disorders (e.g. depression, schizophrenia),” for example, or show how “specific brain processes [are] altered by different interventions.”

Neuroimaging is being used increasingly in clinical psychology research.

Psychobiological measures have the advantage that they are not affected by socially desirable responding or intentional changes in responses. They also have their own issues, like “variation in software used to code neuroimages.”

Since measures of physiological states do not always correspond with one another, which measure is used in research can influence the conclusions.

115
Q

Special Treatment or Reactions of Controls

A

Internal Validity Threat

control group gets special attention which can be an alternative explanation of the results

Ex: control group is given something so they won’t feel snubbed

Other examples:

  • Participants try harder because they know they’re in the treatment group
  • Control group tries harder to match the treatment participants
  • Control group performs worse because they are let down that they are in control group
116
Q

Three major types of studies done in psychology

A
  1. True Experiments
  2. Quasi-experiments
  3. Observational designs
117
Q

Demand Characteristics

A

Construct Validity

Cues of the experimental situation that influence the results

aspects of the instructions, procedures, etc. that are part of the study but not the “active ingredient”

118
Q

Strengths of Cohort Designs

A
  1. Show that antecedent comes before outcome.
  2. Can be sure that the outcome did not bias assessments of the antecedents (because
    it had not occurred yet).
  3. Assessments can be made at different times to show the progression from
    antecedent to outcome.
  4. “Cohort designs are good for testing theories about risk, protective, and causal
    factors.”
119
Q

Volition (Informed Consent)

A

is participant giving consent free from coercion? Coercion is not necessarily force, but can be something like giving college students a choice between participating in two hours of research or writing a 20-page research paper. Another example would be offering poor people a lot of money to participate in research. Also, participants must be able to withdraw consent at any time.

120
Q

Case-control design

A

Type of observational design

investigate a variable (characteristic) by comparing those who have the characteristic with those who do not have the characteristic. These groups are compared on other variables in the present or in the past.

121
Q

Protective Factor

A

A characteristic or variable that prevents or reduces the likelihood of a deleterious outcome

negatively correlated with the onset of some later problem

protective factor decreases the likelihood of the outcome

122
Q

Purposive Samples

A

Pick cases that are judged to be typical of the population of interest

Used to forecast elections

Ex: Picking a number of small election districts whose election returns in previous years have approximated the overall state returns.

123
Q

Types of reliability

A

Test-Retest
Alternative-Form
Internal Consistency
Interrater
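Internal consistency is often quantified with Cronbach's alpha, which can be computed directly from item-level scores. A minimal sketch; the function name and example data are illustrative assumptions, not from the cards.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of scores per item, each list the same length
    (one score per respondent)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items
    totals = [sum(item[person] for item in items) for person in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Two items that rank respondents identically yield alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```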

124
Q

Single-Group Cohort Design

A

identifies subjects who meet a particular criterion (e.g. individuals released from prison) and follows them over time. A number of variables are assessed at the beginning of the study and often at other times (i.e. more than once) during the study. At the end of the study, the researcher examines which of the assessed variables predict the outcome

125
Q

Novelty Effects

A

External Validity Threat

It is possible that the effect of an experimental manipulation or intervention might in part be due to its novelty

126
Q

Multigroup Cohort Design

A

follows two or more groups over time “to examine outcomes of interest.” One group is identified because they have an experience, condition, or characteristic of interest; the other group is identified because they do not

127
Q

Diffusion or Imitation of Treatment

A

some or all of the participants in the control group may inadvertently receive some or all of the treatment (e.g. kids in two classes talk about the treatment during recess)

Also can happen if some people in the intervention group do not receive the intervention

128
Q

Matching

A

grouping participants on a variable (or variables). Then participants at each level of the variable are assigned to each group, so that the groups end up equivalent.
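One common way to implement this is pair-matching: sort participants on the matching variable, then randomly split each adjacent pair across the two groups. The sketch below is an assumed illustration (the function name, data, and pairing scheme are not from the card).

```python
import random

def matched_assignment(participants, key, seed=None):
    """Pair-match on a variable, then randomly split each adjacent
    pair across two groups so the groups end up roughly equivalent."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=key)
    group_a, group_b = [], []
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)  # random assignment within each matched pair
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# e.g. matching on a pretest severity score (hypothetical data)
people = [("p1", 3), ("p2", 9), ("p3", 5), ("p4", 7),
          ("p5", 1), ("p6", 8), ("p7", 2), ("p8", 6)]
a, b = matched_assignment(people, key=lambda p: p[1], seed=0)
```

Because members of each pair are still assigned at random, this preserves the benefits of randomization while equating the groups on the matched variable.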

129
Q

Types of sampling

A

Probability Sampling
Accidental Samples
Systematic Sampling
Stratified Sampling

Quota Samples
Purposive Samples
Cluster Sampling

Mnemonic: PASS QPC

130
Q

Computerized, Technology-Based, and Web-Based Assessment

A

use computers and other devices.

Have the advantage that administration is fast.

Web-based assessment allows for the collection of data from many people.

However, “The different conditions under which individuals complete the measure can introduce variability into the assessment process.” For example, work vs. home.

Cell phones are now being used for many health assessments.

Computerized assessments appear to be valid.

Assessments using technology have advantages, e.g. data can be collected many times per day, cheaper, large number of respondents possible.

131
Q

Initial Equivalence

A

Making sure the groups are as similar as possible so we can establish IV as cause of DV

We try to accomplish through random selection and random assignment

132
Q

No-Treatment Control Group

A

receives all of the assessments that the treatment group does but no intervention

133
Q

Path Diagrams

A

show causal relationships among variables. The latent variable is seen as causing the score on each item. Numbers in arrows on path diagrams reflect the strength of the causal relationship between the latent variable and each item. They are called standardized path coefficients.

134
Q

Single-Case Experimental Research Designs

A

Single-Case Experimental Research Design

usually one participant or a few participants, but could also be “a classroom, a school, a business, an entire city, or state.”

In single-case designs, the effects of an intervention are assessed by comparing different conditions presented to the same client over time (for example, introducing and removing treatment).

Key requirements:

ongoing assessment: made before and during treatment

baseline assessment: behavior is observed for several days before intervention begins

Stability of performance (behavior): A stable rate of performance during baseline, characterized by no trend and minimal variation, is ideal, because it provides the best comparison for evaluating subsequent effects of treatment.

If the client shows a worsening trend during baseline, and the intervention reverses that trend, we have impressive evidence that the intervention is beneficial.

135
Q

Differential Test Functioning (DTF)

A

“refers to differences in the functioning of tests (or sets of items) for different specially defined groups. When DTF occurs, individuals from different groups who have the same standing on the characteristic assessed by the test do not have the same expected test score” (p. 51).

136
Q

Modalities of Assessment Used in Clinical Psychology

A
Objective Measures
Global Ratings
Projective Measures
Direct Observations of Behavior
Psychobiological Measures
Computerized, Technology-based, and Web-based Assessment
Unobtrusive Measures
137
Q

Types of manipulations

A

Variations of information
Variations in Subject Tasks and Experience
Variation of Intervention Conditions

138
Q

Types of Data-Evaluation Validity

A

Low Statistical Power
Unreliability of the Measures
Multiple Comparisons and Error Rates
Participant Heterogeneity

Variability in Procedures
Errors in Data Recording, Analysis, and Reporting
Restricted Range of the Measures
Misreading or Misinterpreting the Data Analyses

Mnemonic: LUMP VERM

139
Q

Cross-sectional studies

A

make comparisons between groups at a given point

140
Q

Weaknesses of Cohort Designs

A
  1. Take a long time.
  2. Expensive.
  3. Attrition can bias the sample.
  4. Possibility of cohort effects.
  5. If the outcome of interest is infrequent, sample size may end up being so small that statistical power is low.
141
Q

Construct Validity

A

what specific aspects of an intervention are responsible for observed change or an observed effect

Distinguish from the construct validity of a test

considered after threats to internal validity have been ruled out

142
Q

Psychometric properties

A

Reliability and Validity of a measure

143
Q

Maturation

A

Internal Validity Threat

changes that result from processes internal to the participants

Only a problem when the effects of maturation cannot be separated from the effects of the intervention

Often go together with threats due to history

Control group can help mitigate

144
Q

Threats to Fair and Valid Interpretations of Test Scores

A

Test content that favors one group over the others
In test context
In test responses
In opportunity to learn

145
Q

Variability in the Procedures

A

Data-Evaluation Validity

Same as participant heterogeneity, but has to do with procedures, instructions, etc.

146
Q

Projective Measures

A

Modality of Assessment

“attempt to reveal underlying motives, processes, styles, themes, personality, and other psychological process.” They “allow the client to freely ‘project’ onto the situation important processes within his or her own personality.”

The information presented is ambiguous and response alternatives are not limited.

Difficult for the person to intentionally distort their responses, unlike self-report measures.

Examples: the Rorschach and Thematic Apperception Test (TAT).

There are validity data for the Rorschach as it relates to distorted thinking, for example.

Issues:

  1. Can be difficult to administer and score.
  2. Objective measures have advantages over projective ones.
  3. Not clear that projective measures add anything to other measures.
147
Q

Fairness As Lack of Measurement Bias

A

“Characteristics of the test itself that are not related to the construct being
measured, or the manner in which the test is used, may sometimes result in
different meanings for scores earned by members of different identifiable subgroups” (p. 51).

148
Q

Multiple Baseline Design

A

Single-Case Experimental Research Design

a number of behaviors are examined and a baseline is established for each. The intervention (e.g. a reward) is applied contingently to one behavior at a time. If each behavior changes when the intervention is introduced, the effects can be attributed to the intervention rather than to extraneous variables. Could also be done with one behavior and a number of clients, or across different situations, settings, or times of day

Problems:

a. With different behaviors, treatment of one behavior may cause changes in other behaviors.
b. With different behaviors, the same intervention may alter some behaviors but not others.
c. With different clients, there is a problem with withholding the treatment while the investigator is waiting to get to the last person.

149
Q

Accidental (Available) Samples

A

take cases that are available until a specified N is reached (ex: first 100 people on the street)

150
Q

Posttest-Only Design

A

Type of quasi

Group 1: nonR X A1
Group 2: nonR A2

151
Q

Generality across Measures, Settings and Time

A

External Validity Threat

The extent to which the results extend to other measures, settings, or assessment occasions than those included in the study

152
Q

Changing-Criterion Designs

A

Single-Case Experimental Research Design

The effect of the intervention is demonstrated by showing that behavior changes gradually over the course of the intervention phase. For example, reinforcing a child for practicing a musical instrument. A criterion (amount of time) is initially specified for the child. As (s)he improves (i.e. practices more), the criterion is increased—i.e. more time must be spent to get the reward. The effects of the intervention are shown when the child practices more after each time the criterion is changed.

153
Q

Single Operations and Narrow Stimulus Sampling

A

Construct Validity

Sometimes a single set of stimuli, a single investigator, or some other facet of the study that the investigator considers irrelevant may contribute to the impact of the experimental manipulation

Also a type of external validity threat, but in construct validity the issue is not generalizability but rather being unable to separate the “active ingredient” from other variables

154
Q

Crossover Design

A

Type of Multiple-Treatment Designs

Group 1: R A1 X1 A2 X2 A3
Group 2: R A4 X2 A5 X1 A6

Participants are randomly assigned to groups (i.e. orders) and the two groups
receive the two treatments (X1 and X2) in different orders. The first assessment
serves as a pretest; the pretest is sometimes omitted.

155
Q

Global Ratings

A

Modality of Assessment

try to quantify evaluations of general characteristics, e.g. overall adjustment, improvement in therapy. One example would be a 1-7 scale from no improvement to very much improvement. The GAF is an example.

Should be combined with other types of measures.

Issues:

  1. Can often have an instrumentation threat because the criteria one uses to make the global rating change over time.
  2. May lack sensitivity.
  3. Often lack reliability and validity data.
  4. The range of scores usually makes it difficult to identify group differences.
156
Q

Research Design

A

refers to the arrangement or ways to arrange conditions to evaluate the hypotheses

157
Q

Wait-List Control Group

A

Control group receives treatment after the final assessment has been made

time between pre and post assessments must be the same for the treatment group and for the wait-list control group

158
Q

Qualitative research - content analysis

A

refers to the identification of categories and subcategories in people’s responses. Quotes are used to help describe the categories. The categories “capture pertinent experiences and reactions.”

159
Q

Measures of clinical significance

A
Falling within normative levels of functioning
Magnitude of pre to post change
No longer meeting diagnostic criteria
Subjective evaluation
Clinical problem is no longer present
Recovery
Quality of life
Qualitative assessment
Social impact measures
160
Q

Probability Sampling

A

Aka random sampling

Each person in the population has an equal chance of being chosen for the sample

161
Q

In test responses

A

Threats to Fair and Valid Interpretations of Test Scores

Test takers may give responses other than those expected by test
constructors.

Test takers may give long answers when short answers are expected, or
the opposite.
162
Q

Fairness As Validity of Individual Test Score Interpretations for the Intended Uses

A

“It is important to keep in mind that fairness concerns the validity of individual
score interpretations for intended uses…..It is particularly important, when
drawing inferences about an examinee’s skills or abilities, to take into account
the individual characteristics of the test taker and how these characteristics may
interact with the contextual features of the testing situation” (p. 53).

“However, group differences in outcomes do not in themselves indicate that a testing application is biased or unfair. In many cases, it is not clear whether the differences are due to real differences between groups in the construct being measured or to some source of bias” (p. 54).
163
Q

Statistical Regression

A

Internal Validity Threat

extreme scores tend to move toward the mean over time

A no-treatment control group helps mitigate this threat

To protect against it:

  • randomly assign participants to groups
  • use measures with high reliability and validity
  • test participants twice before the intervention and select those who are high on both testings (rarely done)
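Regression to the mean falls out of measurement error alone, which a small simulation can show: selecting people on extreme time-1 scores guarantees their time-2 mean drifts back toward the population mean even with no intervention. All parameter values and names below are illustrative assumptions.

```python
import random

def regression_to_mean(n=10000, n_selected=100, seed=1):
    """Classical-test-theory simulation: observed = true + random error.
    Select the highest scorers at time 1 and compare their mean to the
    same people's mean at time 2 (independent error each time)."""
    rng = random.Random(seed)
    true = [rng.gauss(0, 1) for _ in range(n)]
    time1 = [t + rng.gauss(0, 1) for t in true]
    time2 = [t + rng.gauss(0, 1) for t in true]
    # Indices of the most extreme (highest) time-1 scorers
    top = sorted(range(n), key=lambda i: time1[i])[-n_selected:]
    mean1 = sum(time1[i] for i in top) / n_selected
    mean2 = sum(time2[i] for i in top) / n_selected
    return mean1, mean2

m1, m2 = regression_to_mean()
# m2 sits closer to the population mean (0) than m1, with no treatment at all
```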

164
Q

Correlate

A

two variables are related but one variable does not precede the other

165
Q

Factorial Designs

A
  1. Factorial designs allow investigation of two or more independent variables at the
    same time, i.e. in one study. Each variable has two or more levels (or conditions).

A 2 × 2 factorial design, for example, is displayed as a four-box (four-cell) table.

166
Q

Multiple-Treatment Counterbalanced Design

A

Type of Multiple-Treatment Designs

Order:                1st  2nd  3rd  4th
Group (Sequence) 1:    A    B    C    D
Group (Sequence) 2:    B    A    D    C
Group (Sequence) 3:    C    D    A    B
Group (Sequence) 4:    D    C    B    A

167
Q

Test content that favors one group over the others

A

Threats to Fair and Valid Interpretations of Test Scores

That favors one group over others. For instance, a test should
not include words or expressions that would be more familiar to one
group than to others.

	 Or that is offensive or emotionally disturbing to some test takers.

	 It is common for test developers to have a diverse panel of experts 
	 review test content to avoid these problems.
168
Q

Types of validity

A

Construct
Content
Concurrent
Criterion
Convergent
Predictive
Incremental
Face
Discriminant

169
Q

Misreading or Misinterpreting the Data

A

Data-Evaluation Validity

The wrong statistical test was used to analyze the data, or the author's interpretation went beyond what the results showed

170
Q

Variations in Subject Tasks and Experience

A

Did the participants actually do what they were supposed to do?
Did the participants actually follow the instructions?

How to check this: Direct Observation, Self-report

Did the participants really experience the intended emotion/state?

How to check this: Self-report, Physiological measures
	e.g. could have participants rate how happy they feel on a 1-5 scale that goes
from “Not at All Happy” to “Very Happy.”
171
Q

Protected access to one’s records

A

Source of Protection

one’s right to control who sees information about her (e.g. concerning her physical and mental health and psychological problems) (HIPAA).

172
Q

Assessment during the course of treatment

A

Assessment during treatment means repeated evaluation of a client’s progress.

Such assessment has advantages:

Helps us understand “how and why changes occur….” so that “we can identify the critical ingredients and maximize change.”

“The study of mediators of change in therapy requires assessment during the course of treatment.”

Change occurs differently for different people. If one assesses only pre and post, one can miss when change occurs.

Example: therapeutic alliance.

“Ongoing assessment can establish the time line of proposed mediator and change….”
That is, when does the mediator change and when does the DV change?

173
Q

Control Groups and Treatments of Questionable Efficacy

A

Intervention Research Issues

In research on the effectiveness of psychotherapies, there is often a group that is not expected to improve. This situation raises ethical issues:

  1. Over the course of the study, people in a control group may stay the same or get worse.
  2. Participants in the group that does not receive an effective treatment may be discouraged from seeking psychotherapy in the future.

Unfortunately, research using control conditions that raise ethical issues is “fundamental to progress in understanding treatment.” To find out whether or not a treatment works, a no-treatment or waiting-list control group is necessary. To find out why a treatment works, a nonspecific treatment control group is necessary.

Some of the ethical issues can be addressed by providing treatment to participants in the control group after the study is over.

174
Q

Regulations, Ethical Guidelines, and Protection of Client Rights

A

Intervention Research Issues

Ethical guidelines are necessary because:

  1. “One cannot leave standards up to individual investigators.” They may see their own research as very important and/or may be under pressure to publish in order to keep their job.
  2. The difference in power between the researcher and the participants can lead to abuse of power by the researcher.
  3. No one person can really decide how to manage trade-offs and weigh risks and benefits in their own research.
  4. They can be revised to keep up with changes in the field (e.g. use of neuroimaging).

Ethical guidelines are needed “to help with decision making and to protect the interests of the client or participant.”

175
Q

Accelerated, Multi-Cohort Longitudinal Design

A

These designs include “cohorts who vary in age when they enter the study.” “The design is ‘accelerated’ because a longer period of development is covered by selecting cohorts at different periods and following them.”

For example, one could study the age range 5-14 years by selecting children at the ages of 5, 8, and 11, and following them for 4 years. It would take much longer to study a cohort of 5-year-olds until they were 14.

Advantages of this design: it takes much less time than following a single cohort, and it allows exploration of cohort effects

176
Q

Attention and Contact Accorded the Client

A

Construct Validity

The extent to which increased attention to or contact with the client that is associated with the intervention could plausibly explain the effects attributed to the intervention

An attention-placebo control group can help rule out this threat

177
Q

Reactivity of Assessment

A

External Validity Threat

Participants’ awareness that they are being assessed can alter their responses

Focuses on the measures used and other measurement procedures

178
Q

Content Validity

A

The extent to which the content of the items reflects the construct or domain of interest; the relation of the items to the concept underlying the measure

179
Q

Cultural Influences on Research Methods

A

can occur in various decisions that are made in research, specifically:

  1. Deciding what question to ask
  2. Selecting a research paradigm
  3. Deciding who will be in the sample
  4. Deciding how to measure the variables, i.e. operationalization
  5. Selecting a setting for the research
  6. Creating the procedure
  7. Data analysis
  8. Drawing conclusions

180
Q

Social Impact Measures

A

Measures to Evaluate the Clinical Significance of Change in Intervention Studies

Reflect change on a measure that is important in everyday life, e.g. “arrest, truancy, hospitalization….”

181
Q

Experimenter Expectancies

A

Construct Validity

Unintentional effects the experimenter may have that influence the subject’s responses in the experiment

182
Q

Risk Factor

A

A predictor of some later outcome

A correlation where we know that one variable (the risk factor) comes before the other (the outcome)

A risk factor is not necessarily a cause

Increases the likelihood of some outcome

183
Q

Mechanism

A

the steps or processes through which the intervention (or some IV) actually unfolds and produces the change

184
Q

True score

A

The purpose of the scale is to give us an idea of the magnitude of the latent variable in each person assessed. The actual magnitude is what we are trying to estimate by using a scale. This “actual magnitude” is called the true score.

someone’s score if there is no error
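
The idea can be sketched in code. Under classical test theory, each observed score X is the true score T plus random error E; if errors average to zero, repeated measurements converge on the true score. The numbers below are illustrative assumptions, not from the source.

```python
import random

random.seed(0)

# Classical test theory: observed score X = true score T + error E.
# If errors are random with mean zero, averaging many observed scores
# yields an estimate close to the true score.
true_score = 50.0  # hypothetical latent magnitude we want to estimate
observed = [true_score + random.gauss(0, 5) for _ in range(10_000)]
estimate = sum(observed) / len(observed)
print(round(estimate, 2))
```

With the fixed seed, the estimate lands very close to 50, illustrating that the true score is what the scale is trying to recover despite error in any single measurement.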

185
Q

Strengths of Case-Control Design

A
  1. “The designs are well suited to study conditions that are relatively infrequent,” for example, individuals diagnosed with DID.
  2. Less costly than following people over time, e.g., to see who develops a particular problem. Instead, people with and without the problem are compared.
  3. Since people are assessed at one point in time, attrition is not a problem. It is a problem when people are followed over time.
  4. These designs can go beyond showing that two variables are related and identify moderator variables.
  5. Can study variables that could not be studied experimentally (i.e. by assigning people to levels of the IV).
  6. Can match subjects on some variable(s).
  7. Can generate hypotheses about which variable caused which other variable.

186
Q

When and How Threats to Internal Validity Occur

A
  1. A study is poorly designed and many threats are possible
  2. A study is designed well but conducted sloppily
  3. A study is designed well but attrition occurs
  4. A study is designed well but the results do not allow the conclusion that treatment led to change

187
Q

Differential Item Functioning (DIF)

A

occurs “when equally able test takers differ in their probabilities of answering a test item correctly as a function of group membership” (p. 51).
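
A minimal sketch of this definition, assuming a Rasch-style (1PL) item with a hypothetical uniform DIF shift: two test takers with identical ability get different probabilities of answering the item correctly purely because of group membership. All parameter values here are illustrative assumptions.

```python
import math

def p_correct(theta, difficulty, group, dif_shift=0.8):
    # 1PL item response model; for the focal group (group=1) the item is
    # effectively harder by `dif_shift` -- that extra term is what produces DIF.
    return 1.0 / (1.0 + math.exp(-(theta - difficulty - dif_shift * group)))

theta = 0.0        # same ability for both test takers
difficulty = -0.2  # illustrative item difficulty
p_reference = p_correct(theta, difficulty, group=0)
p_focal = p_correct(theta, difficulty, group=1)
# Equal ability, unequal probabilities of a correct answer -> the item shows DIF.
print(round(p_reference, 2), round(p_focal, 2))
```

With no DIF shift (dif_shift=0), both probabilities would be identical; the gap exists only as a function of group membership, which is exactly the condition in the definition.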

188
Q

Single-Case Experimental Research Design Limitations

A

The biggest concern raised about single-case designs is that results cannot be generalized, i.e., external validity threat. However, when comparing single-case designs with between-group experimental research, it is important to consider:

  1. Interventions that were developed through single-case research “have wide generality…across many human populations (e.g., from infant to the elderly).”
  2. In between-group research, we usually do not know how many people in the sample actually improved. Usually findings are reported for each group only.
  3. Between-group research rarely uses random sampling from the population of interest, yet generality requires assurances that the sample represents the population.

The key to generality is replication, which occurs in single-case research when there is more than one participant.