412 Research Flashcards

1
Q

what types of measures are there and which one do developmental psychopathologists use?

A
  • psychopathology/psychological Sx (interviews, rating scales, observation) *used by developmental psychopathologists
  • predictors, correlates, consequences of psychopathological Sx
  • Behavioural measures (rating scales, observations)
  • physiological (heart rate, skin conductance)
  • neural (EEG, MRI)
  • cognitive (tasks measuring memory, attention)
2
Q

why measure psychopathology?

A
  • clinical (diagnosis, Tx planning, Tx monitoring/progress)
  • research (etiology, correlates, course, treatment planning for various disorders, causal factors, mediators, moderators)
3
Q

unstructured interviews

A
  • clinicians rely on experience and intuition to arrive at a diagnosis by asking relevant questions
  • frequently used
  • less comprehensive (only asking questions that are relevant to the presenting problem, so may miss co-occurring problems or patient history)
  • potential for confirmatory biases, availability biases (basing decisions on examples that come to mind easily) so may miss diagnoses that would arise from other questions
  • combine information in idiosyncratic ways (not standardized or reliable)
  • client might only give information that is relevant to the questions they’re asked without elaborating
4
Q

semi-structured interviews

A
  • interviewer has a validated and systematically ordered set of questions to be presented to the client in order to make a comprehensive diagnosis
  • clinician has flexibility in asking questions (can follow-up with things they consider to be important)
  • clinical judgment is involved in determining when a symptom is present
  • requires training to administer; the interview’s length makes it less feasible (less widely used despite being the gold standard), but it is still more reliable and valid than unstructured interviews
  • a diagnosis is made by totaling the number of symptoms endorsed
  • requires lots of data reduction (analysis or recoding of narrative responses)
5
Q

structured interviews

A
  • questions are very fixed and interviewer has very little flexibility (can be administered by computer)
  • requires lots of data reduction (analysis or recoding of narrative responses)
6
Q

Kiddie Schedule for Affective Disorders and Schizophrenia (K-SADS)

A
  • type of semi-structured interview
  • good coverage of many sorts of disorders
  • starts with a screener that tells you what to follow-up on (not everyone completes the entire interview)
  • questions correspond to DSM criteria, potential follow-ups, and rating scale
  • possible to ‘skip out’ of a section if participants aren’t endorsing the questions
7
Q

rating scales

A
  • people knowledgeable about the child answer questions about behaviours and feelings (parents, siblings, teachers, child)
  • often used to measure psychopathology continuously (number of symptoms), but can also be used to make a categorical decision
  • shorter than interviews, but not as comprehensive (potential tradeoff between validity/reliability of interviews and feasibility of checklists)
  • generally use self-report in conjunction with interviews for a comprehensive assessment (elevation on a rating scale alone does not equal diagnosis)
  • efficient way to track treatment progress
  • don’t require much data reduction
8
Q

observation

A
  • self-report interviews/rating scales rely on reporters who may not know what Bx is normal or clinically concerning
  • observation provides access to the circumstances in which the Bx occurs
  • naturalistic or structured (lab)
  • not always feasible
  • challenge to external validity (presence of an observer can change Bx)
  • may be difficult to see Bx of interest (low base-rate like physical aggression, covert like relational aggression)
  • data reduction depends on the complexity of the observation system
9
Q

what does a thorough assessment for ADHD look like

A
  • IQ testing
  • academic achievement testing (reading, writing, math) to rule out learning difficulties
  • ADHD rating scales from teachers, parents, and self-report
  • semi-structured clinical interview (K-SADS) with parents and child
10
Q

disagreement among informants

A
  • often do not agree (correlations from .2 to .4)
  • behaviour changes based on context (situations in the environment may elicit certain symptoms - different demands)
  • parents might have a response bias (parents have their own lived experiences and personality - may interpret their child’s hyperactivity as normal because they’re hyperactive too)
  • there could be legitimate differences in the meaning of Bx across settings
  • teachers are exposed to a larger sample of children, so they can distinguish common vs. uncommon Bx
11
Q

how to combine data from multiple informants

A
  • “or” rule: symptom is present if any informant says it is (will increase the number of symptoms endorsed)
  • “and” rule: symptom is present if all informants agree it is (will decrease the number of symptoms endorsed)
  • the rule you use depends on the presentation, how valid you feel the reports are (quality of reports), and in what situation you think the Bx would present itself (for uncommon Sx you might want to use the “and” rule to be able to endorse it); see the sketch after this list
  • inherently simplistic to combine reports; sometimes they should be evaluated separately because the discrepancies tell us something important about functioning in different contexts (for PDD, you would want to see that the Bx manifested in multiple situations)
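
A minimal sketch of the two combination rules in Python, assuming binary symptom reports coded as booleans (the informants and data below are hypothetical):

    # "or"/"and" combination rules over hypothetical informant reports,
    # each coded True (symptom endorsed) or False.
    reports = {
        "parent":  {"inattention": True,  "hyperactivity": False},
        "teacher": {"inattention": True,  "hyperactivity": True},
        "child":   {"inattention": False, "hyperactivity": False},
    }
    symptoms = ["inattention", "hyperactivity"]

    # "or" rule: present if ANY informant endorses it (inflates counts)
    or_rule = {s: any(r[s] for r in reports.values()) for s in symptoms}
    # "and" rule: present if ALL informants endorse it (deflates counts)
    and_rule = {s: all(r[s] for r in reports.values()) for s in symptoms}

    print(or_rule)   # {'inattention': True, 'hyperactivity': True}
    print(and_rule)  # {'inattention': False, 'hyperactivity': False}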
12
Q

reliability and validity

A
  • reliability: consistency
  • validity: are we measuring what we think we’re measuring
  • reliability is necessary for validity (you can’t have a valid measure that isn’t reliable, but you can have a reliable measure that isn’t valid)
13
Q

test-retest reliability

A
  • do we get the same answers on different measurement occasions
  • some constructs should vary over time
14
Q

inter-rater reliability

A
  • agreement between two people judging whether a construct is present (like diagnosis)
  • important for clinical interviews and observational measures like the Strange Situation; a minimal kappa sketch follows
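
Inter-rater agreement on categorical judgments is commonly summarized with Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch in Python, with hypothetical ratings:

    # Cohen's kappa for two raters making a binary judgment
    # (e.g., diagnosis present/absent). Ratings are hypothetical.
    from collections import Counter

    rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
    n = len(rater1)

    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # 0.80

    # Chance agreement from each rater's marginal category rates
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))

    kappa = (p_observed - p_chance) / (1 - p_chance)
    print(round(kappa, 2))  # 0.58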
15
Q

parallel-form reliability

A
  • how associated are two similar versions of the same test
  • WAIS vs. Raven’s should result in similar IQ scores; both tests should be similar in difficulty
16
Q

split-half reliability

A
  • correlation between two halves of the same test (score on the first half of the scale vs. the second half); see the sketch below
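
Because each half is only half the test’s length, the half-to-half correlation understates full-test reliability; it is typically stepped up with the Spearman–Brown formula, r_full = 2r / (1 + r). A sketch with hypothetical item scores:

    # Split-half reliability with the Spearman-Brown step-up.
    # Item data are hypothetical (rows = respondents, columns = items).
    import numpy as np

    items = np.array([
        [3, 4, 3, 4, 4, 3],
        [1, 2, 1, 1, 2, 2],
        [4, 4, 5, 4, 5, 4],
        [2, 2, 3, 2, 2, 3],
    ])

    first_half = items[:, :3].sum(axis=1)   # in practice, often odd items
    second_half = items[:, 3:].sum(axis=1)  # ...and even items

    r_halves = np.corrcoef(first_half, second_half)[0, 1]
    r_full = 2 * r_halves / (1 + r_halves)  # Spearman-Brown step-up
    print(round(r_halves, 2), round(r_full, 2))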
17
Q

reliability: consistency vs. stability

A
  • consistency: taking a measure once, reports within that measure should be consistent
  • stability: involves one person taking multiple tests (across time or across various versions)
18
Q

convergent validity

A
  • scores on a new measure should be correlated with other well-established measures/indicators of the same construct
19
Q

discriminant validity

A
  • scores on a new measure shouldn’t be correlated with measures/indicators of a different construct
  • measures of different constructs might correlate somewhat (e.g., due to comorbidity), but they should not be highly correlated
20
Q

face validity

A
  • a measure appears to measure what it’s supposed to
21
Q

measurement invariance

A
  • fairness of a measure (biased?)
  • people in different groups with similar abilities should score similarly across items on a test
  • if a measure is systematically assigning higher/lower scores to one group, it has measurement non-invariance
  • if there is bias, we cannot compare across groups; a toy simulation follows
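
A toy simulation of a biased item, assuming two groups with identical underlying ability but one item that is systematically easier for one group (all data simulated):

    # Item bias (non-invariance): same ability distribution in both groups,
    # but one item is shifted easier for group A. All data are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    ability = rng.normal(0, 1, 1000)  # same ability in each group

    def item_pass(ability, bias):
        # pass probability rises with ability; `bias` shifts item difficulty
        p = 1 / (1 + np.exp(-(ability + bias)))
        return (rng.random(len(ability)) < p).astype(int)

    group_a = item_pass(ability, bias=0.8)  # biased: easier for group A
    group_b = item_pass(ability, bias=0.0)  # unbiased reference

    # Equal ability, unequal item scores: comparing groups on this item
    # (or on a total score that includes it) would be misleading.
    print(group_a.mean(), group_b.mean())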
22
Q

cross-sectional design

A
  • taking one snapshot/study at one moment in time and comparing people within that study (across different ages within one study)
  • easy to implement, but we can’t learn how people change over time
  • age effects are confounded with cohort effects (differences between birth cohorts that act as a confound)
23
Q

longitudinal design

A
  • studying the same subjects over time (within-subject comparisons without cohort effects - how did people change over time)
  • drawbacks: subjects drop out, effects of repeated testing (familiarity with content), requires foresight and funding, time consuming
  • age effects confounded with time-of-measurement effects (the particular span of time during which you measured had an effect on the people you’re studying)
24
Q

sequential design

A
  • combined longitudinal and cross-sectional (e.g., recruiting a cohort of 7-year-olds in 2007, another cohort of 7-year-olds in 2008, etc., and measuring each over time)
  • disentangles age effects from cohort effects and time-of-measurement effects
  • very time-consuming, complex, and expensive; a sketch of the design grid follows
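
A sketch of the resulting cohort-sequential grid, assuming cohorts recruited at age 7 in successive years and retested annually (birth years and waves are hypothetical):

    # Cohort-sequential grid: age, cohort, and time of measurement are
    # linked (age = measurement year - birth year), which is why a single
    # cross-section or a single cohort cannot separate their effects.
    cohorts = [2000, 2001, 2002]   # hypothetical birth years
    waves = range(2007, 2011)      # hypothetical measurement years

    for birth_year in cohorts:
        row = [f"{year}: age {year - birth_year}"
               for year in waves if year - birth_year >= 7]
        print(f"cohort {birth_year}:", row)
    # Each age now appears in more than one cohort and measurement year,
    # so age effects can be compared across cohorts and across years.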
25
Q

correlational designs

A
  • cross-sectional
  • longitudinal
  • sequential
26
Q

criteria for well-established treatments

A
  • a large series (at least 9) of single-case design experiments demonstrating efficacy OR
  • at least 2 between-group design experiments
  • the above are very old criteria that have been critiqued
  • shifting toward a systematic review of the literature followed by a committee reviewing the evidence
27
Q

single-case experimental design

A
  • examine the effect of treatment on a single child’s behaviour
  • repeated measures of behaviour
  • replication of treatment effects
  • ABAB reversal design: baseline - intervention - return to baseline - reintroduce intervention
  • good internal validity, temporal ordering, causality (A changes B)
  • lacks external validity (single case generalization?)
  • difficult to interpret (second baseline isn’t the same - is that because of the treatment or changes over time?)
  • ethics: should we remove a treatment that appears to be working? we want to implement it in as many populations as possible (toy ABAB data are sketched below)
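
Toy ABAB data, assuming repeated counts of a target behaviour per session; replicated shifts at each phase change are what support a treatment effect (all numbers hypothetical):

    # ABAB reversal design: baseline (A1), intervention (B1), return to
    # baseline (A2), reintroduced intervention (B2). Counts are hypothetical.
    phases = {
        "A1 (baseline)":     [9, 8, 10, 9],
        "B1 (intervention)": [5, 4, 3, 4],
        "A2 (baseline)":     [8, 7, 8, 9],   # Bx drifts back toward baseline
        "B2 (intervention)": [4, 3, 3, 2],
    }

    # Lower means in both B phases replicate the effect within the case
    for phase, counts in phases.items():
        print(phase, sum(counts) / len(counts))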
28
Q

randomized control/clinical trial (RCT)

A
  • therapy experiment with experimental and control conditions and random assignment
  • test of intervention efficacy and test of theory (can help establish causes)
  • internal validity: is the intervention causing the change in outcome?
  • construct validity: what about the intervention is causing the change in outcome?
  • type of control group impacts conclusions (no-treatment/waitlist, attention-only, treatment as usual, another effective treatment); a random-assignment sketch follows
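
A minimal random-assignment sketch, assuming a simple two-arm trial (participant IDs are hypothetical):

    # Random assignment for a two-arm trial. Randomization makes the arms
    # comparable on average (on known and unknown confounds alike), which
    # is what licenses a causal reading of outcome differences.
    import random

    random.seed(42)
    participants = list(range(1, 21))  # 20 hypothetical participants
    random.shuffle(participants)

    treatment = participants[:10]
    control = participants[10:]        # e.g., waitlist or attention-only
    print("treatment:", sorted(treatment))
    print("control:  ", sorted(control))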
29
Q

RCT disadvantages

A
  • external validity (may not work in a real-life clinic - effectiveness?)
  • clinical trials often use WEIRD samples, people without comorbid disorders (external validity threat)
  • context in which therapy is occurring
  • efficiency: does it contribute to a more efficient use of resources?
  • dropout rates (low SES and higher severity make it difficult to remain in the trial)
  • differential attrition
  • uses averages: within the treatment group (even if it shows change), some individuals won’t have improved
30
Q

WEIRD sample

A

Western, Educated, Industrialized, Rich, Democratic

31
Q

differential attrition

A

dropout rates differ systematically between the intervention and the control group

32
Q

nosology

A

classification of disease; the organization of Bx and emotional dysfunction into meaningful groupings

33
Q

categorical classification

A
  • someone with the disorder is fundamentally different from someone without it
  • DSM: you meet criteria or you don’t (also has dimensional classifications for severity)
  • different groups; separate distributions/populations = two modes
34
Q

DSM-5

A
  • outlines diagnoses and associated criteria
  • categorical system based on professional consensus
  • based on a medical model in which separate disorders have separate causes
  • labels help synthesize information and aid communication
  • people don’t always fit into categories cleanly (spillover “unspecified” categories help capture this)
  • cutoffs may miss people who have severe impairment but, falling below threshold, won’t have access to accommodations
  • current categories may be inadequate for genetic and neuroscience research
35
Q

dimensional classification

A
  • everyone has certain levels of everything, some people may be at higher degrees = more impairment
  • often use continuous measures in research (not just interested in diagnoses, but in symptoms)
  • allows us to preserve valuable information, provides a measure of severity (could use that as a cutoff)
  • but how do we know which dimensions to include (like in RDoC)?
36
Q

Research Domain Criteria (RDoC)

A
  • assessing based on key dimensions instead of diagnostic categories
  • domains/constructs + subconstructs: characteristics of brain functioning
  • units of analysis: ways to measure brain function (genes, molecules, cells, circuits, physiology, behaviour, self-report, paradigms)
37
Q

domains and subconstructs in RDoC

A
  • negative valence system: response to aversive situations (fear, anxiety, loss)
  • positive valence system: positively motivational situations (reward seeking, reward learning, habits, etc.)
  • cognitive systems: memory, attention, language
  • social processes: responses to interpersonal settings (perception and interpretation of people’s actions)
  • arousal and regulatory systems: activation of neural systems as necessary for the situation (homeostatic regulation, circadian rhythms, sleep)
38
Q

internal consistency

A
  • if a measure is reliable, the answers to items within a measure should be related to each other; a common index is Cronbach’s alpha (sketched below)
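
Cronbach’s alpha is computed as α = k/(k−1) × (1 − Σ item variances / variance of total scores). A sketch with hypothetical item data:

    # Cronbach's alpha over hypothetical item responses
    # (rows = respondents, columns = items).
    import numpy as np

    items = np.array([
        [3, 4, 3, 4],
        [1, 2, 1, 1],
        [4, 4, 5, 4],
        [2, 2, 3, 2],
        [5, 4, 4, 5],
    ])

    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)

    alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
    print(round(alpha, 2))  # values near 1 = internally consistent items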