Exam 4 Flashcards

(134 cards)

1
Q

effect of an individual IV alone (potential for one for each IV); looking at differences across the levels of one IV while ignoring the other IV

A

main effect

2
Q

effect of an IV differs at different levels of another IV (the effect we see for one IV depends on what's going on in a different IV)

A

interaction

3
Q

effect of an IV differs at just one level of a second IV; can make interpreting a main effect misleading; often uninterpretable / difficult to interpret; you still find an interaction and a main effect, but it is a simple main effect (i.e.: men have higher scores than women in the treatment condition, but nothing is different in the control condition)

A

simple main effect

4
Q

an IV whose levels are not tested on each participant

A

independent factor

5
Q

an IV whose levels are tested on each participant

A

repeated factor

6
Q

IVs where some are tested on each participant and some are not

A

mixed factors

7
Q

performance on the DV reaches a maximum; groups all perform about the same at the high end

A

ceiling effect

8
Q

performance on the DV reaches a minimum; groups all perform about the same at the low end

A

floor effect

9
Q

the study of behavior and mental processes across the lifespan using a scientific approach

A

psychology

10
Q

defined by empiricism and appropriate skepticism; produces facts

A

science

11
Q

an abstract concept that refers to ways in which questions are asked and the logic and methods used to gain answers; used by psychologists with empiricism and skepticism

A

scientific method

12
Q

claims based on evidence and evidence derived from observation and experimentation / emphasizes direct observation and experimentation as a way of answering questions; the most important characteristic of the scientific method; using this, psychologists focused on behaviors and experiences that could be observed directly

A

empiricism / empirical approach

13
Q

skeptical of all types of claims, especially personal anecdotes, experiences, and/or gut intuitions (but not to the point of ignoring when evidence converges)

A

“appropriate” skepticism

14
Q

the spirit of the times; the trend of the time; reflects how people are thinking; attitude toward different things

A

zeitgeist

15
Q

Western, Educated, Industrialized, Rich, and Democratic countries; where most of the participants in psychological research come from; this skews research findings, and therefore, we need to be cautious about our interpretations of findings

A

WEIRDos

16
Q

can occur when researchers fail to recognize when experiences and values of their own culture affect their interpretations of behavior observed in other cultures (eg: research involving Americans applied to other cultures leads to potential of this)

A

ethnocentric bias

17
Q

our natural tendency to seek evidence that’s consistent with our intuitions and ignore or deny contradictory evidence; selectively accepting evidence that confirms an already held belief and dismissing evidence that counters that belief; must try to disprove “facts,” which is where the null hypothesis comes in; influences the choices people make and motivates them to avoid info that challenges them, even when doing so causes them to be wrong

A

confirmation bias

18
Q

a concept or idea (intelligence, memory, depression, aggression, etc.); given meaning through an OD

A

construct

19
Q

explains a concept solely in terms of the observable procedures used to produce and measure it; facilitates communication

A

operational definition

20
Q

tentative explanation for a phenomenon; often stated in form of a prediction for some outcome along with an explanation for the prediction
(starts with a grasp of the existing research; offers a relationship between variables; must be testable/constructs adequately defined; is not circular; is falsifiable / ideas are recognized by science)

A

hypothesis (and what makes a good hypothesis)

21
Q

evidence from multiple studies or methods that points to the same or similar conclusion

A

converging evidence

22
Q

reviews psychological research to protect the rights and welfare of human participants; ensures that researchers protect participants from harm and safeguard participants’ rights; must be composed of at least 5 members with varying backgrounds and fields of expertise, both scientists and nonscientists must be represented, and there must be at least 1 member who isn’t affiliated with the institution; has the authority to approve, disapprove, or require modification of the research plan prior to its approval of the research; has the ethical responsibility to review research proposals fairly by considering the perspectives of researchers, the institution, and participants; is sponsored by the institution

A

Institutional Review Board (IRB)

23
Q

asks “Is it worth it?”; IRB members rely on a subjective evaluation of the risks and benefits, both to individual participants and to society, and ask “Are the ___ greater than the ___?”; research is approved when the benefits outweigh the risks

A

risk / benefit ratio

24
Q

harm/discomfort participants may experience is not greater than what they may experience in their daily lives / during routine physical or psychological tests;

A

minimal risk

25
a person's explicitly expressed willingness to participate in a research project based on a clear understanding of the nature of the research, of the consequences for not participating, and of all factors that might be expected to influence that person's willingness to participate; 1) there must be a reasonable effort to respond to questions about research, 2) the dignity of participants must be respected, and 3) individuals have to be allowed to withdraw at any time
informed consent
26
expressed willingness to participate; must be obtained from participants themselves whenever possible; this is especially important in studies dealing with vulnerable populations
assent
27
can occur through omission or commission; considered by some to be completely unethical bc it hurts the relationship between the researcher and the participant, may hurt the perception of psychology as a whole, and hurts society bc it leads to distrust of experts; others suggest it is a "technical illusion" that should be permitted in the interest of scientific inquiry and that psychologists should be allowed to suspend the moral principle in interest of science, especially in order to obtain information that would be impossible to get otherwise
deception
28
necessary to explain to participants the need for deception, to address any misconceptions they may have had about their participation, and to remove harmful effects resulting from the deception; also has the important goals of educating participants about the research (rationale, method, results) and of leaving them with positive feelings about their participation; provides opportunity for participant to learn about their specific performance, helps researchers to learn how participants viewed the study, and enables researchers to identify any problems in procedures and provide ideas for future research
debriefing
29
submitted manuscripts are reviewed by other researchers who are experts in the specific field of research addressed in the paper under review; these reviewers decide whether the research is methodologically sound and whether it makes a substantive contribution to the discipline of psychology; these reviews are then submitted to a senior researcher who serves as editor of the journal; editor decides which papers warrant publication; the primary method of quality control for published psychological research
peer review process
30
alphanumeric string that identifies the content and electronic location of an article or other information source found on the internet; usually found on the title page of a published article; should be included in references whenever it is available
digital object identifier (DOI)
31
structure of APA report
Title Page, Abstract, Introduction, Method, Results, Discussion, References, Footnotes, Tables and Figures, Appendices
32
research seeking to understand behavior and mental processes – “knowledge for its own sake”; at some point, becomes more ----; doesn’t always start out that way
basic research
33
examination of psychology principles and treatments in real-world settings (goal: to change lives for the better); can involve case studies
applied research
34
intensive description and analysis of a single case; not experimental; may include manipulation, but not control (no basis for comparison); usually qualitative, but can include empirical data; often exploratory (studying new ideas and rare phenomena); cannot directly test hypotheses / theories but can sometimes develop new ones and disprove old ones (it only takes one counterexample)
case studies
35
research that studies the individual / the more common approach that studies groups (averages); the first complements the second
idiographic / nomothetic approach
36
type of case study; not proof; at most, a single data point; even if true, we still can’t generalize
testimonial
37
apply an experimental design to a --- or a few individuals (manipulate an IV w/ one subject; examine changes in behavior (DV); control for other influences); AKA: Skinnerian analysis of behavior; small N designs; ---- ---- experiments (which name is used depends on the type of research, the journal, and the time in history); can establish causal inferences for that single subject or few individuals (just can’t generalize beyond the specific individual(s)); typically used when an immediate effect is anticipated (though could be used at other times w/ more difficulty); need a stable baseline of the DV to detect change
single-case experiments
38
needed to determine if intervention caused change; if not present, can’t tell if change is natural or because of intervention
stable baseline
39
return to baseline
reversal
40
establishes causality when behavior: 1) changes w/ first intervention 2) reverts to baseline or near it when the intervention is w/drawn 3) changes again w/ second intervention; no statistical analysis is typically done on it; works best when testing interventions w/ an expected immediate effect
ABAB design (reversal design)
41
compares the effect of an intervention on multiple baselines (across multiple subjects, behaviors, or settings); two or more (usually 3) baselines are established; introduce the intervention for each baseline one at a time (don’t look for a reversal but for one piece at a time to change); can be used in cases when reversal 1) does not occur 2) would be unethical
multiple baseline design
42
used to better understand a phenomenon w/out experimentation; can still have IVs and DVs that can be OD’d, but you don’t design an experiment; concerned with what’s associated with the phenomenon, which variables go together, and in what way the variables go together; not manipulating anything to establish causation
descriptive methods
43
manipulating IVs to examine impact on DVs; use control to try to establish causation
experimental methods
44
observing behavior while it occurs
direct observation
45
making observations indirectly as through examining evidence of past behavior using physical traces or archival records
indirect observation
46
direct observation of behavior in a ___ setting w/out any attempt by observer to intervene; observer acts as passive recorder of events as they occur ___ly
naturalistic observation
47
extent to which results of a research study can be generalized to different populations, settings, and conditions; established by examining the extent to which a study's findings may be used to accurately describe subjects, settings, and conditions beyond those used in the study
external validity
48
observers play a dual role; they observe people's behavior and they participate actively in situation they are observing; a) individuals who are being observed know the observer is present for purpose of collecting info about their behavior b) those who are being observed don't know they're being observed; helps with problem of reactivity
participant observation (a. undisguised, b. disguised)
49
occurs when people react to fact that they're being observed by changing their normal behaviors
reactivity
50
every action has an uncertain effect; aka the observer / experimenter effect; therefore, an observer, even if not actively engaging, has an effect on the environment, and measurements of a system can’t be made w/out affecting the system; our observations change the outcome and the behavior of those being observed
Heisenberg Uncertainty Principle
51
cues given by the experimenter / observer that cause the participant to act in the way the participant believes researcher wants him/her to act
demand characteristics
52
causes participants to change their normal behavior because they want to look “good” and present themselves in the “best possible light”
social desirability
53
setting up a situation to observe a specific event; putting something in place, but not quite an experiment bc we’re not manipulating IV; often the observer intervenes in order to cause an event to occur or to "set up" a situation so that events can be more easily recorded
structured observation
54
when experimenters manipulate 1 or more IVs in a field setting; both an experiment and observation w/ intervention because it is an experiment with application; procedure in which a researcher manipulates 1(+) IVs in a natural setting in order to determine the effect on behavior; most extreme form of intervention in observational methods
field experiment
55
converting observed behavior into quantitative data; the identification of units of behavior that are related to goals of study
coding
56
degree to which 2(+) independent observers agree; when observers disagree, we become uncertain about what's being measured and behaviors and events that actually occurred; determined using (# times 2 observers agree / # opportunities to agree) * 100%
interobserver / interrater reliability
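As a quick illustration of the agreement formula on the card above, here is a minimal Python sketch; the observer ratings and variable names are made up for the example:

    # Percentage agreement between two observers coding the same six observations.
    observer_a = ["hit", "miss", "hit", "hit", "miss", "hit"]
    observer_b = ["hit", "miss", "hit", "miss", "miss", "hit"]

    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    opportunities = len(observer_a)
    percent_agreement = agreements / opportunities * 100
    print(f"{percent_agreement:.1f}% agreement")   # 5 of 6 codes match -> 83.3%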
57
remnants, fragments, and products of past behavior; consists of "use traces" and "products"
physical traces
58
public and private documents describing activities of individuals, groups, institutions, and governments; includes running, episodic, natural treatments, and media
archival records
59
take data from various studies, put it together, and re-analyze it; with more info / data, we are able to better examine differences between groups and have better representation
meta-analysis
60
go back into a study and reanalyze data for a new purpose; all data from NIH becomes publicly available, and people go back in and re-analyze it; the researchers themselves didn’t necessarily collect the data
secondary data analysis
61
when all respondents complete the same items, verbally (interview) or in writing (questionnaire); used to obtain data about feelings, attitudes, preferences, symptoms, etc. of a specific pop of ppl; perhaps the most common of all methods of data collection in psych
survey
62
gives us an idea about the strength and direction of a relationship; allows us to predict (magnitude = strength of relationship [stronger = more linear]; ranges from -1.0 to +1.0; direction: positive = variables move in the same direction, negative = opposite directions)
correlation (magnitude/direction)
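A small Python sketch (illustrative numbers, not course data) showing how a correlation coefficient captures magnitude and direction; it assumes NumPy is available:

    import numpy as np

    # Illustrative data: hours studied and exam score for six students.
    hours = np.array([1, 2, 3, 4, 5, 6])
    score = np.array([55, 60, 58, 70, 72, 80])

    r = np.corrcoef(hours, score)[0, 1]   # Pearson's r, always between -1.0 and +1.0
    print(round(r, 2))   # positive here: the two variables move in the same direction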
63
set of all cases of interest
population
64
(exhibits distribution of characteristics in a population) subset of population actually drawn from sampling frame
(representative) sample
65
every element has an equal chance of being included in the sample
simple random sampling
66
pop is divided into sub pops / strata and random samples are drawn from each stratum
stratified random sampling
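A minimal Python sketch contrasting the two sampling cards above; the population, strata, and sample sizes are invented for illustration:

    import random

    population = [f"P{i:03d}" for i in range(1, 101)]   # 100 cases of interest

    # Simple random sampling: every element has an equal chance of being included.
    simple_sample = random.sample(population, k=10)

    # Stratified random sampling: divide into strata, then sample randomly within each.
    strata = {"stratum_1": population[:50], "stratum_2": population[50:]}
    stratified_sample = [p for group in strata.values() for p in random.sample(group, k=5)]

    print(simple_sample)
    print(stratified_sample)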
67
a sample chosen based on ease of collecting data from them
convenience sampling
68
when one or more samples are drawn from a population at one time; done all at once; a snapshot in time; pulling from different groups and comparing them at a single point in time; potential problems: 1) can’t assess change over time and 2) cohort effects (shared temporal / life experiences that impact one group and may not impact another)
cross-sectional design
69
same sample of respondents is surveyed more than once; advantage: can examine patterns of change for each individual over time; potential problems: 1) can be hard to identify causes of change 2) attrition (hard to keep participants engaged in the study over time) 3) questioning over time (participants may try to be consistent / inconsistent on purpose and may be more sensitive to the issue than the general population)
longitudinal design
70
most serious disadvantage of LD bc as samples decrease over time, they're less likely to represent original population from which sample was drawn; occurs when not all of the original respondents continuously respond
attrition
71
measure should yield similar results each time; person’s place in distribution should be basically same each time they’re tested (doesn’t mean they are going to get same score)
reliability
72
does measure obviously measure the construct?; measures what it should
validity
73
graphing / stem-and-leaf plot
best way to get to know data
74
mean = median = mode; symmetrical; percentages indicate the proportion of scores that fall within each part of the distribution; 68% of the data fall within 1 SD above or below the mean and 95% within 2 SDs
bell curve / normal distribution
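A small simulation sketch in Python (standard library only; the mean of 100 and SD of 15 are arbitrary choices) checking the 68% / 95% rule of thumb on this card:

    import random

    # Draw many scores from a normal distribution (mean 100, SD 15) and check
    # the rule of thumb: ~68% within 1 SD of the mean, ~95% within 2 SDs.
    random.seed(1)
    scores = [random.gauss(100, 15) for _ in range(100_000)]

    within_1sd = sum(85 <= x <= 115 for x in scores) / len(scores)
    within_2sd = sum(70 <= x <= 130 for x in scores) / len(scores)
    print(round(within_1sd, 3), round(within_2sd, 3))   # roughly 0.68 and 0.95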
75
the median is a better representation than the mean because the mean gets pulled toward the tail (positive: mean is pulled to the right, median is more to the left; the majority of values are on the left end of the curve and the right tail is longer / negative: mean is pulled to the left, median is more to the right; the majority of values are on the right end of the curve and the left tail is longer)
skewness (positive; negative)
76
measure of peakedness; any extreme value of __ means we can’t perform same calculations as if data were just skewed
kurtosis
77
pointy like witch’s hat; (thin or slender) (positive score);
leptokurtosis
78
middle or in-between, value is close to zero
mesokurtosis
79
flat (broad/flat); platypus has a flat beak (negative score)
platykurtosis
80
all three can be the same if data are normally distributed (mean: the mathematical average, μ (mu) for the population; see formula in book; add up all values and divide by the number of values / median: the middle value; more accurate if data are skewed / mode: the most common value(s))
measures of central tendency (mean, median, mode)
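A minimal Python sketch of the three measures, using a small made-up data set with one extreme value so the mean and median come apart:

    import statistics

    data = [2, 3, 3, 4, 5, 5, 5, 20]   # one extreme value pulls the mean toward the tail

    print(statistics.mean(data))     # mathematical average: 47 / 8 = 5.875
    print(statistics.median(data))   # middle value: (4 + 5) / 2 = 4.5
    print(statistics.mode(data))     # most common value: 5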
81
importance: means may be the same, but variability illustrates the difference between two groups (range: lowest to highest values; more common to report the min and max values instead / variance: every single value minus the mean, squared, summed, and divided by n-1; the population variance is sigma squared, the sample variance is s squared / standard deviation: square root of the variance)
measures of dispersion (range, variance, standard deviation)
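A companion Python sketch (same idea, made-up data) showing the range, the n-1 sample variance, and the standard deviation via the standard library:

    import statistics

    data = [4, 8, 6, 5, 3, 7]   # made-up scores

    data_range = (min(data), max(data))                  # more common to report min and max
    sample_variance = statistics.variance(data)          # s^2: squared deviations summed / (n - 1)
    sample_sd = statistics.stdev(data)                   # s: square root of the sample variance
    population_variance = statistics.pvariance(data)     # sigma^2: divides by n instead of n - 1

    print(data_range, round(sample_variance, 2), round(sample_sd, 2), round(population_variance, 2))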
82
often referred to as the margin of error, but really means that if we took sample after sample over time, the intervals would capture the true mean X% of the time (not that there is an X% chance we are right); the true parameter would be bracketed X% of the time
confidence interval
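A small simulation sketch in Python illustrating the interpretation on this card: build a 95% CI from sample after sample and count how often the true mean is bracketed. The 1.96 multiplier is a normal approximation, and all numbers are made up:

    import random
    import statistics

    random.seed(2)
    true_mean, true_sd, n = 50, 10, 30
    captured = 0
    n_samples = 2000

    for _ in range(n_samples):
        sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        lower, upper = m - 1.96 * se, m + 1.96 * se   # approximate 95% CI for the mean
        if lower <= true_mean <= upper:
            captured += 1

    print(captured / n_samples)   # close to 0.95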
83
measure, manipulate, measure; no control group; a bad choice because no control group = no definitive conclusions and all threats to internal validity may be present
one-group pretest-posttest
84
compare an intervention to a “like” control group but w/out randomization (ex: Langer and Rodin); if groups are truly comparable, it controls for many threats to internal validity; vulnerable to additive effects of selection (ex: selection-maturation--one group may naturally change more)
nonequivalent control group design
85
comparison of baseline before and after an intervention; requirements: 1) relatively stable baseline 2) abrupt discontinuity in the time series; better than pre-post because it uses many observations before (baseline) and after the intervention; good for one-time interventions where long-term effects are desired; no control group; threats to internal validity: 1) instrumentation (new measures w/ new programs) 2) history (something specific about that time); maturation, testing, and regression to the mean are controlled by obtaining a baseline
interrupted time series design
86
best option, if it will work; multiple pre and post tests; intervention plus control group; rules out history and instrumentation effects; still have selection effects; ex: cheating study, but w/ only one university
time series design with non-equivalent control group
87
uses manipulation and control, often in a lab setting and seeks to establish internal validity (covariation, time order relationship, eliminate confounds—randomization->only difference between groups is due to chance); control for confounds; in real world, still have problems w/ 1) obtaining permission from ppl in authority 2) obtaining participants (potential problems w/ self-selection) 3) randomization
"true" experiments
88
an alternative to a control group; won’t always work (medications, techniques); Group1: Treatment A and then B, Group2: Treatment B and then A
alternative treatments
89
event other than treatment produces change; solution: have a control group (event affects both groups, but program doesn’t)
effect of history (threat to internal validity)
90
people naturally change over time; solution: have appropriate control group (randomly assign to groups)
effect of maturation (threat to internal validity)
91
people grow better at test w/familiarity because of repeated testing; solution: space tests further apart; use different IQ measures; appropriate control groups
effect of testing (threat to internal validity)
92
mechanical: the instrument itself doesn’t work right (change in scores because things aren’t measured right); observer biases: interviewers get better / worse due to practice / boredom; solutions: ensure reliability and validity of instruments, test mechanical instruments, train observers, use appropriate control groups
effect of instrumentation (threat to internal validity)
93
extreme scores tend to move toward middle over time; can be mistaken for a treatment effect; solutions: don’t choose samples based on extreme scores on a pretest; use an appropriate control group
effect of regression to the mean (threat to internal validity)
94
participants are lost to experiment over time; final group may be systematically different in way that skews results; solutions: careful follow-up to keep participants; careful examination of initial sample and ending sample, statistical techniques for handling missing data
effect of attrition (threat to internal validity)
95
at the outset, differences exist in characteristics of groups; solution: randomization when possible
effect of selection (threat to internal validity)
96
a threat above applies to one group but not another and still skews results 1) selection w/ maturation 2) selection w/ history 3) selection and instrumentation
additive effects w/ selection (threat to internal validity)
97
when groups communicate; can lead to resentment, rivalry, and diffusion of treatment; solution: keep participant group separate
contamination
98
experimenter influences findings in expected direction; solution: double blind studies
experimenter expectancy
99
the newness of the treatment has an effect rather than the treatment itself (enthusiasm / disruption); solution: novelty effects tend to wear off over time, so delay data collection
novelty effects
100
behaviors change because someone is interested and paying attention; effect may not go away over time; we’re bad at putting in controls for this because it’s expensive and hard; solution: control group
Hawthorne effect
101
used to confirm whether IV has produced an effect in an experiment; used because of nature of control provided through random assignment
inferential statistics
102
an outcome is one that has only a small likelihood of occurring if the NH were true; the difference obtained in experiment is larger than would be expected if error variation (chance) were responsible for outcome
statistically significant
103
assumes that IV has no effect on DV / there's no difference between groups / there's no association between variables
null hypothesis
104
assumes IV does have effect on DV / there is difference between variables / there is association between variables
alternative hypothesis
105
NHST is a probability statistic, so we can't __ a hypothesis, only __ it
prove vs. support
106
probability of obtaining the observed effect if NH were true; finding a difference between groups if there's no real difference; probability of obtaining observed effect by chance
p-value
107
p-value threshold that needs to be crossed in order to reach statistical significance (.05 in psychology); chosen before we begin study to avoid experimenter bias; based on statistical probabilities
alpha level
108
1) NH is actually false and NH is rejected 2) NH is actually true and NH is rejected 3) NH is actually false and we fail to reject NH 4) NH is actually true and we fail to reject NH
1) hit 2) false positive 3) false negative 4) miss
109
false positive; rejecting the NH when the NH is really true; saying there’s an effect when there really isn’t; the most common cause of this is conducting too many statistical analyses; can be reduced with a stricter alpha level (p-value threshold); the most common problem in psychology today
Type I Error
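A small simulation sketch (assumes SciPy is installed; the data are random draws with no real group difference) showing why alpha = .05 implies roughly 5% false positives when the NH is true:

    import random
    from scipy import stats

    # Both "groups" come from the same population, so the NH is true by construction;
    # roughly 5% of tests still come out significant at alpha = .05 (Type I errors).
    random.seed(3)
    false_positives = 0
    n_tests = 2000

    for _ in range(n_tests):
        group_a = [random.gauss(0, 1) for _ in range(25)]
        group_b = [random.gauss(0, 1) for _ in range(25)]
        t_stat, p_value = stats.ttest_ind(group_a, group_b)
        if p_value < 0.05:
            false_positives += 1

    print(false_positives / n_tests)   # close to 0.05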
110
false negative; failing to reject NH when it's actually false; usually caused by a small sample size; something is there and we don't see it
Type II Error
111
an index of strength of relationship between variables / differences between groups; is mostly independent of sample size; many types for different statistical analyses; most common are Cohen's d and Pearson's r
effect size
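One common formulation of Cohen's d (the pooled-SD version; other variants exist) as a minimal Python sketch with made-up groups:

    import statistics

    # Cohen's d for two independent groups: mean difference / pooled standard deviation.
    def cohens_d(group_1, group_2):
        n1, n2 = len(group_1), len(group_2)
        m1, m2 = statistics.mean(group_1), statistics.mean(group_2)
        v1, v2 = statistics.variance(group_1), statistics.variance(group_2)
        pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
        return (m1 - m2) / pooled_sd

    treatment = [12, 14, 15, 13, 16, 15]   # made-up scores
    control = [10, 11, 13, 12, 11, 12]
    print(round(cohens_d(treatment, control), 2))   # size of the effect, mostly independent of n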
112
probability that the NH will be correctly rejected when it is false (a hit); the ability to detect statistically significant effects; 1 - Type II error rate; want it to be .80 or greater; determined by significance level, effect size, and sample size; used to determine sample size
power
113
the way we determine sample size; tells you appropriate sample size you'll need for effect size to reach that level of statistical significance; based on 1) type of analysis to be conducted 2) estimated ES and 3) p-value / alpha level
power analysis
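A sketch of an a priori power analysis for an independent-groups t-test; this assumes the statsmodels package is available, and the effect size of d = 0.5, alpha of .05, and desired power of .80 are illustrative values:

    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,   # estimated Cohen's d
        alpha=0.05,        # alpha level chosen before the study
        power=0.80,        # desired power (probability of a hit)
    )
    print(round(n_per_group))   # participants needed per group (about 64 here)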
114
refers to the research design; the ability to detect an effect of the IV even if the effect is small; the likelihood that the design will detect an effect of the IV if the IV does have an effect
sensitivity
115
we are publishing results that are false hits (Type I errors); possible causes: 1) incentive structure 2) publication bias 3) confirmation bias
crisis in psychology and possible causes
116
involves manipulating variables and the researcher(s) having control; the goal is to establish causal relationships; research study that allows us to infer causality through manipulation and control
experimental method
117
the variable that gets manipulated
independent variable
118
the variable that is measured
dependent variable
119
degree to which diffs in DV can be attributed to IV vs another variable; extent to which you can make causal inference based on experiment; usually comes at cost to external validity; threatened by confounds, intact groups, attrition, participant reactivity, experimenter effects, and lack of appropriate controls
internal validity
120
requires 3 things
causality
121
DV value differs at different levels of IV; experimental and control score differently
covariation
122
temporal precedence; IV before DV
time-order relationship
123
confounds; any variable other than IV that could be affecting changes to DV
confounding variables / extraneous variables
124
used to give control to an experiment; established by random assignment because it averages individual differences across conditions; enables us to rule out alternative explanations due to any differences among participants
balanced groups / samples
125
groups are formed so they are similar on all important characteristics at the start of the experiment; used to account for individual differences among participants; used in a random groups design; most effective way to design an independent groups design
random assignment
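A minimal Python sketch of random assignment by shuffling a participant list; the participant labels and group sizes are invented:

    import random

    # Shuffle the participant list, then split it into conditions so individual
    # differences are averaged (balanced) across groups by chance alone.
    participants = [f"P{i:02d}" for i in range(1, 21)]
    random.shuffle(participants)

    treatment_group = participants[:10]
    control_group = participants[10:]
    print(treatment_group)
    print(control_group)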
126
used when trying to determine if one or more IVs have an effect on DV, examines “between group” differences for 2 or more groups; independent variables -> multiple groups; experimental aka treatment group(s) and control group(s); separate group of participants for each level of an IV (it’s not the same person in any of the levels); analysis: independent samples t-test (IV with 2 levels) or factorial ANOVA (IV w/ 3 or more levels)
independent groups (between subjects)
127
used to analyze data from an independent groups design when there are only two conditions
independent samples t-test (between groups)
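A minimal sketch of the test itself, assuming SciPy is available and using made-up scores for two separate groups:

    from scipy import stats

    # Two separate groups of participants, one score each (made-up data).
    treatment = [14, 16, 13, 17, 15, 18, 16, 15]
    control = [12, 11, 13, 12, 14, 11, 13, 12]

    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(round(t_stat, 2), round(p_value, 4))   # compare p to the alpha level (.05)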
128
used to analyze data from an independent groups design when there are more than two groups
factorial ANOVA (f-test)
129
analyze w/in group effects (tests a single group across multiple conditions); each participant completes all levels/conditions of IV; participants serve as own controls; requires fewer participants; increases sensitivity of research design, which gives us more power; equivalent of having a larger sample with fewer participants; small diffs between conditions are easier to detect bc other extraneous variables are balanced (since there is usually more variation between people than within people (therefore, less error variation)); may be necessary for study; problem is practice effects; analysis: paired samples t-test (2 conditions) or repeated measures ANOVA (more than two groups)
repeated measures (within subjects)
130
changes in performance over time bc of learning the task and boredom / fatigue; practice effects can go in either direction; may get better as they learn task or get worse bc of fatigue; time is a confound; solve problem by counterbalancing
practice effects
131
balance the order of conditions to average out practice effects (complete: all conditions administered to each participant several times, using different orders each time; controls for practice effects for every participant so we can interpret data for any single participant; two types: ABBA and block randomization / incomplete: all conditions administered to each participant only once; each condition must appear in each ordinal position equally often across participants; data for a single participant are confounded by practice effects, but practice effects are eliminated for the group; three types: all possible orders, Latin squares, and random starting order w/ rotation)
counterbalance (complete / incomplete)
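A small Python sketch generating counterbalanced orders for two of the schemes named on this card (all possible orders, and a random starting order with rotation); the condition labels are arbitrary:

    import itertools
    import random

    conditions = ["A", "B", "C"]

    # Incomplete counterbalancing, "all possible orders": each participant gets one order.
    all_orders = list(itertools.permutations(conditions))   # 6 orders for 3 conditions

    # Incomplete counterbalancing, "random starting order with rotation":
    # each condition ends up in each ordinal position once across the set of orders.
    start = list(conditions)
    random.shuffle(start)
    rotated_orders = [start[i:] + start[:i] for i in range(len(start))]

    # Complete counterbalancing for a 2-condition study: ABBA within a single participant.
    abba_sequence = ["A", "B", "B", "A"]

    print(all_orders)
    print(rotated_orders)
    print(abba_sequence)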
132
occurs with counterbalancing in RMD; performance in one condition is dependent on condition that precedes it; solution is IGD; a comparison of RMD and IGD will indicate if --- is happening; we can also look at effect of first condition across participants (treat all As as one group, all Bs as another, etc as if they were IGD) and compare that way
differential transfer
133
used to analyze data from a repeated measures design when there are only two conditions
paired samples t-test (within subjects)
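A minimal sketch of the paired test, assuming SciPy is available and using made-up scores from the same participants under two conditions:

    from scipy import stats

    # The same participants measured under both conditions (made-up data).
    condition_a = [22, 25, 21, 24, 26, 23]
    condition_b = [20, 24, 20, 22, 25, 21]

    t_stat, p_value = stats.ttest_rel(condition_a, condition_b)
    print(round(t_stat, 2), round(p_value, 4))   # compare p to the alpha level (.05)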
134
used to analyze data from a repeated measures design when there are more than two groups
repeated measures ANOVA (f-test)