final exam Flashcards

1
Q

Why can’t we say that we have “proven” anything?

A

because, strictly speaking, "proof" refers to the result of a logical deduction. In this rigorous sense, scientific theories can never be proven; they can only be confirmed by the weight of the evidence.

2
Q

can research explain all cases?

A

no, because research is probabilistic, but can explain a portion of the cases

3
Q

research is done on a sample, so… there is always some error

A

sampling error

4
Q

there is a small chance that we made an error, so we set the probability of that error to p

A

statistics

5
Q

something that varies/changes

A

variable

6
Q

the factor in an experiment that researchers deliberately manipulate (change) so they can determine its effect; also called the manipulated variable

A

independent variable

7
Q

the reaction to the independent variable changing

A

dependent variable

8
Q

does not change (only has one level) or is kept the same

A

constant

9
Q

carefully define the concept at the theoretical level (its conceptual definition)

A

conceptual variable = construct

10
Q

a variable of interest, stated at an abstract level, usually defined as part of a formal statement of a psychological theory

A

construct

11
Q

to turn a conceptual definition of a variable into a specific measured variable or manipulated variable in order to conduct a research study

A

operationalize

12
Q

reasonable, accurate, justifiable

A

validity

13
Q

how consistent your results are

A

reliability

14
Q

you are ONLY able to make causal claims with a…

A

true experiment

15
Q

what 3 criteria must you adhere to have a true experiment?

A

a random sample, random assignment, and an IV with at least 2 levels

16
Q

why do researchers use random assignment to treatment groups?

A

increases internal validity

17
Q

used only in experimental designs to assign participants to groups at random (increases internal validity)

A

random assignment

18
Q

everybody has equal chance of being chosen (increases external validity)

A

random selection
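
To make the two concepts concrete, here is a minimal Python sketch; the population size, sample size, and condition labels are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sampling frame of 10,000 people (IDs 0-9999)
population = np.arange(10_000)

# Random SELECTION: every member of the population has an equal
# chance of being chosen for the sample (boosts external validity).
sample = rng.choice(population, size=100, replace=False)

# Random ASSIGNMENT: each selected participant has an equal chance
# of ending up in either level of the IV (boosts internal validity).
conditions = rng.permutation(["treatment"] * 50 + ["control"] * 50)

for person, condition in zip(sample[:5], conditions[:5]):
    print(f"participant {person} -> {condition}")
```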

19
Q

A “false positive” result from a statistical inference process, in which researchers conclude that there is an effect in a population when there really is none

A

Type I error

20
Q

a “miss” in the statistical inference process, in which researchers conclude that there is no effect in a population when there really is one

A

Type II error

21
Q

a variable of interest, stated at an abstract, or conversational, level

A

conceptual variable

22
Q

what are three common measures?

A
  • self-report
  • observational measures
  • physiological measures
23
Q

concepts can be operationalized in lots of different ways so…

A

it is a good idea to use more than one operationalization (measure) of the concept and see whether they correlate

24
Q

why are scatterplots used?

A

to plot pairs of scores collected after testing (e.g., IQ scores) so the relationship between two measures can be seen

25
how can we use correlation coefficient "r" to quantify reliability?
the sign of r gives the direction of the relationship; its magnitude gives the strength of the relationship (e.g., between two administrations of the same measure)
26
how do we interpret "r"?
r ranges from −1.0 to +1.0; values near ±1.0 indicate a strong relationship and values near 0 indicate little or no relationship
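A small Python sketch, using invented test-retest scores, of how r quantifies reliability and how its sign and magnitude are read:

```python
import numpy as np

# Invented scores from the same 8 people tested twice (test-retest)
time1 = np.array([98, 105, 110, 87, 120, 95, 102, 115])
time2 = np.array([101, 103, 112, 90, 118, 97, 99, 117])

# Pearson's r: the sign gives the direction of the relationship,
# the magnitude (0 to 1) gives its strength.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # a value near +1.0 indicates high reliability
```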
27
how do validity and reliability relate to one another?
a measure can be reliable but not valid; however, a measure cannot be valid if it is not also reliable
28
what are the different types of validity?
face, content, criterion (known-groups evidence), convergent, and discriminant
29
how can question order impact someone's answers? what can be done about it?
questions asked earlier in the survey may influence the way a person answers a question later in the survey; to address it, prepare different versions of the survey with the questions in different orders
30
how can you get people to respond honestly and/or accurately?
conduct pilot tests or focus groups early in the survey-development process
31
a shortcut respondents may use to answer items in a long survey, rather than responding to the content of each item
response set
32
examples of response sets
nondifferentiation, acquiescence (yea-saying), nay-saying, fence-sitting
33
observer bias: how can it be addressed?
systematic errors in observation that occur because of an observer's expectations; address it with masked (blind) designs and multiple well-trained observers
34
what are observer effects? (also known as expectancy effects)?
the researcher subtly communicates to participants (or animal subjects) how they should behave (e.g., "bright" vs. "dull" maze rats, Clever Hans)
35
How do we prevent observer bias/expectancy?
use masked or blind designs; video- or audio-record sessions; use more than one observer and assess inter-rater reliability; make sure the coding/observing system is well thought out; train observers well
36
What can be done about reactivity?
unobtrusive observations and deception
37
extent to which research results apply to a range of individuals not included in the study
generalizability
38
extent to which we can generalize findings to real-world settings
external validity
39
why are generalizability and external validity important for frequency claims?
frequency claims need a random sample; when a random sample can't be used, assess whether potential bias will have an impact on the results; causal and association claims make external validity a lower priority
40
population vs. sample
the whole set of people of interest vs. the part of the population actually studied
41
simple random samples, cluster samples, multistage samples, stratified random samples, oversampling, systematic sampling, and combinations of these techniques (with weighting)
probability sampling
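A brief Python sketch of three of these probability-sampling techniques, using an invented sampling frame and strata:

```python
import numpy as np

rng = np.random.default_rng(1)
population = np.arange(1_000)        # hypothetical sampling frame
strata = np.repeat(["A", "B"], 500)  # hypothetical subgroups

# Simple random sample: every unit has an equal chance of selection.
simple = rng.choice(population, size=50, replace=False)

# Stratified random sample: draw randomly within each stratum.
stratified = np.concatenate([
    rng.choice(population[strata == s], size=25, replace=False)
    for s in ["A", "B"]
])

# Systematic sample: random start, then every k-th unit.
k = 20
start = rng.integers(0, k)
systematic = population[start::k]

print(len(simple), len(stratified), len(systematic))  # 50 50 50
```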
42
convenience sampling, purposive sampling, snowball, and quota
non-probability sampling
43
what are the three criteria for a causal claim?
1. covariance 2. temporal precedence 3. internal validity
44
causal claims can only be based on what?
true experiments
45
bivariate correlation
the correlation calculated between exactly two variables
46
argues that one level of a variable is likely to be associated with a particular level of another variable
association claim
47
random assignment to treatment, random selection of a sample, and an IV with at least 2 levels
criteria to be considered an experiment
48
why can't we talk about cause and effect based on non-experimental studies?
because causal claims can only be made from, or based on, true experiments
49
how do experiments differ from non-experimental designs we have talked about?
experiments are more powerful than non-experimental designs because they compare at least two groups or conditions (e.g., one group receives the treatment level of the IV and the other does not, or the IV has two different levels)
50
experiment with one independent variable with two levels
simple experiment
51
know how to pick out IV and DV
example experiment
52
what is a control variable?
a variable that the experimenter holds constant (keeps the same) on purpose across conditions
53
why are comparison groups so important?
without a comparison group, you don't know whether your experiment (treatment) is working or not
54
level of IV that is intended to represent no treatment or a neutral condition
control groups
55
group that receives the IV
treatment groups
56
when control group is exposed to an inert treatment such as a sugar pill
placebo groups
57
experimenters control...
temporal precedence
58
alternative explanations that threaten internal validity; when present, you don't know which variable is causing the change
confounds
59
experimenter's mistake in designing IV; second variable happens to vary systematically along with intended IV and is an alternative explanation for results
design confound
60
systematic variability impact on an experiment?
levels of a variable coincide in a predictable way with experimental group membership, creating a potential confound; this seriously jeopardizes internal validity
61
unsystematic variability impact on an experiment?
levels of a variable fluctuate independently of experimental group membership and contribute to variability within groups (no confound)
62
selection effects - how can you control for it?
happens when participants in one level of the IV are systematically different from those in the other, or when experimenters let participants choose their own groups; control for it with random assignment (which de-systematizes the kinds of participants who end up in each group) or matched groups
63
match participants on a variable (other than the IV) that might otherwise impact how they behave in your experiment (your DV)
matched groups
64
independent groups design = between subjects design = random assignment design
synonyms; randomly assign participants to groups (or treatment levels); independent because there are no connections/ties between subjects; two basic forms: posttest only design and pretest/posttest design
65
participants assigned to the IV groups are tested on the DV once, after the treatment
posttest only design
66
the DV is measured more than once (before and after the treatment), but there is only one administration of the IV; in a within-subjects design, by contrast, participants are exposed to multiple levels of the IV
pretest/posttest design
67
within groups design = repeated measures = within subjects or correlated groups design
synonymous terms; the same participants are tested in all treatment conditions (every level of the IV); very powerful
68
strengths/weaknesses of within groups design
weaknesses: order effects; might not be possible or practical; people see all levels of the IV and may change the way they would normally act. Strengths: ensures the participants in the conditions are equivalent
69
what is meant by power?
probability that a study will show a statistically significant result when an IV truly has an effect in the population
70
with within groups design there are order effects. what are examples of order effects?
practice or testing effects and carryover effects
71
long sequence might lead participants to get better at the task
practice or testing effects
72
some form of contamination carries over from one condition to the next
carryover effect
73
how can order effects be avoided?
counterbalancing
74
used to deal with order effects; sample is divided in half, with one half completing the two conditions in one order and the other half completing the conditions in the reverse order
counterbalancing
75
all possible condition orders are represented
full counterbalancing
76
only some of the possible condition orders are represented
partial counterbalancing
77
technique for partial counterbalancing; a formal system to ensure that every condition appears in each position at least once
Latin square
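A short Python sketch of full counterbalancing versus a simple (cyclic) Latin square, with hypothetical condition labels:

```python
from itertools import permutations

conditions = ["A", "B", "C"]  # hypothetical levels of the IV

# Full counterbalancing: every possible order is used (3! = 6 orders).
full = list(permutations(conditions))

# Partial counterbalancing via a cyclic Latin square: each condition
# appears in each serial position exactly once.
latin_square = [conditions[i:] + conditions[:i] for i in range(len(conditions))]

print(full)          # [('A', 'B', 'C'), ('A', 'C', 'B'), ...]
print(latin_square)  # [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]
```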
78
cues that lead participants to guess what the study is about and change their behavior accordingly, creating an alternative explanation for a study's results
demand characteristics
79
how do we use statistics?
to interpret data (graphs)
80
what does it mean if something is statistically significant?
unlikely to have been obtained by chance from a population in which nothing is happening
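A simulation sketch in Python of what "unlikely by chance" means, using made-up population values and scipy's independent-samples t-test: when nothing is happening in the population, only about 5% of studies reach p < .05, which is the Type I error ("false positive") rate set by alpha:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05
n_studies = 5_000
false_positives = 0

for _ in range(n_studies):
    # Both groups come from the SAME population, so the true effect is zero.
    group1 = rng.normal(loc=100, scale=15, size=30)
    group2 = rng.normal(loc=100, scale=15, size=30)
    if ttest_ind(group1, group2).pvalue < alpha:
        false_positives += 1  # a Type I error ("false positive")

print(false_positives / n_studies)  # roughly 0.05, i.e. about alpha
```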
81
what are threats to internal validity?
maturation, history, regression, attrition, testing, instrumentation
82
why is a one-group pretest/posttest design a bad research design?
without a comparison group there is no between-groups difference to examine, so changes could be due to internal validity threats (maturation, history, regression, testing, instrumentation) rather than the treatment
83
any study can suffer from...
observer bias, demand characteristics, and good-participant effects; prevent them with masked (single- or double-blind) designs
84
other problems that can make or break an experiment
not enough difference between groups: weak manipulations, insensitive measures, ceiling and floor effects
85
does too much within group variability make it harder to detect group differences?
yes, the less within-group variability, the less likely it is to obscure a true group difference
86
reason for high within-group variability; human or instrument factor that can inflate or deflate a person's true score on DV
measurement error
87
how does adding more participants help?
reduces the influence of individual differences within groups which enhances the study's ability to detect differences between groups
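A companion simulation sketch, again with made-up numbers, showing how power (the probability of detecting a true effect) rises as participants are added:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def estimated_power(n_per_group, n_sims=2_000, alpha=0.05):
    """Proportion of simulated studies reaching p < alpha when the IV
    truly has a (made-up) modest effect in the population."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(loc=100, scale=15, size=n_per_group)
        treatment = rng.normal(loc=106, scale=15, size=n_per_group)  # true effect
        if ttest_ind(treatment, control).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (10, 30, 100):
    print(n, round(estimated_power(n), 2))  # power climbs as n grows
```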
88
the outcome when the IV did not make a difference in the DV; there is no covariance between the two
null effect
89
experiments with two or more IVs
complex/factorial designs
90
why would a researcher opt to use a complex design? benefits?
to test whether an IV affects different kinds of people in the same way (crossing it with a participant variable whose levels are not manipulated) and to test how variables interact
91
how do we interpret results from a complex design?
graph the data collected on the DV and look for main effects and interactions
92
how are marginal means calculated?
if sample sizes are equal, a marginal mean is the simple average of the cell means; if sample sizes are unequal, it is computed as a weighted average that counts the larger cells more. Marginal means help you eyeball the data to determine whether there is a main effect (inferential statistics, e.g., an ANOVA, are needed to verify it)
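A tiny worked example in Python of the simple versus weighted marginal mean, with invented cell means and sample sizes:

```python
import numpy as np

# Hypothetical cell means and (unequal) sample sizes for one row of a 2 x 2 design
cell_means = np.array([10.0, 14.0])
cell_ns = np.array([20, 60])

simple_marginal = cell_means.mean()                          # (10 + 14) / 2 = 12.0
weighted_marginal = np.average(cell_means, weights=cell_ns)  # counts the larger cell more = 13.0

print(simple_marginal, weighted_marginal)
```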
93
how are factorial designs used to test theories?
study how variables interact by combining them in a factorial and measure whether the results are consistent with the theory
94
one IV is manipulated as independent-groups and the other is manipulated as within-groups
mixed factorial
95
there are more complicated factorial designs...
ex. 2 x 2 x 2
96
what is the difference between a true experiment and a quasi-experimental design?
in a quasi-experiment, participants cannot be randomly assigned to levels of the IV
97
what are three types of small N designs?
stable baseline design, multiple baseline design, reversal design
98
what are the benefits of quasi-experiments?
real-world applicability and external validity; excellent for situations in which you cannot ethically manipulate the IV
99
a change in behavior that emerges more or less spontaneously over time
maturation threats
100
an external event that affects most members of the treatment group at the same time as the treatment itself, making it unclear whether the change is caused by the treatment received
history threats
101
when a group mean is unusually extreme at one measurement, it is likely to be less extreme the next time it is measured (regression to the mean)
regression threats
102
when attrition is systematic, i.e., a certain kind of participant drops out of the study
attrition threats
103
change in participants as a result of taking a test more than once
testing threats
104
measuring instrument changes over time
instrumentation threats
105
a preexisting variable that is often a characteristic inherent to an individual, which differentiates the groups or conditions being compared in a research study. Because the levels of the variable are preexisting, it is not possible to randomly assign participants to groups
quasi-independent variable