Research methods Flashcards

(66 cards)

1
Q

Explain experimental design

A

A design used to establish cause-and-effect relationships:

1) select population
2) operationalize the independent and dependent variables
3) carefully select control and experimental groups
4) randomly sample from the population
5) randomly assign individuals to groups
6) measure the results
7) test hypothesis
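
A minimal Python sketch of the random-sampling and random-assignment steps (4 and 5), assuming a simple numbered population and a purely illustrative sample size of 60:

```python
import random

# Assumed population of 1,000 potential participant IDs (illustrative only)
population = list(range(1000))

# Step 4: randomly sample 60 individuals from the population
sample = random.sample(population, 60)

# Step 5: randomly assign the sampled individuals to groups
random.shuffle(sample)
experimental_group = sample[:30]  # will receive the treatment
control_group = sample[30:]       # point of reference/comparison
```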

2
Q

Independent variable

A

The variable manipulated by the research team

3
Q

Dependent variable

A

The variable that is measured

4
Q

Operational definition

A

A specification of precisely what the researchers mean by each variable

5
Q

Experimental Group

A

The group of participants that receives the treatment

6
Q

Control Group

A

Acts as a point of reference and comparison

The control group must be homogeneous with the experimental group

7
Q

Extraneous/Confounding Variables

A

Variables other than the treatment that could potentially explain an experimental result

8
Q

Placebo effect

A

Just believing that treatment is being administered can lead to a measurable result

9
Q

Double blind

A

Neither the person administering treatment nor the participants truly know if they are assigned to the treatment or control groups

10
Q

Sampling Bias

A

Occurs when it is not equally likely for all members of a population to be sampled

11
Q

Selection Bias

A

A more general category of systematic flaws in a study design that can compromise results

Example: purposely selecting which studies to evaluate in a meta-analysis

12
Q

Meta analysis

A

Big picture analysis of many studies to look for trends in the data

13
Q

Attrition

A

Participants dropping out of the study

If the reason that participants drop out is non-random, this might introduce an extraneous variable

14
Q

Randomized block technique

A

Researchers evaluate where participants fall along the variables they wish to equalize across the experimental and control groups

They then randomly assign individuals from these blocks so that the treatment and control groups are similar along the variables of interest
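
A rough Python sketch of the idea, assuming a single blocking variable (age group) and a small invented participant list; real studies would typically block on several variables at once:

```python
import random
from collections import defaultdict

# Invented participant records, used only to illustrate the technique
participants = [
    {"id": 1, "age_group": "young"}, {"id": 2, "age_group": "young"},
    {"id": 3, "age_group": "old"},   {"id": 4, "age_group": "old"},
    {"id": 5, "age_group": "young"}, {"id": 6, "age_group": "old"},
]

# Group ("block") participants by the variable to be equalized
blocks = defaultdict(list)
for p in participants:
    blocks[p["age_group"]].append(p)

# Randomly assign half of each block to treatment and half to control,
# so both groups end up similar along the blocking variable
treatment, control = [], []
for members in blocks.values():
    random.shuffle(members)
    half = len(members) // 2
    treatment.extend(members[:half])
    control.extend(members[half:])
```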

15
Q

Reliability

A

Produce stable and consistent results, measure what they’re supposed to, and repeated measurements lead to similar results

16
Q

Construct validity

A

Measure what they are supposed to

17
Q

Replicability

A

Repeated measurements lead to similar results

18
Q

Psychometrics

A

Study of how to measure psychological variables through testing

19
Q

Response bias

A

The tendency for respondents to provide inaccurate responses, for example because they do not have perfect insight into their own state

20
Q

Between subjects design

A

Comparisons are made between subjects in one group and subjects in another group

21
Q

Within subject design

A

Compare the same group at different points in time
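
As an illustration of how this distinction shows up at analysis time, a small Python sketch (with invented scores) contrasting an independent-samples t-test, used for between-subjects comparisons, with a paired-samples t-test, used for within-subjects comparisons:

```python
from scipy import stats

# Between-subjects: scores come from two different groups of people (invented data)
group_a = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3]
group_b = [3.6, 3.4, 3.9, 3.2, 3.8, 3.5]
print(stats.ttest_ind(group_a, group_b))  # independent-samples t-test

# Within-subjects: the same people measured at two points in time (invented data)
before = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3]
after  = [3.4, 3.0, 3.8, 3.1, 3.2, 3.6]
print(stats.ttest_rel(before, after))     # paired-samples t-test
```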

22
Q

Mixed methods research

A

Uses both between-subjects and within-subjects design techniques, and/or mixes qualitative and quantitative methods

23
Q

Type 2 vs Type 1 errors

A

It is better to incorrectly conclude that there is no effect

  • type 2 error
  • false negative

Than it is to falsely suppose the veracity of a result that does not actually exist

  • type 1 error
  • false positive
24
Q

Null Hypothesis

A

Assumes that there is no causal relationship between the variables and that any measured effect is due to chance

Evidence from the experiment is then used to determine whether the null hypothesis can be rejected

25
Experimental hypothesis
The proposition that variations in the independent variable cause changes in the dependent variables
26
Significant difference
A measured difference between two groups that is large enough that it is probably not due to chance
27
P-Value
A number from 0 to 1 that represents the probability of observing the measured difference by chance if the null hypothesis is true. A lower value means stronger evidence against the null hypothesis
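
A minimal Python sketch of how a p-value typically arises in practice, assuming invented measurements for a control and an experimental group and the conventional 0.05 cutoff:

```python
from scipy import stats

# Invented scores for the two groups (illustrative only)
control      = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
experimental = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0, 12.6, 13.2]

# The independent-samples t-test returns a test statistic and a p-value
t_stat, p_value = stats.ttest_ind(experimental, control)

# A p-value below the chosen alpha (commonly 0.05) is usually taken as
# grounds to reject the null hypothesis
if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")
```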
28
What is the usual minimum number of participants to use in order to accurately calculate significant differences
30 or more; the larger the better
29
External validity
How well the conclusions of a study apply to the real world; a flaw or limitation in applying the conclusion to the real world leads to a flaw in external validity
30
Internal validity
How well the experiment itself is designed; doubts about the conclusion arise when there is some inherent flaw in the design. Internal validity is high when confounding variables have been considered and minimized, and the causal relationship between the independent and dependent variable can be established by the way the experiment was set up
31
Demand characteristics
The tendency of participants to consciously or subconsciously act in ways that match how they are expected to behave; this can also threaten internal validity
32
Predictive validity
Whether the test predicts (tells us about) the variable of interest
33
Threats to internal validity
- Impression management
- Confounding variables
- Lack of reliability
- Sampling bias
- Attrition effects
- Demand characteristics
34
Threats to external validity
- Experiment doesn’t reflect the real world
- Selection criteria
- Situational effects
- Lack of statistical power
35
How could impression management cause a threat to internal validity
- Participants adapt their responses based on social norms or perceived researcher expectations
- Self-fulfilling prophecy
- Methodology is not double-blind
- Hawthorne effect
36
How could confounding variables cause a threat to internal validity
- Extraneous variables not accounted for in the study
- Another variable offers an alternative explanation for results
- Lack of a useful control
37
How could lack of reliability cause a threat to internal validity
Measurement tools do not measure what they purport to, lack consistency
38
How could sampling bias cause threats to internal validity
- Selection criteria are not random
- Population used for the sample does not meet conditions for the statistical test (e.g. population is not normally distributed)
39
How could attrition effects cause threats to internal validity
- Participant fatigue
- Participants drop out of the study
40
How could demand characteristics cause threats to internal validity
Participants interpret what the experiment is about and subconsciously respond in ways that are consistent with the hypothesis
41
How could the experiment not reflecting the real world cause threats to external validity
- Laboratory setups that don’t translate to the real world
- Lack of generalizability
42
How could selection criteria cause threats to external validity
Too restrictive inclusion/exclusion criteria for participants (i.e. the sample is not representative)
43
How could situational effects cause threats to external validity
Presence of laboratory conditions changes the outcome, e.g. pre-test and post-test effects, the presence of the experimenter, or claustrophobia in an MRI machine
44
How could lack of statistical power cause threats to external validity
- Sample groups have high variability
- Sample size is too small
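
A rough simulation-based sketch of statistical power in Python; the effect size, sample sizes and number of simulated experiments are assumptions chosen only to show how a small sample misses real effects more often:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size=0.5, alpha=0.05, n_sims=2000):
    """Estimate how often a t-test detects a true effect of the given size."""
    detections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)          # no-effect group
        treated = rng.normal(effect_size, 1.0, n_per_group)  # true effect present
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            detections += 1
    return detections / n_sims

print(estimated_power(15))   # small sample: the real effect is often missed
print(estimated_power(100))  # larger sample: much higher power
```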
45
Disclosure
An outline given to participants before the experiment begins that clarifies incentives and expectations while reminding them of their right to terminate the experiment at any time
46
Debriefing
Participants are told after the experiment exactly what was done and why the experiment was conducted
47
What is a correlational study
Measures the quantitative relationship between two variables
48
What are the strengths and weaknesses of correlational studies
Strengths
- great preliminary technique
- usually easy to conduct

Weaknesses
- does not establish causality
- may not pick up nonlinear relationships
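
To make the "quantitative relationship between two variables" concrete, a minimal Python sketch computing Pearson's r; the sleep and memory-score numbers are invented for illustration, and as noted above even a strong correlation does not establish causality:

```python
from scipy import stats

# Invented data: hours of sleep and memory-test score for ten participants
hours_sleep  = [5.0, 6.5, 7.0, 8.0, 6.0, 7.5, 5.5, 8.5, 7.0, 6.0]
memory_score = [60, 68, 74, 80, 65, 77, 62, 84, 72, 66]

# Pearson's r quantifies the strength and direction of the linear relationship
r, p_value = stats.pearsonr(hours_sleep, memory_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```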
49
What is an ethnographic study
Deep, lengthy qualitative analysis of a culture and its characteristics
50
What are the strengths and weaknesses to an ethnographic study
Strengths
- provides detailed analysis and comprehensive evaluation

Weaknesses
- researcher’s presence may affect individuals’ behaviour
- heavily dependent on the researcher conducting the study; difficult to replicate and objectivity may be compromised
51
What is a twin study
Analysis of heritability through measuring characteristics of twins
52
What are the strengths and weaknesses of twin studies
Strengths
- offers insight into how nature and nurture might interact to lead to various characteristics

Weaknesses
- difficult to find participants who meet the criteria
- difficult to analyze the complex variables involved and how they interact
53
What is a longitudinal study
Long-term analysis that intermittently measures the evolution of some behaviour or characteristic
54
What are the strengths and weaknesses of longitudinal studies
Strengths
- scientists can understand how the trait of interest changes over time

Weaknesses
- logistically demanding, expensive and difficult to implement
- high attrition rate
55
What is a case study
Deep analysis of a single case or example
56
What are the strengths and weaknesses of case studies
Strengths
- offers comprehensive details about the single case

Weaknesses
- results may not be generalizable
- does not offer points of reference or comparison
57
What is a phenomenological study
Self-observation of a phenomenon by the researcher or a small group of participants
58
What are the strengths and weaknesses of phenomenological studies
Strengths
- introspection can provide insight into behaviours and occurrences that are difficult to measure

Weaknesses
- lacks objectivity due to results coming from self-analysis
- difficult to generalize results to other circumstances or individuals
59
What are survey studies
Use of a series of questions to allow participants to self report behaviours or tendencies
60
What are the strengths and weaknesses of survey studies
Strengths
- easy to administer
- can provide quantitative data that can be compared across large participant pools

Weaknesses
- self-reporting creates limitations in objectivity
61
What are archival studies
Analysis of the historical record for insight into a phenomenon
62
What are the strengths and weaknesses of archival studies
Strengths
- provide insight into events from the past that are unique from everyday behaviour

Weaknesses
- quality of analysis is subject to the quality and integrity of the records
- difficult to conduct follow-ups
- data are unlikely to be comprehensive, leaving ambiguity and unanswered questions
63
What are biographical studies
Exploration of all the events and circumstances of an individual’s life
64
What are the strengths and weaknesses of biographical studies
Strengths
- comprehensive knowledge of all the details of an individual’s life

Weaknesses
- limitations in objectivity
- difficult to generalize observations
65
What are observational studies
Broad category that includes any research in which experimenters do not manipulate the situation or results
66
What are the strengths and weaknesses of observational studies
Strengths
- naturalistic observation of circumstances as they are

Weaknesses
- difficult to tease out the complex interplay of many variables