Midterm 2 Flashcards

(77 cards)

1
Q

What are ethics?

A

Moral principles or rules of conduct

2
Q

What came from the Nuremberg Code? What led to this code?

A

Nazis subjected inmates to freezing temperatures

Led to discussions of informed consent

3
Q

What were the Tuskegee experiments?

A

399 Black males who had syphilis
BUT… the doctors didn't tell them they had it!!
Doctors simply watched them go untreated to study the effect of the disease as it progresses

4
Q

What happened in the “Tearoom” situation?

A

Male / Male meetings in Public Restrooms
Researcher played “lookout” and secretly took notes
In disguise, he went to their homes for interviews

5
Q

What was Milgram’s major experiment?

A

Student in the next room doing a word association task
Teachers were asked to shock the student when they made a mistake (and they heard the student scream in pain)

6
Q

What is an IRB?

A

Institutional review boards

  • Concerned with participant and data-related ethics
  • They evaluate research proposals and are separate from (not tied to) the research
7
Q

What issues exist regarding participants?

A

Anonymity & / or confidentiality

Risks

8
Q

Consent vs. Assent

A

Consent:
Voluntary Informed Consent
Participant has read a form that details the research

Assent:
Minors provide “assent” (agreement) to participate
Guardians provide “proxy consent”

9
Q

What should be covered in a consent form?

A

-Purpose
-Procedure
-Voluntary & Confidential
-Risks
-Benefits
-Informant's Statement (Signature)

10
Q

Conceptual vs. Operational Definitions

A

Conceptual Definitions:
Dictionary definition, may contain abstract terms

Operational Definitions:
Describe the concept in terms of its observable, measurable characteristics
How can the concept be observed in actual practice?
Necessarily imperfect

11
Q

What is operationalization?

A

Identifying and determining how to measure the observable, or empirical, characteristics of whatever concepts or variables researchers wish to study

12
Q

Nominal

A

Levels of Measurement

Nominal scales are used for labeling variables, without any quantitative value. “Nominal” scales could simply be called “labels.”

13
Q

Example of Operationalization for Race

A

Conceptual: One of the groups into which the world’s population can be divided based upon culture, upbringing, & / or geography

Possible operational: Self-reporting on survey

14
Q

Ordinal

A

Levels of Measurement

Rank-Ordered Categories
With ordinal scales, it is the order of the values that is important and significant, but the differences between them are not really known.

15
Q

Interval

A

Levels of Measurement

Interval scales are numeric scales in which we know not only the order, but also the exact differences between the values

16
Q

Ratio

A

Levels of Measurement

Ratio scales are the ultimate nirvana when it comes to measurement scales: they tell us about the order, they tell us the exact value between units, AND they also have an absolute zero, which allows a wide range of both descriptive and inferential statistics to be applied.

17
Q

Items

A

Types of Measures

Single Question: single number entry
“Teeth” rated 1-7; Hillary = 3

18
Q

Scale / Index

A

Types of Measures

Several items used together to measure some concept
Teeth (3) + Hair (4) + Clothing (5); Hillary = 12

19
Q

Factor

A

Types of Measures

A single dimension (i.e., aspect) of a more complex construct
NOT a measure but part of the concept being measured
Multiple “teeth” items + multiple “hair” items + etc…
“Teeth” = one factor; “Hair” = one factor

20
Q

Likert

A
Evaluate a statement; 5-point response options
Strongly agree (5), Agree (4), Neither (3), Disagree (2), Strongly disagree (1)

Strictly an Ordinal Measure

21
Q

Likert-type

A
Any variation of the Likert Scale (E.g., 7 points; different anchors)
Not mentioned (0), mentioned only (1), additional info (2), major emphasis (3), primary focus (4)
22
Q

Semantic differential

A

Polar opposites; usually 7 points

e.g., CNN is
Unbiased ::::: Biased
Accurate ::::: Inaccurate
Current ::::: Dated

23
Q

Summated index

A

Add up the item scores to get a single number

24
Q

Reverse coding

A

Changing or reversing the numeric poles of an item

Bigger Number: More of what you are measuring
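Reverse coding (and then summing items into an index, as in the previous card) is simple arithmetic; a minimal sketch with hypothetical 1-5 items where the third item is negatively worded:

```python
def reverse_code(score, lo=1, hi=5):
    """Flip an item's numeric poles so a bigger number always
    means more of what you are measuring."""
    return (hi + lo) - score

# Hypothetical responses; the third item was negatively worded
raw = [4, 5, 2]
recoded = raw[:2] + [reverse_code(raw[2])]
print(recoded)       # [4, 5, 4]
print(sum(recoded))  # summated index: 13
```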

25
Q

Define Reliability

A

All about Consistency. Can a measure be used consistently, either by multiple people OR at multiple points in time? Also: are multiple items consistent with each other?

Validity (2 types):
External: Can we generalize the findings? (i.e., sampling issues)
Internal: Was the study designed & constructed in a way that leads to accurate results?
Measurement (really an important internal validity issue): Does a particular instrument adequately capture the concept it is intended to measure?

26
Q

Assessing Reliability

A

Multiple-administration
Interobserver / Interrater / Intercoder
Single-administration

27
Q

Interobserver / Intercoder / Interrater

A

Usually a single-administration test to see if ‘coders’ agree
Example stats: Percent Agreement; Cohen's Kappa
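A minimal sketch of these two agreement statistics, assuming two coders labeled the same five units (the data are made up):

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of units the two coders coded identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Percent agreement corrected for chance: (po - pe) / (1 - pe)."""
    po = percent_agreement(a, b)
    n = len(a)
    counts_a, counts_b = Counter(a), Counter(b)
    # Expected chance agreement from each coder's marginal label frequencies
    pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

coder1 = ["yes", "yes", "no", "no", "yes"]
coder2 = ["yes", "no", "no", "no", "yes"]
print(percent_agreement(coder1, coder2))        # 0.8
print(round(cohens_kappa(coder1, coder2), 2))   # 0.62
```

Note the contrast: percent agreement looks high here, but kappa is lower because some agreement would be expected by chance alone.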
28
Q

Single vs. Multiple Administration Reliability Testing

A

Single: Internal measurement consistency
Do the measures really work together to measure my concept?
i.e., credible = unbiased + accurate + factual
Cronbach's Alpha

Multiple: Consistency across administrations
Administer the same measure at multiple points in time and check that the results stay consistent
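Cronbach's Alpha for a single-administration test can be computed directly from item scores; a minimal sketch with hypothetical 1-5 ratings for the three "credible" items from five respondents:

```python
def cronbachs_alpha(items):
    """items: one list of scores per item (same respondents, same order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)"""
    k = len(items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical ratings: credible = unbiased + accurate + factual
unbiased = [4, 5, 3, 4, 2]
accurate = [4, 4, 3, 5, 2]
factual  = [5, 5, 2, 4, 3]
print(round(cronbachs_alpha([unbiased, accurate, factual]), 2))  # 0.89
```

A high alpha suggests the items really do work together to measure one concept.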
29
Q

Define Validity

A

Whether a study or measure actually captures what it is intended to capture
(See external, internal, and measurement validity)
30
Q

External Validity

A

The validity of generalizing (causal) inferences from a study, usually discussed in the context of experiments. In other words, it is the extent to which the results of a study can be generalized to other situations and to other people.

31
Q

Internal Validity

A

Internal validity refers to how well an experiment is done, especially whether it avoids confounding (more than one possible independent variable [cause] acting at the same time). The less chance for confounding in a study, the higher its internal validity.

32
Q

Content Validity

A

The extent to which a measure represents all facets of a given construct. In content validity, the criterion is the construct definition itself: a direct comparison. In criterion-related validity, by contrast, we make a prediction about how the operationalization will perform based on our theory of the construct.

33
Q

Criterion-related Validity

A

The extent to which a measure is related to an outcome, based on whatever criterion you choose to judge the study by.
Concurrent (Convergent and Divergent): compare an old, established measure to your new one.

34
Q

Construct Validity

A

Whether a measure actually captures the construct it is intended to measure. If trying to determine appearance and taste, and only questions about taste are asked, then construct validity is poor.
35
Q

Threats to Internal & External Validity

A

Internal (how the study was conducted):
Due to participants
Due to researcher

External:
Replication: can it be reproduced in other studies?
Is the study setting like the “real world”?
Sampling

36
Q

Hawthorne Effect

A

Threat to Internal Validity

Participants know they are being observed
37
Q

Basic stratified vs. proportional stratified sampling?

A

Basic: Use some important characteristic to sort the sample, then randomly sample each stratum equally. Do this when you want to compare same-size groups.

Proportional: Randomly sample each stratum in proportion to its share of the population, so the sample mirrors the population's makeup.
38
Q

Why is random sampling typically used?

A

Randomization is an attempt at eliminating researcher bias

39
Q

Cluster Sample

A

Used to save time: use pre-existing divisions
Randomly select clusters
Randomly select or census from within the selected clusters

40
Q

Stratified Sample

A

Use some important characteristic to sort the sample
Then randomly sample each stratum equally
Do this when you want to compare same-size groups
41
Q

Snowball Sample

A

Also called a Network Sample
Participants are asked to refer others who “fit” the study

42
Q

Purposeful Sample

A

Also called a Purposive Sample
Stratified WITHOUT the random: the researcher includes everyone who has the important characteristic
Sampling is done when SATURATION is reached

43
Q

Experiments

A

Systematic investigations to assess the causal effects of IVs on DVs, conducted under tightly controlled conditions
It's all about CONTROL
If done well, it is the ONLY way we can assess cause and effect
(i.e., surveys and textual analyses cannot assess causation)

44
Q

3 rules of cause & effect

A

1. The IV must precede the DV
(i.e., which came first: TV violence or aggression?)
2. IVs & DVs must covary
Must be related in a meaningful way (beware spurious relationships)
3. Changes in the DV must be the result of changes in the IV & not some other variable
(third factor; alternate hypothesis)
45
Q

Spurious relationships

A

Experiments

A correlation between two variables that does not result from any direct relation between them, but from their relation to other variables

46
Q

Third factor / variable

A

Experiments

Variables that the researcher failed to control, or eliminate, damaging the internal validity of an experiment

47
Q

Alternate hypothesis

A

Experiments

A hypothesis of “a difference”: it states that changing the independent variable will produce the predicted difference in the population
48
Q

Conditions

A

Experiment groups
Created via random assignment or manipulations

49
Q

Random Assignment

A

Experiments

A technique for assigning participants to different groups (treatment or control)
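One simple way to implement random assignment is to shuffle participants and deal them round-robin into the groups; a sketch with made-up participant IDs:

```python
import random

def random_assignment(participants, groups=("treatment", "control"), seed=7):
    """Shuffle participants, then deal them round-robin into groups."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

assigned = random_assignment([f"p{i}" for i in range(10)])
print({g: len(members) for g, members in assigned.items()})
# {'treatment': 5, 'control': 5}
```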
50
Q

Manipulations

A

Experiments

Treatment vs. Control (comparison) groups

51
Q

Full vs. Quasi experiments

A

Full:
Random assignment of participants
Manipulated IV

Quasi:
No random assignment (the key difference); uses pre-existing groups
Manipulated or observed IV
(i.e., observed = “course start time”, “course instructor”)

52
Q

Between group vs. Within group

A

Between Group: Each participant is in only ONE group
(i.e., 50 people are divided into 2 groups of 25)

Within Group: Each participant is in the treatment AND control
(i.e., all 50 people get the treatment AND the control)
53
Q

Posttest Only

A

Experimental Designs

Assess the DV at the end (i.e., after treatment)
After exposure, are the groups still the same?
Can NOT talk about change / differences from beginning to end in a single group

54
Q

Pretest-Posttest

A

Experimental Designs

Assess the DV before and after treatment
CAN talk about changes / differences from beginning to end in a single group
What threat to internal validity might occur in this design? Sensitization

55
Q

Solomon 4-group

A

Experimental Designs

A combination of pretest-posttest & posttest only: 4 groups...
2 groups (treatment & control) pretest and posttest
2 groups (treatment & control) posttest only
56
Q

Survey Research

A

A method to assess the beliefs, attitudes, & behaviors of large groups

57
Q

What are surveys good for?

A

Great for understanding large populations; good for DESCRIPTION
Great for correlation, but NOT for causation
58
Q

Political Polls

A

Applied Survey

Who is ahead at a specific time?
Exit polls: who did you vote for?

59
Q

Market Research

A

Applied Survey

Who is our audience / customer?
What do they like / not like / want / not want?

60
Q

Evaluation Research

A

Applied Survey

Assess the effectiveness of a product
Formative research: done before production to aid in product development

61
Q

Applied Surveys

A

Political Polls
Market Research
Evaluation Research

62
Q

Respondents in Surveys

A

The people who take your survey
63
Q

Sampling / List frame in Surveys

A

Ideally… stratified, cluster, or other probability samples
Unfortunately, convenience or volunteer methods are typically used to recruit participants

64
Q

Cross-sectional vs. Longitudinal Survey Research

A

Cross-sectional: at 1 point in time
Good for “still shots” of the population

Longitudinal: looks at people at multiple points in time

65
Q

Open- vs. Closed-ended Survey Research

A

Open: No set list of response options
Usually free writing in a blank space

Closed: Fixed response options
66
Q

5 attributes of good survey questions

A

Good survey questions:
1. Are clear
2. Ask about only one issue
3. Don't lead people to respond in certain ways
4. Avoid emotionally charged terms
5. Avoid double-negatives

67
Q

Contingency & Matrix Questions

A

Contingency: Whether you are asked a particular question is contingent upon your answer to another question

Matrix: A set of questions with the same response set

68
Q

Textual Analysis

A

Methods used to describe and interpret the characteristics of a recorded or visual message
69
Q

How / why are textual analyses useful?

A

The types of texts that can be analyzed are infinite!
TA provides a means for systematic analysis of texts: unbiased, comparable, & RELIABLE
Can describe patterns

70
Q

3 Types of Textual Analysis

A

Rhetorical Criticism
Qualitative Content Analysis
Quantitative Content Analysis

71
Q

Rhetorical Criticism

A

Textual Analysis

Describing, analyzing, interpreting, & evaluating the persuasive force of messages in some artifact

72
Q

Qualitative Content Analysis

A

Analyzing data for linguistic themes / patterns
(i.e., how do Democrats and Republicans address / describe immigration in public speeches?)

Existing messages vs. interviews & focus groups: we can analyze pre-existing messages OR messages we capture
(i.e., recorded interviews require textual analysis)
73
Q

Quantitative Content Analysis

A

Identify, enumerate (quantify), & analyze specific messages and/or message characteristics
Run statistics; data analysis

74
Q

Unitizing

A

Identifying the unit of analysis: unitizing is identifying the appropriate message unit
(TV episode, character, news article, sentence)
Syntactical, Character Units, Thematic

75
Q

What is a codebook?

A

A codebook describes the contents, structure, and layout of a data collection
A list of the measures in your study
Includes response options
Also includes operationalizations for your measures

76
Q

4 Major Qualitative methods

A

Participant Observation
Ethnography
Interviewing
Focus Groups

77
Q

Main strengths & main weakness of Qualitative research

A

Strengths:
Deeper understanding
Flexibility
Better validity

Weakness: weak on reliability