Week 4 Flashcards
(22 cards)
test-retest method
A way to measure reliability by testing the same group twice at different times. Example: A psychology test measuring anxiety is given to participants twice, two weeks apart, to check if the results remain stable.
composite variable
A variable created by combining multiple indicators or measures into one. Example: “Socioeconomic status” can be measured as a composite variable that includes income, education level, and occupation.
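One common way to build a composite like SES is to standardize each indicator and average them, so no single indicator dominates because of its scale (all values below are hypothetical):

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize an indicator so it contributes on the same scale."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical indicators for four respondents
income    = [30_000, 55_000, 90_000, 42_000]
education = [12, 16, 20, 14]      # years of schooling
prestige  = [35, 60, 80, 45]      # occupational prestige score

# Composite SES: average of the three z-scores per respondent
ses = [mean(triple) for triple in
       zip(zscores(income), zscores(education), zscores(prestige))]
```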
Census
when data is collected from every member of a population instead of just a sample.
This provides the most accurate picture but can be costly and time-consuming. Example: The U.S. Census collects data from all residents every 10 years to determine demographic changes.
Convenience sample
A type of non-probability sampling where researchers select participants based on ease of access. It is quick and inexpensive but can be biased. Example: A professor surveys their own students about study habits instead of randomly selecting students from different schools.
snowball sample
A non-probability sampling method where existing participants recruit others from their network. Useful for studying hard-to-reach populations, but can lead to bias since participants may be similar. Example: Research on underground music scenes where initial participants recommend others in the same community.
sampling distribution
The distribution of a statistic (e.g., the sample mean) across many samples drawn from the same population. It helps estimate how much a sample’s results might differ from the true population value. Example: If you take multiple random samples of people’s heights and plot the average height from each sample, the pattern of those averages forms a sampling distribution.
representative sample
A sample that accurately reflects the key characteristics of the larger population it is meant to represent. This ensures that study findings can be generalized to the entire population.
Example:
If a university wants to study student opinions on campus policies, a representative sample would include students from different years (freshmen, sophomores, juniors, seniors), majors, and backgrounds in similar proportions to the university’s overall student body. If the sample only included business majors, it would not be representative.
simple random sample
Every individual in the population has an equal chance of being selected.
Ensures no bias in selection.
Example: Assigning every student at a university a number and using a random number generator to select 200 students for a survey.
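The example above maps directly onto `random.sample`, which gives every ID an equal chance of selection (roster size and sample size are hypothetical):

```python
import random

# Hypothetical roster: every student gets an ID from 1 to 8000
student_ids = list(range(1, 8001))

random.seed(42)
# Simple random sample: each student has an equal chance of selection
survey_sample = random.sample(student_ids, 200)
```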
Cluster sample
The population is first divided into groups (clusters), then some clusters are randomly selected, and all or a sample of individuals within those clusters are surveyed.
Used when it’s too expensive or difficult to sample individuals directly.
Example: To study high school students in the U.S., researchers first randomly select high schools (clusters) and then randomly select students within each chosen school.
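The two-stage procedure can be sketched as: randomly pick clusters first, then sample within each chosen cluster (school names and rosters are hypothetical):

```python
import random

random.seed(1)
# Hypothetical sampling frame: 50 schools, each with a student roster
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(300)]
           for i in range(50)}

# Stage 1: randomly select whole clusters (schools)
chosen_schools = random.sample(list(schools), 5)

# Stage 2: randomly sample students within each chosen school
cluster_sample = [
    student
    for school in chosen_schools
    for student in random.sample(schools[school], 20)
]
```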
Stratified sample
The population is divided into subgroups (strata) based on a characteristic (e.g., gender, income, age), and then individuals are randomly selected from each group.
Ensures that all key groups are represented in the sample.
Example: If a researcher wants to study voting behavior and knows that age influences voting, they could divide the population into age groups (18-29, 30-49, 50+), then randomly sample people from each age group to ensure fair representation.
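Unlike cluster sampling, every stratum contributes to the final sample. A sketch of the voting example (voter list and group sizes are hypothetical):

```python
import random

random.seed(7)
# Hypothetical voter list: (name, age-group stratum) pairs
voters = [(f"v{i}", random.choice(["18-29", "30-49", "50+"]))
          for i in range(3_000)]

# Divide the population into strata
strata = {}
for name, group in voters:
    strata.setdefault(group, []).append(name)

# Randomly sample from EVERY stratum so each age group is represented
stratified_sample = {
    group: random.sample(members, 30) for group, members in strata.items()
}
```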
Cluster vs. stratified sampling
Cluster sampling selects entire groups and then samples within those groups (not all groups are included).
Stratified sampling ensures that each group is represented in the final sample by selecting from every group.
Population vs. sample
Population: the full group about which social scientists attempt to make generalizations
Sample: a small subset of the population selected for the study
The goal is to use the sample to understand and observe the broader population.
Validity
refers to how well a measure captures the concept it is intended to reflect. If a measure is valid, it accurately represents what it is supposed to measure.
Internal Validity
The extent to which a measure accurately captures the concept within the sample being studied.
Example: If a survey is measuring job satisfaction, does it truly measure job satisfaction, or is it picking up on unrelated factors (e.g., stress levels)?
External validity
The extent to which findings from the sample can be generalized to a larger population or different contexts.
Example: If a study on social media usage is only conducted on college students, can the results be applied to older adults?
Face validity
Whether a measure looks and sounds right at face value.
A basic, subjective check of validity.
Example: A math test that includes only word problems may lack face validity if people expect a math test to include numerical equations.
Concurrent validity
Whether a measure correlates with pre-existing measures that are already considered valid.
Example: A new IQ test should produce similar scores as an already established IQ test.
predictive validity
Whether a measure correlates with future outcomes that it should logically predict.
Example: SAT scores should be able to predict college GPA. If students with high SAT scores perform well in college, the SAT has predictive validity.
content validity
Whether a measure captures all aspects of the concept being studied.
Example: If measuring “mental health,” a survey should include both emotional and physical well-being, rather than just one aspect.
construct validity
The extent to which a measure truly reflects the underlying concept it is intended to measure.
Example: If developing a new measure of intelligence, how do we know it actually measures “intelligence” and not just memory or problem-solving skills?
Construct validity is difficult because it requires deep theoretical and empirical testing to ensure that the measure truly captures the abstract concept.
intercoder reliability
Measures how much agreement there is between multiple coders when they analyze the same data.
Ensures that the coding process is not subjective and that different researchers interpret the data in a similar way.
Example: If two researchers analyze interview transcripts for themes, high intercoder reliability means they identify the same themes in the same text.
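Agreement can be quantified as raw percent agreement, or with Cohen's kappa, which corrects for agreement expected by chance (the theme codes below are hypothetical):

```python
from collections import Counter

# Hypothetical theme codes two researchers assigned to the same 10 segments
coder_a = ["work", "family", "work", "health", "work",
           "family", "health", "work", "family", "work"]
coder_b = ["work", "family", "work", "health", "family",
           "family", "health", "work", "family", "work"]

n = len(coder_a)
# Raw agreement: share of segments coded identically
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Cohen's kappa: discount the agreement expected by chance
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum(pa[c] * pb[c] for c in pa) / n**2
kappa = (observed - expected) / (1 - expected)
```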
robustness checks
A method to test the reliability and validity of results by seeing if they hold up when assumptions or data are slightly altered.
Ensures that findings are not dependent on one specific method.
Example: In an economic study, a robustness check might involve using a different statistical model to see if the results still hold. If the results change drastically, the original findings may not be reliable.