PSYC 523- Statistics Flashcards
ANOVA
Analysis of variance
A parametric statistical technique used to compare the means of two or more experimental groups at a time (most useful with three or more, since two groups can be compared with a t-test).
Determines whether there is a significant difference between the groups, but does not reveal where that difference lies.
Clinical example: A group of psychiatric patients are trying three different therapies: counseling, medication, and biofeedback. You want to see if one therapy is better than the others. You will gather data and run an ANOVA on the three groups (counseling, medication, and biofeedback) to see if there is a significant difference among any of them.
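The arithmetic behind a one-way ANOVA can be sketched in plain Python. This is a toy illustration with made-up symptom scores for the three therapy groups; in practice you would use a statistics package rather than hand-rolling the computation:

```python
from statistics import mean

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = mean(x for g in groups for x in g)
    # Between-groups variability: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups variability: how far each score sits from its own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # df_between = k - 1
    ms_within = ss_within / (N - k)     # df_within = N - k
    return ms_between / ms_within

# Hypothetical symptom scores: counseling, medication, biofeedback
F = one_way_anova_F([4, 5, 6], [7, 8, 9], [4, 6, 5])
```

A large F (between-groups variance much bigger than within-groups variance) suggests that at least one group mean differs, but — as the card notes — a follow-up post hoc test is still needed to find where the difference lies.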
Clinical v. statistical significance
Clinical significance refers to the meaningfulness of change in a client’s life.
A result is statistically significant when p < .05, meaning the likelihood that the results are due to chance is less than 5%. Statistical significance indicates that it is unlikely you have made a Type I error. You must calculate effect size to truly evaluate the meaningfulness of a result.
Construct validity
Part of: research design
Construct validity is the degree to which a test or study measures the qualities or the constructs that it is claiming to measure.
- Convergent validity: does the test correlate highly with other tests that measure the same construct?
- Divergent (discriminant) validity: does the test correlate weakly with tests that measure different constructs?
Clinical example: A group of researchers create a new test to measure depression. They want to ensure that the test has construct validity, in that it actually measures the construct of depression. To do this, they check how strongly the test correlates with the Beck Depression Inventory and confirm that it does not correlate strongly with measures of a different construct, such as anxiety.
Content validity
Part of: research design
Content validity is the degree to which a measure or study includes all of the facets/aspects of the construct that it is attempting to measure. Content validity cannot be measured empirically but is rather assessed through logical analysis.
Validity = accuracy
Clinical example: A depression scale may lack content validity if it only assesses the affective dimension of depression (emotion related- decrease in happiness, apathy, hopelessness) but fails to take into account the behavioral dimension (sleeping more or less, eating more or less, energy changes, etc).
Correlation v. causation
Part of: research design and statistical analysis
Correlation means that a relationship exists between two variables.
- Can be positive or negative; coefficient will fall between -1.00 and +1.00.
- Correlation does not indicate causation.
Causation means that a change in one variable affects a change in the other variable.
- Determined via controlled experiments, in which the independent variable is manipulated and extraneous variables are controlled.
Clinical example: A study found that minutes spent exercising correlated with lower depression levels. This study was able to show that depression levels and exercise were correlated, but could not go so far as to claim that one causes the other.
Correlational research
Research method that examines the relationships between variables.
- Does not establish causal factors.
- Produces a correlation coefficient ranging from -1.0 to +1.0 depending on the strength/direction of the relationship between the two variables.
- Statistical tests include Pearson, Spearman, & point-biserial
PROS: inexpensive, produces wealth of data, encourages future research, precursor to experiment determining causation
CONS: cannot establish causation or control for confounds
Clinical example: Shelia’s patient Donna suffers from illness anxiety disorder. She brings Shelia an article claiming that eating out of plastic containers causes cancer. After reading the article, Shelia explains that the study referenced in the article is a correlational study, which only shows that there is a relationship between eating out of plastic containers and cancer, but it does not prove that eating out of plastic containers causes cancer.
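A Pearson correlation (the first test listed above) can be computed by hand to make the coefficient concrete. The exercise/depression numbers below are invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: minutes of exercise per day vs. depression scores
exercise = [0, 30, 45, 60, 90]
depression = [22, 16, 14, 11, 6]
r = pearson_r(exercise, depression)   # strongly negative: more exercise, lower scores
```

The coefficient describes strength and direction of the relationship only; nothing in the computation says whether exercise lowers depression, depression reduces exercise, or a third variable drives both.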
Cross-sectional design
A type of research design that samples different age groups to look at age group differences across a dependent variable.
- Often administered via surveys, including online surveys.
- Quasi-experimental (participants selected based on age, not randomly)
- Advantages include collection of large amounts of data in a short amount of time & low cost
- Drawbacks include inability to infer causation
EXAMPLE: George was looking to study differences in peer relations and self-esteem across various age groups. He decided to use a cross-sectional design comparing 6-year-olds, 12-year-olds, 18-year-olds, and 25-year-olds.
Dependent t-test
Statistical analysis that compares the means of two related groups to determine whether there is a statistically significant difference between these means.
- Sometimes called a correlated t-test because the data are correlated.
- Used when the design involves matched pairs or repeated measures, and only two conditions of the independent variable
- It is called “dependent” because the same subjects carry across the manipulation: they bring with them personal characteristics that affect the measurement at both points, so the measurements are “dependent” on those characteristics.
Clinical example: A researcher wants to determine the effects of caffeine on memory. They administer a memory test to a group of subjects, have the subjects consume caffeine, and then administer another memory test. Because they used the same subjects, this is a repeated-measures experiment that requires a dependent t-test during statistical analysis.
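The caffeine example can be sketched as follows, using hypothetical before/after memory scores for the same four subjects (a real analysis would also look up the p value for t with n - 1 degrees of freedom):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """t statistic for a dependent (paired / repeated-measures) t-test."""
    diffs = [a - b for b, a in zip(before, after)]   # per-subject change scores
    n = len(diffs)
    # t = mean difference divided by the standard error of the differences
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical memory scores before and after caffeine, same four subjects
t = paired_t([5, 6, 7, 8], [7, 7, 9, 10])
```

Working with per-subject difference scores is what makes the test "dependent": each subject's stable characteristics cancel out of their own difference.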
Descriptive v. inferential
Descriptive statistics are those which are used to describe and summarize the sample or population.
- includes measures of central tendency and variance
- can be used with any type of data (experimental and non-experimental)
Inferential statistics allow inferences to be made from the sample to the population.
- Sample must accurately reflect the population (importance of random sampling)
- Support inferences about causality only when the data come from controlled experiments
- Techniques include hypothesis testing, regression analysis.
- The statistical results incorporate the uncertainty that is inherent in using a sample to understand an entire population.
EXAMPLE: A researcher conducts a study examining the rates of test anxiety in Ivy League students. Reporting the rates observed in this sample is descriptive statistics. Because the sample does not represent all college students, the researcher cannot use inferential statistics to generalize the findings to the broader student population.
Double-blind study
A type of experimental design in which both the participants and the researchers are unaware of who is in the experimental condition and who is in the placebo condition. (In contrast to a single-blind study, where only the participants are unaware of who is in the experimental condition.)
- Double-blind studies eliminate the possibility that the researcher may somehow communicate (knowingly or unknowingly) to a participant which condition they are in, thereby contaminating the results.
Example: A study testing the efficacy of a new SSRI for anxiety is using a double-blind design. Neither the experimenter nor the participants are aware of who is in the treatment group and who is receiving a placebo. This setup ensures that the experimenters do not make subtle gestures accidentally signaling who is receiving the drug, and that experimenter expectations cannot affect the study's outcome.
Ecological validity
The applicability of the findings of a study to the real world. Experiments high in ecological validity tend to be low in reliability because there is less control of the variables in real-world settings.
EXAMPLE: A researcher wants to study the effects of alcohol on sociability, so he administers beer to a group of subjects and has them interact with each other. To increase their ecological validity, he decides to carry out the study in an actual bar.
Effect size
Part of: statistical analysis
A measure of the strength of a significant relationship; the proportion of variance accounted for. Indicates if findings are weak, moderate, or strong. Also called shared variance or the coefficient of determination.
Why: Quantifies the effectiveness of a particular intervention, relative to some comparison; commonly used in meta-analyses.
Example: A researcher conducts a correlational research study on the relationship between caffeine and anxiety ratings. The study produces a correlation coefficient of 0.8, which is considered a large effect size. The effect size reflects a strong relationship between caffeine and anxiety.
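Two common effect-size computations can be sketched briefly. The r = 0.8 value comes from the example above; the group scores passed to Cohen's d are invented for illustration:

```python
from math import sqrt
from statistics import mean, variance

# Shared variance (coefficient of determination): square the correlation
r = 0.8
shared_variance = r ** 2   # 0.64, i.e. 64% of the variance is accounted for

def cohens_d(g1, g2):
    """Cohen's d: standardized mean difference, a common effect size
    for comparing two group means (pooled-SD version)."""
    n1, n2 = len(g1), len(g2)
    pooled_sd = sqrt(((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2))
                     / (n1 + n2 - 2))
    return (mean(g1) - mean(g2)) / pooled_sd
```

By convention, r values around .1/.3/.5 (and d values around 0.2/0.5/0.8) are read as small/moderate/large effects, which is how a significant result gets labeled weak or strong.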
Experimental research
Part of: research design
What: An independent variable is manipulated in order to see what effect it will have on a dependent variable. Researchers try to control for any other variables (confounds) that may affect the dependent variable(s). Establishes causation.
Example: A researcher conducts an experimental research study to examine the relationship between caffeine intake and anxiety ratings. The study administers various levels of caffeine (the independent variable) to the low, high, and no caffeine groups. The participants are then asked to report their anxiety levels (the dependent variable). They found that those who had more caffeine reported feeling more anxious.
Hypothesis
Part of: research
What: a formally stated prediction about the characteristics or appearance of variables, or the relationship between variables, that acts as a working template for a particular research study and can be tested for its accuracy.
- Essential to the scientific method
- Hypotheses help to focus the research and bring it to a meaningful conclusion.
- Without hypotheses, it is impossible to test theories.
EXAMPLE: A famous hypothesis in social psychology was generated from a news story, when a woman in New York City was murdered in full view of dozens of onlookers. Psychologists John Darley and Bibb Latané developed a hypothesis about the relationship between helping behavior and the number of bystanders present, and that hypothesis was subsequently supported by research. It is now known as the bystander effect.
Independent t-test
Statistical analysis that compares the means of two independent groups, typically taken from the same population (although they could be taken from separate populations).
- Determines if there is a statistical difference between the two groups’ means
- We make the assumption that if randomly selected from the same population, the groups will mimic each other; the null hypothesis is no difference between the two groups
EXAMPLE: Fred is analyzing the best treatment options for his patient Harold. He reads a study comparing two different types of therapies. After utilizing an independent t-test, the researchers found that there was not a statistically significant difference between the treatment options. Fred decides that both are good options for his patient and thinks about his client's personal variables that might make one better than the other.
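The pooled-variance version of the independent t-test can be sketched as follows, with invented outcome scores for the two therapy groups:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(g1, g2):
    """t statistic for two independent groups (pooled-variance version)."""
    n1, n2 = len(g1), len(g2)
    # Pooling assumes both groups share a common population variance,
    # reasonable if both were randomly sampled from the same population
    pooled_var = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    return (mean(g1) - mean(g2)) / sqrt(pooled_var * (1 / n1 + 1 / n2))

# Hypothetical outcome scores for therapy A and therapy B
t = independent_t([4, 5, 6], [7, 8, 9])
```

Under the null hypothesis of no difference, t should be near zero; the p value for the computed t (with n1 + n2 - 2 degrees of freedom) determines significance.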
Internal consistency
Part of: research design
What: a type of reliability that measures whether several items that propose to measure the same general construct produce similar scores and are free from error.
- usually measured with Cronbach’s alpha.
EXAMPLE: Patient comes in with symptoms of PTSD. You decide to search for a psychological test that is designed to help you to detect and diagnose PTSD. You come across the Posttraumatic Stress Diagnostic Scale (PDS). The test manual indicates that the PDS is a valid measure of PTSD. You look in the test manual of the PDS and find that Cronbach’s alpha is 0.91. This indicates that the PDS has strong internal consistency.
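Cronbach's alpha can be computed from item-level scores using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The scores below are toy data, not real PDS items:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    items: one inner list per scale item, each holding every respondent's
    score on that item.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]          # each respondent's total
    sum_item_var = sum(variance(scores) for scores in items)  # item-by-item variance
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Toy data: three items answered by four respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]])
```

When items rise and fall together across respondents (as here), most of the total-score variance is shared between items and alpha approaches 1; values around .9, like the PDS's 0.91, indicate strong internal consistency.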
Internal validity
Part of: research design
What: The extent to which the observed relationship between variables in a study reflects the actual relationship between the variables. Controlling for confounding variables can increase internal validity, as can random assignment of participants to conditions.
EXAMPLE: Researchers investigated a new treatment for depression using tight controls in terms of who could be a participant. For instance, they did not allow anyone with comorbidity to participate. This increased the study's internal validity. It did, however, jeopardize the ecological validity of the research.
Interrater reliability
Part of: research design
What: a type of reliability that measures the agreement level between independent raters.
- useful with measures that are less objective and more subjective.
- used to account for human error in the form of distractibility, misinterpretation or simply differences in opinion.
EXAMPLE: Three graduate students are performing a naturalistic observation study for a class that examines violent video games and behavior in a group of 9-year-old boys. The students rated the behavior on a scale of 1 (not aggressive) to 5 (very aggressive). However, the ratings were not consistent between the observers. The study lacked interrater reliability.
Measures of central tendency
Part of: statistical analysis
What: The tendency of the data to cluster around the middle of the values on X; provides a statistical description of the center of the distribution.
- Three main measures are used: the mean, mode, and median.
- Mean is the arithmetic average of all scores within a data set.
- Mode is the most frequently occurring score.
- Median is the point that separates the distribution into two equal halves.
- Median and mode are the most resilient to outliers.
EXAMPLE: A researcher is studying the frequency of binge eating in a group of girls suffering from binge eating disorder. To better understand the data that was gathered, they start by calculating the measures of central tendency: the most frequently occurring number of episodes in the group, the average number of episodes, and the number of episodes in the middle of the set. In other words, the mode, median, and mean.
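The binge-eating example can be made concrete with Python's standard library (the episode counts are hypothetical):

```python
from statistics import mean, median, mode

# Hypothetical weekly binge-episode counts for eight participants
episodes = [2, 3, 3, 4, 5, 3, 6, 4]

most_frequent = mode(episodes)   # 3: the most frequently occurring count
middle = median(episodes)        # 3.5: midpoint of the sorted scores
average = mean(episodes)         # 3.75: the arithmetic average
```

Note that the median (3.5) falls between the two middle scores of the sorted list, and that an extreme outlier (say, one participant with 40 episodes) would drag the mean upward while leaving the median and mode unchanged.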
Measures of variability
In statistics, measures of variability describe how the scores in a distribution spread around the central tendency. Three primary measures: range, variance, and standard deviation.
- Range is obtained by taking the two most extreme scores and subtracting the lowest from the highest.
- Variance is the average squared deviation around the mean
- Standard deviation is the square root of the variance and is highly useful in describing variability.
Why: Helps determine which statistical analyses you can run on a data set.
EXAMPLE: A researcher is studying the frequency of binge eating in a group of girls suffering from binge eating disorder. After calculating the measures of central tendency, they decide that they want to know more about the distribution of number of episodes. They decide to calculate the measures of variability. This includes the range, variance, and standard deviation.
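Continuing the same hypothetical episode counts, the three measures of variability can be computed directly (the population versions match the card's "average squared deviation" definition; sample statistics would use `variance`/`stdev`, which divide by n - 1):

```python
from statistics import pvariance, pstdev

# Same hypothetical weekly binge-episode counts
episodes = [2, 3, 3, 4, 5, 3, 6, 4]

rng = max(episodes) - min(episodes)   # range: highest minus lowest score
var = pvariance(episodes)             # average squared deviation around the mean
sd = pstdev(episodes)                 # standard deviation: square root of the variance
```

The standard deviation is the most interpretable of the three because it is in the same units as the original scores (episodes per week), unlike the variance, whose units are squared.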
Nominal/ordinal/interval/ratio measurements
These are four types of measurements seen in statistics.
- Nominal data: categorical labels with no inherent order, such as Republican, Democrat, Independent; may be dichotomous (only two levels, such as male and female)
- Ordinal data (numbers) indicate order only (1st born, 2nd born)
- Interval data: true score data where you know the score a person made and you can tell the actual distance between individuals based on their respective scores, but the measure used to generate the score has no true zero (temperature, F or C, SAT scores)
- Ratio data: interval data with a true zero (age, height, weight, speed)
EXAMPLE: A researcher is creating a questionnaire to measure depression. They include nominal-scale questions (“What is your gender?”), ordinal-scale questions (“Rank your mood today from 1, very unhappy, to 5, very happy”), and ratio-scale questions (“How many hours of sleep do you get on average?”).
Normal curve
Part of: statistics
A normal curve is a normal distribution, graphically represented by a bell-shaped curve.
- A frequency distribution in which most occurrences take place in the middle and taper off on either side
- All measures of central tendency are at the highest point of the curve
- Symmetrical, extremes are at the tails
- Divisible into deviations
- Fits many data sets increasingly well as n approaches infinity
EXAMPLE: A researcher is developing a new intelligence test. After obtaining the results, they found that the scores fell along a normal curve: most participants scored in the middle range with very few obtaining either the highest or lowest scores (scores were normally distributed).
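The shape of the normal curve can be checked by simulation. This sketch draws IQ-style scores (mean 100, SD 15 — conventional intelligence-test scaling, not values from the example study) and verifies the familiar rule that roughly 68% of scores fall within one standard deviation of the mean:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Simulate 100,000 normally distributed IQ-style scores
scores = [random.gauss(100, 15) for _ in range(100_000)]

# Proportion of scores within one SD of the mean (85 to 115): roughly 0.68
within_1sd = sum(85 <= s <= 115 for s in scores) / len(scores)
```

The same simulation extended to two and three standard deviations would recover the rest of the empirical rule (about 95% and 99.7% respectively), with extreme scores out in the tails.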
Probability
A mathematical statement indicating the likelihood that something will happen when a particular population is randomly sampled, symbolized by (p).
The higher the p value, the more likely it is that the phenomenon or event happened by chance. Probability is based on hard data (unlike mere guessing); p falls between 0 and 1.
EXAMPLE: Researchers are conducting a study on the heritability of bipolar disorder. They find that there is a strong genetic link, meaning there is a greater probability of an individual having the disorder if one of their parents also has it.
Parametric v. nonparametric statistical analyses
Parametric statistical analyses: inferential procedures that require certain assumptions about the distribution of scores.
- usually used with scores most appropriately described by the mean
- based on symmetrical distributions
- robust procedures with negligible amounts of error.
- greater statistical power and more likely to detect statistical significance than nonparametric analyses.
Nonparametric statistical analyses involve inferential procedures that do not require stringent assumptions about the parameters of the raw score population represented by the sample data.
- usually used with scores most appropriately described by the median or the mode.
- Nonparametric data have skewed distributions.
EXAMPLE: Researchers set up a study to determine whether there is a correlation between hours of sleep per night and ratings of happiness. Because they used a very small sample, they cannot assume the data are symmetrically distributed and therefore must use a nonparametric test.
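A nonparametric counterpart to the Pearson correlation is Spearman's rho, which uses only the rank order of the scores rather than their raw values. The sketch below uses the no-ties shortcut formula rho = 1 - 6*sum(d^2) / (n(n^2 - 1)); the data in the test are invented:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (no-ties shortcut formula)."""
    def ranks(values):
        # Rank 1 = smallest score; assumes no tied values
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Any strictly increasing relationship, even a skewed or nonlinear one,
# gives a perfect rho of 1.0 because only the ordering matters
rho = spearman_rho([1, 2, 3, 4, 5], [2, 4, 8, 16, 32])
```

Because ranks discard the distances between scores, the test makes no assumption that those distances are symmetrically distributed, which is why it suits the small, possibly skewed sample in the example; the trade-off is the lower statistical power noted above for nonparametric analyses.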