PSYC 523 - Statistics Flashcards
ANOVA
Analysis of variance - A parametric statistical technique used to compare more than two experimental groups at a time (one independent variable with more than two independent groups). Determines whether there is a significant difference among the groups, but does not reveal where that difference lies. You compare the variation between the groups with the variation within the groups. You are testing a hypothesis: the null hypothesis is that all group means are equal. This is better than running multiple t-tests because it controls the Type I error rate. **Measured using an F-ratio, which is the between-group variance divided by the within-group variance.**
Clinical example: Karen is interested in doing a research project on the amount of meditation time and how it affects anxiety. She devises an ANOVA with three groups (15 minutes, 25 minutes, and 35 minutes) and tests the variance of these groups' scores on an anxiety scale.
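A minimal sketch of how such a comparison could be run with SciPy's one-way ANOVA; the group scores below are hypothetical illustration data, not from an actual study.

```python
from scipy import stats

# Hypothetical anxiety scores for the three meditation groups (illustration only)
min_15 = [28, 31, 25, 30, 27]
min_25 = [24, 26, 22, 25, 23]
min_35 = [20, 19, 23, 21, 18]

# One-way ANOVA: F-ratio = between-group variance / within-group variance
f_ratio, p_value = stats.f_oneway(min_15, min_25, min_35)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
# A significant F says the group means differ somewhere, but not which
# specific pairs differ; a post-hoc test would be needed for that.
```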
Clinical v. statistical significance
Clinical significance refers to the meaningfulness of change in a client's life and everyday functioning/symptom reduction (not determined by a mathematical procedure). Statistical significance is reached when p < .05, meaning the likelihood that your results are due to chance is less than 5%. Statistical significance indicates that it is unlikely you have made a Type I error (false positive). Statistical significance does not necessarily mean there is clinical significance, and vice versa.
EXAMPLE: Tony comes to therapy suffering from PTSD. He mentions a new form of therapy that you've never heard of, so when you research it, the studies show that it has demonstrated clinical significance by reducing symptoms in PTSD patients, but the results were not statistically significant. You discuss this with Tony and decide to go ahead with the treatment.
Construct validity
Part of: research design
Construct validity is the degree to which a test or study measures the qualities or constructs it claims to measure. Two parts: Convergent validity: the test correlates highly with other tests that measure the same construct. Divergent validity: the test does not correlate with tests that measure other constructs. To have high construct validity, a test must correlate with measures of the same construct and NOT correlate with measures of other constructs.
Clinical example: Amy comes into your office with symptoms of depression. She says she has taken a test that shows she could have MDD. When you research this test, you see that this test shows construct validity (convergent) with other tests like the Beck Depression Inventory.
Content validity
Part of: research design
Content validity is the degree to which a measure or study includes all of the facets/aspects of the construct that it is attempting to measure. Content validity cannot be measured empirically but is instead assessed through logical analysis. Validity = accuracy. Threats include construct underrepresentation (the measure does not include all facets) and construct-irrelevant variance (irrelevant aspects influence the score). Important because all symptoms or aspects must be represented for the test to measure accurately. Clinical example: A depression scale may lack content validity if it only assesses the affective dimension of depression (emotion-related: decrease in happiness, apathy, hopelessness) but fails to take into account the behavioral dimension (sleeping more or less, eating more or less, energy changes, etc.).
Correlation v. causation
Part of: research design and statistical analysis
Correlation means that a relationship exists between two variables. It can be positive or negative; the coefficient will fall between -1.00 and +1.00. Correlation does not indicate causation - a third (mediating) variable may have produced the relationship, and the relationship can be bi-directional. Causation means that a change in one variable produces a change in the other variable. It is established via controlled experiments, when the independent variable can be manipulated and extraneous variables controlled. Why: Important to be able to interpret research appropriately and not assume causation when there is none.
Clinical example: A study found that minutes spent exercising correlated with lower depression levels. This study was able to show that depression levels and exercise were correlated, but could not go so far as to claim that one causes the other.
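A brief sketch of how such a correlation could be computed with SciPy; the exercise and depression values below are made-up for illustration.

```python
from scipy import stats

# Hypothetical data: weekly minutes of exercise and depression scale scores
exercise_minutes = [0, 30, 60, 90, 120, 150, 180, 210]
depression_score = [22, 20, 18, 17, 15, 14, 12, 11]

r, p = stats.pearsonr(exercise_minutes, depression_score)
print(f"r = {r:.2f}, p = {p:.4f}")
# A strong negative r shows the variables move together, but it cannot
# tell us whether exercise lowers depression, depression lowers exercise,
# or a third variable drives both.
```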
Correlational research
Research method that examines the relationships between variables. Does not establish causal factors. Produces a correlation coefficient ranging from -1.0 to +1.0 depending on the strength and direction of the relationship between the two variables. Statistical tests include Pearson, Spearman, and point-biserial. PROS: inexpensive, produces a wealth of data, encourages future research, precursor to experiments determining causation. CONS: cannot establish causation or control for confounds.
Clinical example: Shelia’s patient Donna suffers from illness anxiety disorder. She brings Shelia an article claiming that eating out of plastic containers causes cancer. After reading the article, Shelia explains that the study referenced in the article is a correlational study, which only shows that there is a relationship between eating out of plastic containers and cancer, but it does not prove that eating out of plastic containers causes cancer.
Cross-sectional design
A type of research design that samples different age groups to look at age-group differences across a dependent variable. Often used in online surveys. Quasi-experimental (participants are selected based on age, not randomly assigned). Advantages include collection of large amounts of data in a short amount of time and low cost. Drawbacks include inability to infer causation or show changes over time.
EXAMPLE: George was looking to study the difference in peer relations and self-esteem in various age groups. He decided to use a cross-sectional design comparing 6-year-olds, 12-year-olds, 18-year-olds, and 25-year-olds.
Dependent t-test
(Paired-samples t-test) Statistical analysis that compares the means of two related groups to determine whether there is a statistically significant difference between those means. Sometimes called a correlated t-test because the data are correlated. Used when the design meets the requirements for a parametric test and involves matched pairs or repeated measures, with only two conditions of the independent variable. It is called "dependent" because the same subjects carry across the manipulation; they take with them personal characteristics that impact the measurement at both points, so the measurements are "dependent" on those characteristics.
Clinical example: A researcher wants to determine the effects of caffeine on memory. They administer a memory test to a group of subjects, have the subjects consume caffeine then administer another memory test. Because they used the same subjects, this is a repeated measures experiment that requires a dependent t-test during statistical analysis.
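A minimal sketch of this repeated-measures comparison using SciPy's paired t-test; the before/after memory scores are hypothetical.

```python
from scipy import stats

# Hypothetical memory scores for the same subjects before and after caffeine
before = [12, 15, 11, 14, 13, 16, 12, 15]
after  = [14, 16, 13, 15, 15, 18, 13, 16]

t, p = stats.ttest_rel(after, before)  # paired (dependent) t-test
print(f"t = {t:.2f}, p = {p:.4f}")
# Each subject serves as their own control, so the two sets of scores
# are correlated rather than independent.
```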
Descriptive v. inferential
Descriptive statistics are used simply to describe and summarize the sample or population; they include measures of central tendency and variability (mean, standard deviation) and can be used with any type of data (experimental and non-experimental). Inferential statistics allow inferences to be made from the sample to the population. The sample must accurately reflect the population (hence the importance of random sampling). Inferential statistics can infer causality and are limited to experimental data. Techniques include hypothesis testing and regression analysis. The statistical results incorporate the uncertainty that is inherent in using a sample to understand an entire population. EXAMPLE: A researcher conducts a study examining the rates of test anxiety in Ivy League students. This is a descriptive study because it is concerned with a specific population. Because the results cannot be generalized to represent all college students, it is not an inferential study.
Double-blind study
A type of experimental design in which both the participants and the researchers are unaware of who is in the experimental condition and who is in the placebo condition. (In contrast to a single-blind study, where only the participants are unaware of who is in the experimental condition.) Double-blind studies eliminate the possibility that the researcher may somehow communicate (knowingly or unknowingly) to a participant which condition they are in, thereby contaminating the results. Example: A study testing the efficacy of a new SSRI for anxiety uses a double-blind design. Neither the experimenters nor the participants are aware of who is in the treatment group and who is receiving a placebo. This setup ensures that the experimenters do not make subtle gestures accidentally signaling who is receiving the drug, and that experimenter expectations cannot affect the study's outcome.
Ecological validity
Extent to which the experimental situation approximates the real-life situation being studied; the applicability of the findings of a study to the real world. Experiments high in ecological validity tend to be lower in reliability because there is less control of the variables in real-world settings. More generalizable. A type of external validity. EXAMPLE: A researcher wants to study the effects of alcohol on sociability, so he administers beer to a group of subjects and has them interact with each other. To increase ecological validity, he decides to carry out the study in an actual bar.
Effect size
Part of: statistical analysis
A measure of the strength of a significant relationship; the proportion of variance accounted for. Indicates whether findings are weak, moderate, or strong. Also called shared variance or the coefficient of determination. Why: Quantifies the effectiveness of a particular intervention relative to some comparison; commonly used in meta-analyses. Tells us more about the meaningfulness of our statistics - how much of the variance is explained by the differences between the variables. Used alongside the p-value. A common effect-size statistic is Cohen's d.
.75-1.0: substantial
.50-.74: moderate
.25-.49: weak
Below .25: not meaningful
Example: A researcher conducts a correlational research study on the relationship between caffeine and anxiety ratings. The study produces a correlation coefficient of 0.8, which is considered a large effect size. The effect size reflects a strong relationship between caffeine and anxiety.
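Since the card mentions Cohen's d, here is a rough sketch of computing it for two groups using the pooled standard deviation; the group data and the cohens_d helper are illustrative assumptions, not from a real study.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical anxiety ratings for a caffeine group and a control group
caffeine = [7, 8, 6, 9, 7, 8]
control  = [5, 6, 5, 7, 6, 5]
print(f"d = {cohens_d(caffeine, control):.2f}")
```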
Experimental research
Part of: research design What: An independent variable is manipulated in order to see what effect it will have on a dependent variable. Researchers try to control for any other variables (confounds) that may affect the dependent variable(s). Establishes causation (not just correlation) - stronger evidence. Example: A researcher conducts an experimental study to examine the relationship between caffeine intake and anxiety ratings. The researcher administers various levels of caffeine (the independent variable) to no-, low-, and high-caffeine groups. The participants are then asked to report their anxiety levels (the dependent variable). Those who had more caffeine reported feeling more anxious.
Hypothesis
Part of: research What: a formally stated prediction about the characteristics or appearance of variables, or the relationship between variables, that acts as a working template for a particular research study and can be tested for its accuracy. Essential to the scientific method. Hypotheses help to focus the research and bring it to a meaningful conclusion. Without hypotheses, it is impossible to test theories. EXAMPLE: A famous hypothesis in social psychology was generated from a news story, when a woman in New York City was murdered in full view of dozens of onlookers. Psychologists John Darley and Bibb Latané developed a hypothesis about the relationship between helping behavior and the number of bystanders present, and that hypothesis was subsequently supported by research. It is now known as the bystander effect.
Independent t-test
Statistical analysis that compares the means of two independent groups (different groups of people). Used when scores meet the requirements of a parametric test, when there are independent samples, and when there are only two conditions of the independent variable (groups). The dependent variable will be interval or ratio.
Determines whether there is a statistical difference between the two groups' means. We assume that if randomly selected from the same population, the groups will mimic each other; the null hypothesis is that there is no difference between the two groups.
EXAMPLE: Tom is a student and tells his therapist that he found a study comparing test scores of students who listened to music they enjoyed before their exam with students who listened to Mozart; those who listened to their favorite music did better. Because the two groups contained different students, an independent t-test was used.
Another example: comparing the memory of those who drink alcohol with that of nondrinkers.
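A minimal sketch of the music-versus-Mozart comparison with SciPy's independent-samples t-test; the exam scores are hypothetical.

```python
from scipy import stats

# Hypothetical exam scores for two separate groups of students
favorite_music = [88, 92, 85, 90, 87, 91]
mozart         = [82, 84, 80, 86, 83, 81]

t, p = stats.ttest_ind(favorite_music, mozart)  # independent-samples t-test
print(f"t = {t:.2f}, p = {p:.4f}")
# Null hypothesis: the two group means do not differ.
```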
Internal consistency
Part of: research design What: A type of reliability - the extent to which different items on a test measure the same ability or trait (testing once). Measures whether several items that propose to measure the same general construct produce similar scores and are free from error. Usually measured with Cronbach's alpha, calculated from all possible split-half configurations (scores ranging from 0 to 1); also measured with split-half reliability and the KR-20.
EXAMPLE: You are doing research on a measurement that will assess for bipolar disorder. You want to make sure your test questions all have internal consistency, i.e., produce the same results and measure the same construct. You test using Cronbach's alpha and find that your assessment does indeed have high internal consistency.
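Cronbach's alpha can be computed directly from an item-score matrix; the sketch below assumes hypothetical questionnaire responses and a small cronbach_alpha helper written for illustration.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: 5 respondents x 4 questionnaire items
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 5, 4, 5],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values near 1 = high internal consistency
```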
Internal validity
Part of: research design What: The extent to which the observed relationship between variables in a study reflects the actual relationship between the variables; the ability to draw inferences from results based on the level of control exhibited in the study - dictated by how well controlled your study is. Controlling for confounding variables can increase internal validity, as can random selection of participants and a large sample size. Threats include attrition, test bias, historical events, and confounding variables. Related forms of validity include content, construct, face, and criterion validity.
EXAMPLE: Researchers investigated a new treatment for depression and wanted to be sure they had internal validity. They set strict rules for the implementation of the treatment and made sure their sample size was large.
Interrater reliability
Part of: research design What: A type of reliability that measures the level of agreement between independent raters. Useful with measures that are less objective and more subjective. Used to account for human error in the form of distractibility, misinterpretation, or simply differences in opinion. May use an ethogram as a key for the behavior observed; agreement is often quantified with the kappa statistic. EXAMPLE: Three graduate students are performing a naturalistic observation study for a class that examines violent video games and behavior in a group of 9-year-old boys. The students rated the behavior on a scale of 1 (not aggressive) to 5 (very aggressive). However, the ratings were not consistent between the observers. The study lacked inter-rater reliability.
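For two raters, agreement corrected for chance can be estimated with Cohen's kappa; a small sketch using scikit-learn's cohen_kappa_score and made-up ratings (the three-rater case would typically use a statistic such as Fleiss' kappa instead).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical aggression ratings (1-5) given by two observers to the same boys
rater_a = [1, 3, 4, 2, 5, 3, 2, 4]
rater_b = [1, 2, 4, 2, 5, 4, 2, 3]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")
# Kappa corrects raw percent agreement for agreement expected by chance;
# values near 1 indicate strong inter-rater reliability.
```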
Measures of central tendency
Part of: statistical analysis What: The tendency of the data to cluster somewhere around the middle of the values on X; provides a statistical description of the center of the distribution. These measures help to summarize the main features of a data set and identify the score around which most scores fall. Three main measures are used: the mean, the mode, and the median. The mean is the arithmetic average of all scores within a data set. The mode is the most frequently occurring score. The median is the point that separates the distribution into two equal halves. The median and mode are the most resilient to outliers. Makes results easier to compare to one another.
EXAMPLE: A researcher is studying the frequency of binge eating in a group of girls suffering from binge eating disorder. To better understand the data that were gathered, they start by calculating the measures of central tendency: the most frequently occurring number of episodes in the group, the average number of episodes, and the number of episodes in the middle of the set - in other words, the mode, mean, and median.
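A short sketch of computing the three measures with Python's standard statistics module; the episode counts are hypothetical.

```python
import statistics

# Hypothetical number of binge-eating episodes per week for each participant
episodes = [2, 3, 3, 4, 5, 3, 6, 2, 3, 4]

print("mean:",   statistics.mean(episodes))    # arithmetic average
print("median:", statistics.median(episodes))  # middle score
print("mode:",   statistics.mode(episodes))    # most frequently occurring score
```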
Measures of variability
In statistics, measures of variability describe how the spread of the distribution varies around the central tendency. Three primary measures: range, variance, and standard deviation. The range is obtained by taking the two most extreme scores and subtracting the lowest from the highest. The variance is the average squared deviation around the mean.
The standard deviation is the square root of the variance and is highly useful in describing variability. Why: Helps determine which statistical analyses you can run on a data set.
EXAMPLE: A researcher is studying the frequency of binge eating in a group of girls suffering from binge eating disorder. After calculating the measures of central tendency, they decide that they want to know more about the distribution of the number of episodes, so they calculate the measures of variability. This includes the range, variance, and standard deviation.
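Continuing the same hypothetical episode counts, the three measures of variability might be computed like this (the variance and standard deviation here use the sample formula, dividing by n - 1):

```python
import statistics

# Same hypothetical episode counts as above
episodes = [2, 3, 3, 4, 5, 3, 6, 2, 3, 4]

value_range = max(episodes) - min(episodes)  # highest minus lowest
variance    = statistics.variance(episodes)  # average squared deviation (sample)
std_dev     = statistics.stdev(episodes)     # square root of the variance
print(value_range, variance, std_dev)
```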
Nominal/ordinal/interval/ratio measurements
These are the four types of measurement scales seen in statistics. Nominal data: dichotomous (only two levels, such as male and female) or categorical (such as Republican, Democrat, Independent). Ordinal data: numbers indicate order only (1st born, 2nd born). Interval data: true score data where you know the score a person made and you can tell the actual distance between individuals based on their respective scores, but the measure used to generate the score has no true zero (temperature in F or C, SAT scores). Ratio data: interval data with a true zero (age, height, weight, speed).
EXAMPLE: A researcher is creating a questionnaire to measure depression. They include nominal-scale questions ("What is your gender?"), ordinal-scale questions ("Rank your mood today from 1 - very unhappy to 5 - very happy"), and ratio-scale questions ("How many hours of sleep do you get on average?").
Normal curve
Part of: statistics A normal curve is a normal distribution, graphically represented by a bell-shaped curve: a frequency distribution in which most occurrences take place in the middle and taper off on either side. All measures of central tendency are at the highest point of the curve. Symmetrical; extremes are at the tails. Divisible into standard deviations. Important for parametric statistics.
EXAMPLE: A researcher is developing a new intelligence test. After obtaining the results, they found that the scores fell along a normal curve: most participants scored in the middle range with very few obtaining either the highest or lowest scores (scores were normally distributed).
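A quick check of the familiar proportions under a normal curve, using SciPy's standard normal distribution (the 68-95-99.7 rule):

```python
from scipy import stats

# Proportion of scores expected within 1, 2, and 3 standard deviations
# of the mean under a normal curve
for k in (1, 2, 3):
    proportion = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(f"within {k} SD: {proportion:.1%}")
```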
Probability
A mathematical statement indicating the likelihood that something will happen when a particular population is randomly sampled, symbolized by p or a percentage. The higher the p value, the more likely that the phenomenon or event happened by chance. Probability is based on hard data (unlike chance); p falls between 0 and 1. A p value of less than .05 is considered statistically significant (less than a 5% chance of a Type I error). EXAMPLE: Researchers create a study comparing the efficacy of CBT versus ACT for depression. The results show a p value of < 0.05, which means the results are statistically significant.
Parametric v. nonparametric statistical analyses
Parametric statistical analyses: inferential procedures that require certain assumptions about the distribution of scores. Usually used with scores most appropriately described by the mean. Based on symmetrical distributions (normal bell curve). Have greater statistical power and are more likely to detect statistical significance than nonparametric analyses. Nonparametric statistical analyses involve inferential procedures that do not require stringent assumptions about the parameters of the raw score population represented by the sample data. Usually used with scores most appropriately described by the median or the mode. Nonparametric data have skewed distributions (not a normal curve). EXAMPLE: Researchers set up a study to determine if there is a correlation between hours of sleep per night and ratings of happiness. Because they used a very small sample, they cannot assume the data are symmetrically distributed and therefore must use a nonparametric test.
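A small sketch contrasting the parametric and nonparametric versions of the correlation in this example, using SciPy; the sleep and happiness values are invented for illustration.

```python
from scipy import stats

# Hypothetical data from a very small sample
sleep_hours = [5, 6, 7, 8, 9, 6]
happiness   = [4, 5, 6, 8, 7, 5]

# Parametric: Pearson correlation (assumes interval data, roughly normal)
r, p_param = stats.pearsonr(sleep_hours, happiness)

# Nonparametric alternative: Spearman rank correlation (no normality assumption)
rho, p_nonparam = stats.spearmanr(sleep_hours, happiness)

print(f"Pearson r = {r:.2f} (p = {p_param:.3f}); "
      f"Spearman rho = {rho:.2f} (p = {p_nonparam:.3f})")
```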