Chapter 11 Flashcards

(47 cards)

1
Q

descriptive statistics

A

refers to a set of techniques for summarizing and displaying data

2
Q

distribution

A

the way the scores are distributed across the levels of that variable

3
Q

histogram

A
  • a graphical display of a distribution
  • it presents the same information as a frequency table but is even quicker and easier to grasp
4
Q

distribution shapes

A
  • symmetrical: left and right halves are mirror images of each other
  • outlier: an extreme score that is much higher or lower than the rest of the scores in the distribution
  • central tendency: the middle of a distribution, the point around which the scores in the distribution tend to cluster (the average)
5
Q

mean

A
  • the sum of the scores divided by the number of scores
  • it is an average
6
Q

median

A
  • the middle score, in the sense that half the scores in the distribution are less than it and half are greater than it
  • find it by ordering the scores from lowest to highest and locating the score in the middle
7
Q

mode

A

the most frequent score in a distribution
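
To make the three measures of central tendency from the last few cards concrete, here is a minimal Python sketch using the standard library's statistics module; the scores are made up purely for illustration.

```python
import statistics

# hypothetical quiz scores, used only to illustrate the three measures
scores = [2, 3, 4, 4, 5, 5, 5, 6, 7, 9]

print(statistics.mean(scores))    # sum of the scores divided by the number of scores: 5.0
print(statistics.median(scores))  # middle of the ordered scores: 5.0
print(statistics.mode(scores))    # most frequent score: 5
```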

8
Q

variability

A

the extent to which the scores vary around their central tendency

9
Q

range

A

the difference between the highest and lowest scores in the distribution

10
Q

standard deviation

A

the average distance between the scores and the mean
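
A small sketch of the two spread measures (range and standard deviation) on the same hypothetical scores; statistics.pstdev treats the list as the whole distribution, which matches the "average distance from the mean" idea most directly.

```python
import statistics

scores = [2, 3, 4, 4, 5, 5, 5, 6, 7, 9]  # hypothetical scores

spread = max(scores) - min(scores)   # range: highest minus lowest score = 7
sd = statistics.pstdev(scores)       # standard deviation: roughly the average distance from the mean
print(spread, round(sd, 2))
```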

11
Q

percentile rank

A

the percentage of scores
in the distribution that are lower than that score
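
A rough illustration of percentile rank on hypothetical scores; the percentile_rank helper is not from the chapter, just a direct translation of the definition.

```python
scores = [2, 3, 4, 4, 5, 5, 5, 6, 7, 9]  # hypothetical distribution

def percentile_rank(score, distribution):
    # percentage of scores in the distribution that are lower than the given score
    lower = sum(s < score for s in distribution)
    return 100 * lower / len(distribution)

print(percentile_rank(6, scores))  # 7 of the 10 scores are below 6, so 70.0
```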

12
Q

z-score

A

the difference between an individual’s score and the mean of the distribution, divided by the standard deviation of the distribution
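
A quick sketch of the z-score definition on the same hypothetical scores; the z_score helper is invented for illustration.

```python
import statistics

scores = [2, 3, 4, 4, 5, 5, 5, 6, 7, 9]  # hypothetical distribution

def z_score(score, distribution):
    # difference between the score and the mean, divided by the standard deviation
    m = statistics.mean(distribution)
    sd = statistics.pstdev(distribution)
    return (score - m) / sd

print(round(z_score(7, scores), 2))  # a score of 7 sits about one standard deviation above the mean
```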

13
Q

differences between groups

A

usually described in terms of the mean and standard deviation of each group or condition, or in terms of effect sizes

14
Q

Cohen’s d

A

the difference between the two means divided by the standard deviation
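
A hedged sketch of Cohen's d, assuming "the standard deviation" is taken as the pooled standard deviation of the two groups (one common convention); the cohens_d helper and the two groups of scores are invented for illustration.

```python
import statistics

def cohens_d(group1, group2):
    # difference between the two means divided by the pooled standard deviation
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    n1, n2 = len(group1), len(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment = [24, 25, 27, 29, 30]  # hypothetical scores
control = [20, 22, 23, 24, 26]
print(round(cohens_d(treatment, control), 2))
```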

15
Q

standard error

A
  • the standard deviation of the group divided by the square root of the sample
    size of the group
  • the standard error is used because, in general, a difference between group means that is greater than two standard errors is statistically significant
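
A minimal illustration of the standard error for one hypothetical group of scores.

```python
import math
import statistics

group = [24, 25, 27, 29, 30]  # hypothetical scores for one group

sd = statistics.stdev(group)       # standard deviation of the group
se = sd / math.sqrt(len(group))    # standard error of the group mean
print(round(se, 2))
# a difference between group means greater than about two standard errors
# is generally statistically significant (the rule of thumb from this card)
```
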
16
Q

planned analysis

A

an analysis that tests a relationship you expected based on your hypothesis

17
Q

exploratory analysis

A
  • an analysis
    that you are undertaking without an existing hypothesis
  • these analyses will help you explore your data for
    other interesting results that might provide the basis for future research
18
Q

null hypothesis testing

A

is a formal approach to
deciding between two interpretations of a statistical relationship in a sample

19
Q

null hypothesis

A

the hypothesis that there is no significant difference between specified populations, any observed difference being due to sampling or experimental error

20
Q

alternative hypothesis

A

the idea that there is a relationship in the population and that the relationship in the sample reflects this relationship in the population

21
Q

p-value

A
  • p values indicate how incompatible the data are with a specified statistical model (the null hypothesis)
  • low p value means that the sample or more extreme result would be unlikely if the null hypothesis were true and leads to the rejection of the null hypothesis
  • p-value that is not low means that the sample or more extreme
    result would be likely if the null hypothesis were true and leads to the retention of the null hypothesis
  • p-value, or statistical significance, does not measure the size of an effect or the importance of a result
  • by itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis
22
Q

α (alpha)

A

the significance level: the criterion for deciding whether a p value is low enough to reject the null hypothesis, conventionally set at .05; it is also the probability of committing a Type I error
23
Q

statistical significance

A

helps determine if observed results in a study are likely due to a real effect or just random chance, with a p-value of 0.05 or less often indicating significance

24
Q

practical significance

A

the importance or usefulness of
the result in some real-world context

25
Q

one-sample t-test

A

used to compare a sample mean (M) with a hypothetical population mean (μ0) that provides some interesting standard of comparison
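
If SciPy is available, scipy.stats.ttest_1samp performs this test; the sample values and comparison mean below are invented for illustration.

```python
from scipy import stats

sample = [6.2, 5.9, 7.1, 6.8, 6.5, 7.0, 5.8, 6.4]  # hypothetical ratings (M is their mean)
mu0 = 6.0                                          # hypothetical population mean used as the standard

result = stats.ttest_1samp(sample, popmean=mu0)
print(result.statistic, result.pvalue)  # t score and its two-tailed p value
```
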
26
Q

test statistic

A

a statistic that is computed only to help find the p value

27
Q

two-tailed test

A
  • we reject the null hypothesis if the t score for the sample is extreme in either direction
  • this test makes sense when we believe that the sample mean might differ from the hypothetical population mean but we do not have good reason to expect the difference to go in a particular direction

28
Q

one-tailed test

A
  • we reject the null hypothesis only if the t score for the sample is extreme in one direction that we specify before collecting the data
  • this test makes sense when we have good reason to expect the sample mean will differ from the hypothetical population mean in a particular direction

29
Q

dependent-samples t-test (paired-samples t-test)

A
  • used to compare two means for the same sample tested at two different times or under two different conditions
  • this comparison is appropriate for pretest-posttest designs or within-subjects experiments
  • this test can also be one-tailed if the researcher has good reason to expect the difference goes in a particular direction
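
A sketch of a paired comparison using scipy.stats.ttest_rel, assuming SciPy is available; the pretest and posttest scores are made up.

```python
from scipy import stats

pretest  = [12, 15, 11, 14, 13, 16, 12, 15]  # hypothetical pretest scores
posttest = [14, 17, 13, 15, 16, 18, 13, 17]  # the same participants after the manipulation

result = stats.ttest_rel(pretest, posttest)
print(result.statistic, result.pvalue)
```
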
30
Q

independent-samples t-test

A
  • used to compare the means of two separate samples (M1 and M2)
  • the two samples might have been tested under different conditions in a between-subjects experiment, or they could be pre-existing groups in a cross-sectional design
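
Likewise, scipy.stats.ttest_ind compares two separate samples; the scores for the two hypothetical conditions are made up.

```python
from scipy import stats

condition_a = [14, 17, 13, 15, 16, 18, 13]  # hypothetical scores from a between-subjects design
condition_b = [12, 15, 11, 14, 13, 16, 12]

result = stats.ttest_ind(condition_a, condition_b)
print(result.statistic, result.pvalue)
```
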
31
Q

analysis of variance (ANOVA)

A

the most common null hypothesis test when there are more than two group or condition means to be compared

32
Q

one-way ANOVA

A
  • used to compare the means of more than two samples (M1, M2 … MG) in a between-subjects design
  • one estimate of the population variance is called the mean squares between groups (MSB) and is based on the differences among the sample means
  • the other is called the mean squares within groups (MSW) and is based on the differences among the scores within each group
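
scipy.stats.f_oneway runs a one-way ANOVA across several groups, assuming SciPy is available; the three groups of scores are invented for illustration.

```python
from scipy import stats

group1 = [4, 5, 6, 5, 7]    # hypothetical scores for three conditions
group2 = [6, 7, 8, 7, 6]
group3 = [8, 9, 7, 9, 10]

f_ratio, p_value = stats.f_oneway(group1, group2, group3)
print(f_ratio, p_value)  # F is the ratio MSB / MSW; p is its p value
```
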
33
Q

post-hoc comparisons

A
  • made after the data have been collected
  • we can estimate how large our power is post hoc, after the data have been collected
  • we use the sample statistic value as a proxy for the true population value so we can determine a location for the test statistic distribution when the null hypothesis is false

34
Q

a priori

A
  • a probability or distribution that is established before any data are collected or evidence is considered, based on prior knowledge or assumptions
  • estimate how large the expected sample statistic will be using a standard effect size
  • with an expected effect size and a chosen significance level, we can calculate how large our sample needs to be to obtain the desired power
  • if the resulting sample size is impractically large, we can choose to increase alpha, employ a stronger manipulation, or select a more homogeneous sample
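
One way to do the sample-size step of an a priori power analysis is with statsmodels (assuming it is installed); the effect size, alpha, and power below are just conventional choices, not values from the chapter.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # expected standardized effect size (Cohen's d)
                                   alpha=0.05,       # chosen significance level
                                   power=0.80)       # desired probability of rejecting a false null
print(round(n_per_group))  # sample size needed in each group
```
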
35
Q

repeated-measures ANOVA

A
  • the basics of the repeated-measures ANOVA are the same as for the one-way ANOVA
  • the main difference is that measuring the dependent variable multiple times for each participant allows for a more refined measure of MSW

36
Q

factorial ANOVA

A
  • the basics of the factorial ANOVA are the same as for the one-way and repeated-measures ANOVAs
  • the main difference is that it produces an F ratio and p value for each main effect and for each interaction

37
Q

Type I error

A
  • rejecting the null hypothesis when it is true
  • means that we have concluded that there is a relationship in the population when in fact there is not
  • a false positive
  • the probability of a Type I error is equal to the significance level α (typically .05)

38
Q

Type II error

A
  • retaining (failing to reject) the null hypothesis when it is false
  • means that we have concluded that there is no relationship in the population when in fact there is a relationship
  • a false negative

39
Q

statistical power

A

the probability of rejecting the null hypothesis given the sample size and expected relationship strength
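
Power can also be illustrated by simulation: repeatedly draw samples from populations with a known difference and count how often the null hypothesis is rejected. This sketch assumes NumPy and SciPy and uses an invented effect size and sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, alpha, sims = 0.5, 30, 0.05, 5000  # invented effect size, per-group n, significance level
rejections = 0
for _ in range(sims):
    group1 = rng.normal(0.0, 1.0, n)   # control population
    group2 = rng.normal(d, 1.0, n)     # treatment population shifted by d standard deviations
    if stats.ttest_ind(group1, group2).pvalue < alpha:
        rejections += 1
print(rejections / sims)  # estimated power: proportion of simulated studies that rejected the null
```
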
40
Q

criticisms of null hypothesis testing

A
  • researchers’ misunderstanding: the p value is widely misinterpreted as the probability that the null hypothesis is true
  • the strict convention of rejecting the null hypothesis when p is less than .05 and retaining it when p is greater than .05 makes little sense
  • even when understood and carried out correctly, it is simply not very informative

41
Q

confidence intervals

A

a range of values that is computed in such a way that some percentage of the time (usually 95%) the population parameter will lie within that range
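
A minimal sketch of a 95% confidence interval for a mean, computed as the sample mean plus or minus the critical t value times the standard error; the sample values are invented and SciPy is assumed.

```python
import math
import statistics
from scipy import stats

sample = [6.2, 5.9, 7.1, 6.8, 6.5, 7.0, 5.8, 6.4]  # hypothetical scores

m = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)    # critical t for a 95% interval

print(m - t_crit * se, m + t_crit * se)  # interval expected to contain the population mean 95% of the time
```
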
42
Q

one-sided test

A
  • has more power than a two-sided test
  • the critical value in a one-sided test is closer to the mean of the test statistic distribution, making it more likely that a test statistic in the expected direction will fall in the critical region

43
Q

parametric test

A
  • involves assumptions about the shape and parameters of the population distribution
  • is usually more powerful than a nonparametric test

44
Q

effect size

A
  • refers to the strength of association between variables
  • reporting effect size provides a scale of values that is consistent across all types of studies to examine the strength of the treatment

45
Q

different effect sizes

A
  • small effects: near r = .15
  • medium effects: near r = .30
  • large effects: above r = .40

46
Q

squared correlation coefficient

A
  • the squared value of the correlation coefficient, r²
  • it converts the r value into a percentage
  • tells us the percent of shared variance between the two variables
  • the percent of shared variance is important when examining effect sizes
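
A quick illustration, assuming SciPy: compute r with scipy.stats.pearsonr and square it to get the proportion of shared variance; the x and y values are made up.

```python
from scipy import stats

x = [2, 4, 5, 7, 8, 10]   # hypothetical scores on one variable
y = [1, 3, 4, 6, 9, 11]   # hypothetical scores on a second variable

r, p = stats.pearsonr(x, y)
print(r ** 2)  # proportion of variance the two variables share (multiply by 100 for a percentage)
```
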
47
Q

non-significant results can be attributed to:

A
  1. procedures: the manipulation or measures may not have worked as intended
  2. alpha level: we are more likely to overlook an effect in the population when our significance level is very low
  3. sample size: when the sample is small, it is harder to detect a relationship between variables
  4. effect size: the true effect size being very small is another reason for non-significant results