Chapter 6/7 - Key Words Flashcards

(27 cards)

1
Q

RANDOM SAMPLING

A

Making sure that everyone in the population has an equal chance of getting selected for your sample.

Theoretically possible, but practically impossible

We just have to do our best, and admit that our efforts often have limitations
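A quick sketch of the idea in code, using Python's standard library. The population of ID numbers is hypothetical; the point is that `random.sample` gives every member an equal chance of selection.

```python
import random

# Hypothetical population: ID numbers for 10,000 residents.
population = list(range(10_000))

# random.sample draws without replacement, so every member has
# an equal chance of being selected -- the textbook ideal.
sample = random.sample(population, k=100)

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- no one was drawn twice
```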

2
Q

PROBABILITY

A

The likelihood of something happening.

3
Q

REPRESENTATIVE SAMPLE

A

A sample which accurately reflects the characteristics of the population.

For example: a sample of 100 Bakersfield residents in which 94 are Caucasian is not representative. According to the latest US Census, the number of Caucasians in a sample that size should be around 57.

“Okay, so what if I get a sample that has 62 Caucasians? Is that still too many?” The answer is on the next slide.
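The arithmetic behind the example, with the card's numbers plugged in (the 57% figure is the census proportion the card cites; everything here is illustrative):

```python
# Sketch of the Bakersfield example: compare the sample's count
# to what the census benchmark would predict.
census_prop = 0.57   # proportion of Caucasian residents per the US Census
sample_size = 100
observed = 94        # Caucasian residents in the unrepresentative sample

expected = round(census_prop * sample_size)  # 57
gap = observed - expected                    # 37 -- a huge mismatch

print(expected, gap)
```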

4
Q

“IT DEPENDS”

A

There are some statistics tools that help answer that question with more certainty than we would get by eyeballing it and guessing. We’ll talk about one at the end of the semester.

Most researchers do their best to gather a random sample, then they acknowledge in their writing that their sample (just like everyone else’s) has its imperfections.

5
Q

“IT STILL DEPENDS”

A

Often, it also depends on what your study’s variables are. If I’m studying helping behaviors and my sample is almost all Asian-Americans and Caucasians, then I’m either ignoring other race/ethnic groups (that’s usually bad) or implicitly claiming they don’t show helping behaviors (that’s worse). If my study is about computer science graduate students and the overwhelming majority of these students are in one of these two race/ethnic categories, then it’s okay for my sample to reflect that.

6
Q

SAMPLING ERROR

A

”Hey, we were just talking about this!”

Sampling error is when your randomly-drawn sample STILL doesn’t accurately reflect the population.

You will always have at least some sampling error.

YOU CAN REDUCE SAMPLING ERROR BY (1) GETTING A LARGER SAMPLE and (2) CHECKING YOUR SAMPLE FOR FLAWS BEFORE IT’S ALL COLLECTED.
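Point (1) can be demonstrated with a small simulation. The population below is simulated, not real data; the takeaway is that bigger samples produce sample means that land closer to the true population mean.

```python
import random
import statistics

random.seed(0)

# Simulated population with a known mean, so we can measure error directly.
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)

def avg_sampling_error(n, trials=200):
    """Average |sample mean - population mean| over many random samples."""
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(trials)]
    return statistics.mean(errors)

small = avg_sampling_error(25)    # error with a small sample
large = avg_sampling_error(400)   # error with a much larger sample
print(small > large)              # True -- bigger samples shrink sampling error
```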

7
Q

INTRODUCTION TO HYPOTHESIS TESTING

A

In statistics, we may have to test our sample to see if it accurately represents the population.

We also have to test multiple samples against each other to see if our manipulation of an independent variable produced “significant” changes.

We do this through “hypothesis testing.”

8
Q

REGION OF REJECTION

A

The section of a distribution so far away from the hypothesized estimate that, if your sample statistic (e.g., a mean or a correlation coefficient) falls there, you can conclude it is significantly different from the originally hypothesized estimate.

9
Q

CRITICAL VALUE

A

The value that marks the boundary of the region of rejection.

You’re probably wishing I would just tell you exactly where that boundary is, but each study sets its own boundary according to its own data analysis plan.

10
Q

TWO-TAILED TEST

A

A hypothesis test that does not predict whether scores will be higher or lower.

Most tests are two-tailed tests. In fact, two-tailed tests are so overwhelmingly used that I won’t test you on one-tailed tests.

11
Q

ONE-TAILED TEST

A

A hypothesis test that predicts the direction of the score’s change.

12
Q

NULL HYPOTHESIS

A

A hypothesis that “NO SIGNIFICANT DIFFERENCE” will be found between the sample and the population.

13
Q

ALTERNATIVE HYPOTHESIS

A

A hypothesis that the sample’s scores will be significantly different from the population’s.

(That’s actually a really choppy definition, but it works for the time being)

14
Q

NONREJECTION REGION

A

Picture the usual diagram of a normal distribution. At the bottom center is a symbol that looks like the letter “u”: this is μ, the symbol for the population mean. Your null hypothesis is that the mean from your sample is equal to the population mean. If your sample mean is close to the population mean, it will fall in the “nonrejection region” and you’ll “fail to reject the null hypothesis.” If your sample mean is far lower or far higher than the population mean, it may fall in the “rejection region” (the shaded tails of the curve). If that happens, you will “reject the null hypothesis.”

The critical value is the boundary between the nonrejection region and the rejection region. Here there are two boundaries because the researcher used a two-tailed test. Most of our tests are two-tailed, because we’re looking to see if our sample mean is significantly higher OR significantly lower than the population mean. In a one-tailed test the researcher only has one rejection region.
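The two-tailed decision rule described above can be sketched in a few lines. The critical values here are made-up numbers for illustration:

```python
# Hypothetical two-tailed setup: population mean mu with two critical values.
mu = 100.0             # population mean (the "u" symbol, mu)
critical_low = 95.0    # lower critical value
critical_high = 105.0  # upper critical value

def decision(sample_mean):
    # Inside the nonrejection region: fail to reject the null.
    if critical_low <= sample_mean <= critical_high:
        return "fail to reject the null hypothesis"
    # In either tail's rejection region: reject the null.
    return "reject the null hypothesis"

print(decision(101.2))  # fail to reject the null hypothesis
print(decision(91.8))   # reject the null hypothesis
```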

15
Q

Z-TEST

A

A statistical test that compares a sample mean to a known population mean and standard deviation.

This test is rare, and I will not be going over it. It is extremely uncommon in statistics to have a known population mean; most population means are estimates based on the means of large samples.
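For the curious, the formula itself is short: z = (sample mean − μ) / (σ / √n). Every number below is hypothetical:

```python
import math

# Hypothetical z-test: compare a sample mean to a known population
# mean and standard deviation.
mu = 100.0          # known population mean
sigma = 15.0        # known population standard deviation
n = 36              # sample size
sample_mean = 106.0

z = (sample_mean - mu) / (sigma / math.sqrt(n))
print(round(z, 2))  # 2.4 -- beyond the usual +/-1.96 two-tailed cutoff
```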

16
Q

ALPHA

A

The probability value or “p-value” that marks the boundary of your region of rejection

The most common alpha we use is .05. You often see this written out as
α = .05

This means that any p-value result that is less than .05 is in our rejection region, and we will reject the null hypothesis.
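The comparison in code (the helper function `significant` is hypothetical, just for illustration):

```python
# Reject the null only when p < alpha; p exactly equal to alpha
# counts as nonsignificant.
def significant(p_value, alpha=0.05):
    return p_value < alpha

print(significant(0.03))  # True  -- in the rejection region
print(significant(0.05))  # False -- exactly alpha is not below it
print(significant(0.20))  # False -- fail to reject the null
```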

17
Q

SIGNIFICANT

A

“Significant” means having a p-value below your alpha value.

Here’s another way to understand it: when your sample’s data falls far from the population estimate, the probability of getting a sample result that far away (the p-value) is even lower than the probability level we’re using as the boundary for rejecting the null hypothesis (the alpha value).

When that happens, we call the result “statistically significant.”

18
Q

NONSIGNIFICANT

A

“Nonsignificant” means having a p-value equal to or above your alpha value.

19
Q

TYPE I ERROR

A

Rejecting the null hypothesis when it’s actually true.

Basically, this means saying “Hey, I’ve discovered something!” when you’ve actually just received a random result.

20
Q

TYPE II ERROR

A

Failing to reject the null hypothesis when it’s actually false.

This means that you’ve said “My result is just random, it doesn’t mean anything” when you’ve actually discovered something.

21
Q

TYPE I ERROR VS. TYPE II ERROR STORY TIME

A

A few years ago I read a statistics question that asked “Which is worse, type I or type II error?” I’m pretty sure I know what the question’s author would say, but the truth is that it depends on your situation. If I completed an experiment and the data indicated that low-sodium diets reduce depression symptoms, then I would report those results. This would create a huge mental health story among the other researchers, and they would copy my experiment and try it out with their own samples (that might sound like plagiarism or theft, but this process, called “replication,” is key to improving research).

22
Q

TYPE I ERROR VS. TYPE II ERROR STORY TIME 2

A

Now, let’s say that 7 different researchers copied my experiment design with their own samples, and all 7 of them received sample results that showed no significant change in depression symptoms or levels. It now looks like I made a type I error; I rejected a null hypothesis that was actually true. I did that because my data genuinely indicated a significant result, so if I could show my original data to other researchers and statisticians then they would understand that it was just a fluke result due to sampling error (which is inevitable) and I wouldn’t be blamed for anything.

23
Q

TYPE I ERROR VS. TYPE II ERROR STORY TIME 3

A

Now, let’s see a type II error story. If I was researching which diets could reduce autism symptoms and I used 5 different samples and gave each sample a different diet for 3 months, then found that none of the diets significantly reduced symptoms, then I would report those results. The researchers would see my results and probably think “Yeah, it makes sense that none of these diets reduced autism symptoms, because we’ve never had a truly successful diet-based treatment for that disorder.” The researchers would go back to whatever projects they were working on before.

24
Q

TYPE I ERROR VS. TYPE II ERROR STORY TIME 4

A

But let’s say that one of those 5 diets actually DID work, but only after staying on it for at least 6 months. I would have committed a type II error because I failed to reject the null hypothesis when it was actually false the whole time. There’s a chance that no researcher would ever try out that diet with a sample of patients with autism for 6 straight months, so we missed out on a chance to develop an effective treatment for the disorder.

25
Q

TYPE I ERROR VS. TYPE II ERROR STORY TIME 5

A

If you let those two stories sink in a bit, you’ll understand why the idea that “type I error is always worse than type II error” is inaccurate. I mean, which is worse: telling everyone that you saw a UFO when it was just a test aircraft from Edwards AFB, or actually seeing a UFO but telling yourself, “I bet it’s just one of those rockets that the rich Tesla® guy always shoots off around here”? Again, it just depends on the situation.

By the way, for those of you still wondering if it’s true that no diet has successfully reduced autism symptoms: some small studies have claimed significant differences, but no major study has backed up those claims. The most widely accepted research article on the subject is here: https://pediatrics.aappublications.org/content/pediatrics/139/6/e20170346.full.pdf
26
Q

POWER

A

The probability that you’ll correctly reject a false null hypothesis, which means avoiding a type II error.

Of course, power can never be a perfect proportion of 1.00 or 100%. If that happened, then your p-value would need to be a positive number less than zero in order to reject the null hypothesis, and obviously that causes serious problems with mathematics.
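One common way to estimate power is by simulation. This sketch assumes a hypothetical scenario where the true mean really does differ from the null mean, so rejecting the null is the correct decision:

```python
import random
import statistics

random.seed(1)

# Monte Carlo sketch: the null says the mean is 100, but the true
# mean is really 106, so the null is false.
mu_null, true_mu, sigma, n = 100.0, 106.0, 15.0, 36
z_crit = 1.96  # two-tailed cutoff for alpha = .05

def rejects_null():
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    z = (statistics.mean(sample) - mu_null) / (sigma / n ** 0.5)
    return abs(z) > z_crit

trials = 2000
power = sum(rejects_null() for _ in range(trials)) / trials
print(0 < power < 1)  # True -- power is high here, but never a perfect 1.00
```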
27
Q

“MY HEAD HURTS”

A

So, like I’ve told you in class, most of these concepts are easier to understand once you start performing the calculations.

The book talks about the z-test, but I hate the z-test because it’s so unrealistic to use in the real world. I’ve read over hundreds of experiments and correlational studies, and I’ve NEVER seen a z-test being used. Why does the book teach it? Because if you’re interested in the theory behind statistics, it’s a useful stepping stone toward later tests.