Final Flashcards

(84 cards)

1
Q

the way we measure something determines the …

A

type of data we get

2
Q

levels of measurement include

A

nominal, ordinal, interval/ratio

3
Q

types of data include

A

qualitative, ranked, quantitative

4
Q

qualitative categories

A

nominal

5
Q

ranked

A

ordinal

6
Q

puts observations on a scale with a zero point

A

interval/ratio

7
Q

words or codes that represent a category

A

qualitative

8
Q

numbers that indicate order/standing

A

ranked

9
Q

indicate amount or count

A

quantitative

10
Q

what summarizes and organizes data

A

descriptive stats

11
Q

examples of descriptive stats

A

count, central tendency, variability, correlation

12
Q

uses a small sample to estimate something about a population; depends on sample quality and uses hypothesis testing

A

inferential

13
Q

what tests are usually used for inferential stats

A

ANOVA and t-test

14
Q

isolated numbers separated by gaps

A

discrete

15
Q

values have no restrictions and can change continuously

A

continuous variables

16
Q

continuous variables, when rounded, are

A

approximate numbers

17
Q

manipulated by the experimenter

A

independent var

18
Q

something believed to be influenced by the independent variable

A

dependent

19
Q

a variable that the experimenter has failed to account for, which compromises the interpretation of a study

A

confounding variable

20
Q

a comprehensive group

A

pop

21
Q

a subgroup that we use to infer/estimate things about a pop

A

sample

22
Q

measures of the middle (mean, median, mode)

A

central tendency

23
Q

variance and standard deviation

A

variability

24
Q

sum of all scores divided by n; very susceptible to skew

A

mean

25
splits data into 2 halves; somewhat susceptible to skew
median
26
most common value, not susceptible to skew, and is binned
mode
27
simplest measure of var.
range
28
looks at the differences between values and the mean
variance
29
square root of variance
standard deviation
30
deviations from the mean, squared, then added together
sum of squares
31
not affected by outliers
interquartile range
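A minimal Python sketch of how the measures in cards 24-31 could be computed; the scores and the n - 1 variance denominator are assumptions for illustration, not taken from the cards.

import numpy as np
from scipy import stats

scores = np.array([2, 4, 4, 5, 7, 9, 11])

n = len(scores)                                  # number of observations
mean = scores.sum() / n                          # sum of all scores divided by n
median = np.median(scores)                       # splits data into 2 halves
mode = stats.mode(scores, keepdims=False).mode   # most common value
data_range = scores.max() - scores.min()         # simplest measure of variability
sum_of_squares = ((scores - mean) ** 2).sum()    # squared deviations from the mean, added together
variance = sum_of_squares / (n - 1)              # sample variance (n - 1 denominator is an assumption here)
sd = variance ** 0.5                             # square root of variance
iqr = stats.iqr(scores)                          # interquartile range, not affected by outliers

print(n, mean, median, mode, data_range, sum_of_squares, variance, sd, iqr)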
32
symbol for number of observations
n
33
one datum
X
34
mean
x bar
35
variance
s squared
36
SD
s
37
standard normal curve percentages, in order
0.1, 2.1, 13.6, 34.1, 34.1, 13.6, 2.1, 0.1
38
standard normal curve has mean of
0
39
standard normal curve has an SD of
1
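The eight percentages in card 37 can be checked against the areas between whole-SD cutoffs on a curve with mean 0 and SD 1; the SciPy call below is an illustrative assumption, not part of the cards.

from scipy.stats import norm

cutoffs = [-3, -2, -1, 0, 1, 2, 3]
areas = [norm.cdf(-3)]                                   # tail beyond -3 SD
areas += [norm.cdf(hi) - norm.cdf(lo)                    # the six 1-SD bands
          for lo, hi in zip(cutoffs[:-1], cutoffs[1:])]
areas += [1 - norm.cdf(3)]                               # tail beyond +3 SD

print([round(100 * a, 1) for a in areas])
# [0.1, 2.1, 13.6, 34.1, 34.1, 13.6, 2.1, 0.1]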
40
can distributions be skewed, and if so, how
yes, negatively or positively
41
distributions can be peaky or heavy-tailed
kurtosis
42
very flat, with long tails
platykurtic
43
pointy/peaky
leptokurtic
44
just right
mesokurtic
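As a rough illustration of cards 41-44 (the distributions and seed below are assumptions), SciPy's excess kurtosis comes out negative for flat shapes, near 0 for a normal curve, and positive for peaked, long-tailed shapes.

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
samples = {
    "platykurtic (flat)": rng.uniform(size=10_000),
    "mesokurtic (just right)": rng.normal(size=10_000),
    "leptokurtic (peaky)": rng.laplace(size=10_000),
}
for name, data in samples.items():
    print(name, round(kurtosis(data), 2))   # excess kurtosis (about 0 for a normal curve)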
45
Kolmogorov-Smirnov and Shapiro-Wilk are tests of
normality
46
a Q-Q plot is a good visual method for double-checking data, especially for
large n
47
is K-S or S-W better
S-W
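A sketch of the checks in cards 45-47, assuming simulated data and SciPy/Matplotlib; Shapiro-Wilk and Kolmogorov-Smirnov give p-values, while the Q-Q plot is the visual double-check.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=50, scale=10, size=40)      # simulated scores

w, p_sw = stats.shapiro(data)                     # Shapiro-Wilk
d, p_ks = stats.kstest(data, "norm",              # Kolmogorov-Smirnov vs. a normal curve
                       args=(data.mean(), data.std(ddof=1)))
print(f"S-W p = {p_sw:.3f}, K-S p = {p_ks:.3f}")

stats.probplot(data, dist="norm", plot=plt)       # Q-Q plot for the visual check
plt.show()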
48
describes a relationship between two variables
correlation
49
Positive relationships are ones
where an increase in one variable predicts an increase in the other
50
Negative relationships are ones
where an increase in one variable predicts a decrease in the other
51
most effective way of presenting relationship data
scatterplots
52
relationships best described by lines
linear relationship
53
best described with curves
curvilinear
54
Pearson correlation varies from
-1 to 1
55
Pearson correlation
* Uses two variables * Variables are both quantitative * Variable relationships are linear * Minimal skew/no large outliers * Must observe the whole range for each variable
56
parametric analysis
Pearson
57
nonparametric is
Spearman's rank, Kendall's tau-b, eta
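A small sketch contrasting the parametric and nonparametric correlations in cards 56-57; the x/y values are invented, and eta is left out because SciPy has no single call for it.

import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 3.2, 4.8, 5.1, 5.9, 7.2, 8.3])

r, p_r = stats.pearsonr(x, y)        # parametric: assumes linear, quantitative, minimal skew
rho, p_rho = stats.spearmanr(x, y)   # Spearman's rank
tau, p_tau = stats.kendalltau(x, y)  # Kendall's tau-b (the default variant)

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, Kendall tau-b = {tau:.2f}")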
58
Random samples are not casual or haphazard; getting truly random samples requires care
sampling
59
is the property of a dataset having variability that is similar across its whole range
homoskedasticity
60
opposite of homoscedastic
heteroskedastic
61
is used when surveying, to obtain a “snapshot” of the population
random sampling
62
is a process used in an experiment to minimize bias in your experimental groups
random assignment
63
“Regardless of the shape of the population, the shape of the sampling distribution of the mean approximates a normal curve if the sample size is large enough”
the central limit theorem
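A quick simulation of card 63's central limit theorem (the population, sample size, and seed are all assumptions): the population is strongly skewed, yet the sampling distribution of the mean comes out close to normal.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
population = rng.exponential(scale=2.0, size=100_000)      # heavily right-skewed population

sample_means = [rng.choice(population, size=50).mean()     # means of many samples of 50
                for _ in range(2_000)]

print("population skew:", round(stats.skew(population), 2))
print("skew of the sampling distribution of the mean:", round(stats.skew(sample_means), 2))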
64
type 1 error is
false alarm/ false positive
65
type 2 error
miss/ false negative
66
assumptions for binomial test
* All cases are mutually independent * All samples have the same distribution * You know the probability of the population
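An illustrative binomial test that fits card 66's assumptions; the counts and the hypothesized probability of 0.5 are made up.

from scipy.stats import binomtest

# 38 "successes" out of 50 independent cases, tested against a known population p of 0.5
result = binomtest(k=38, n=50, p=0.5, alternative="two-sided")
print(result.pvalue)    # small p-value: this outcome is unlikely if p really is 0.5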
67
levels of confidence are often what percentages
95 and 99
68
what percentage for level of confidence is weaker
99
69
are t-tests parametric or nonparametric
parametric
70
When you want to compare a sample mean to some known or hypothesized value
one-sample t-test
71
If you want to compare two groups to one another
independent samples t-test
72
If you want to see how a group changes over time
repeated measures t-test
73
degrees of freedom for one sample t-test
n-1
74
degrees of freedom for independent samples t-test
n-2
75
degrees of freedom for paired samples t-test
n/2-1
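A sketch of the three t-tests in cards 70-75 (scores invented), with the degrees of freedom the cards give when n counts the total number of scores.

import numpy as np
from scipy import stats

before = np.array([12.0, 14.5, 11.0, 13.2, 15.1, 12.8])
after = np.array([13.1, 15.0, 12.2, 14.0, 15.9, 13.5])
other_group = np.array([10.2, 11.8, 12.5, 9.9, 11.1, 10.7])

one_sample = stats.ttest_1samp(before, popmean=12.0)   # vs. a hypothesized value; df = n - 1
independent = stats.ttest_ind(before, other_group)     # two groups; df = n - 2
repeated = stats.ttest_rel(before, after)              # same group over time; df = n/2 - 1

print(one_sample.pvalue, independent.pvalue, repeated.pvalue)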
76
The exact significance is calculated from all potential distributions. It is very computationally intensive but works well with a small N
exact sig
77
calculated using an estimated curve; inaccurate for small N but approaches the exact as N grows
asymptotic sig
78
uses a random process to estimate the significance using areas under the curve. Less computationally intensive than exact at high N, but not perfectly consistent
Monte Carlo sig
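A rough Monte Carlo sketch in the spirit of card 78 (the groups and shuffle count are assumptions): shuffle the group labels many times and count how often a mean difference at least as large as the observed one appears by chance.

import numpy as np

rng = np.random.default_rng(3)
group_a = np.array([5.1, 6.3, 5.8, 7.0, 6.6])
group_b = np.array([4.2, 4.9, 5.0, 4.4, 5.3])

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

hits, n_shuffles = 0, 10_000
for _ in range(n_shuffles):
    rng.shuffle(pooled)                                 # random reassignment of scores to groups
    diff = pooled[:5].mean() - pooled[5:].mean()
    if abs(diff) >= abs(observed):
        hits += 1

print("Monte Carlo p ≈", hits / n_shuffles)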
79
the F-ratio is the variability between groups divided by
variability within groups
80
degrees of freedom for one factor ANOVA total df is
number of scores - 1
81
degrees of freedom for one factor ANOVA between groups df
number of groups - 1
82
degrees of freedom for one factor ANOVA within groups df
number of scores - number of groups
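A sketch of a one-factor ANOVA matching cards 79-82 (the three groups are invented): the F-ratio is between-group over within-group variability, and the degrees of freedom follow the formulas above.

import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.5, 6.1, 5.0]),
          np.array([7.2, 6.8, 8.0, 7.5]),
          np.array([5.9, 6.2, 6.0, 6.4])]

f_stat, p_value = stats.f_oneway(*groups)   # between-group / within-group variability

n_scores = sum(len(g) for g in groups)
df_total = n_scores - 1                     # number of scores - 1
df_between = len(groups) - 1                # number of groups - 1
df_within = n_scores - len(groups)          # number of scores - number of groups

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, df = ({df_between}, {df_within}), total df = {df_total}")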
83
rejection of the null in an ANOVA only means that the population means are not all
equal
84
when are post hoc tests done
after the main analysis