Introduction Flashcards

1
Q

Definition of falsifiable

A

Can be proved to be false

2
Q

Parsimonious

A

Using the simplest model that adequately explains the data

3
Q

Definition of theory

A

A set of principles that explains a topic and from which new hypotheses can be generated

4
Q

Descriptive statistics

A

Summarise a collection of data without making inferences beyond it

5
Q

Inferential stats

A

Draw inferences about a population from a sample, via estimation or hypothesis testing
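A rough Python sketch with made-up numbers (numpy and scipy assumed) contrasting the two, for illustration only:

import numpy as np
from scipy import stats

sample = np.array([4.8, 5.1, 5.5, 4.9, 5.3, 5.0, 5.2])

# Descriptive: just summarise the sample itself
print(sample.mean(), sample.std(ddof=1))

# Inferential: test a hypothesis about the population the sample came from
t, p = stats.ttest_1samp(sample, popmean=5.0)
print(t, p)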

6
Q

Quantitative

A

Measured on an interval/ratio scale, or as ordinal data

7
Q

Qualitative

A

Assign objects into labelled groups without natural ordering

8
Q

Interval variables

A

Equal intervals between values but no true zero point (e.g. temperature in °C)

9
Q

Ratio variables

A

Equal intervals with a true zero point (e.g. age, reaction time)

10
Q

Binary

A

2 categories

11
Q

Nominal

A

More than 2 categories

12
Q

Average used for nominal data

A

Mode

13
Q

Ordinal data

A

More than 2 categories with a natural order (e.g. degree classes: 1st, 2:1, 2:2)

14
Q

Average used for ordinal data

A

Median
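A rough sketch of cards 12 and 14 with hypothetical data, using Python's standard statistics module:

import statistics

eye_colours = ["brown", "blue", "brown", "green", "brown"]   # nominal data
print(statistics.mode(eye_colours))                          # -> "brown"

degree_classes = [1, 2, 2, 3, 2, 4, 1]   # ordinal data coded as ranks (1st = 1, 2:1 = 2, ...)
print(statistics.median(degree_classes))                     # -> 2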

15
Q

Why does variable type matter?

A

It determines which statistical tests can be used

16
Q

Measurement error

A

Discrepancy between the actual value and the value recorded

17
Q

Systematic variance

A

Variation that can be explained by the model, i.e. due to the experimental manipulation (the independent variable)

18
Q

Random error

A

Unsystematic variation not explained by the model (chance fluctuation)

19
Q

What is validity

A

How well a measure actually measures what it is supposed to measure

20
Q

Problem with hypothesis testing

A

Encourages all-or-nothing thinking: rejecting the null hypothesis doesn't prove the alternative hypothesis is true, and failing to reject it doesn't prove the null is true

21
Q

One-tailed when to reject H0

A

Reject the null hypothesis if the test statistic falls in the extreme 5% of the predicted tail

22
Q

Two tailed when to reject H0

A

Reject the null hypothesis if the test statistic falls in either extreme 2.5% tail
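An illustrative sketch (scipy assumed) of the corresponding critical z-values at alpha = .05:

from scipy.stats import norm

alpha = 0.05
z_one_tailed = norm.ppf(1 - alpha)       # ~1.645: the whole 5% sits in one tail
z_two_tailed = norm.ppf(1 - alpha / 2)   # ~1.960: 2.5% in each tail
print(z_one_tailed, z_two_tailed)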

23
Q

Type 1 error

A

Rejecting a true null hypothesis: incorrectly concluding that variance is accounted for by the model. Acceptable probability is p = .05 (the alpha level)

24
Q

Type 2 error

A

Failing to reject a false null hypothesis: incorrectly concluding that the variance is not accounted for by the model. Acceptable probability is p = .2 (the beta level)
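An illustrative simulation sketch (made-up data, numpy and scipy assumed): when the null hypothesis really is true, roughly 5% of tests still reject it at alpha = .05, i.e. the Type 1 error rate:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(2000):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)                  # same population, so H0 is true
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(false_positives / 2000)                 # should come out close to .05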

25
Q

Benefit of one-tailed (error)

A

Lower chance of a Type 2 error and more power, BUT the effect is only tested in one direction, so you have to be confident about the direction of the result

26
Q

Meaning of effect size

A

The degree to which the mean of H1 differs from the mean of H0 in terms of SD, OR how much variance is explained

27
Q

Ways to reduce type 1 error

A

Look at effect size (standardises results and is not reliant on sample size)

28
Q

Calculate Cohen's d

A

d = (Mean1 - Mean2) / pooled SD

29
Q

Calculate Pearson's r

A

r = Cov(x,y) / (Sx * Sy)

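A minimal sketch of this formula with made-up numbers (numpy assumed), checked against numpy's own correlation function:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov_xy = np.cov(x, y, ddof=1)[0, 1]                     # sample covariance
r = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(r, np.corrcoef(x, y)[0, 1])                       # the two values should match
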
30
Q

Pooled SD

A

sqrt((SD1² + SD2²) / 2)

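A minimal sketch combining cards 28 and 30, with hypothetical group means and SDs (plain Python):

import math

mean1, sd1 = 105.0, 14.0
mean2, sd2 = 100.0, 16.0

pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)   # card 30: square root of the average variance
d = (mean1 - mean2) / pooled_sd                # card 28: standardised mean difference
print(pooled_sd, d)                            # ~15.03, ~0.33
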
31
Q

What does more power mean in terms of error?

A

Reduces Type 2 error; better chance of correctly rejecting the null hypothesis with a bigger sample

32
Q

Ideal power

A

.8

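For illustration only (statsmodels assumed, d = 0.5 picked as an example effect size), solving for the sample size per group needed to reach power of .8:

from statsmodels.stats.power import TTestIndPower

# sample size per group for d = 0.5, alpha = .05, power = .8, two-tailed independent t-test
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(n_per_group)   # roughly 64 per group
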
33
Q

Big r

A

.5

34
Q

Big d

A

.8

35
Q

Linearity

A

Variables are linearly related

36
Q

Additivity

A

The combined effect of several predictors is best described by adding their individual effects

37
Q

Normality

A

The sampling distribution should be normal; check with the Kolmogorov-Smirnov test (non-significant = normal) or kurtosis (0 = normal)

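A rough sketch of these two checks on simulated scores (numpy and scipy assumed):

import numpy as np
from scipy.stats import kstest, kurtosis

rng = np.random.default_rng(0)
scores = rng.normal(loc=50, scale=10, size=200)

# Kolmogorov-Smirnov test against a normal with the sample's own mean and SD
ks_stat, p = kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
print(p)                   # non-significant (p > .05) suggests normality
print(kurtosis(scores))    # Fisher kurtosis: approximately 0 for a normal distribution
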
38
Q

Homogeneity of variance

A

Samples should have similar variances, and the outcome should be equally variable across levels of the predictor (violation of this is heteroscedasticity); check with Levene's test (non-significant = equal variances assumed) or the variance ratio (less than 2 = assumed)

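A rough sketch of Levene's test and the variance ratio on two simulated groups (numpy and scipy assumed):

import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
group_a = rng.normal(50, 10, size=40)
group_b = rng.normal(55, 11, size=40)

stat, p = levene(group_a, group_b)
print(p)   # non-significant (p > .05): equal variances can be assumed

var_a, var_b = group_a.var(ddof=1), group_b.var(ddof=1)
print(max(var_a, var_b) / min(var_a, var_b))   # a ratio below about 2 is usually taken as acceptable
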
39
Q

Independence

A

Errors should not be related

40
Q

Checking for heteroscedasticity

A

Check scatterplots of standardised residuals against standardised predicted values; you want a random pattern (no funnelling)

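An illustrative sketch of such a plot, with made-up fitted values and residuals (numpy and matplotlib assumed):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
fitted = rng.uniform(0, 100, size=150)    # hypothetical predicted values from some model
residuals = rng.normal(0, 5, size=150)    # hypothetical residuals from the same model
z_resid = (residuals - residuals.mean()) / residuals.std(ddof=1)

plt.scatter(fitted, z_resid)
plt.axhline(0)
plt.xlabel("Fitted (predicted) values")
plt.ylabel("Standardised residuals")
plt.show()   # a random cloud is fine; a funnel shape suggests heteroscedasticity
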
41
Q

How to reduce bias

A

Trimming the data (e.g. by percentage or based on standardised scores), winsorising (replacing outliers with the next highest non-outlier value), bootstrapping, or transforming (e.g. taking logs)
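A rough sketch of these options on made-up scores containing one outlier (numpy and scipy assumed):

import numpy as np
from scipy.stats import trim_mean
from scipy.stats.mstats import winsorize

scores = np.array([3.0, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 40.0])   # one clear outlier

trimmed_mean = trim_mean(scores, proportiontocut=0.2)   # trimming: drop the extreme 20% at each end
winsorised = winsorize(scores, limits=[0.2, 0.2])       # winsorising: replace the extreme values instead
logged = np.log(scores)                                 # transforming (e.g. logs) pulls the outlier in

# bootstrapping: resample with replacement to estimate the sampling distribution of the mean
rng = np.random.default_rng(3)
boot_means = [rng.choice(scores, size=scores.size, replace=True).mean() for _ in range(1000)]
print(trimmed_mean, winsorised, np.mean(boot_means))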