Introduction Flashcards

1
Q

Definition of falsifiable

A

Capable of being proved false (i.e. testable)

2
Q

Parsimonious

A

The simplest model that can explain the findings

3
Q

Definition of theory

A

A set of principles that explains a topic and from which new hypotheses can be generated

4
Q

Descriptive statistics

A

Summarise a collection of data without making any inferences

5
Q

Inferential stats

A

Draws inferences about a population from estimation or hypothesis testing

6
Q

Quantitative

A

Data measured on an interval/ratio scale, or ordinal data

7
Q

Qualitative

A

Assign objects into labelled groups without natural ordering

8
Q

Interval variables

A

Equal intervals e.g. age

9
Q

Ratio variables

A

Equal intervals with a true zero point

10
Q

Binary

A

2 categories

11
Q

Nominal

A

More than 2 categories

12
Q

Average used for nominal data

A

Mode

13
Q

Ordinal data

A

More than 2 categories with an order (e.g. 1st, 2:1, 2:2)

14
Q

Average used for ordinal data

A

Median

15
Q

Why does variable type matter?

A

It determines which statistical tests can be used

16
Q

Measurement error

A

The discrepancy between the actual value and the value recorded

17
Q

Systematic variance

A

Variance that can be explained by the model (variance in the dependent variable due to the experimental manipulation)

18
Q

Random error

A

Unsystematic (random) variance that the model cannot explain

19
Q

What is validity

A

The extent to which an instrument measures what it is supposed to measure

20
Q

Problem with hypothesis testing

A

Encourages all-or-nothing thinking; rejecting the null hypothesis does not prove that the alternative hypothesis is true

21
Q

One-tailed when to reject H0

A

Reject the null hypothesis if the test statistic falls in the extreme 5% of the predicted tail

22
Q

Two tailed when to reject H0

A

Reject the null hypothesis if the test statistic falls in either extreme 2.5% tail

23
Q

Type 1 error

A

Rejecting a true null hypothesis: incorrectly concluding that variance is accounted for by the model. Acceptable probability is p = .05 (the alpha level)

24
Q

Type 2 error

A

Failing to reject a false null hypothesis: incorrectly concluding that the variance is unaccounted for by the model. Acceptable probability is p = .2 (the beta level)

25
Q

Benefit of one tailed (error)

A

A lower chance of a Type 2 error and more power, BUT the test only works in one direction, so you have to be sure of the direction of the effect

26
Q

Meaning of effect size

A

The degree to which the mean under H1 differs from the mean under H0, expressed in standard deviations, OR how much variance is explained

27
Q

Ways to reduce type 1 error

A

Look at effect size (it standardises results and is not reliant on sample size)

28
Q

Calculate cohen’s d

A

d = (Mean1 − Mean2) / pooled SD

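A minimal Python sketch of this calculation (illustrative function and variable names; it uses the simple pooled SD formula from the card below, which assumes equal group sizes):

import math

def cohens_d(group1, group2):
    # Cohen's d = (mean1 - mean2) / pooled SD
    m1 = sum(group1) / len(group1)
    m2 = sum(group2) / len(group2)
    # sample standard deviations (n - 1 denominator)
    sd1 = math.sqrt(sum((x - m1) ** 2 for x in group1) / (len(group1) - 1))
    sd2 = math.sqrt(sum((x - m2) ** 2 for x in group2) / (len(group2) - 1))
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd
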
29
Q

Calculate pearson’s r

A

r = Cov(x, y) / (Sx × Sy)

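A minimal Python sketch of the same formula (illustrative function name; covariance and standard deviations use the n − 1 denominator):

import math

def pearsons_r(x, y):
    # r = covariance(x, y) / (SD of x * SD of y)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
    return cov_xy / (sx * sy)
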
30
Q

Pooled SD

A

√((SD1² + SD2²) / 2)

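The same formula as a small Python helper (this is the simple equal-n form already used in the Cohen's d sketch above):

import math

def pooled_sd(sd1, sd2):
    # square root of the mean of the two variances
    return math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
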
31
Q

What does more power mean in terms of error

A

A reduced chance of a Type 2 error: a better chance of correctly rejecting a false null hypothesis (a bigger sample gives more power)

32
Q

Ideal power

A

.8

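A sketch of how the .8 target is used in a power calculation, assuming the statsmodels library and an independent-samples t-test (the effect size of 0.5 is just an illustrative value):

from statsmodels.stats.power import TTestIndPower

# sample size per group needed to detect d = 0.5 with alpha = .05 and power = .8
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # roughly 64 per group
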
33
Q

Big r

A

.5

34
Q

Big d

A

.8

35
Q

Linearity

A

Variables are linearly related

36
Q

Additivity

A

The combined effect of several predictors is best described by adding their individual effects

37
Q

Normality

A

The sampling distribution should be normal; check with the Kolmogorov–Smirnov test (non-significant = normal) or kurtosis (0 = normal)

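A sketch of these checks in Python using scipy (hypothetical data; the Kolmogorov–Smirnov test here is run against a normal distribution with the sample's own mean and SD):

import numpy as np
from scipy import stats

data = np.random.default_rng(1).normal(size=100)  # hypothetical sample

# Kolmogorov-Smirnov test against a normal distribution (p > .05 suggests normality)
ks_stat, ks_p = stats.kstest(data, 'norm', args=(data.mean(), data.std(ddof=1)))

# kurtosis (Fisher definition, so 0 indicates a normal distribution)
k = stats.kurtosis(data)
print(ks_p, k)
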
38
Q

Homogeneity of variance

A

Samples should have similar variances, and the variance of the outcome should be stable across levels of the predictor (a violation is heteroscedasticity). Check with Levene's test (non-significant = equal variances assumed) or the variance ratio (below about 2 = assumed)

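A sketch of both checks in Python with scipy (two hypothetical groups):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group1, group2 = rng.normal(0, 1.0, 50), rng.normal(0, 1.2, 50)  # hypothetical groups

# Levene's test: p > .05 means equal variances can be assumed
lev_stat, lev_p = stats.levene(group1, group2)

# variance ratio: largest sample variance / smallest sample variance (below ~2 = assumed)
variances = [group1.var(ddof=1), group2.var(ddof=1)]
ratio = max(variances) / min(variances)
print(lev_p, ratio)
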
39
Q

Independence

A

Errors (residuals) should not be related to one another

40
Q

Checking for heteroscedasticity

A

Check a scatterplot of standardised residuals against standardised predicted values; you want a random pattern

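A sketch of that plot using statsmodels and matplotlib (hypothetical regression data; a random cloud around zero suggests homoscedasticity, a funnel shape suggests heteroscedasticity):

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)                 # hypothetical data
model = sm.OLS(y, sm.add_constant(x)).fit()

# standardised residuals vs standardised predicted values
zresid = model.resid / model.resid.std(ddof=1)
zpred = (model.fittedvalues - model.fittedvalues.mean()) / model.fittedvalues.std(ddof=1)
plt.scatter(zpred, zresid)
plt.axhline(0)
plt.xlabel('Standardised predicted values')
plt.ylabel('Standardised residuals')
plt.show()
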
41
Q

How to reduce bias

A

Trimming the data (by percentage or by standardised scores), winsorising (replacing outliers with the highest non-outlier value), bootstrapping, or transforming the data (e.g. log transformation)
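
A sketch of these approaches in Python (hypothetical data; the scipy helpers and the 5% limits are illustrative choices):

import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(4)
data = np.append(rng.normal(50, 5, 98), [120, 130])   # hypothetical data with two outliers

trimmed_mean = stats.trim_mean(data, 0.05)             # trim 5% from each tail
winsorised = winsorize(data, limits=[0.05, 0.05])      # pull extremes in to the nearest retained value

# bootstrapping: resample with replacement to estimate the sampling distribution of the mean
boot_means = [rng.choice(data, size=len(data), replace=True).mean() for _ in range(1000)]

log_data = np.log(data)                                # log transformation (values must be positive)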