Issues in research methods Flashcards

(46 cards)

1
Q

Who created the hypothetico-deductive method?

A

Karl Popper

2
Q

What are the 8 stages of the hypothetico-deductive method?

A

Theory -> hypothesis -> operationalisation of concepts -> selection of participants -> survey studies/experimental designs -> data collection -> data analysis -> findings

3
Q

What did Popper (1972) argue about how we know what to test?

A

He argued that research begins with the identification of a particular problem/issue, and further suggested that there are 2 possible sources of research ideas: casual observation and previous research

4
Q

What is casual observation in the generation of research ideas?

A

A researcher spots a new phenomenon for the first time and decides it is worthy of investigation

5
Q

How does previous research inform new research ideas?

A

Previous research can motivate new projects for a number of reasons, e.g. replication, generalisation, original findings, testing ideas in a new context, or zooming in to look at underlying processes

6
Q

What do surveys measure?

A

Variables as they naturally occur, e.g. physical activity and wellbeing in the general public
- variables are measured, not manipulated
- depending on sampling, results may be generalised to the wider population

7
Q

What is the function of experiments?

A

Manipulating variables to isolate their effects and establish a causal relationship, through randomisation and holding other factors constant

8
Q

What is systematic variation?

A

Variation that can be explained by the model (the statistical effect)

9
Q

What is unsystematic variation?

A

Variation that cannot be explained by the model (error)

10
Q

What is the equation for a test statistic?

A

Test statistic = variance explained by the model (systematic) / variance not explained by the model (unsystematic)

i.e. test statistic = effect / error
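
The ratio above can be illustrated with a minimal Python sketch (not part of the original cards; the numbers are invented for illustration):

```python
# Illustrative sketch: the test statistic as a ratio of systematic to
# unsystematic variation (effect / error). The values are made up.
variance_explained = 12.0   # systematic variation (the effect)
variance_unexplained = 3.0  # unsystematic variation (the error)

test_statistic = variance_explained / variance_unexplained
print(test_statistic)  # 4.0
```

The larger this ratio, the more of the total variation the model accounts for relative to error.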

11
Q

What is H1?

A

The effect you expect to find (alternative hypothesis) - related to systematic variation

12
Q

What is H0?

A

The null hypothesis - there is no effect - related to unsystematic variation

13
Q

What is the process of hypothesis testing?

A
  1. pose hypotheses (one- or two-tailed)
  2. analyse data (test the model - effect:error ratio)
  3. calculate the probability of getting the result if the null hypothesis is true (if p < .05, the result is significant)
  4. reject or fail to reject the null hypothesis (don't accept the alternative/experimental hypothesis, just infer evidence of an effect)

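The steps above can be sketched in Python; this hand-rolled one-sample t-test uses only the standard library, and the scores and comparison value are made-up examples:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)   # stdev uses N - 1
    return (statistics.mean(sample) - mu0) / se

scores = [5.1, 4.8, 5.6, 5.0, 5.4, 4.9, 5.2, 5.3]
t = one_sample_t(scores, mu0=5.0)
# Compare |t| with the critical value for df = n - 1 at the chosen alpha;
# reject H0 only if the resulting p-value falls below .05.
```

In practice the p-value would come from the t distribution (e.g. `scipy.stats`); the sketch only shows where the effect:error ratio enters the process.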
14
Q

What is a type 1 error?

A

Incorrect rejection of null hypothesis
False positive
Conclude there is an effect when there is none

15
Q

What is a type 2 error?

A

Incorrect failure to reject the null hypothesis
False negative
Conclude there is no effect when there is one

16
Q

What are 2 limitations of relying on hypothesis testing (significance)?

A
  1. focuses on whether or not a result is statistically significant, but not necessarily whether it is significant in a broader sense
  2. does not give an indication of the size of the statistical effect

17
Q

What does the alpha level predict?

A

The probability of making a type 1 error - saying there is an effect when there isn't one (typical value = .05)

18
Q

What does the beta level predict?

A

The probability of making a type 2 error - saying there is no effect when there is one (typical value = .20, i.e. power = 1 − β = .80)

20
Q

What do effect sizes attempt to address?

A

Type 1 errors - a result can be statistically significant but the effect too small to be practically meaningful

21
Q

What are effect sizes also referred to as?

A

Magnitude of statistical effect found

22
Q

What do effect sizes allow us to investigate?

A

How big a difference / relationship has been found
To compare findings across studies to gauge real-world importance

23
Q

What does the type of effect size used depend on?

A

The type of test used

24
Q

What are the values for small, medium, and large effects for Cohen’s d?

A

Small = .2
Med = .5
Large = .8
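
Cohen's d itself can be computed from two groups; here is a minimal Python sketch (the data are invented, and the pooled-SD formula shown is one common variant):

```python
import math
import statistics

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using a pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # N - 1
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

control = [10, 12, 11, 13, 12]
treatment = [14, 15, 13, 16, 15]
d = cohens_d(treatment, control)
# On Cohen's benchmarks, d >= .8 would count as a large effect
```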

25
Q

What are the values for small, medium, and large effects for Pearson's r?

A

Small = .1
Med = .3
Large = .5

26
Q

What are the values for small, medium, and large effects for omega ω?

A

Small = .1
Med = .3
Large = .5

27
Q

What are the values for small, medium, and large effects for eta-squared η²?

A

Small = .01
Med = .059
Large = .138

28
Q

What do power analyses attempt to address?

A

Control for type 2 errors - they tell us the statistical power associated with a particular test

29
Q

What are the 2 approaches to power analyses?

A

1. A priori - before collecting data; determines the sample size needed to reliably find an effect if there is one to find
2. Post-hoc - power estimated after collecting data and calculating the inferential statistics

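An a priori power analysis is normally run with tables or software (e.g. G*Power), but the idea can be sketched by simulation in Python; the effect size, group size, and critical value below are illustrative assumptions, not values from the cards:

```python
import math
import random
import statistics

def simulate_power(n, effect_size, n_sims=2000, t_crit=2.0, seed=1):
    """Estimate power for a two-group design by simulation: the proportion
    of simulated studies that detect a true effect of the given size."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(effect_size, 1) for _ in range(n)]
        # Independent-samples t with equal group sizes
        pooled = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
        t = (statistics.mean(b) - statistics.mean(a)) / (pooled * math.sqrt(2 / n))
        if abs(t) > t_crit:
            hits += 1
    return hits / n_sims

power = simulate_power(n=30, effect_size=0.5)
# If the estimate falls short of .80 (beta = .20), increase n and re-run
```

With n = 30 per group and a medium effect (d = .5), the estimated power comes out well below .80, which is exactly what an a priori analysis is meant to flag before data collection.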
30
Q

What are the 10 steps of model building?

A

1. Pose hypotheses
2. Calculate an a priori power analysis to determine the sample
3. Collect data
4. Calculate descriptive statistics
5. Identify whether the data meet the assumptions of parametric statistics
6. Conduct the test (e.g., obtain t-value and p-value)
7. Identify whether the result is significant (p < .05)
8. Calculate the effect size
9. Reject or fail to reject the null hypothesis (based on the p-value)
10. Write it up (discussion/conclusions can be framed by effect size)

31
Q

How is the mean calculated?

A

x̄ = Σx / N

Add up all the scores (Σx) and divide by the number of scores/participants (N). This gives an indication of the central tendency of a data set.

32
Q

How is the variance calculated?

A

s² = Σd² / (N − 1)

The sum of all the squared differences (Σd²) divided by the number of scores/participants minus 1 (N − 1). This provides an indication of the spread in the data: the higher the variance, the more spread, i.e., the mean may be a poor representation of the data; the smaller the value, the less spread, so the mean is more reliable.

33
Q

How is standard deviation calculated?

A

s = √(Σd² / (N − 1))

The sum of all the squared differences (Σd²) is divided by the number of scores/participants minus 1 (N − 1), and the result is square rooted. It is the same as the variance, just square rooted, which reduces the value and makes it easier to interpret.

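The three formulas above can be written out in Python (the scores are an invented example):

```python
import math

scores = [4, 6, 5, 7, 8]
N = len(scores)

mean = sum(scores) / N                      # x̄ = Σx / N
squared_diffs = [(x - mean) ** 2 for x in scores]
variance = sum(squared_diffs) / (N - 1)     # s² = Σd² / (N − 1)
sd = math.sqrt(variance)                    # s = √(Σd² / (N − 1))
```

Note the N − 1 denominator in both the variance and the SD, matching the sample formulas on the cards.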
34
Q

What are parametric tests?

A

- Make assumptions about the data
- Normally distributed
- Homogeneity of variance
- Usually only for ratio/interval data
- Used for e.g., group differences in equally sized groups

35
Q

What are non-parametric tests?

A

- Make no assumptions about the data
- Used if there is a violation of the normality assumption (e.g., if data are very skewed/sparse)
- Used if you have ordinal data
- Or sometimes where you have small group sizes

36
Q

What is the publish or perish issue of research culture?

A

Career progression dependent on research success

37
Q

What is the file drawer problem of research culture?

A

Non-significant results less likely to be published

38
Q

What does the research culture lead to?

A

Cheating - focusing too much on finding a significant result can lead to problematic research practices such as p-hacking

39
Q

What can problematic research practices lead to?

A

- A lack of trust in science
- Over-inflated effects reported in the literature
- Effects that do not replicate (the replication crisis)

40
Q

What are the 4 core principles of research integrity?

A

Honesty
Accountability
Professional courtesy and fairness
Good stewardship

41
Q

What is the replication crisis?

A

Reports on an attempt to replicate 100 psychological studies, which found that:
- Only 36% of studies could be replicated (i.e., found significant results as reported in the original study)
- The average effect size of the replications was smaller than in the original studies (on average half the size of the original effect)
- More 'surprising' findings were less likely to be successfully replicated
- Social psychology findings were less likely to replicate than those in cognitive psychology

42
Q

Why don't studies replicate?

A

1. Publication bias
2. Failure to control for bias
3. P-hacking
4. Low statistical power
5. Poor quality control

43
Q

What are the 7 principles of open science?

A

1. Assessment (comment/peer review; determine impact of research output and researchers)
2. Preparation (define and crowdsource research priorities; organise project, team, collaborations; get funding/contract)
3. Discovery (search literature/data/code; get access; get alerts/recommendations; read/view; annotate)
4. Analysis (collect, mine, extract data / experiment; share protocols/notebooks/workflows; analyse)
5. Writing (write/code; visualise; cite; translate)
6. Publication (archive/share publications, data, and code; select journal to submit to and publish)
7. Outreach (archive/share posters and presentations; tell about research outside academia; researcher profiles/networks)

44
Q

What is p-hacking?

A

The inappropriate manipulation of data analysis to enable a favoured result to be presented as statistically significant

45
Q

What is HARKing?

A

Hypothesizing After Results are Known - a questionable research practice where a researcher presents a hypothesis developed after observing the results of a study as if it were a pre-existing hypothesis, i.e. one developed before the study began

46
Q

How can pre-registration help questionable research practice?

A

- Pre-registration encourages more "up-front" thinking about a study
- Researchers have to carefully consider each stage of the research process, from generating hypotheses to data analysis
- The aim is to reduce the risk of questionable research practices (QRPs) and biases during the research process (e.g., HARKing and p-hacking)
- It doesn't stop anyone from cheating, but it does try to ensure that the research process is transparent and well thought through, and that the inferential statistics are truly confirmatory (planned in advance)
- It can be time consuming, as it requires researchers to think through every stage of the process and consider how they will deal with issues if they occur (e.g., data that do not meet the assumptions of their chosen analysis), but this in itself often leads to better quality research