Psyc7006 - Video gaming - wk4 Flashcards

1
Q

Define: inferential statistics

A

A test that tells us whether a difference we find between conditions is larger than we would expect based on random variation alone.
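
As an illustration of the idea (my own sketch, not part of the deck), here is a minimal Python example using scipy's independent-samples t-test on invented scores for two conditions; the test asks whether the group difference is bigger than random variation alone would plausibly produce.

```python
# Illustrative sketch only: compare two hypothetical conditions with an
# independent-samples t-test (scipy). All numbers are invented.
from scipy import stats

gamers = [412, 398, 405, 430, 389, 401, 417, 395]       # invented scores
non_gamers = [441, 455, 429, 460, 448, 437, 452, 444]   # invented scores

t_stat, p_value = stats.ttest_ind(gamers, non_gamers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference is unlikely to be just random variation.
```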

2
Q

Define: P-Value

A

The probability that we would get our results just by chance if there were no real difference between conditions.
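
One hedged way to make this concrete (again my own sketch, not course material) is a permutation test, where the p-value is literally the proportion of chance shuffles that produce a difference at least as large as the one observed; the data below are invented.

```python
# Illustrative sketch: estimate a p-value by permutation.
# If there were no real difference, how often would randomly shuffled
# group labels give a mean difference at least as big as the observed one?
import random

group_a = [412, 398, 405, 430, 389, 401, 417, 395]   # invented data
group_b = [441, 455, 429, 460, 448, 437, 452, 444]   # invented data

def mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

observed = mean_diff(group_a, group_b)
pooled = group_a + group_b
n_a = len(group_a)
n_shuffles = 10_000
count = 0

for _ in range(n_shuffles):
    random.shuffle(pooled)               # pretend the group labels are arbitrary
    if mean_diff(pooled[:n_a], pooled[n_a:]) >= observed:
        count += 1

p_value = count / n_shuffles             # proportion "at least as extreme" by chance
print(f"permutation p-value is about {p_value:.4f}")
```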

3
Q

Define: Alpha

A

The cutoff p-value we use for deciding whether or not an effect is due to chance; 5% (.05) is the convention.

4
Q

Define: Type 1 error

A

False positive: we find a significant effect even though there isn’t a real difference. We decide an effect is real/significant even though it’s actually due to chance (this happens 5% of the time if alpha is .05).
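
A quick simulation sketch (my own, with made-up parameters) shows where the 5% comes from: when both conditions are drawn from the same population, so that no real difference exists, roughly 5% of experiments still come out significant at alpha = .05.

```python
# Illustrative sketch: Type 1 error rate under a true null.
# Both "conditions" come from the same population, so every significant
# result here is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(100, 15, size=30)   # same population...
    b = rng.normal(100, 15, size=30)   # ...so there is no real difference
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_experiments:.3f}")  # about 0.05
```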

5
Q

Define: Type 2 error

A

False negative: we fail to find a significant effect even though there really is a difference between conditions.

6
Q

Define: Power (and what determines it)

A

The probability of finding a significant effect in our data if there really is a difference between conditions: 1 minus the probability of a Type 2 error (e.g., a 10% chance of Type 2 error means 90% power). It is determined mainly by sample size.
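
The same simulation idea illustrates power (again my own sketch with invented parameters): build a real difference into the data, and the proportion of experiments that detect it is the power, i.e., 1 minus the Type 2 error rate; rerunning with a bigger sample shows why sample size matters.

```python
# Illustrative sketch: power as the proportion of experiments that
# detect a real (built-in) difference between conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05

def estimated_power(n_per_group, n_experiments=5_000):
    hits = 0
    for _ in range(n_experiments):
        a = rng.normal(100, 15, size=n_per_group)   # control condition
        b = rng.normal(110, 15, size=n_per_group)   # real +10 difference exists
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / n_experiments                     # = 1 - Type 2 error rate

print(f"n = 15 per group: power about {estimated_power(15):.2f}")
print(f"n = 50 per group: power about {estimated_power(50):.2f}")  # larger sample, more power
```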

7
Q

Why replicate a study?

A

(Schmidt, 2009)

To control for Type 1 errors
* Maybe the original finding was a statistical error (5% chance).
* Analysis errors, selective reporting, and “fishing” can increase the risk of Type 1 error (to more than a 5% chance), i.e., our true alpha may be higher than the nominal alpha (.05).

To control for fraud
* Maybe the original researchers lied.

To control for bad research
* Maybe the original study was confounded or poorly designed.

To generalise results to a new population or setting
* Maybe the original finding was idiosyncratic to the population or circumstances where it was tested.

To provide a new test of the underlying theory
* Maybe the original finding wasn’t a strong enough test; can we design a different experiment to test the same theory?

(*The last three go beyond simple repetition of the experiment.)

8
Q

Reasons to avoid a direct replication?

A
  • Results might be difficult to publish: a successful replication is considered boring, while a failed replication is assumed to be a mistake, so only successful replications plus extensions tend to get published.
  • Costs may be prohibitive or resources inaccessible: large studies may be very expensive to repeat, and specialised populations might not be available.
9
Q

What explains a failure to replicate?

A
  • Original finding was a Type 1 error
  • Original finding was fraud
  • Original finding was the result of a confound
  • Original finding does not generalise
  • Original finding was based on a mistaken theory
  • The replication itself may not have had enough statistical power
10
Q

What are the three ‘types’ of replication? (Give definitions and examples.)

A
  • Direct replication - we try to recreate the original procedure as closely as possible (provides heterogeneity of irrelevancies).
  • Replication and extension - we repeat all or part of the original procedure and add new conditions to extend the findings.
  • Conceptual replication - we test the hypothesis of the original study using a new research method, e.g.:
    1. Bargh et al., 1996 - young adults who are primed with words about getting old walk more slowly when they leave the lab.
    2. Jostmann et al., 2009 - people holding a heavy clipboard make decisions more carefully.

Here the linking concept is the notion that unconscious thinking affects our behaviour.

Advantage of conceptual replication: broad generalisability of the phenomenon.

Disadvantage of conceptual replication: are the experiments really similar enough to prove or disprove one another? (People have tried to replicate Bargh’s experiment unsuccessfully.)

11
Q

Why is there doubt regarding the findings/robustness of a relationship between video games and perceptual cognitive gains?

A

(Hint: statistics, null-finding publication bias, methodology, demand effects, correlation, lack of blindness / hypothesis guessing, overt recruitment, and labs using repeat subjects, which is not true replication.)

12
Q

Design a video game replication study that helps control for the methodological flaws that are common in most video game studies

A

Cross-sectional design with covert recruitment (no mention of video game experience, giving subject blindness + researcher blindness); an object- or spatial-attention task (or attentional blink); video-game questionnaire AFTER the task. Could include video-game play questionnaires in a large generic survey study and later recruit those people without telling them the study is about video gaming.

A longitudinal study is more difficult! The control cannot simply be a ‘no-game’ condition; it has to be something equally believable, e.g., a game like Tetris versus an action FPS like Unreal Tournament.

Recruit participants who have not been involved in other video gaming studies…
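
Because an underpowered replication is itself a common explanation for failure (see card 9), a design like this would usually start with a power analysis to choose the sample size. A minimal sketch, assuming statsmodels is available and an anticipated effect size of d = 0.5; both the library choice and the effect size are my assumptions, not values from the course.

```python
# Illustrative sample-size planning for the replication study.
# Assumed inputs: expected effect size d = 0.5, alpha = .05, target power = .80.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Roughly {n_per_group:.0f} participants per group needed")  # about 64
```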

13
Q

Why might video game replication fail?

A

Original results due to poor method: demand effects, hypothesis guessing, practice effects from using the same participants over again (population differences).

Original results due to fraud.

Original results due to Type 1 error (unlikely/unexpected findings are often Type 1 errors and are more likely to be reported because they are ‘interesting’; null-finding publication bias).

New findings due to Type 2 error (e.g., not enough statistical power).
