Lecture 2: Doing Science Flashcards Preview


Flashcards in Lecture 2: Doing Science Deck (13)
1
Q

What is the Correlational approach and the problems associated with it?

A
  • The correlational approach focuses on determining whether, and how strongly, two things are
    linked.
  • Directionality problem: it is hard to determine which variable is the cause and which is the effect (correlation does not equal causation). Timing can help (longitudinal designs, experience sampling methods/ESM)
  • Third variable problem (something that affected the results but was not included in the study)
  • Often more naturalistic (no random assignment or manipulation)
  • Textbook example: the link between spending money on yourself vs. others and happiness
2
Q

What is the experimental approach?

A
  • The experimental approach differs from the correlational
    approach because it manipulates something rather than merely observing it (e.g., participants are instructed to spend money on others or on themselves, with random assignment to conditions)
  • Confident causal direction (big advantage)
  • ‘Confounds’, like the third variable problem (a single experiment is often ambiguous due to confounds)
  • Often artificial (lab settings. Textbook example: participants are given $20 and told to spend it either on themselves or on others. But how often do strangers hand you $20 in the morning and tell you how to spend it?)
  • Generalizable?
  • Cohen’s d measures effect size (differences between groups are usually not large; the groups’ distributions usually overlap)
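Cohen’s d is the mean difference between two groups divided by their pooled standard deviation. A minimal sketch of the calculation (the happiness ratings below are made up for illustration, not from the textbook study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical happiness ratings (1-10) after spending on others vs. on yourself
spend_on_others = [7, 8, 6, 9, 7, 8]
spend_on_self = [6, 7, 5, 7, 6, 6]
d = cohens_d(spend_on_others, spend_on_self)
```

By common convention d ≈ 0.2 is small, 0.5 medium, and 0.8 large; even a “large” d leaves substantial overlap between the two distributions, which is the point the slide is making.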
3
Q

Replication: a brief (recent) history

A
  • Around 2011:
  • Some high-profile fraud
  • A multi-study paper on ESP (extrasensory perception) in a top journal (Bem, a well-known social psychologist, reported 8 or 9 studies suggesting people could feel/sense the future before it happened. He collected data and used conventional statistics, but the studies did not replicate)
  • Some attention to failed replication studies
  • A failed replication doesn’t mean the original finding is wrong; it could be due to biases
  • A longer history of ‘gossip’ (an informal way of discussing failed replications or findings)
  • More questions about p-values (the ‘dance of the p’s’; unreliability is much worse with smaller samples)
  • “False positive psychology” and “p-hacking”
    – A false positive means you think you found something significant, but it is actually due to chance
    – One suggestion is to use confidence intervals: with more people in your study, the width of the confidence interval shrinks and p is more reliable
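A small simulation (my own sketch, not from the lecture) of why p “dances” more with small samples: the sample mean, which drives p, bounces around far more from study to study at n = 20 than at n = 200, and the 95% CI half-width shrinks with the square root of n:

```python
import random
import statistics

random.seed(1)

def sample_means(n, true_mean=0.3, sd=1.0, reps=1000):
    """Simulate many replications of the same study and collect the sample means."""
    return [statistics.mean(random.gauss(true_mean, sd) for _ in range(n))
            for _ in range(reps)]

# Study-to-study spread of the estimate (this is what makes p 'dance')
spread_small = statistics.stdev(sample_means(20))   # roughly 1 / sqrt(20)
spread_large = statistics.stdev(sample_means(200))  # roughly 1 / sqrt(200)

# Approximate 95% CI half-width for a mean: 1.96 * sd / sqrt(n)
ci_small = 1.96 * 1.0 / 20 ** 0.5
ci_large = 1.96 * 1.0 / 200 ** 0.5
```

Because the CI half-width scales as 1/√n, multiplying the sample size by 10 narrows the interval by a factor of √10 ≈ 3.16.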
4
Q

What are some questionable research practices?

A
  • Multiple (unreported) dependent variables
  • Adding statistical controls after seeing the results (depending on p)
  • Adding participants after the fact (depending on p)
  • Simulations suggest that doing all of these can produce false positive results (p < .05) 60% of the time
  • Small samples make false positives more likely
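The simulations mentioned above can be sketched as a toy Monte Carlo (my own illustration; the 60% figure comes from more elaborate simulations in the “false positive psychology” literature). Here the true effect is exactly zero, yet checking three unreported dependent variables and then adding participants when nothing is significant inflates the false positive rate well above the nominal 5%:

```python
import random

random.seed(42)

Z_CRIT = 1.96  # two-tailed .05 threshold for a z-test with known sd = 1

def significant(sample):
    """z-test of the sample mean against 0, assuming population sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * n ** 0.5
    return abs(z) > Z_CRIT

def one_study(p_hack):
    """Simulate one study in which the null hypothesis is true."""
    dvs = [[random.gauss(0, 1) for _ in range(20)] for _ in range(3)]
    if not p_hack:
        return significant(dvs[0])           # single, pre-specified DV
    if any(significant(dv) for dv in dvs):   # QRP: multiple unreported DVs
        return True
    for dv in dvs:                           # QRP: add participants, depending on p
        dv.extend(random.gauss(0, 1) for _ in range(10))
    return any(significant(dv) for dv in dvs)

reps = 2000
honest_rate = sum(one_study(False) for _ in range(reps)) / reps
hacked_rate = sum(one_study(True) for _ in range(reps)) / reps
```

Even this simplified version substantially inflates the rate; stacking more flexible choices (covariates, subgroup analyses, outlier rules) pushes it toward the 60% figure on the slide.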
5
Q

What are some examples of researcher incentive to find statistical significance?

A
  • Pressure/rewards for publication (bias)
    • More is better
    • New/novel is better
    • Faster is better
    • Null results not welcome
    • Replications seen as boring
  • Jobs, funding, status at stake
  • All this to say there is a lot of incentive to find statistical significance
  • [and few things to curb bad practice]
6
Q

What is an exact (direct) replication?

A
  • With direct (or exact) replications, a new study attempts to repeat the procedures of an original study as closely as possible. The purpose of a direct replication is to determine whether specific procedures
    reliably produce the same results.
  • Do the same procedures produce the same result?
7
Q

What is a conceptual replication?

A
  • Conceptual replications re-test the basic idea of an original study, but intentionally/strategically change the procedures in some meaningful way. The purpose of the procedure change is to ensure that the original finding is not due to idiosyncratic features of the particular methods.
  • Is the underlying idea supported with new procedures?
8
Q

What is the reproducibility project?

A

The reproducibility project (science, 2015):

  • Select a representative sample of studies (published in 2008)
  • Pair each study with a new researcher
  • Consult the original researcher for details
  • Publicly record detailed plans
  • Collect new data and analyse the results
  • Goal: to estimate the rate of reproducibility in psychology
9
Q

What were the results of the reproducibility project?

A
  • When all the results were collected, the project found that about one-third to one-half of the studies successfully replicated. These results were generally disappointing.
  • Replication effect sizes were about 50% of those originally reported
  • Cognitive psychology appeared more replicable than social psychology
  • This is not unique to psychology
10
Q

What explains non-replication?

A
  • Original result was fraudulent (infrequent)
  • Original result was due to chance (a false positive despite p < .05)
  • Original result was inflated, partly due to questionable practices
  • Mistake or bias in a replication attempt
  • Different context, culture, psychological situation, or other boundary condition
  • Falsifiability still important
11
Q

What is the credibility revolution?

A
  • The credibility revolution describes dramatic changes in how (psychological) scientists conduct, report, and evaluate research, with reforms aimed at increasing confidence in findings. It raises broad questions about how much we can trust the research record in positive psychology; yet I hope you will remain optimistic about our trajectory: better methods produce better science, which feeds forward to better applications.
12
Q

How can all of this be positive?

A
  • Quantifies/estimates the reproducibility rate (but what rate is ideal?)
  • Suggests value in reform
  • Motivates better practices
  • Is very scientific
13
Q

What are the solutions or the ways forward?

A
  • A more cautious view of published findings
  • Test potential moderators in new studies (i.e., discover what the result ‘depends on’)
  • More open science practices (make materials, procedures, and data available; pre-register study and analysis plans; registered reports: publish blind to results)
  • Better methods and systematic research (e.g., larger sample sizes, multi-site collaborations)
  • Reward replication efforts (funding, prizes, status; most of this is happening now)