Section 7: Research Methods Flashcards

(66 cards)

1
Q

What are the five ethical guidelines from the BPS?

A
  1. Informed consent
  2. Deception
  3. Protection from physical/psychological harm
  4. Debriefing / Right to withdrawal of data
  5. Confidentiality
2
Q

What is informed consent?

A

BPS guidelines say ppts should give informed consent,
Being told the aims and nature of the study before agreeing.
(They should know they can withdraw at any time)
Parents give consent for under 16s.

Consent isn't obtained in naturalistic observation studies.
It's acceptable if done in a public place where ppl would expect to be observed.

3
Q

What’s deception

A

If ppts have been deceived (eg. by omission or by confederates), they cannot give informed consent.
Info is sometimes withheld, as ppts won't behave naturally if they know the aim.
- guidelines say deception is only acceptable with strong scientific justification, and no alternative procedure
- here ppts are given only general details, and may still feel deceived (prior general consent)

  • presumptive consent is when ppl of the target pop are asked if they would object to the study
  • if they wouldn't, it's run with naive ppts - even though they may have had different opinions
  • prior general consent is where ppts are asked if they're prepared to take part
  • in a study where they might be deceived about its true purpose
4
Q

What is the BPS guideline on protection from harm

A

The risk of harm should be no more than that faced in reality
—> hard to assess accurately, as some may face risks at work (eg. a soldier)
- but they cannot be exposed to those risks in research.

Physical and psychological harm includes distress, discomfort, fear and embarrassment too.
Researchers don’t always know what’s distressing for ppts.

5
Q

What’s debriefing/right to withdrawal?

A

Debriefing returns ppts to the state they were in before the research.
—> especially important if any deception was used

It's where researchers fully explain what the research involved and what the results show.
Ppts are given the right to withdraw their data at any time
- retrospective consent is where ppts give consent after debriefing

6
Q

What’s confidentiality

A

No ppts should be identifiable from reports produced.
- ppts should be warned if data isn't anonymous.

But some groups/ppl may be easily identifiable from their characteristics,
Especially if the report says where and when the study took place, etc.

7
Q

What’s social desirability bias

A

Where ppl behave the way they think they should when observed.
They present themselves in the best possible light, behaving more socially acceptably, eg. giving more to charity.
—> unnatural behaviour, which reduces validity/accuracy

8
Q

What’s mundane realism

A

It’s an evaluation point
About whether the task reflects experiences in the real world.
(Eg. Peterson and Peterson’s study lacks mundane realism)

9
Q

What’s internal, external and ecological validity

A

Internal validity
- the extent to which the test/method
- really measures what it claims to / what the aim is
(Eg. Peterson and Peterson’s shows more internal validity with the distractor task)

External validity (similar to ecological)
- the extent to which the study (not the task) applies to reality
(Eg. Bahrick’s study has high external validity)

High mundane realism usually means high ecological validity

10
Q

What are aims

A

—> a statement of the study’s purpose.
Aims are stated beforehand so it’s clear what the study intends to investigate

Eg. Asch’s was “to study majority influence in an unambiguous task”
It indicates the cause and effect being investigated

11
Q

What’s a null hypothesis

A
  • It’s predicting there’s no relationship/difference between the variables in a study.
    And that any correlations are merely down to chance.
  • or saying there’s no difference in scores between the different conditions

“There’s no significant difference in exam grade between those using flashcards and those who don’t”

Any data collected will either back this up or it won’t.
If the data doesn’t support the null hypothesis, go with the alternative hypothesis instead
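(Illustrative only - a rough Python sketch of how a null hypothesis like the one above could be tested. SciPy is assumed to be available, and the exam grades are made up.)

  # Testing H0: "no significant difference in exam grade between flashcard users and non-users"
  # with an independent-samples t-test (data invented for illustration)
  from scipy import stats

  flashcard_group    = [72, 65, 80, 78, 69, 74]
  no_flashcard_group = [60, 71, 58, 66, 63, 70]

  t_stat, p_value = stats.ttest_ind(flashcard_group, no_flashcard_group)
  if p_value < 0.05:
      print("Reject the null hypothesis - significant difference found")
  else:
      print("Retain the null hypothesis - no significant difference")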

12
Q

What’s an alternative hypothesis, and also directional and non directional hypotheses?

A

Alternative hypotheses say the variables are linked.
Usually stated as a correlation/relationship between variables
(there will be a significant difference / positive relationship between..)

Directional:
It states which group or condition will do better
“Students using flashcards will get significantly higher grades than those who don’t”
“Reaction times will be significantly faster if coffee is drunk beforehand than if not”
—> used when previous research says which way the difference will go, compared to the other condition
(.. significantly higher DV in one condition of the IV than in the other)

Non directional:
Predicts a difference, without saying which condition does better
“There’s a significant difference in exam grades between those using flashcards and those not”
—> used when previous research is limited/mixed

ALWAYS DIRECTIONAL!

13
Q

What is operationalisation, and EVs/CVs

A

Extraneous variables (EVs) are factors other than the IV that MAY affect the DV (there is a chance to control them)
- they might take place/affect results

Confounding variables (CVs) ACTUALLY DID influence the DV (there was no chance of control)
- they did take place/affect results

..

Variables must be operationalised (describing how the variables are measured)
- some are easy to operationalise (eg. height)
- some are hard (eg. a mother’s love for her child)
To operationalise hard variables, use a scale.
Eg. pain on a scale from 1-10

14
Q

What are the five sampling methods

A

The sample should be representative of ppl in target pop
So results can be generalised to whole target group.
An unrepresentative sample is biased and can’t reliably be generalised to whole target pop.

Main ways..
- random sampling
- opportunity sampling
- volunteer sampling
- systematic sampling
- stratified sampling

15
Q

What is random sampling, and its evaluation

A

—> every member of the target group has an equal chance of selection for the sample.
- done manually (each person is assigned a number, which are all put in a hat)
- or by computer (a random number generator picks)
..

+ it’s fair, as everyone has an equal chance of selection
The sample is unaffected by researcher bias

— it doesn’t guarantee a representative sample (there’s a chance that subgroups aren’t picked)

— if the target pop is big, it may not be possible to assign everyone a number.
This is why it isn’t always used - it can be an impractical method.
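(Illustrative only - a rough Python sketch of random sampling, not part of the original card. The target pop and sample size are made up.)

  # Random sampling: every member has an equal chance of selection,
  # like drawing numbered names from a hat
  import random

  target_population = [f"Person {n}" for n in range(1, 101)]  # hypothetical pop of 100
  sample = random.sample(target_population, k=10)             # pick 10 without replacement
  print(sample)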

16
Q

Opportunity sampling? And its evaluation

A

—> researcher samples anyone available and willing to be studied
- since many researchers work in unis, most opportunity samples are students
..

+ a quick and practical method of obtaining samples

— samples are unrepresentative of target population
Findings cannot be confidently generalised

17
Q

Volunteer sampling and its evaluation

A

—> people actively volunteer by responding to a request for ppts advertised (eg, noticeboard)
- the researcher will select only those suitable for the study
..

+ if an ad is placed prominently (eg. National newspaper), a large number of ppl may respond
allowing a more in depth analysis and accurate statistical results, due to more ppts

— only ppl who saw the ad will volunteer; no one else
Meaning ppts in the study tend to share a cooperative nature (volunteer bias)
Making the sample unrepresentative of the target pop

18
Q

Systematic sampling and its evaluation

A

—> every nth name from a sampling frame (record of all names in target pop) is taken
- eg, every 3rd name in a register

..
+ a simple and effective way of generating a sample with a random element
Meaning the sample is more likely to be spread evenly across the pop than with some other methods

— subgroups may be missed, meaning it’s not representative
If the pop isn’t listed in a random order (ie. there’s a pattern), the sample won’t be representative
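(Illustrative only - a rough Python sketch of systematic sampling; the register and n = 3 are made up, mirroring the “every 3rd name” example.)

  # Systematic sampling: take every nth name from the sampling frame
  sampling_frame = [f"Student {n}" for n in range(1, 31)]  # hypothetical register of 30
  n = 3
  sample = sampling_frame[n - 1::n]  # every 3rd name: Student 3, Student 6, ...
  print(sample)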

19
Q

Stratified sampling and its evaluation

A

—> important subgroups in a pop are identified
- and a proportionate number of each is randomly obtained

..
+ will create a representative sample
- this can reduce the bias seen in other types of sampling (random, systematic)

— can take lots of time and money to do
Often hard to identify all traits and characteristics practically

— some subgroups will be missed
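(Illustrative only - a rough Python sketch of stratified sampling; the subgroups and sizes are made up.)

  # Stratified sampling: identify subgroups, then randomly take a
  # proportionate number of ppts from each
  import random

  strata = {
      "year_12": [f"Y12-{i}" for i in range(60)],  # 60% of the target pop
      "year_13": [f"Y13-{i}" for i in range(40)],  # 40% of the target pop
  }
  sample_size = 20
  population_size = sum(len(group) for group in strata.values())

  sample = []
  for label, group in strata.items():
      k = round(sample_size * len(group) / population_size)  # proportionate share
      sample.extend(random.sample(group, k))
  print(sample)  # 12 from year_12, 8 from year_13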

20
Q

What are pilot studies

A

A small scale test to find flaws in the methodology
of research (eg. Studies, questionnaires, etc)
—> so they can be corrected for the final full scale study

Used for example,
- to see if ppts understand instructions
- if they understand wording
- if method finds targeted behaviours
- if recording equipment is suitable

21
Q

What are demand characteristics

A

Human ppts will normally be aware they’re being studied
Meaning they don’t show a true response (data = unreliable/invalid)

Demand characteristics..
Aspects of a study that allow ppts to work out its purpose
If they think they know what response the researcher expects
They may show that response to please them (or deliberately do the opposite)

—> invalid

22
Q

Single and double blind designs?

A

Single..
- ppts don’t know true aim of investigation
(eg. which condition they’re in, or if there are conditions at all)
- controls confounding effects of demand characteristics

Double..
- ppts don’t know the aim and the researcher doesn’t know which group is in which condition

23
Q

How to make blind designs ethical

A

If ppts don’t know aim, can’t give informed consent
Ways to deal with this:

- general consent
Giving a list of potential research they may be taking part in (they don’t know which one)
So they know that they don’t know the aim
Then asking if they consent

-debriefing
At end, giving all ppts all info and asking if happy for data to be shared

24
Q

What’s repeated measures design

A

The same people are used in all conditions
Performances in different conditions are compared
Eg. Reaction time tested in music condition and silent condition

25
Evaluation of repeated measures
+ participant variables eliminated
+ fewer ppts needed
..
— order effects
- performance in the second condition may be affected by having done the first
- may perform better (practice effect)
- may be worse (ppts are tired/bored, so fatigue effect)
—> controlled by counterbalancing (half do A then B, the other half vice versa)
— ppts may guess the aim
- may behave differently as they have an idea of what the researcher expects to find (demand characteristics)
— it’s necessary to have 2 different sets of materials, one for each condition
- hard, as they have to be of the same difficulty
26
What is independent groups design
Different ppl take part in each condition
Their performances are compared
(sometimes the only design that can be used, eg. when the IV is gender or age)
Ppts should be randomly allocated to conditions to avoid researcher bias
- eg. give everyone a number and draw out the first 10 numbers
- an important aspect of control
27
Independent groups design evaluation
+ no order effects like practice or fatigue (one condition only)
+ less likely to figure out the aim (more naive, so should behave naturally)
..
— ppt variables could confound results/reduce control, as ppl differ
— more ppts needed, so costs more money and is less practical
28
What’s the matched pairs design
Different ppl take part in each condition
But are matched in ways that matter for the experiment (like characteristics)
- matching the two groups in terms of age, gender, IQ
So ppt variables are less likely to affect results
29
Matched pairs design evaluation
+ ppt variable effects are minimised by the matching process
+ no order effects (one condition)
+ less likely to find out the aim (don’t suffer demand characteristics)
— matching is difficult and time consuming
— ppt variables aren’t fully eliminated
30
What’s a laboratory experiment (experimental methods)
These are conducted in a carefully controlled environment
Experimenters do this by conducting experiments in one specific place/room
IV is manipulated by the experimenter
Setting is controlled by the experimenter
31
What’s a field experiment
Takes place in a natural environment (not artificial)
But ppts don’t know they’re taking part
IV is manipulated by the researcher
Setting is a natural environment (not manipulated by the researcher)
32
What’s a quasi experiment
The researcher doesn’t manipulate the IV
And can’t allocate ppts into conditions
—> not considered a true experiment, as random allocation and control are missing
IV is not manipulated by the researcher
(It is naturally occurring and has always existed - eg. gender)
Setting is sometimes controlled by the researcher
33
What’s a natural experiment
A type of quasi experiment where the researcher
Is able to take advantage of a change taking place
And devises a study around it
—> the IV is something that changes naturally and gradually (and is temporary)
Eg. a natural disaster’s effect on health (IV = before and after the disaster)
IV isn’t manipulated by the researcher (for ethical or practical reasons)
Setting is sometimes controlled by the researcher
34
Laboratory experiment evaluation
+ high control over variables
+ can infer/suggest cause and effect
+ standardised procedures/easy replication
+ ethical; ppts know they’re participating
— artificial situation
- less mundane realism and ecological validity
35
Field experiment evaluation
+ natural behaviour, as the environment is natural
(Less chance of demand characteristics)
+ greater mundane realism/ecological validity
..
— less control over variables
— unethical (ppts may not know they’re taking part)
36
Natural experiment evaluation
+ allows research where the IV can’t be manipulated for ethical/practical reasons
+ enables investigation of real problems (insight), so allows generalisation
Due to increased ecological validity/mundane realism
..
— no random allocation, so confounding variables are likely
— can only be used where conditions vary naturally
— the DV may be an artificial task, reducing ecological validity
37
Quasi experiment evaluation
+ allows comparison between types of ppl (and insight)
— can’t demonstrate causal relationships, as the IV is not directly manipulated (same with natural experiments)
— no random allocation (more confounding variables)
— the DV may be an artificial task (less ecological validity)
38
What are observational studies useful for
Useful for studying certain types of behaviour
And certain groups of ppts
(Eg. social behaviours like crowd interaction can only be studied in the environment they take place in)
(Eg. children get intimidated by formal experiments, making observations better)
There are two types of observations (natural and controlled)
- within these observation types you can add variations to enhance observations
- covert or overt
- participant or non-participant
39
What are natural and controlled observations
Natural - ppts are observed in the setting where the target behaviour normally occurs
Controlled - carried out in fully controlled laboratory conditions
These often take place in special viewing rooms; observers watch the room through one-way glass
40
Natural and controlled observations AO3
Natural
+ high validity (it’s real behaviour and no social desirability bias can take place)
- no control over EVs
- can’t be replicated, as the procedure isn’t standardised
Controlled
- low validity (it’s not real behaviour; social desirability bias can take place)
+ control over EVs
+ can replicate; the procedure is standardised
41
Participant and non participant observations and evaluations
Participant
The researcher takes part in the action, allowing close observation, so the observer can see what events lead up to an action
+ more insight (being fully immersed means detailed data)
- the observer effect may change the course of events (less valid), leading to artificial behaviour
Non participant
The researcher doesn’t take part in the action, watching from a distance
- less insight (data can’t be contextualised, so no reasons for actions = CORRELATIONAL)
+ no artificial behaviour, as the environment is natural
- may be ethical issues
42
Overt and covert observations (and evaluations)
Overt - ppts know they’re being observed
+ it’s ethical
- may lead to artificial behaviour (eg. social desirability bias)
- observer effect
Covert - ppts DON’T know they’re being observed
- it’s unethical (no informed consent)
+ no observer effect, so behaviour is natural
43
What are structured and unstructured observations (and evaluations)
Unstructured
- Writing down everything they see
—> appropriate for small scale observations
- doesn’t allow interobserver reliability, as observers won’t be looking for the same behaviours (unreliable/likely biased results)
+ can be done quickly
+ rich in detail due to qualitative data
..
Structured
Simplifying the target behaviours the researchers search for
So they only record behaviours observed from a predetermined list
—> these are behavioural categories (breaking the target behaviour into observable categories)
+ allows interobserver reliability, as observers search for the same behaviours (hence reliable)
- time consuming (eg. coming up with the list)
- lacks detail due to quantitative data (but easy to analyse)
44
What are sampling methods (for recording target behaviours) in observations
— continuous recording
Noting every instance of the target behaviour as it occurs
Unsuitable for recording ongoing behaviours
— time sampling
Total observation time is divided into time intervals
At each time interval (eg. each minute), the behaviour taking place is noted
— event sampling
Counting the number of times each particular behaviour occurs in the target individual/group
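(Illustrative only - a rough Python sketch contrasting event sampling and time sampling; the recorded behaviours are made up.)

  # A hypothetical record of the behaviour seen in each minute of an observation
  from collections import Counter

  observed = ["talking", "playing", "talking", "hitting", "playing", "talking"]

  # Event sampling: count every instance of each target behaviour
  print(Counter(observed))      # e.g. Counter({'talking': 3, 'playing': 2, 'hitting': 1})

  # Time sampling: only note the behaviour at set intervals (every 2nd minute here)
  interval = 2
  print(observed[::interval])   # behaviours at minutes 1, 3, 5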
45
What’s interobserver reliability
Observer bias is where the observer’s beliefs/expectations
Lead them to be biased
(Eg. expecting boys to be more violent, affecting the way an action from a boy is interpreted)
To avoid this, two or more observers observe instead of one
Recording target behaviours separately, but side by side
—> tested using correlational analysis
- if the two sets of observations are similar, they will produce a high positive correlation
- hence the observers are interpreting the data consistently / reliably
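(Illustrative only - a rough Python sketch of checking interobserver reliability by correlating two observers’ separate tallies. Uses statistics.correlation from the standard library (Python 3.10+); the tallies are made up.)

  from statistics import correlation

  observer_a = [12, 7, 3, 9, 15]  # tally per behavioural category
  observer_b = [11, 8, 2, 9, 14]  # same categories, recorded separately

  r = correlation(observer_a, observer_b)  # Pearson's r
  print(round(r, 2))
  print("Consistent/reliable" if r >= 0.8 else "Not reliable - review the categories")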
46
What are questionnaires (and types of questions asked)
Used in surveys to find out about people’s behaviour or opinions
—> useful for studying behaviour/opinions that can’t be directly observed
Closed
- fixed number of optional answers
Producing quantitative data (easy to summarise, but lacks depth so may not be true = less validity)
Hence aren’t ppt friendly, causing frustration as there are few suitable options
Open
Can answer in any way they like
Qualitative data (more likely to be true = greater validity, but hard to analyse, eg. categorising)
Ppt friendly, as they aren’t forced into a limited set of options
47
What are the ways of designing questionnaires
Avoid double barrelled questions, double negatives and emotive language
Questions need to be possible to understand and answer
Types of closed questions used..
- Likert scales (indicate agreement on a scale of ~5 points)
- rating scales (identify a value that represents strength of feeling)
- fixed choice (list of possible options)
48
AO3 questionnaires
+
- large amounts of data collected quickly
- highly replicable
- no ethical problems (ppts know they’re participating)
—
- self report, so subjective
- may not give true answers (present themselves as socially desirable)
- response set may occur: failing to read all the questions and just saying yes
49
What are interviews and the different types
Can be face to face or over the phone
Notes are made at the time or after the interview. A recording can be made and the data analysed later
• structured
Interviewer asks preprepared questions in a fixed order
No chance for extra questions
Fully structured interviews have fixed option answers for interviewees
• unstructured
Interviewer starts off with an aim and the interviewee discusses a specific topic
Interviewer listens, comments, and prompts them to expand
(Often used by clinical psychologists in case studies)
• semi structured
A list of questions is worked out in advance
But follow up questions are asked
(Like job interviews)
50
How are interviews designed
Most have an interview schedule
—> a list of questions they intend to cover
This should be standardised for each ppt to reduce the contaminating effect of interviewer bias
The interviewer must consider how many ppl they’re interviewing at once and the rapport built.
The more comfortable the interviewee, the better the interview.
51
Interviews AO3
+
- insightful, as thoughts/opinions can be studied which can’t be observed
- large amounts of data
- structured can be quantitative, so easily summarised/analysed
- focus of the interview is maintained
- possible to replicate (checking results are reliable)
- unstructured are qualitative (detailed and unprocessed means high validity)
- behaviour is understood in context (reasons given)
- interviews are tailored (ppts less intimidated)
..
—
- self report methods = subjective (opinion based)
- don’t tell us the causes of behaviour
- ppts may not be honest, reducing validity
- structured are frustrating, as interesting issues can’t have follow up questions
- more formal; less comfortable
- unstructured are hard to compare, and can lose focus by covering different content
52
What are correlations
A statistical technique used for analysing data
Where two sets of numerical scores are obtained for each ppt
These are plotted on a scattergram (to see any association between the two variables/covariables)
Correlations produce a correlation coefficient (tells us two things about the association between covariables)
- the STRENGTH of the association, as a number between -1 and +1
- the DIRECTION of the association, either negative or positive (-/+)
53
Positive and negative correlations
Positive..
- as one variable increases, the other increases
A perfect correspondence gives a correlation coefficient of +1
The closer the correlation coefficient is to +1 (eg. 0.8), the stronger the association
Negative..
- as one increases, the other decreases
A perfect correspondence gives a correlation coefficient of -1
The closer to -1, the stronger the association
Zero correlation..
No correlation, so if a person has a score on one covariable, we can’t predict their score on the other
Correlation coefficient of 0
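(Illustrative only - a rough Python sketch of reading a correlation coefficient’s direction and strength; the covariable scores are made up.)

  from statistics import correlation

  hours_revised = [2, 4, 5, 7, 9]
  exam_grade    = [40, 55, 58, 70, 82]

  r = correlation(hours_revised, exam_grade)
  direction = "positive" if r > 0 else "negative" if r < 0 else "zero"
  print(round(r, 2), direction)  # close to +1 -> strong positive correlation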
54
AO3 correlations
Strengths
- lets us see if two variables are related
Once we know, we can carry out other types of studies
To investigate the relationship further
Limitations
- don’t show cause and effect
There may be other, uncontrolled factors contributing to the relationship
- lack of control
Correlations involve measuring the two variables being investigated
Without controlling EVs
55
Difference between experiment and correlation
E
- has an IV and DV
- shows cause and effect
- EVs controlled
C
- covariables
- NOT cause/effect
- no EV control
56
What’s reliability, internal and external, and how we can improve it?
It refers to the consistency of a test or experiment.
— *internal* reliability is where something is consistent within itself
—> eg. consistency in a method, like the task in a study measuring the same thing each time
—> or a ruler where 1-2cm and 8-9cm are the same length; each unit measures the same thing
— *external* reliability is where something is consistent from one use to another
—> eg. between uses of a method, like over time
—> like a mark scheme producing the same results over time on the same exam paper
..
To improve..
Standardising instructions
Carrying out pilot studies (to find ways to improve procedures/materials)
57
What are the two ways of assessing reliability
— interobserver reliability
Refers to the extent to which different observers give consistent estimates/scores
Of the same phenomenon/behaviour
- observers need to construct the behavioural categories together
And observe side by side, but record separately
Then a correlation coefficient can assess the degree of reliability (+0.8 = high positive)
..
— test retest reliability
Measures the stability of eg. a test or interview over time
It involves giving the same test to the same ppts on different occasions
—> then correlating the results from both occasions (looking for a strong positive correlation)
58
What’s validity (internal and external)
It refers to whether a test, measurement, etc measures what it intends to
-> is the test accurate/trustworthy
— *internal* validity
The extent to which the researcher measures what they claim to
- ppt effects and confounding variables affect this
— *external* validity
The extent to which an experimental effect (result) can be generalised
- to other settings (ecological validity)
- to other people (population validity)
- over time (temporal validity)
59
What are ways of assessing validity
— face validity
The extent to which the test looks like it measures what it’s meant to measure
— concurrent validity
Having a strong positive correlation between scores on the test
And scores on another similar test already recognised as valid
60
What are peer reviews?
Before research can be part of an academic journal (where research is published)
It must be subject to peer review.
The peer review process involves all aspects of the written investigation being
Inspected/scrutinised by a small group of experts (peers) in that field
- experts should be objective + unknown to the researcher
- they then report back to the editor, highlighting
Weaknesses and suggestions for improvement
There are four options for experts to respond
  1. Accept the work unconditionally
  2. Accept it as long as the researcher improves it in certain ways
  3. Reject, but suggest revisions and resubmission
  4. Reject outright
61
What are the main aims of peer review
1. To validate the quality and relevance of research
All research elements are assessed for quality + accuracy..
- formulation of hypotheses
- methodology
- statistical tests used
- conclusions
2. To suggest amendments
Reviewers may suggest minor revisions of the work
And so improve the report, or
May conclude the work is inappropriate for publication
62
What are the peer review issues
- critics argue peer reviews aren’t as unbiased as claimed
- research occurs in a narrow social world
And social relationships within it affect objectivity and impartiality
- in obscure research areas, it may not be possible to find reviewers
With sufficient knowledge to carry out the review
- some scientists acting as reviewers don’t have enough
Ability to consider the research in an unbiased manner
- it’s compromised by reviewers being funded by organisations
With vested interests in the area
- some reviewers have rejected research so that their own
Can be published, and some even plagiarise
- it’s a slow process, taking months or years
- sometimes false research is accepted as true, causing
Other researchers to base their own research on it
63
What’s thematic analysis
A method used to turn qualitative data into reduced qualitative data
- by identifying ideas, the themes, in the data
(Doesn’t use numbers)
- the process is to look at the media/transcript or photo data repeatedly, identifying codes as they occur to the researcher
- then they look at the codes for emergent themes linked to the investigated behaviour
>> this stage is repeated many times
The emergent themes are the reduced qualitative data
64
Thematic analysis evaluation
+ repeated analysis allows coding flexibility, as more codes can be added at any point
+ easier to manage while still giving insightful data
- the interpretation of material is subjective
65
What’s meta analysis
Studies that combine and analyse the findings of many other studies - they only use secondary data
66
What is the definition of operationalisation
It’s the clear definition of the observable behaviours to be recorded
> to allow behaviours to be measured objectively