6 Research Methods Flashcards

(161 cards)

1
Q

Define self-report technique

A

Any method where a person is asked to state or explain their own feelings, opinions, beliefs, behaviours and/or experiences related to a specific topic

2
Q

Define Social desirability bias

A

Participants’ behaviour is distorted because they modify it in order to be seen in a positive light

3
Q

Define Demand characteristics

A

When the participants try to make sense of the research and act accordingly to support the aim of the research
(They may also try to disrupt the aims of the research)

4
Q

Define response bias

A

A tendency for interviewees to respond in the same way to all questions, regardless of context. This would bias their answers

5
Q

Define Acquiescence bias

A

A tendency for an interviewee to respond to any question in agreement, regardless of the actual content

6
Q

Define Qualitative data

A

Non-numerical language-based data collected through interviews, open questions and content analysis.

7
Q

Define quantitative data

A

Numerical data that can be statistically analysed. Experiments, observations, correlations and closed/rating scale questions from questionnaires all produce quantitative data.

8
Q

AO1 Questionnaires

A
  • Pre-set list of questions to record thoughts and feelings.
  • Open and closed questions:
  • Open questions do not include a fixed range of answers. –> qualitative data
  • In closed questions respondents are directed to a fixed set of responses from which they have to choose. –> quantitative data
9
Q

AO3 Questionnaires

A

Strengths
* Cost-effective:
* Lots of data quickly --> can be distributed to lots of people
* Completed without the researcher present --> less effort
* Straightforward to analyse
* Particularly for fixed-choice, closed questions
* Lends itself to statistical analysis + comparisons between groups

Limitations
* May not be truthful
* Present themselves in a positive light
* Eg, "how often do you use your phone?" - people might give a lower time than is true
* This is social desirability bias (a demand characteristic)
* Response bias
* Respondents tend to reply in a similar way
* Eg, always ticking yes or answering at the same favoured end of a rating scale
* Maybe they complete the questionnaire too quickly + do not read questions properly
* Acquiescence bias (tendency to always agree on a questionnaire)

10
Q

AO1 interviews

A
  • Can be face to face or over phone/internet
  • Structured - pre-determined question set + fixed order
  • Unstructured - no set questions but a general aim of topics to discuss. Free flowing + encourages the interviewee to expand + elaborate on answers
  • Semi-structured - list of questions in advance but able to ask follow-up questions based on previous answers
11
Q

AO3 Interviews structured vs unstructured

A

Structured:
Strength
* Straightforward to replicate due to standardised format
* Reduces differences between interviewers
Limitation
* Not possible to deviate from topic or explain questions
* Limits richness of data
* Limits unexpected information

Unstructured:
Strength
* More flexibility - can follow up points as they arise
* More likely to gain insight (including unexpected info)
Weakness
* Increased risk of interviewer bias
* Data analysis = not straightforward
* Must sift through irrelevant information
* Firm conclusions may be difficult

Both:
Weakness
* May lie due to social desirability
* A skilled interviewer should build a rapport so this doesn’t happen (even in sensitive situations)
* Harder to build rapport with structured interviews

12
Q

What is Likert Scale

A

A scale where the respondent indicates their agreement (or disagreement) with a statement, using a scale of five points. Often ranges from strongly disagree to strongly agree

13
Q

What is a rating scale

A

A scale where respondents identify a value that represents their strength of feeling about a topic
Eg, 1 to 5 (1 = awful and 5 = perfect)

14
Q

What is a fixed-choice option

A

A fixed choice option item includes a list of possible options and respondents are required to indicate those that apply to them.
(A select all that apply question)

15
Q

What is an interview schedule

A

The list of questions that the interviewer intends to cover.
They should be standardised to reduce the contaminating effect of interviewer bias.

16
Q

What is interviewer bias

A

An interviewer’s expectations, beliefs and prejudices, as they influence the interview process and the interpretation of the data it provides

17
Q

What is a leading question

A

A question which guides the respondent towards a particular answer and/or makes assumptions that could influence the response to the question
Eg, "Is it not obvious that student fees should be abolished?" or "When did you last drive over the speed limit?"

18
Q

What is emotive language

A

Language that evokes an emotional response or has emotional connotations, ie is not expressed in neutral terms

19
Q

What are double-barrelled questions?

A

A question that contains two questions in one, the issue being that the respondents may agree with one half of the question and not the other.

20
Q

What is a double barrelled negative question?

A

A question containing two negatives. These can be confusing.
Eg, "Do you agree that you are not unhappy in your job?"

21
Q

What should be avoided in questions for self-report techniques?

A
  • Overuse of jargon
  • Leading questions
  • Emotive language
  • Double-barrelled questions
  • Double-barrelled negatives
22
Q

Define directional and non-directional hypotheses

A

Directional:
States direction of the difference or the relationship

Non-directional:
Does not state the direction of the difference or relationship

23
Q

What are the levels of the IV

A
  1. The experimental condition
  2. The control condition

A minimum of 2 levels of the IV is needed

24
Q

When should you use a directional hypothesis?

A
  • A previous theory or research finding suggests a certain outcome
25
When should a non-directional hypothesis be used?
* Previous research is contradictory
* There is no previous research
26
What is a peer review?
* The assessment of scientific work by others who are specialists in the same field, to ensure any research intended for publication is of high quality.
* Reviewers consider its validity, ethics, methodology and originality.
27
What are the main aims of a peer review?
* Prevents the dissemination of irrelevant findings/unacceptable interpretations/personal views/deliberate fraud.
* Ensures research is taken seriously because it is independently scrutinised.
* Allocates research funding.
* Suggests improvements/amendments + identifies errors.
28
Strengths of peer review
**Maintains validity of publications:** It ensures that only high-quality research, supported by valid methodology, is disseminated and available as a body of scientific evidence.
**Ensures publications are true + wanted:** It helps to prevent the dissemination of irrelevant findings/unwarranted claims/unacceptable interpretations/personal views and deliberate fraud.
**Honest criticisms can be shared:** It helps that the reviews are anonymous, as this means that reviewers are more likely to be honest.
29
Limitations of peer review
**Anonymity:** Reviewers may take advantage of anonymity to criticise rival researchers, especially if in competition for funding.
**Publication bias (the file drawer problem):** Research that is not attention-grabbing or positive might not get published. This creates a false impression of current psychology.
**Burying groundbreaking research:**
* May suppress opposition to mainstream theories in order to keep the status quo.
* Reviewers may be more critical of work that goes against their viewpoint + more favourable to work they agree with.
* Established scientists are more likely to be chosen to review, so work which agrees with current opinion is more likely to be published than that which is innovative.
* Slows down the rate of change in psychology.
30
Define correlation
A correlation is a mathematical technique investigating an association/relationship between two or more co-variables
31
What is a scattergram?
A scattergram is a graph where one co-variable forms the x-axis and the other the y-axis (for a correlation it doesn’t matter which variable is on each axis). Each point or dot on the graph represents each pair of scores
32
What are the differences between experiments and correlations
**Experiments:**
* The IV is manipulated
* Assesses the effect of the IV on the DV
* Extraneous variables are controlled so that they do not have an effect on the DV

**Correlations:**
* No variable is manipulated, as there is no IV/DV
* The relationship between the co-variables is observed/recorded
* Extraneous variables are not controlled, so it is not possible to establish cause and effect
33
What is the correlation co-efficient?
A **numerical value** between -1 and +1 that indicates the **relationship** between two co-variables: the **direction** (positive/negative) and the **strength** (strong/weak).
The **stronger** the correlation, the closer the coefficient is to +1 or -1.
The **weaker** the correlation, the closer it is to 0.
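The card gives no formula, so here is a minimal Python sketch (with invented co-variable scores) of how such a coefficient can be calculated as Pearson's r: its sign gives the direction and its distance from 0 gives the strength.

```python
from math import sqrt

def correlation_coefficient(x, y):
    """Pearson's r: sign = direction, distance from 0 = strength of a linear relationship."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented co-variables: hours of revision and test score for 5 participants
revision_hours = [1, 2, 4, 5, 8]
test_scores = [40, 48, 55, 58, 75]
r = correlation_coefficient(revision_hours, test_scores)
print(round(r, 2))  # close to +1 --> strong positive correlation
```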
34
What are the terms for the following: 1. As one variable increases so does the other. 2. As one variable increases, the other decreases. 3. No relationship between the variables.
1. Positive correlation 2. Negative correlation 3. Zero Correlation
35
Correlation format for hypotheses
Directional: There is a positive/negative correlation between co-variable 1 and co-variable 2.
Non-directional: There is a correlation between co-variable 1 and co-variable 2.
There is no IV and DV in a correlation.
36
Strength of correlations
* A precise and quantifiable measure of how two variables are related
* Can show if there is a positive/negative and strong/weak relationship
* Quick and easy to carry out - can use pre-existing data (secondary data)
* A useful starting point for further research, e.g. experimental research, which is more time-consuming
37
Limitations of correlations
**No cause and effect**
* Correlations only show how two variables are related
* Risk that they are presented as causal by the media
* Leads to false conclusions about behaviour

**Intervening variables**
* Another, untested variable may explain the relationship between the co-variables
* Leads to false conclusions
38
Give 3 Research issues
1. Extraneous variables 2. Confounding variables 3. Demand Characteristics
39
What are extraneous variables
Any variable, other than the IV, that may affect the DV if it is not controlled. They are 'nuisance' variables that do not vary systematically with the IV. They make it harder to detect a result but do not confound the findings
40
What are confounding variables?
Variables that do vary systematically with the IV so we cannot be sure what caused the change in the DV.
41
What are demand characteristics?
Any cue from the researcher that the participant might interpret as revealing the aim of the study, leading to a change in behaviour.
Participant reactivity = EV.
The Please-U effect or Screw-U effect may occur, so behaviour = unnatural --> EV affecting the DV.
42
What is participant reactivity
The extent to which Pps alter their behaviour in line with demand characteristics may be affected by dispositional traits within the individual participants relating to age, experience, identification with the researcher. It is an EV
43
What are investigator effects?
Any effect of the researcher’s behaviour (unconscious or conscious) on the research outcome (DV); the unwanted influence of the investigator on the research outcome.
Could include the design of the study, selection of participants, interaction with participants etc.
Eg, leading questions, tone of voice, body language
44
How can the effect of EV/CV on DV be reduced?
1. Randomisation 2. Standardisation
45
What is randomisation?
The use of chance to reduce the effects of the researcher's unconscious biases when designing an investigation; controls investigator effects.
It controls the effect of bias in experimental materials + determines the order of events in experimental conditions (see the sketch below).
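A minimal sketch of how randomisation might look in practice (the word list and participant IDs are invented for illustration): chance, rather than the researcher, decides the order of the materials and the order of the conditions.

```python
import random

# Invented experimental materials: chance decides the order in which words are presented
words = ["cat", "table", "river", "spoon", "cloud"]
random.shuffle(words)
print("Word order:", words)

# Chance also decides the order of conditions for each (invented) participant
conditions = ["experimental", "control"]
for participant in ["P1", "P2", "P3", "P4"]:
    order = random.sample(conditions, k=len(conditions))  # shuffled copy of the two conditions
    print(participant, "->", " then ".join(order))
```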
46
What is standardisation?
Making sure that all participants are subject to the same instructions (standardised instructions) and experience (including environment), using the same formalised procedures, so that non-standardised changes in procedure do not act as EVs
47
Quantitative vs qualitative data
Qualitative data= expressed in words, non-numerical. Quantitative data= Expressed numerically rather than in words.
48
Evaluation of qualitative data
**Strengths**
* Richness and insight, as it gives the pps the opportunity to share their thoughts and thus has higher internal validity

**Limitations**
* Difficult to analyse, as it is hard to summarise statistically, so it is hard to make comparisons
* Affected by the subjectivity of the researcher, as conclusions depend on the researcher’s interpretation of the data, which could introduce bias
49
Evaluation of quantitative data
**Strengths**
* Easy to analyse, as the results are easy to turn into statistics and graphs, so comparisons can be drawn simply and quickly
* Data less open to bias, as less interpretation of the data is required to draw conclusions

**Limitations**
* Lacks depth, as participants cannot share their own thoughts because they are limited to fixed choices, so it may not represent real life
50
Primary vs secondary research
Primary = original data collected for the purpose of the study; has not been published. Secondary = Data collected by someone other than the person doing the research. It already exists before the investigation begins.
51
Evaluation of primary data
**Strengths**
* Fit for purpose, as it comes authentically from the participants and the materials (eg the questions) have been specifically designed for the study, so it targets the aim of the research

**Limitations**
* High effort and cost, as conducting research requires significant time and planning and requires resources that need to be paid for
52
Evaluation of secondary data
**Strengths**
* Minimal effort and inexpensive, because it can be accessed online or in person, having already been collected.

**Limitations**
* Variable quality, as it may appear to be valuable but could actually be outdated, incomplete or have been collected in a way with low internal validity.
* May not fit the researcher's needs, as it has not been designed specifically for the aims of the research, so it has low validity.
53
What is a meta analysis
The process of combining findings from a number of studies on a particular topic to provide an overall view.
54
Evaluation of a meta analysis
**Strengths**
* Increases the validity of the data because it is based on a wider range of data.
* Results can be generalised across much larger populations.

**Limitations**
* Possible publication bias, as the researcher may not select all the relevant studies (leaving out ones with no significant results or negative results), so conclusions may be biased as they only represent part of the data.
55
What is nominal data
* Data presented in categories (also known as categorical data).
* Each answer is ‘discrete’; it fits into one category only.
* The frequency in each category is counted.
* Eg, favourite ice cream flavour
56
What is ordinal data
* Data that is ordered (e.g. ranked) in some way.
* This data is numerical, but the numbers are not set at equal/objective/standardised intervals.
* Eg, rating how good New Year's Eve was
57
What is interval data
* The most precise form of measurement in psychology.
* Data measured using a public scale of measurement with units of precisely defined, equal/objective/standardised intervals.
* E.g. measurement in cm: the interval between 4 cm and 8 cm is precisely double the interval between 4 cm and 6 cm.
58
What is mean and which data can it be used with?
* Arithmetic average: add up the scores and divide by the number of scores (N)
* Use: interval (shouldn’t use ordinal as the intervals are not standardised)
59
What is median and which data should it be used with
* Middle value when scores are arranged in order.
* Use: ordinal (can use with interval, but the mean is more precise so is ideal)
60
What is mode and which data should it be used with
* Most frequently occurring value.
* Use: nominal, ordinal or interval
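Pulling the last three cards together, a minimal Python sketch with invented scores showing all three measures of central tendency on the same data set (note how the extreme value distorts the mean but not the median or mode):

```python
import statistics

scores = [2, 4, 4, 5, 7, 9, 50]   # invented data; 50 is an extreme value (outlier)

print(statistics.mean(scores))    # ~11.57 - distorted upwards by the outlier
print(statistics.median(scores))  # 5 - middle value, unaffected by the outlier
print(statistics.mode(scores))    # 4 - most frequently occurring value
```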
61
Evaluate the use of mean as a measure of central tendency
**Strength**
* Most precise measure of central tendency because it takes all of the values into account in the final calculation.

**Limitation**
* May provide a distorted measure of central tendency because outliers can distort the outcome. When there are extreme values in a data set, the mean should not be used.
62
Evaluate the use of mode as a measure of central tendency
**Strength**
* Has broad use because it is the only measure of central tendency that can be used with all types of data and the only measure for nominal data.

**Limitation**
* Not always a useful way of representing data because there may be multiple modes, in which case it is not really a summary.
* Also, it isn’t very representative because it doesn’t include all of the data.
63
Evaluate the use of median as a measure of central tendency
**Strength**
* Less affected by extreme values because it only focuses on the middle value.

**Limitation**
* Not truly representative of the data set because it does not include all values in the calculation, making it less precise.
64
Experimental method vs experimental design
Experimental method = manipulation of the IV to measure the effect on the DV, ie what makes it an experiment. The experimental method is not the same as the types of experiment.
Experimental design = the different ways participants can be organised in relation to the experimental conditions, ie repeated measures (RM), independent groups (IG) or matched pairs (MP).
65
What are independent groups?
* Two separate groups of participants are involved; each group does one of the conditions of the experiment.
* So each participant takes part in only 1 level of the IV.
* Results between groups are compared.
* Random allocation can be used to distribute participant differences across experimental conditions, which ensures there is an equal chance for each participant to be selected for each condition.
66
What are repeated measures
* All participants take part in all conditions of the experiment.
* To reduce the problem of order effects (eg practice effects), counterbalancing can be used: half the participants do the conditions in one order and the other half in the opposite order (see the sketch below).
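A minimal sketch of that counterbalancing idea, with invented participant IDs and condition labels A and B: the first half of the participants do A then B, the second half do B then A.

```python
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]  # invented IDs
half = len(participants) // 2

for i, p in enumerate(participants):
    # First half: condition A then B; second half: B then A
    order = ["A", "B"] if i < half else ["B", "A"]
    print(p, "->", " then ".join(order))
```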
67
What are matched pairs
Participants are matched on some variable that is important to the experiment and then one of each pair is allocated to a different condition.
68
Independent groups Evaluation
**Strengths**
+ Not affected by order effects, such as fatigue and practice
+ Less likely to guess the experimental aim

**Limitations**
- Participant variables are a problem; differences between the groups may act as a confounding variable
69
Repeated measures evaluation
**Strengths**
+ Participant variables are controlled; a participant’s behaviour in one condition can be compared directly with their behaviour in the second

**Limitations**
- Order effects; Ps may guess the aim, or change their behaviour through fatigue, boredom or repetition
70
Matched pairs evaluations
**Strengths**
* Can reduce order effects and demand characteristics, compared to a repeated measures design

**Limitations**
* Matching will never be perfect; even identical twins will differ in their attitudes and behaviour
* Can be time-consuming because pre-testing becomes necessary + this can be expensive
71
What is content analysis?
* Observational research technique in which people are studied indirectly via the communications they produce
* Systematically analyses qualitative information
* Coding categories are drawn up and the number of times each occurs is counted
* Turns qualitative data into quantitative data
72
What is coding in content analysis?
Initial stage, categorising large amounts of information into meaningful units (produces quantitative data)
73
What data can content analysis be used with?
* Primary data, eg pps’ answers to unstructured interviews, open questions or unstructured observations
* Secondary data, eg newspapers, magazines, movies etc
74
What is thematic analysis?
* Analyses qualitative data by identifying recurring patterns/themes
* Themes are more descriptive than the coding units used in content analysis
* May review similar data to test the validity of the themes
* A final report is written using quotes to illustrate each theme
75
Strengths of content and thematic analysis
* Easy techniques to summarise information
* Can use secondary data from the public domain, so no ethical issues surrounding consent
76
Strengths of content analysis only
* Allows statistical analysis if required, as there is quantitative data
* If coding units are clear --> not open to interpretation, so a good analysis of qualitative data
77
Limitations of content and thematic analysis
* Only summarise and describe behaviour --> no cause identified
* Can be subjective --> researcher’s own bias/opinions may be imposed on the information
78
Lab experiment AO1+AO3
* Controlled environment
* IV manipulated
* DV measured

**Strengths**
* Control over EVs + CVs --> IV affects DV --> cause + effect --> internal validity
* High control --> replication possible as new EVs not introduced --> replication is needed to check the validity of results

**Limitations**
* Lack generalisability --> lab = artificial --> unusual behaviour --> low external validity as not generalisable beyond the research setting
* Pps aware of being tested --> unnatural behaviour --> demand characteristics
* Tasks do not represent real life --> low mundane realism
79
Field experiment
* In a natural environment
* IV manipulated
* DV measured

**Strengths**
* Natural environment --> high mundane realism
* Natural environment --> behaviour more like real life (pps may be unaware of being studied) --> high external validity

**Weaknesses**
* Lack of control over CVs + EVs --> cause and effect = difficult to establish --> precise replication = not possible
* Ethical issues as pps unaware they're being studied --> invasion of privacy + no informed consent
80
Natural experiment
* IV = natural/real-world event/condition, not manipulated by researchers
* DV = measured
* Could be conducted in a lab

**Strengths**
* Opportunities for research which might not otherwise be undertaken due to ethical or practical barriers
* Study real-world issues as they happen --> high external validity

**Limitations**
* Natural events = rare --> not many opportunities for research + limited scope to generalise findings to similar situations
* If conducted in a lab --> lacks realism
* If pps are not randomly allocated (only an issue for IG) --> cannot be sure the IV affects the DV
81
Quasi Experiments
* IV = pre-existing characteristic of participants
* IV = not manipulated
* DV is measured

**Strengths**
* Can be conducted in a controlled environment --> high replicability + internal validity

**Limitations**
* Cannot randomly allocate pps to conditions --> CVs could affect the DV
* IV is not deliberately changed --> cause and effect is hard to identify
82
Random sampling
* Every member of the target population has an equal chance of being selected
* Eg, names in a hat or a random generator

**Strengths**
* No researcher bias

**Limitations**
* Time-consuming (needs a list of everybody in the target pop + may need to repeat if people refuse)
* Could be biased/under-represent the target pop (left to chance)
* May become a volunteer sample if people refuse
83
Opportunity sampling
* Uses people who are conveniently available, eg at that specific time or place

**Strengths**
* Quick, easy + cheap, as only people who are available are chosen + no list of people is needed

**Limitations**
* Under-represents the target population as it only draws from one area/pool of people
* Only a certain type of person may be available at that time
* Hard to generalise to the target population
* Researcher bias, as the researcher has control over who to pick
84
Volunteer sampling
* Uses people who put themselves forward to take part
* Eg, responding to a poster/advert

**Strengths**
* Quick, easy + time-effective, as only people who want to be involved come forward and pps come to the researcher

**Limitations**
* Volunteer bias, as a certain type of person may volunteer
* Unrepresentative of the target population, so hard to generalise to the whole target pop
85
Systematic sampling
* Uses people selected by a system
* Eg, select every nth person in a list (divide the number in the target pop by the number of pps, then select every nth person - see the sketch below)
* A sampling frame (an organised list of the target pop, eg in alphabetical order) is produced
* A sampling system (eg every 3rd person) is applied
* The sampling frame may be randomly ordered to avoid bias

**Strengths**
* Objective
* Minimal researcher bias, as once the system is decided the researcher has little input

**Limitations**
* Time-consuming
* If participants refuse to take part it can become more like a volunteer sample
* Could be unrepresentative (it may happen that, eg, every 10th person is a woman)
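A minimal sketch of the every-nth-person system referred to above (the sampling frame is just invented names): divide the size of the target population by the number of participants needed to get n, then take every nth person from the frame.

```python
# Invented sampling frame: an ordered list of the whole target population
sampling_frame = [f"Person{i}" for i in range(1, 101)]   # 100 people

sample_size = 10
n = len(sampling_frame) // sample_size    # 100 / 10 = 10 -> take every 10th person
sample = sampling_frame[n - 1::n]         # Person10, Person20, ..., Person100
print(sample)
```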
86
Stratified Sampling
* The composition of the sample reflects the proportions of certain subgroups (strata) in the target pop
* The researcher identifies the different strata that make up the population
* The researcher identifies the proportion of each stratum in the target pop
* Participants from each stratum are randomly selected so the percentages in the sample are representative (see the sketch below)

**Strengths**
* Representative sample, as it accurately reflects the types of people in the target population
* Good generalisability

**Limitations**
* Time-consuming, as the strata + percentages need to be identified + pps need to be matched to subgroups
* Cannot represent all the different possible subgroups, so complete representation is not possible
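A minimal sketch of the stratified idea referred to above, using invented strata and counts: work out each stratum's share of the target population, then randomly select that share of the sample from within the stratum.

```python
import random

# Invented target population split into strata
population = {"teachers": 40, "students": 140, "support staff": 20}   # total = 200
sample_size = 20
total = sum(population.values())

for stratum, count in population.items():
    n_from_stratum = round(sample_size * count / total)        # stratum's proportional share
    members = [f"{stratum}_{i}" for i in range(1, count + 1)]  # invented members of the stratum
    chosen = random.sample(members, n_from_stratum)            # random selection within the stratum
    print(stratum, "->", n_from_stratum, "selected")
```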
87
What are ethical issues
* When a conflict exists between the rights and dignity of the participants and the aims of the research.
* Researchers must follow the BPS code of conduct.
* A cost-benefit analysis must be conducted to ensure research is ethical.
88
Name the ethical issues
* Informed consent
* Deception
* Protection from harm
* Privacy
* Confidentiality
89
Outline informed consent
* Making participants aware of the aims and procedures of the research, their rights (including the right to withdraw), and how their data will be used before they take part
* Pps make an informed judgement without coercion
* There are alternatives: presumptive, prior general and retrospective consent
90
Outline deception
* Deception = deliberately misleading or withholding information
* Participants should not be deliberately misled
* Some deception is acceptable, but participants must be fully debriefed at the end
91
Outline protection from harm
* Participants should not be exposed to any more risk than they would be in everyday life
* Participants should be reminded of the right to withdraw
* If harm has been caused, participants should be offered counselling as part of the debrief
92
Outline privacy and confidentiality
* Privacy = participants have the right to control information about themselves
* Confidentiality = people have the right to have personal data protected (in line with the Data Protection Act)
93
How to deal with informed consent
* Participants are given a consent form which must be signed
* Under-16s must have parental consent
* Alternatives: presumptive consent, prior general consent or retrospective consent
94
How to deal with deception and protection from harm
* Pps should be fully debriefed with the full aims + procedures and any deception revealed
* Pps should be told how their data will be used and of their right to withhold data
* If subjected to stress, counselling should be provided
* Reassurance that they behaved normally should be given
95
How to deal with privacy + confidentiality
* Personal details must be protected
* Better to maintain anonymity, eg only use initials
* During briefing + debriefing, pps are told their data is protected + not shared
96
What is a positive skew
A positive skew is when most of the distribution is concentrated on the left (and may be the result of a very difficult test).
From left to right the measures run mode, median, mean.
97
What is a negative skew
A negative skew is when most of the distribution is concentrated on the right (the result of a very easy test).
From left to right the measures run mean, median, mode.
98
6 Observation Techniques
1. Overt vs Covert 2. Participant vs Non-participant 3. Naturalistic vs Controlled
99
100
What is a pilot study and what are its aims?
* A pilot study is a small-scale version of the experiment/questionnaire/observation and will usually involve a small number of participants.
* It is a practice/trial run to ensure the procedures work as intended.
* It prompts the researcher to modify the procedure so they end up with useful data.
101
What is a single-blind procedure?
* Participants are not made aware of some details of the investigation in order to reduce demand characteristics
* Eg, not told the full aim, the presence of a control group or which experimental condition they're in
102
What is a double-blind procedure?
* Neither the participants nor the person conducting the study knows the aims of the research
* This is to reduce demand characteristics and investigator effects
103
What are control groups used for
Used for the purpose of comparison with the experimental group so the researcher can be more certain of the effect of the IV on the DV.
104
2 types of observational design
1. Unstructured - record everything, producing a narrative format. Qualitative data.
2. Structured - create behaviour categories + tally how many times each behaviour occurs. Quantitative data.
105
Behavioural categories- Observational design
These must be precisely defined, observable and measurable categories that are non-overlapping.
106
Event + time sampling- observation design
Event sampling - a target behaviour (event) is recorded every time it occurs
Time sampling - behaviour is recorded at set time intervals, eg every 30 seconds
Both are only used in structured observations
107
Inter-observer reliability- observation design
Inter-observer reliability- Measure the level of agreement between observers recording the same behaviour to check consistency
108
109
Give one strength and one weakness of using structured observations.
Strength: Easy to analyse + draw conclusions from the quantitative data
Limitation: Not all behaviour is recorded; some behaviour might not fit into a category or the sampling method. This reduces internal validity as it isn't representative.
110
Give one strength and one weakness of using unstructured observations.
Strength: Rich, detailed information, as it produces qualitative data
Weakness: Hard to analyse, compare + draw conclusions from, so hard to generalise to real-life situations
111
What is the role of an inferential test ?
To determine whether a significant difference or correlation exists and thus decide whether to accept the alternative or null hypothesis
112
What is significance?
Indicates degree of certainty that a difference or correlation exists
113
What is probability
The likelihood that an event will occur. The lower the p value, the more significant a result is. Usual level of significance: p ≤ 0.05 (5%)
114
What is a type I error?
Type I error (optimist's error): accepting the alternative hypothesis when the null should have been accepted (a false positive)
115
What is a type II error?
Type II error (pessimist's error): accepting the null hypothesis when the alternative should have been accepted (a false negative)
116
What is the calculated value?
Calculated value: the result of the calculation from the statistical test formula
117
What is the critical value?
Critical value: Numerical value that is the cut-off for acceptance or rejection of the null hypothesis
118
How do you decide which critical value to use?
1. One-tailed (directional) or two-tailed (non-directional)?
2. How many participants? (for the sign test: total minus the number of equal signs)
3. Which p value? (usually p ≤ 0.05 (5%))
119
What are the steps for conducting the sign test?
1. Calculate the sign of the difference between the two conditions for each participant
2. Count the number of pluses, minuses and equals
3. The number of the less frequent sign is S (the calculated value)
4. Find the appropriate critical value
5. Compare S with the critical value (see the sketch below)
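A sketch of those five steps in Python. The paired scores are invented and the critical value is only a placeholder: the real one must be looked up in a sign-test table for your N, chosen p value and tails.

```python
# Paired scores for each participant: (condition A, condition B) - invented data
pairs = [(5, 7), (3, 6), (6, 6), (4, 8), (7, 5), (2, 6), (5, 9), (4, 4)]

# Steps 1 & 2: sign of the difference for each pair, then count +, - and =
signs = ["+" if b > a else "-" if b < a else "=" for a, b in pairs]
plus, minus, equal = signs.count("+"), signs.count("-"), signs.count("=")

# Step 3: S (calculated value) = frequency of the less frequent sign
s = min(plus, minus)

# Step 4: critical value from a sign-test table for N = total - equal signs and chosen p
n = len(pairs) - equal
critical_value = 0        # placeholder only - look this up for your N, p and tails

# Step 5: significant if S is less than or equal to the critical value
print(f"N = {n}, S = {s}, significant: {s <= critical_value}")
```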
120
Nominal, independent groups
Chi-Squared
121
Nominal, repeated measures
Sign Test
122
Nominal, correlation
Chi-Squared
123
Ordinal, independent groups
Mann-Whitney
124
Ordinal, repeated measures
Wilcoxon
125
Ordinal, correlation
Spearman’s rho
126
Interval, independent groups
Unrelated t-test
127
Interval, repeated measures
Related t-test
128
Interval, correlation
Pearson’s r
129
Features of a science- paradigm and paradigm shifts
Paradigm - shared assumptions and agreed methods within a scientific discipline. This distinguishes something as a science; social sciences lack this.
Paradigm shift - a change in the dominant unifying theory within a scientific discipline.
130
Features of science- theory construction and hypothesis testing
Theory construction - the gathering of evidence to form a set of general laws or principles to explain a phenomenon.
Hypothesis testing - predictions are tested empirically to provide evidence that supports the theory or refutes it.
131
Features of a science- falsifiability
Falsifiability - a scientific principle must be capable of being tested and shown to be false; we cannot prove a theory true, only fail to falsify it. Without testing that could prove theories false/studies trying to challenge current theories, a discipline is a pseudoscience, not a science. Theories surviving attempts to falsify them are the strongest.
132
Features of a science- replicability
Replicability - repeating scientific procedures to test the validity of findings. Replicability determines reliability and validity and the extent to which findings can be generalised. To allow replicability, investigations need to be recorded precisely so that others can verify the method.
133
Features of a science- objectivity
Objectivity- All sources of personal bias are minimised so as not to distort the research process. Researchers must keep critical distance during research
134
Features of a science- Empirical method
Empirical- Gathering of evidence through direct observation and experience.
135
Name the features of a science
1. Theory construction and hypothesis testing 2. Objectivity 3. Empirical method 4. Falsifiability 5. Replicability 6. Paradigm and paradigm shift
136
What are the sections of a scientific report?
1. Abstract 2. Introduction 3. Method 4. Results 5. Discussion 6. References
137
What is an abstract?
Summary of aims and hypotheses, method/procedure, results and conclusions
138
What is an introduction?
Several studies summarised, leading logically to the aim and hypothesis of the research
139
What is the method?
**Subsections:**
- **Design**: stated, with justification of the choice
- **Sample**: sampling method, target population and information related to the number of participants and their demographic (anonymous)
- **Apparatus**: assessment instruments and other relevant materials
- **Procedure**: everything that occurred, including a verbatim record of the briefing, standardised procedures and debriefing
- **Ethics**: how these were addressed in the study

A detailed method section will allow accurate replication.
140
What are the results?
Summaries of the data in graphical, tabular and written formats, including:
- Descriptive statistics, e.g. a bar chart
- Inferential statistics, e.g. the sign test
- Analysis of qualitative data
141
What is the discussion?
Results verbally explained, limitations of study discussed, future suggestions and wider implications explained
142
What are the references?
All sources must be included
143
What is the appendix?
Where the raw data is written
144
What is a case study?
An in-depth investigation of a single individual, group or institution. Case studies tend to be longitudinal (taking place over a large amount of time)
145
Evaluation of Case Studies
**Strengths**
* Holistic, as it focuses on the participant as a whole, not just one element
* Useful, as it can demonstrate typical functioning and indicate causes of behaviour, e.g. HM showed separate memory stores
* Stimulates more research

**Limitations**
* Biased, as the researcher may become attached and lose objectivity
* Ungeneralisable, as it focuses on one specific person
* Questionable validity, as accounts from others about the person may be untrustworthy
146
What is reliability
A measure of how consistent a particular measure is. A measurement made twice should produce similar results each time to be reliable
147
List the ways of testing reliability
1. A test-retest 2. Checking inter-observer reliability
148
What is a test-retest?
* A test/questionnaire/interview is used twice and the results are compared
* The second administration must be sufficiently long after the original so the participant cannot remember their answers
* But it cannot be so long afterwards that their views may have changed
* Answers are correlated between the test and the retest
149
How does inter-observer reliability check for reliability?
* Two observers of the same behaviour make reliable observations if they interpret the behaviour in the same way (also inter-interviewer reliability in interviews and inter-rater reliability in content analysis)
* The results of the observers are correlated
150
When is a measurement reliable?
* The correlation coefficient of the two correlated data sets from the test-retest or inter-observer reliability check should exceed +0.8 (see the sketch below)
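A minimal sketch of that check with invented test and retest scores for the same participants: correlate the two sets of scores and compare the coefficient with +0.8.

```python
from math import sqrt

def pearson_r(x, y):
    """Correlation coefficient between two sets of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y)))

# Invented questionnaire scores for the same participants on two occasions
test = [12, 18, 9, 22, 15, 11]
retest = [13, 17, 10, 21, 16, 12]

r = pearson_r(test, retest)
print(round(r, 2), "reliable" if r >= 0.8 else "not reliable")
```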
151
What are ways of improving reliability in questionnaires and interviews?
Rewrite or remove ambiguous questions, use more closed questions, use structured interviews, train interviewers.
152
What are ways of improving reliability in observations?
Operationalise behavioural categories, avoid overlapping, train observers, discuss decisions.
153
What are ways of improving reliability in experiments?
Use standardised procedures.
154
What is validity?
The extent to which an observed effect is genuine
155
What are the types of validity?
1. Face 2. Concurrent 3. Ecological 4. Temporal
156
What is face validity?
Face validity is a measure of whether a tool appears, on the face of it, to measure what it's supposed to
157
What is internal validity?
Internal validity is a measure of whether results obtained are solely affected by changes in the variable being manipulated
158
What is concurrent validty
The extent to which a measure is in agreement with pre-existing measures that have been validated to test for the same [or a very similar] concept.
It is a type of internal validity.
159
What is ecological validity
The extent to which findings from a research study can be generalised to other settings, particularly to the real world.
This is a type of external validity.
160
What is temporal validity
The extent to which findings from a research study can be generalised to other historical times and eras This is a type of external validity
161
What is external validity
External validity is a measure of whether data can be generalised to other situations outside of the research environment they were originally gathered in