Research Methods (Paper 2) Flashcards

(264 cards)

1
Q

Experimental method

Experimental method definition -

p. 166

A

A research technique that involves deliberately manipulating the IV to observe and measure any resulting changes in the DV.
This method is used to establish cause-and-effect relationships.

A01

Experiments can take place in controlled environments (laboratory experiments), real-world settings (field experiments), or involve naturally occurring variables (natural or quasi-experiments), depending on the level of control and realism desired.

2
Q

Experimental method

Aim definition -

p. 166

A

A clear, general statement describing what the researcher intends to investigate. It outlines the overall purpose or focus of the study.

A01

3
Q

Experimental method

Hypothesis definition -

p. 166

A

A precise, testable statement predicting the expected relationship between variables in a study. It is formulated before the research begins.

A01

4
Q

Experimental method

Directional hypothesis definition -

p. 166

A

Predicts that a difference/relationship exists between variables, and also the specific direction of that effect (e.g., one group will score higher than another).

A01

5
Q

Experimental method

Non-directional hypothesis definition -

p. 166

A

Predicts that a difference/relationship exists between variables but does not specify the direction of the effect.

A01

6
Q

Experimental method

Variables definition -

p. 166

A

Any ‘thing’ that can vary or change within an investigation. In experiments, they are used to assess whether changes in one variable cause changes in another.

A01

7
Q

Experimental method

Independent variable (IV) definition -

p. 166

A

The variable that is manipulated by the researcher (or changes naturally) to examine its effect on the dependent variable (DV).

A01

8
Q

Experimental method

Dependent variable (DV) definition -

p. 166

A

The variable that is measured by the researcher. Any effect on the DV should have been caused by the change in the IV.

A01

9
Q

Experimental method

Operationalisation definition -

p. 166

A

The process of clearly defining variables to specify how they can be measured and tested within the context of an experiment or study.

A01

10
Q

Experimental method

Aim example: What might be the aim for a study whose theory is that drinking energy drinks (in this case SpeedUpp) makes people more talkative? (Due to their caffeine and sugar content.)

p. 166

A

To investigate whether drinking energy drinks makes people more talkative.

A01

11
Q

Experimental method

Directional Hypotheses example: What might be the directional hypotheses for a study whose theory is that drinking energy drinks makes people more talkative?

p. 166

A

Directional hypotheses include words like more or less, higher or lower, faster or slower, etc., since they predict that a difference/relationship exists between variables in a particular direction. Examples could be:

  • People who drink SpeedUpp become more talkative than people who don’t.
  • People who drink water are less talkative than people who drink SpeedUpp.

A01

12
Q

Experimental method

Non-directional Hypotheses example: What might be the non-directional hypotheses for a study whose theory is that drinking energy drinks makes people more talkative?

p. 166

A

A non-directional hypothesis simply states that there is a difference/relationship between variables, but the direction is not specified. Example:

  • People who drink SpeedUpp differ in terms of talkativeness compared with people who don’t drink SpeedUpp.

A01

13
Q

Experimental method

Experiment example: How might you conduct an experiment for, e.g., the energy drink theory?

p. 166

(The theory states that drinking energy drinks makes people more talkative.)

A

Firstly, we are going to gather together two groups of people, let’s say 10 in each group (mostly because we only know twenty people).

Then, starting with the first group, we will give each participant a can of SpeedUpp to drink.
The participants in the other group will just have a glass of water each.

We will then record how many words each participant says in a five-minute period immediately after they have had their drink.

A01

14
Q

Experimental method

How might you decide which type of hypothesis to use?
E.g. for the energy drink theory.

p. 166

A

Psychologists tend to use a directional hypothesis when the findings of previous research studies suggest a particular outcome.
When there is no previous research, or findings from earlier studies are contradictory, they will instead use a non-directional hypothesis.

Even though SpeedUpp is a ‘new energy drink’, the effects of caffeine and sugar on talkativeness are well-documented. Therefore we will opt for a directional hypothesis on this occasion.

A01

15
Q

Experimental method

IV and DV functions in an experiment:

p. 167

A

In an experiment, a researcher changes or manipulates the independent variable (IV) and records or measures the effect of this change on the dependent variable (DV). All other variables that might potentially affect the DV should remain constant in a properly run experiment. This is so the researcher can be confident that the cause of the effect on the DV was the IV, and the IV alone.

A01

16
Q

Experimental method

Levels of the IV:

p. 167

A

To test the effect of an independent variable (IV), researchers create different experimental conditions—known as levels of the IV. These usually include a control condition (e.g., drinking water) and an experimental condition (e.g., drinking an energy drink). This allows for comparison and helps identify any effect of the IV on the dependent variable (DV). A good hypothesis should clearly show the IV, DV, and how they are operationalised (i.e., made measurable).

A01

17
Q

Experimental method

Operationalisation of Variables:

p. 167

A

Psychological concepts like intelligence or social behaviour can be vague, so researchers must clearly define them in measurable terms. This process—operationalisation—makes variables specific and testable.

A01

Example: Instead of saying someone is “more talkative,” a better operationalised hypothesis would be:
“After drinking 300ml of SpeedUpp, participants say more words in the next five minutes than those who drink 300ml of water.”

18
Q

Control of variables

Extraneous variable (EV) definition -

p. 168

A

Any variable other than the IV that could affect the DV if not controlled.

EVs are essentially nuisance variables that do not vary systematically with the IV, but can still influence results, making them potential sources of error.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

Control of variables

Confounding variables definition -

p. 168

A

Confounding variables are variables other than the IV that affect the DV and vary systematically with the IV.
They make it unclear whether changes in the DV are due to the IV or the confounding variable.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

Control of variables

Demand characteristics definition -

p. 168

A

Cues from the researcher or research setting that may reveal the study’s purpose to participants, potentially causing them to change their behaviour.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

Control of variables

Investigator effects definition -

p. 168

A

Any influence the researcher’s behaviour (intentional or unintentional) has on the outcome of the study. This can affect the DV and may occur through study design, participant selection, or interaction during the research.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

Control of variables

Randomisation definition -

p. 168

A

Using chance to reduce bias when designing a study, such as in the order of conditions, helping to ensure fair and unbiased results.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

Control of variables

Standardisation definition -

p. 168

A

The process of using the same procedures and instructions for all participants to ensure consistency and control in a research study.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

Control of variables

Extraneous variables (EV) in an experiment:

p. 168

A

EVs potentially interfere with the IV or DV, so they should be controlled or removed. When possible, they are identified at the start of the study by the researcher, who then takes steps to minimise their influence.

Many EVs are straightforward to control, e.g. the age of the Pp's, the lighting in the lab, etc.
EVs are described as 'nuisance variables' because they do not confound the findings of the study but just make it harder to detect a result.

A01

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
# Control of variables What might be a confounding variable in, e.g., the energy drink experiment? | p. 168
If participants in different conditions differ in personality (e.g., introverts in the water group and extraverts in the energy drink group), personality becomes a confounding variable. Since it varies systematically with the IV, we can’t be sure if the DV (e.g., talkativeness) is due to the IV (the drink) or the confounding variable (personality). | A01
26
# Control of variables Demand characteristics in an experiment | p. 168
In research, participants often try to guess the study’s purpose using cues in the environment: 1. It's evolutionary to assess your environment to understand what's going on. 2. They try to understand what's expected of them in their behaviour. These cues are demand characteristics. This can lead participants to change their behaviour: Please-U effect: over-perform to help the researcher. Screw-U effect: under-perform to sabotage the study. In both cases, the Pp's behaviour becomes unnatural, an extraneous variable that may affect the DV. | A01
27
# Control of variables Example of Investigator effects in E.g the energy drink experiment | p. 169 ## Footnote Talk about Hugh Coolican (2006)
Unintended influence. E.g. given that we are expecting the energy drink group to speak more than the water group, we may unknowingly - in our unconscious behaviour - encourage a greater level of chattiness from the energy drink Pp's. This is an example of an investigator effect, which refers to any unwanted influence of the investigator on the research outcome. As Hugh Coolican (2006) points out, this can include expectancy effects and unconscious cues (described above). It might also refer to any actions of the researcher related to the study's design, such as the selection of the Pp's, the instructions, etc. | A01 ## Footnote Leading questions, which are discussed in relation to EWT on page 58, are also a good example of the power of investigator effects.
28
# Control of variables Example of randomisation in E.g the energy drink experiment | p. 169
The use of chance to minimise researcher bias in the design and procedure of a study, helping to control investigator effects. Example 1: Randomly ordering words in a memory test to avoid experimenter influence. Example 2: In repeated measures designs (e.g., testing different doses of SpeedUpp), randomising the order of conditions for each participant prevents order effects and bias. (This is an alternative to counterbalancing, discussed on the next spread.) | A01 ## Footnote Randomisation ensures fairer comparisons and more valid results.
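As an illustration only (not part of the card content), here is a minimal Python sketch of what randomising the order of conditions for each participant could look like; the participant labels and condition names are invented for the example.

```python
import random

# Hypothetical conditions and participants for a SpeedUpp-style study.
conditions = ["SpeedUpp 150ml", "SpeedUpp 300ml", "Water 300ml"]
participants = ["P1", "P2", "P3", "P4"]

# Each participant receives the conditions in an independently shuffled order,
# so chance (not the researcher) decides the running order.
for p in participants:
    order = conditions[:]      # copy so the master list stays unchanged
    random.shuffle(order)
    print(p, order)
```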
29
# Control of variables Standardisation in practice: | p. 169
Tries to ensure that all Pp's experience the same environment, information, and procedures during a study. This includes using standardised instructions read to each Pp and following a set protocol. By keeping everything consistent, standardisation prevents non-controlled changes from becoming extraneous variables, helping maintain the study’s reliability and validity. | A01
30
# Experimental Design Experimental Design definition- | p. 170
The method used to organise how Pp's are assigned to different conditions during testing in an experiment. Will determine how comparisons are made between groups/conditions. | A01
31
# Experimental Design Independent groups design definition - | p. 170
Pp's are allocated to different groups where each group experiences only one condition of the experiment. | A01
32
# Experimental Design Repeated measures definition - | p. 170
All Pp's take part in all conditions of the experiment. | A01
33
# Experimental Design Matched pairs design definition - | p. 170
Pairs of Pp's are first matched on some variable(s) that may affect the DV. Then one member of the pair is assigned to Condition A and the other to Condition B. | A01
34
# Experimental Design Random allocation definition - | p. 170
A method used in an independent groups design, to control for participant variables by ensuring that each participant has an equal chance of being assigned to any experimental condition. | A01
35
# Experimental Design Counterbalancing definition- | p. 170
A technique used in repeated measures designs to control for order effects, by having half the participants complete the conditions in one order, and the other half in the opposite order. | A01
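Illustrative only, with invented participant labels: a minimal Python sketch of how counterbalancing splits participants between the two possible orders of conditions.

```python
import random

# 20 hypothetical participants in a repeated measures design.
participants = [f"P{n}" for n in range(1, 21)]
random.shuffle(participants)   # randomly decide who ends up in each half

half = len(participants) // 2
# First half: Condition A then B; second half: B then A.
# Any order effect (practice, fatigue) is therefore balanced across orders.
schedule = {p: ("A then B" if i < half else "B then A")
            for i, p in enumerate(participants)}

for p in sorted(schedule, key=lambda name: int(name[1:])):
    print(p, schedule[p])
```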
36
# Experimental Design Independent groups design in E.g the energy drink experiment | p. 170
In an independent groups design, two separate groups of participants take part in different conditions of the experiment. Each participant only experiences one level of the independent variable (IV). In the energy drink study: * Group 1 drinks the energy drink (Condition A – experimental). * Group 2 drinks water (Condition B – control). We then compare the groups’ performances—for example, the average (mean) number of words spoken in five minutes after drinking (described on page 192). | A01
37
# Experimental Design Repeated measures in E.g the energy drink experiment | p. 170
In a repeated measures design, the same Pp's take part in both conditions of the experiment. In the energy drink study: * Each participant first drinks the energy drink (Condition A – experimental). * Later, the same participant drinks water (Condition B – control). We then compare each person’s performance across the two conditions. This design controls for individual differences, making it easier to compare ‘like with like’—which might make it more suitable than an independent groups design. | A01
38
# Experimental Design Matched pairs in E.g the energy drink experiment | p. 170
In a matched pairs design, participants are paired based on a relevant characteristic (e.g., how talkative they are). Then, one person from each pair is placed in each condition. In the energy drink study: You observe participants beforehand (e.g., how much they talk with each other) and match the two most talkative people. One is put in the energy drink group (Condition A), the other in the water group (Condition B). Repeat this for the next most talkative pair, and so on. This helps control for participant differences (like natural talkativeness) and avoids issues with participants figuring out the study, a problem that can occur with a repeated measures design. | A01
39
# Experimental Design Topic, Evaluation, + Independent Groups Design Evaluation positive evaluation | p. 171
No order effects, since each participant only experiences one condition—unlike repeated measures where fatigue or practice can affect results. Participants are also therefore less likely to guess the aims of the study, reducing demand characteristics. | A03
40
# Experimental Design Topic, Evaluation, - Independent Groups Design Evaluation negative evaluation | p. 171
Negative Evaluation: * A key limitation is that participants in each group are different individuals, so any differences in the DV may be due to participant variables, not the IV. * It’s also less economical than repeated measures, as more participants are needed since each person only contributes data to one condition. | A03
41
# Experimental Design Topic, Evaluation, + Repeated Measures Design Evaluation, Positive | p. 171
Pp variables are controlled, since the same individuals take part in all conditions, making comparisons more reliable. It is also more economical, as fewer participants are needed than in an independent groups design. | A03
42
# Experimental Design Topic, Evaluation, - Repeated Measures Design Evaluation, Negative | p. 171
* The main issue is order effects—doing more than one task may cause fatigue, boredom, or practice effects, which can affect results and act as a confounding variable. * E.g, if a participant drinks the energy drink first, it might still influence their performance in the second condition. There's also a higher chance participants will guess the aim of the study, increasing the risk of demand characteristics. | A03 ## Footnote Practice effects refer to improvements in performance due to repeated exposure to tasks, which is a specific type of order effect.
43
# Experimental Design Topic, Evaluation, + Matched Pairs Design Evaluation, Positive | P. 171
* Order effects and demand characteristics are less of a problem since Pp's only take part in one condition. * This design helps control for Pp variables, improving the reliability of comparisons between conditions. | A03
44
# Experimental Design Topic, Evaluation, - Matched Pairs Design Evaluation, Negative | p. 171
* Perfect matching of Pp's is difficult; even when matching identical twins, there will still be differences that could affect the dependent variable (DV). * Matching Pp's can be time-consuming and expensive, especially if a pre-test is required, making this design less economical than others. | A03
45
# Types of experiment Topic Laboratory (lab) experiment definition- | p. 172
A laboratory experiment is an experiment conducted in a controlled environment where the researcher manipulates the independent variable (IV) and measures its effect on the dependent variable (DV). The researcher ensures strict control over extraneous variables that could interfere with the results. While the term “laboratory” suggests a specific setting, it can refer to any environment (such as a classroom) where conditions can be tightly controlled. | A01
46
# Types of experiment Topic Field experiment definition - | p. 172
A field experiment is an experiment conducted in a natural setting, where the researcher manipulates the independent variable (IV) and measures its effect on the dependent variable (DV). In field experiments, the IV is manipulated in a real-world/ everyday environment (often called "the field"), rather than a controlled lab setting. | A01
47
# Types of experiment Topic Natural experiment definition - | p. 173
An experiment where the change in the independent variable (IV) occurs naturally, and the researcher records its effect on the dependent variable (DV). It's important to note that while the IV changes naturally, the setting does not necessarily have to be natural and could be, e.g., a lab. | A01 ## Footnote It is in a field experiment that the setting is always natural.
48
# Types of experiment Topic Quasi-experiment definition - | p. 173
A study that examines existing differences between groups (e.g., age, gender) without manipulating the independent variable (IV). The IV is not controlled by the researcher; it simply exists. Example: If the anxiety levels of phobic and non-phobic patients were compared, the IV of 'having a phobia' would not have come about through any experimental manipulation. | A01 ## Footnote Not a 'true' experiment due to lack of manipulation of the IV.
49
# Types of experiment Topic, + Laboratory experiments, strength | p. 172
* **High Control Over Extraneous Variables**: Researchers can control variables that might influence the dependent variable (DV), making it more likely that any changes in the DV are due to manipulation of the independent variable (IV) rather than extraneous variables (EVs). This increases internal validity, allowing the study to more confidently demonstrate cause and effect. * **Easier Replication:** The high level of control allows for more consistent replication of the experiment, ensuring that new extraneous variables (EVs) are not introduced. Replication is crucial to verify results and confirm whether findings are reliable or just a one-off. | A03
50
# Types of experiment Topic, - Laboratory experiments, limitation | p. 172
* **Low Generalisability (External Validity)**: Lab environments can be artificial and not reflect real-life settings. Pp's may behave in ways that are atypical because they are in an unfamiliar place, so their behaviour might not apply beyond the research setting. * **Demand Characteristics**: Participants often know they are being tested, which can lead to unnatural behaviour as they might try to guess the aim of the experiment (even if they don't know exactly). This can distort results. * **Low Mundane Realism**: Tasks in lab experiments may not replicate real-life experiences. For example, recalling random lists of words in a memory experiment may not be a task participants would face outside the lab, lowering the study's relevance to everyday situations. | A03
51
# Types of experiment Topic, + Field experiments, strength | p. 172
* **Higher Mundane Realism:** Field experiments take place in natural settings, making the environment more realistic than in lab experiments. This leads to more authentic and valid behaviour from participants. * **High External Validity**: Since Pp's are often unaware they are being studied, their behaviour is more natural, which increases the external validity of the study (i.e., better ability to generalise findings to real-life situations). | A03
52
# Types of experiment Topic, - Field experiments, limitations | p. 172
* **Loss of Control Over Extraneous Variables:** Increased realism in field experiments comes at the cost of reduced control over extraneous variables (EV's). This makes it harder to establish a clear cause and effect relationship between the IV and DV, and precise replication of the study may not be possible. * **Ethical Issues**: If participants are unaware they are being studied, they cannot give informed consent, raising ethical concerns. This can be seen as an invasion of privacy, as Pp's may not have agreed to be part of the research. | A03
53
# Types of experiment Topic, + Natural experiments, strength | p. 173
* **Opportunities for Unique Research**: Natural experiments allow researchers to study situations that could not otherwise be studied, for practical or ethical reasons. E.g., the study of institutionalised Romanian orphans (Rutter) would be difficult to conduct as a traditional experiment. * **High External Validity:** Because natural experiments focus on real-life issues and events as they happen (e.g., the impact of a natural disaster on stress levels), they tend to have high external validity, making the findings more applicable to real-world situations. | A03
54
# Types of experiment Topic, - Natural experiments, limitation | p. 173
* **Rarity of Naturally Occurring Events:** A naturally occurring event may only happen very rarely, reducing the opportunities for research. This may also limit the scope for generalising findings to other similar situations. * **Lack of Random Allocation**: In natural experiments *with an independent groups design*, Pp's may not be randomly assigned to experimental conditions, which makes it harder to determine whether the IV directly influenced the DV. | A03 ## Footnote E.g., in the study of Romanian orphans, the IV was whether children were adopted early or late, but there were other differences between these groups; for example, those adopted late may also have been the less attractive children whom no one wanted to adopt, which could confound the results.
55
# Types of experiment Topic, + Quasi-experiments, strength | p. 173
Quasi-experiments are often conducted under controlled conditions, allowing them to share many of the strengths of lab experiments, such as high control over extraneous variables. | A03
56
# Types of experiment Topic, - Quasi-experiments, limitation | p. 173
Like natural experiments, quasi-experiments cannot randomly allocate Pp's to conditions. This lack of randomisation means there may be confounding variables that could influence the results, making it harder to establish clear cause and effect. | A03
57
# Sampling Topic Population definition - | p. 174
A group of people who are the focus of the researcher's interest, from which a smaller sample is drawn for the study. | A01
58
# Sampling Topic Sample definition - | p. 174
A group of people who take part in a research investigation. The sample is drawn from a target population and is presumed to be representative of that population. | A01
59
# Sampling Topic Sampling techniques definition - | p. 174
The method used to select individuals from the population to create a sample for the research study. | A01
60
# Sampling Topic Bias definition - | p. 174
In the context of sampling, bias occurs when certain groups are over/underrepresented within the sample. E.g, a sample may have too many younger people or people from one ethnic group. This limits the extent to which generalisations can be made to the broader target population. | A01
61
# Sampling Topic Generalisation definition - | p. 174
The extent to which findings and conclusions from a particular investigation can be broadly applied to the population. This is made possible if the sample of participants is representative of the population. | A01
62
# Sampling Topic Types of sampling: Random sample - | p. 174 ## Footnote What's a strength to this method? -->
A sampling method where all members of the target population have an equal chance of being selected. *How it's done*: 1. Obtain a complete list of the target population. 2. Assign each person a number. 3. Use a lottery method to select the sample (e.g. using a random number generator or picking numbers from a hat). | A01 ## Footnote This method reduces selection bias and increases the chance of getting a representative sample.
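For illustration only: a short Python sketch of the lottery method described above, using an invented population list of 100 names.

```python
import random

# Hypothetical target population: a complete, numbered list of 100 people.
population = [f"Person {n}" for n in range(1, 101)]

# Lottery method: every member has an equal chance of being selected.
sample = random.sample(population, k=20)
print(sample)
```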
63
# Sampling Topic Types of sampling: Systematic sample | p. 174 ## Footnote What's a strength to this method? -->
A sampling method where every nth person from the target population is selected (E.g. every 5th pupil on a school register or every 3rd house). *How it's done*: 1. Create a sampling frame (an ordered list of the target population). 2. Choose a sampling interval (e.g. every 4th person), either fixed or randomly selected. 3. Select participants by working through the list using the chosen interval. | A01 ## Footnote This method is simple and structured, and can reduce selection bias if the sampling interval is chosen randomly.
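A minimal sketch (invented sampling frame, assumed interval of 5) of how systematic sampling can be worked through a list, with a randomly chosen starting point to reduce bias.

```python
import random

# Hypothetical sampling frame: an ordered list of 100 pupils.
sampling_frame = [f"Pupil {n}" for n in range(1, 101)]

interval = 5                              # take every 5th person
start = random.randrange(interval)        # random starting point reduces bias
sample = sampling_frame[start::interval]  # work through the list at the chosen interval

print(len(sample), sample[:5])
```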
64
# Sampling Topic Types of sampling: Stratified sample | p. 174 ## Footnote What's a strength to this method? -->
A sampling method where the sample reflects the proportions of people in certain sub-groups (strata) within the target/wider population. *How it's done*: 1. Identify key strata that make up the population (e.g. age, gender, team supported). 2. Calculate the proportion of each subgroup needed to match the population. 3. Use random sampling to select participants from each subgroup. E.g: If 40% support Manchester United, 40% City, 15% Bolton, and 5% Leeds, a stratified sample of 20 people would include: 8 United, 8 City, 3 Bolton, 1 Leeds—randomly selected within each group. | A01 ## Footnote This method increases representativeness and reduces sampling bias.
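A sketch of the football example above, assuming a made-up population of 100 fans: the sample size for each stratum is calculated in proportion to the stratum's share of the population, then filled by random selection within the stratum.

```python
import random

# Hypothetical population of 100 fans, grouped into strata by team supported,
# matching the proportions in the card (40% / 40% / 15% / 5%).
strata = {
    "United": [f"United fan {n}" for n in range(1, 41)],
    "City":   [f"City fan {n}"   for n in range(1, 41)],
    "Bolton": [f"Bolton fan {n}" for n in range(1, 16)],
    "Leeds":  [f"Leeds fan {n}"  for n in range(1, 6)],
}

population_size = sum(len(members) for members in strata.values())
sample_size = 20

stratified_sample = []
for team, members in strata.items():
    # Number needed from this stratum, in proportion to its size,
    # then selected at random from within the stratum.
    n_needed = round(sample_size * len(members) / population_size)
    stratified_sample += random.sample(members, n_needed)

print(len(stratified_sample))   # 8 + 8 + 3 + 1 = 20
print(stratified_sample)
```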
65
# Sampling Topic Types of sampling: Opportunity sample | p. 174 ## Footnote What's a limitation to this method? -->
A sampling method where the researcher selects anyone who is available and willing at that time to partake in the study (since people are so difficult to obtain). *How it's done*: The researcher takes the chance to ask whoever is around, such as people on the street or in a public place. | A01 ## Footnote However, it may lead to bias and limit the ability to generalise findings due to lack of representativeness.
66
# Sampling Topic Types of sampling: Volunteer sample | p. 174 ## Footnote What's a strength and limitation to this method? -->
A sampling method where Pp's select themselves to be part of the study—also known as self-selection. *How it's done*: Researchers may place an advert (e.g. in a newspaper, online, or on a notice board), or ask for volunteers directly (e.g. people raising their hand). | A01 ## Footnote This method is easy and requires less effort from the researcher, but may result in a biased sample, as volunteers might differ in important ways from the general population (e.g. more motivated or helpful).
67
# Sampling Topic, Evaluation + Random sample, strengths | p. 175
* **Free from researcher bias**: Selection is completely random, so the researcher cannot influence who is chosen—this improves the objectivity of the sample. * **Potential for representativeness**: If the sample is large enough, random methods are more likely to produce a representative sample than some other techniques (e.g. opportunity sampling). | A03
68
# Sampling Topic, Evaluation - Random sample, limitations | p. 175
* **Time-consuming and impractical:** Requires a complete list of the target population, which may be difficult to obtain. * **Still possible to get an unrepresentative sample:** Pure chance may result in a biased group (e.g. all participants sharing similar traits). * **Participant refusal:** Selected individuals may decline to take part, making the final sample less random and more like a volunteer sample. | A03
69
# Sampling Topic, Evaluation + Systematic sample, strengths | p. 175
* **Avoids researcher bias:** Once the selection system is in place - especially if the starting point or interval is randomly chosen - the researcher has no influence over who is selected. * **Fairly representative:** Systematic methods usually result in a sample that reflects the population well. It's unlikely, for example, to get an all-male sample by chance. | A03
70
# Sampling Topic, Evaluation - Systematic sample, limitations: | p. 175
* **Not entirely free from bias:** If the list has an underlying pattern (e.g. alphabetical order tied to another variable), the sampling may still be biased. * **Still requires a sampling frame:** A complete, ordered list of the population is needed, which may not always be available or practical to create. | A03
71
# Sampling Topic, Evaluation + Stratified Sampling, strengths: | p. 175
* **Avoids researcher bias**: After dividing the population into strata, Pp's are randomly selected, reducing the chance of researcher influence. * **Highly representative**: The sample is designed to accurately reflect the proportions of key subgroups (e.g. gender, age, ethnicity) in the population, increasing the validity and generalisation of findings. | A03
72
# Sampling Topic, Evaluation - Stratified Sampling, limitations: | p. 175
* **Not fully representative:** Stratified sampling only accounts for identified subgroups, so it can't reflect all individual differences within a population. * **Time-consuming:** Requires detailed knowledge of the population and careful planning to divide into correct strata and calculate proportions. | A03
73
# Sampling Topic, Evaluation + Opportunity sample, strength: | p. 175
**Convenient and quick**: This method is time-efficient and cost-effective, requiring minimal effort - compared to more rigorous techniques like random sampling. | A03
74
# Sampling Topic, Evaluation - Opportunity sample, limitations | p. 175
* **Unrepresentative sample:** The sample is often drawn from a narrow, specific group (e.g. psychology students or people in one location), making it difficult to generalise findings to the wider population. * **Researcher bias:** The researcher has full control over who is selected and may (consciously or unconsciously) choose people who fit their expectations or avoid those they dislike, reducing objectivity. | A03
75
# Sampling Topic, Evaluation + Volunteer sample, strength: | p. 175
* **Easy and time-efficient**: Minimal effort from the researcher, as Pp's select themselves to take part, saving time compared to other sampling methods. | A03
76
# Sampling Topic, Evaluation - Volunteer sample, limitation: | p. 175
* **Volunteer bias**: The sample may consist of a particular type of person (e.g. helpful, curious, or enthusiastic), which may limit how well findings can be generalised to the broader population. | A03
77
# Ethical issues + ways of dealing with them, topic. Ethical Issues definition: | p. 176
Ethical issues arise when there is a conflict between the rights of Pp's (such as privacy and consent) and the goals of research (like producing valid and authentic data). | A01
78
# **Ethical issues** + ways of dealing with them, topic. BPS Code of Ethics definition: | p. 176 ## Footnote It is built around four major principles, those being:
A quasi-legal document produced by the British Psychological Society (BPS) that guides psychologists in the UK on what behaviour is and isn't acceptable when working with Pp's. | A01 ## Footnote Respect, competence, responsibility and integrity.
79
# **Ethical issues** + ways of dealing with them, topic. Talk about Informed consent | p. 176 ## Footnote consequence to this -->
Informed consent involves making participants aware of: 1. The aims of the research 2. The procedures involved 3. Their rights, including the right to withdraw from the study at any time 4. How their data will be used. This allows Pp's to make an informed judgment about whether to participate or not, ensuring they don't feel coerced or obligated. | A01 ## Footnote However, asking for informed consent might influence participant behaviour, as knowing the study's aims could lead to unnatural responses, making the study less valid.
80
# **Ethical issues** + ways of dealing with them, topic. Talk about Deception: | p. 176 ## Footnote Deception example in the energy drink study -->
Deception involves deliberately misleading or withholding information from participants during any stage of the research process. When participants are misled or not fully informed, they cannot give true informed consent, as they don’t have all the necessary details about the study. It can be justified if it does not cause undue distress to Pp's. | A01 ## Footnote E.g: In an energy drink study, it may be reasonable to withhold information about a second group consuming a different substance to avoid influencing participants' behaviour.
81
# **Ethical issues** + ways of dealing with them, topic. Talk about protection from harm: | p. 176
Pp's should not experience more risk than they would in their daily lives and must be protected from physical and psychological harm, e.g. being made to feel embarrassed or inadequate, or experiencing undue stress or pressure. Pp's should be reminded that they have the right to withdraw from the study at any time, ensuring their safety and wellbeing. | A01
82
# **Ethical issues** + ways of dealing with them, Topic. Talk about Privacy and confidentiality: | p. 176
* **Privacy:** Participants have the right to control information about themselves and to keep their personal details private. * **Confidentiality:** The researcher must protect Pp's personal data, ensuring it remains confidential as required by law (under the Data Protection Act). * **Right to Anonymity**: A Pp should not be named if it could breach confidentiality; this extends to the locations and institutions involved in the study. | A01
83
# Ethical issues + **ways of dealing with them**, Topic. Talk about BPS Code of Conduct - | p. 177
* The British Psychological Society (**BPS**) has a set of ethical guidelines that researchers are expected to follow. These guidelines aim to ensure Pp's are treated with respect and consideration during all stages of research. **Enforcement:** Researchers are not legally bound to follow these guidelines (no prison for violations), but failure to do so can result in professional consequences (e.g., loss of job). **Ethics Committees:** Research institutions use ethics committees to assess whether research proposals are ethically acceptable, often using a cost-benefit approach. | A01
84
# Ethical issues + **ways of dealing with them**, Topic. Dealing with Informed Consent | p. 177
**Procedure:** Pp's should be given a consent letter / form containing all relevant information that could influence their decision to participate. **Signature:** Once the participant agrees, they sign the form to confirm their consent. **For minors (under 16):** Parental consent must also be obtained. | A01 ## Footnote There are other ways to obtain consent, which may be used depending on the research context E.g verbal consent.
85
# Types of consent: Presumptive consent - | p. 177
Rather than gaining consent directly from the Pp's, a similar group of people are asked whether the study would be considered acceptable. If this group agrees, then the consent of the actual Pp's is presumed, even though they have not personally consented. | A01
86
# Types of consent: Retrospective consent - | p. 177
Consent is obtained after the Pp has already taken part in the study, usually during the debriefing. This often applies in cases where the Pp was not aware they were being studied or where they were subjected to deception. | A01
87
# Types of consent: Prior general consent - | p. 177
Pp's give permission in advance to take part in a range of studies, including one involving deception. By doing so, they are effectively consenting to being deceived, even though they don’t know the exact details of the deception they will be subjected to. | A01
88
# Ethical issues + **ways of dealing with them**, Topic. Talk about Debriefing (Dealing with Deception and Protection from Harm) | p. 177
**At the end of a study, Pp's must be given a full debrief:** * The true aims of the study, including any information that was withheld during the investigation (e.g., the existence of other groups). * What will happen to their data should be made clear, and they should be given the right to withhold data if they wish. This is especially important if retrospective consent is a feature of the study (consent after the study is completed). **Concerns about performance:** * Pp's should be reassured that their behaviour was normal. * If Pp's experienced stress or embarrassment, they may need counselling, which the researcher should provide. | A01
89
# Ethical issues + **ways of dealing with them**, Topic. Dealing with confidentiality | p. 177
* **Data Protection:** If any personal details are collected, they must be securely protected in accordance with ethical and legal standards. * **Anonymity:** Most studies avoid this issue by not recording personal details at all, instead maintaining anonymity using initials or Pp numbers (e.g., "HM" in case studies). * **Standard Practice:** During briefing and debriefing, Pp's are reminded that their data will remain confidential and used only for the purposes of the research. | A01
90
# Pilot studies (and more) Topic. Pilot study definition- | p. 178
A small-scale version of a study carried out before the main investigation. The aim is to check that the procedures, materials, and measuring tools work as intended, and to identify any issues so that necessary modifications can be made in advance. | A01
91
The aims of piloting | p. 178
A pilot study is a small-scale trial run of the actual investigation, using a smaller number of Pp's to test procedures and ensure the study runs smoothly. Pilot studies are not restricted to experimental studies; they are also useful: * In self-report methods (e.g., questionnaires, interviews), to test questions in advance and ensure they are clear and unambiguous. * In observational studies, to test coding systems and train observers. In short, piloting helps identify potential problems and allows researchers to make improvements before doing their full-scale data collection. | A01
92
Single-blind procedure | p. 178
A single-blind procedure is when Pp's are not fully informed about key aspects of the study — such as its aim or which experimental condition they are in. This is done to control for demand characteristics, which occur when participants alter their behaviour because they think they know the purpose of the study. | A01
93
Double-blind procedures | p. 178
In a double-blind procedure, neither the Pp's nor the researcher conducting the study knows the aims of the investigation or which condition the Pp is in. This helps eliminate both demand characteristics (from Pp's) and researcher bias. Commonly used in drug trials, where treatments (real or placebo) are administered by an independent third party who is also unaware of which is which. | A01
94
Control groups and conditions | p. 178
In experiments, the experimental group receives the independent variable (e.g., a real drug), while the control group receives a placebo or no treatment. The control group provides a baseline for comparison, helping researchers determine whether changes in the experimental group are due to the IV, which is true if the experimental group shows a significantly greater effect than the control group — and all other variables are controlled. | A01
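Illustrative only, with invented word counts: a tiny Python sketch of using the control group's mean as a baseline against which the experimental group is compared.

```python
# Hypothetical data: words spoken in five minutes by each participant.
experimental_group = [112, 98, 130, 121, 105, 140, 99, 118, 125, 110]  # SpeedUpp
control_group      = [90, 85, 100, 95, 102, 88, 97, 91, 93, 99]        # water (baseline)

mean_experimental = sum(experimental_group) / len(experimental_group)
mean_control = sum(control_group) / len(control_group)

# The control mean acts as the baseline; a clearly higher experimental mean
# (with other variables held constant) is what would point to an effect of the IV.
print(f"Experimental mean: {mean_experimental:.1f}")
print(f"Control mean:      {mean_control:.1f}")
print(f"Difference:        {mean_experimental - mean_control:.1f}")
```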
95
# Observational Techniques Naturalistic observation definition - | p. 180
Watching and recording behaviour in the natural environment where it would normally occur, without interference or control by the researcher. | A01
96
# Observational Techniques Covert observation definition - | p. 180
Pp's' behaviour is watched and recorded without their knowledge or consent. | A01 ## Footnote Ensures a more natural behaviour but also raises ethical concerns
97
# Observational Techniques Participant observation definition - | p. 180
The researcher actively becomes a member of the group whose behaviour they are watching and recording. They observe behaviour from within, gaining deeper insight through direct involvement. | A01
98
# Observational Techniques Overt observation definition - | p. 180
Pp’s behaviour is watched and recorded with their knowledge and consent, ethically sound but could be affected by demand characteristics. | A01
99
# Observational Techniques Controlled observation definition - | p. 180
Watching + recording behaviour within a structured environment, where some variables are controlled or manipulated to observe their effects. | A01
100
# Observational Techniques Non-participant observation definition - | p. 180
The researcher remains outside of the group whose behaviour they are observing + recording - in an objective manner, without actively participating unlike participant observation. | A01
101
# Types of observation Types of Observation – Introduction | p. 180
Observation is a non-experimental method used to watch and record natural behaviour without relying on self-report techniques. Observations can occur in natural or controlled settings and are useful for studying complex interactions in a flexible and realistic way. This method is often used within experiments to help measure the dependent variable. | A01
102
# Types of observation Naturalistic Observation | p. 180
Takes place in an environment where the target behaviour would naturally occur. All aspects of the setting are uncontrolled, allowing behaviour to unfold naturally. Useful for studying real-life interactions — e.g., observing workplace dynamics from factory workers in an actual factory rather than a lab. | A01
103
# Types of observation Controlled Observation | p. 180
Conducted in a structured environment where certain aspects of the situation are controlled by the researcher. Allows for the manipulation of variables and better control over extraneous variables. Example: Mary Ainsworth’s Strange Situation, where child–mother interactions were observed in a designed playroom and recorded via a two-way mirror to avoid interference. | A01
104
# Types of observation Covert Observation | p. 180
In covert observations, participants are unaware they are being studied — their behaviour is observed secretly. This method avoids demand characteristics, allowing for natural behaviour to be recorded. For ethical reasons, only public behaviour that would be happening anyway can be observed without consent. | A01
105
# Types of observation Overt Observation | p. 180
In overt observations, participants are aware they are being observed and have given informed consent. This method is more ethically sound than covert observation, but participants may alter their behaviour due to demand characteristics. | A01
106
# Types of observation Participant Observation | p. 180
In participant observation, the researcher becomes part of the group being studied. This allows for a first-hand, in-depth account of behaviour in its natural context. E.g A researcher joining a workforce to study interactions between employees and management. | A01
107
# Types of observation Non-Participant Observation | p. 180
The researcher remains separate from the group being studied and observes from a distance. This approach helps maintain objectivity in recording behaviour. Often used when joining the group is impractical — e.g., a female adult researcher observing Year 10 boys in a school setting. | A01
108
# Observational Techniques, Evaluation, - General Limitation to All Observational Studies | p. 181
In all observations it is not possible to establish cause and effect. Observations provide valuable data about what happens but not necessarily why it happens. This is because no variable is deliberately manipulated, and other uncontrolled factors might be influencing the behaviour, so the cause of the behaviour cannot be isolated. | A03
109
# Observational Techniques, Evaluation, + and - Naturalistic Observations Evaluation | p. 181
**strength**: High external validity. Findings can be generalized to everyday life because the behaviour is observed in its natural context. **Limitation**: Lack of control over variables makes replication difficult and introduces uncontrolled extraneous variables, which can make it hard to identify clear patterns of behaviour. | A03
110
# Observational Techniques, Evaluation, + and - Controlled Observations Evaluation | p. 181
**Strength**: Easier to replicate due to more control over variables, reducing the influence of extraneous factors. **Limitation**: Findings may lack external validity, making them less applicable to real-life settings. | A03
111
# Observational Techniques, Evaluation, + and - Covert Observations Evaluation | p. 181
**strength**: Removes participant reactivity, meaning the behaviour observed is more natural and valid, as participants are unaware they are being studied. **Limitation**: Ethical concerns arise, as Pp's may not want their behaviour to be recorded, even in public spaces. This could violate privacy, especially if the behaviour being recorded is seen as private (e.g., spending money during shopping). | A03
112
# Observational Techniques, Evaluation, + and - Overt Observations Evaluation | p. 181
**Strength**: More ethically acceptable as participants are aware they are being observed, which ensures informed consent. **Limitation**: The knowledge of being observed can lead to participant reactivity (a demand characteristic), influencing their behaviour and reducing the validity of the data. | A03
113
# Observational Techniques, Evaluation, + and - Participant Observations Evaluation | p. 181
**strength**: Provides the researcher with an insider's perspective, increasing the validity of the findings by offering deeper insight into the participants' lives. **Limitation**: There's a risk of the researcher losing objectivity and becoming too involved with the Pp's, which can lead to biased results. This is known as "going native," where the researcher may no longer distinguish between their role as an observer and a participant. | A03
114
# Observational Techniques, Evaluation, + and - Non-Participant Observations Evaluation | p. 181
**Strength**: Allows the researcher to maintain objectivity and psychological distance from the Pp's, reducing the risk of bias or becoming too involved with the group. **Limitation**: The researcher might miss out on valuable insights that could come from being more immersed in the group they're studying, potentially reducing the depth of understanding about their behaviours. | A03
115
Behavioural categories definition - | p. 182
When a target behaviour is broken up into components that are observable and measurable, for consistent and objective recording during an observation | A01
116
Event sampling definition- | p. 182
A method of observational research where a specific target behaviour or event is defined in advance, and the researcher records every occurrence of that behaviour during the observation period. | A01
117
Time sampling definition- | p. 182
An observational technique where the behaviour of a target individual or group is recorded at fixed time intervals (e.g., every 60 seconds) to obtain a representative sample of their activity over time. | A01
118
Inter-Observer Reliability in Observational Research | p. 182
* Observational studies are best conducted by **two or more observers** to reduce bias and improve objectivity. * A single observer might **miss behaviours** or unconsciously record what supports their hypothesis. **Steps to establish inter-observer reliability:** 1. **Familiarise** observers with the behavioural categories. 2. **Observe** the same behaviour at the same time (often during a pilot study). 3. **Compare and discuss** recorded data to resolve any differences. 4. **Analyse** the consistency of data across observers. **Reliability check:** Calculated by **correlating** the data sets from each observer. A **high correlation score** (e.g., 80%+ agreement) indicates strong inter-observer reliability. | A01
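For illustration only (the tallies are invented): a short Python sketch of checking inter-observer reliability by correlating two observers' data sets, plus a cruder exact-agreement check. statistics.correlation requires Python 3.10+.

```python
import statistics

# Hypothetical tallies: aggressive acts recorded by each observer
# in the same ten observation intervals.
observer_1 = [3, 5, 2, 0, 4, 6, 1, 3, 2, 5]
observer_2 = [3, 4, 2, 1, 4, 6, 1, 3, 2, 4]

# Correlate the two data sets; a high coefficient indicates strong
# inter-observer reliability (statistics.correlation needs Python 3.10+).
r = statistics.correlation(observer_1, observer_2)
print(f"Correlation between observers: {r:.2f}")

# A simpler check: the proportion of intervals where the tallies match exactly.
agreement = sum(a == b for a, b in zip(observer_1, observer_2)) / len(observer_1)
print(f"Exact agreement: {agreement:.0%}")
```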
119
Structured Observations: | p. 182
Researchers focus on pre-defined target behaviours and use sampling methods, allowing for easier data recording and quantification. This helps when there's too much behaviour to record all at once. Example: In a school playground, the researcher might define 'aggression' in terms of verbal or physical acts. | A01
120
Unstructured Observations: | p. 182
* Researchers record everything they observe, providing detailed, rich descriptions of behaviour. * Best suited for small-scale studies with few participants. Example: Observing interactions between a couple and a therapist in a marriage guidance session. | A01
121
behavioural categories/behaviour checklists: | p. 182
* In structured observations, researchers break down the **target** behaviour into a set of **behavioural categories** or **behaviour checklists**. * This is similar to **operationalisation** (p. 167) — making abstract concepts observable and measurable. * Each category should be **precisely defined**, ensuring they are **observable** and **measurable**. **Example**: The target behaviour of '**affection**' might be broken down into: Hugging, kissing, smiling, holding hands, etc. **No inferences** should be made (e.g., ‘being loving’), as they could lead to **misinterpretation**. **Before the observation**: The researcher must ensure they’ve accounted for all possible ways the target behaviour may occur in their checklist. | A01
122
Sampling Methods in Observations: Event Sampling: | p. 182
* Involves **counting the number of times** a specific behaviour (or "event") occurs within the observation period. **Example**: In a football match, event sampling would involve counting the number of times players express dissent towards the referee. | A01
123
Sampling Methods in Observations: Time Sampling | p. 182
* Involves recording behaviour within pre-established **time intervals**. **Example**: If studying a specific football player, time sampling might involve observing and recording their behaviour every 30 seconds using a behavioural checklist. | A01
124
# Evaluation, + and - Structured Observations: | p. 183
*Strengths*: **Systematic and Reliable**: The use of predefined behavioural categories ensures a more consistent and objective approach to recording data. This makes it easier to replicate the study and reduces bias. **Easier to Analyse**: As structured observations often produce quantitative data (e.g., frequency counts or ratings), the data can be more easily analysed, compared, and summarised, especially with statistical tools. **Reduces Observer Bias**: The clear behavioural categories help reduce the chance that the researcher will be influenced by their subjective judgment, making the data more reliable. *Limitations*: **Loss of Rich Detail**: The structured approach may miss important nuances or details that could provide valuable insights into the behaviours being observed, as it narrows the focus to only specific categories. **Risk of Oversimplification**: By reducing behaviours to simple categories, there is a risk of oversimplifying complex human actions, potentially missing the full complexity of the situation. | A03 ## Footnote Structured observations are ideal for when the researcher wants to ensure consistency and obtain easily analysable data, especially if comparing groups or behaviours is important.
125
# Evaluation, + and - Unstructured Observations: | p. 183
*Strengths*: **Rich, Detailed Data**: Unstructured observations provide a more detailed and holistic view of the participants' behaviour. Researchers can capture a broader range of behaviours, including those that might not fit into predefined categories. **Flexibility**: This approach is flexible, allowing researchers to explore unexpected behaviours or patterns that emerge during the observation. *Limitations*: **Difficult to Analyse**: Unstructured observations tend to produce qualitative data, which is harder to quantify and analyse systematically. This makes drawing conclusions or comparisons more challenging. **Observer Bias**: There is a higher risk of observer bias, as the researcher may focus on behaviours that stand out to them or that confirm their expectations, rather than observing all behaviours equally. **Lack of Consistency**: Without predefined categories, different researchers may interpret and record behaviours in different ways, which can reduce the reliability of the data. | A03 ## Footnote Unstructured observations are better suited for exploratory research, where the researcher wants to capture a more nuanced and detailed picture of the behaviour, without predefined constraints.
126
# Evaluation, + and - behavioural categories in observational research Evaluation | p. 183 ## Footnote Explain ways to improve this limitation -->
strength: Makes data collection more structured and objective, improving reliability. limitation: Poorly defined or overlapping categories can reduce validity—if categories aren’t clear, researchers might interpret behaviours differently. | A03 ## Footnote Researchers should ensure that all possible forms of the target behaviour are included in the checklist. There should not be a 'dustbin category' in which many different behaviours are deposited. Finally, categories should be exclusive and not overlap; for instance, the difference between 'smiling' and 'grinning' would be very difficult to discern.
127
Event sampling evaluation | p. 183
(+) Good for behaviours that happen infrequently and could be missed if time sampling was used, - so less likely to miss important events. (-) May miss details if the behaviour is complex or happens quickly. | A03
128
# Evaluation, + and - Time sampling evaluation | p. 183
(+) Reduces the amount of data collected—more manageable. (-) May miss key behaviours and give an unrepresentative picture. | A03
129
Self-report technique definition- | p. 184
Any method where a person is asked to describe their own feelings, opinions, behaviours, or experiences related to a specific topic. | A01
130
# self-report techniques Questionnaire definition- | p. 184
A questionnaire is a set of written questions (also called 'items') designed to assess a person's thoughts, feelings, and experiences on a specific topic. | A01
131
# self-report techniques Interview definition - | p. 184
An interview is a 'live' encounter, either face-to-face or over the phone, where an interviewer asks a set of questions to assess the interviewee's thoughts and/or experiences. The questions may be pre-set (structured) or evolve during the conversation (unstructured). | A01
132
# self-report techniques How are Questionnaires used in Psychology? | p. 184
Questionnaires - a pre-set list of written questions (or items) to which the participant responds. Psychologists use questionnaires to assess thoughts and/or feelings. They might be used to explore topics like dreams, personality traits, or attitudes towards specific issues (e.g., legalising recreational drugs). They can also be part of an experiment, for instance, to assess how views on topics differ between groups (e.g., age differences in opinions on drug legalisation) | A01
133
# Self-report techniques, Open and Closed Questions in a Questionnaire: Open Questions: | p. 184
Open questions do not have a fixed range of answers; respondents can answer in any way they wish. Example: 'How did you feel during the energy drink experiment?' They tend to produce more qu**ali**tative data, which is rich in depth and detail but can be difficult to analyse. | A01
134
# Self-report techniques, Open and Closed Questions in a Questionnaire: Closed Questions: | p. 184
Closed questions offer a fixed number of responses. Example: 'Did you feel more talkative after consuming the energy drink?' (Yes/No) or 'Rate how sociable you felt on a scale of 1 to 10.' They produce qu**ant**itative data, which is easy to analyse but may lack depth. | A01
135
# self-report techniques Structured interviews: | p. 184
Structured interviews are made up of a pre-determined set of questions that are asked in a fixed order. Basically like a questionnaire but conducted face-to-face or over the phone in real-time. | A01 ## Footnote i.e. the interviewer is asking the questions and waiting for a response in real time.
136
# self-report techniques Unstructured interviews: | p. 184
An unstructured interview is more like a conversation, with no set questions. There is a general topic to discuss, but the interaction is free-flowing, and the interviewee is encouraged to expand and elaborate on their answers based on the interviewer’s prompts. | A01
137
# self-report techniques Semi-structured interviews: | p. 184
A semi-structured interview is a mix between structured and unstructured interviews. There is a pre-determined list of questions, but the interviewer is free to ask follow-up questions when they feel it is appropriate. | A01 ## Footnote Example: A job interview is often semi-structured, with a set of planned questions, but the interviewer can ask additional questions if needed for clarification or further insight.
138
Strengths of Questionnaires | p. 185
**Cost-effective**: Questionnaires can gather large amounts of data quickly, as they can be distributed to many people at once, often without the researcher needing to be present. **Ease of analysis**: The data, especially from closed questions, is straightforward to analyse and lends itself well to statistical methods. Comparisons between groups can be made easily using charts and graphs. | A03
139
Limitations of Questionnaires | p. 185
**Social desirability bias**: Respondents may not always be truthful, especially if they wish to present themselves in a positive light. E.g, people may downplay negative habits, such as giving an underestimated response to how many times they lose their phone. **Response bias**: Some Pp's may provide similar responses throughout the questionnaire (e.g., always answering "yes" or favouring one end of a scale). This can occur if they rush through the questionnaire or don’t read the questions carefully. | A03 ## Footnote **Acquiescence bias (in relation to response bias)**: A specific form of response bias where respondents tend to agree with statements or repeatedly choose the same option without considering each question carefully.
140
Strengths of Structured Interviews: | p. 185
**Easy to replicate:** Their standardised format makes structured interviews straightforward to repeat, increasing reliability. **Reduces interviewer bias:** The fixed questions minimise variations in how different interviewers conduct the interview. | A03
141
Limitations of Structured Interviews: | p. 185
**Lacks flexibility**: Interviewers cannot deviate from the set questions or explore interesting points further. **May frustrate participants**: The rigid structure might limit participants’ ability to fully express themselves, which can affect the depth of the data. | A03
142
What are the Strengths of Unstructured Interviews? | p. 185
**Greater flexibility**: Interviewers can explore new topics as they arise, making the interview more conversational and responsive. **Deeper insight**: The open format allows interviewees to elaborate, offering richer, more detailed data that gives insight into their perspective. | A03
143
What are the Limitations of Unstructured Interviews? | p. 185
**Difficult to analyse**: Data may be lengthy, unstructured, and include irrelevant details, making analysis time-consuming and conclusions harder to draw. **Risk of social desirability bias**: Interviewees may lie to appear socially acceptable. However, skilled interviewers can build rapport to encourage honest responses, even on sensitive topics. | A03
144
# self-report design Open questions definition - | p. 186 ## Footnote What kind of data does this produce?
An open question allows respondents to answer in any way they wish, with no fixed set of responses. E.g, 'Why did you take up smoking?' | A01 ## Footnote Produces qu**ali**tative data which is rich in detail but harder to analyse.
145
# self-report design Closed questions definition - | p. 186 ## Footnote What kind of data does this produce?
A closed question provides a fixed set of responses determined by the question setter. E.g, "Do you smoke?" (Yes/No) | A01 ## Footnote Produces qu**ant**itative data, which is easy to analyse but may lack depth.
146
# self-report design Closed questions can be further divided into different types. But why would we call the following types 'items' instead of questions? | p. 186
Because they are not really questions in the traditional sense. | A01
147
# self-report design Likert scales | p. 186
A Likert scale is a type of closed question where respondents indicate their level of agreement with a statement, typically on a 5-point scale ranging from strongly agree to strongly disagree. Example Statement: "Zombie films can have educational value" 1 — Strongly agree 2 — Agree 3 — Neutral 4 — Disagree 5 — Strongly disagree | A01
148
# self-report design rating scales | p. 186
A rating scale is a type of closed question where respondents select a value that represents the strength of their feelings on a topic, typically on a numerical scale. Example Question: "How entertaining do you find zombie films?" (Circle the number that applies to you) 1 — Very entertaining 2 3 4 5 — Not at all entertaining | A01
149
# self-report design Fixed choice option | p. 186
A fixed choice option includes a list of possible responses, and respondents must choose those that apply to them. Example Question: "For what reasons do you watch zombie films?" (Tick all that apply) ☐ Entertainment ☐ To escape ☐ To be frightened ☐ Amusement ☐ Education ☐ To please others | A01
150
# self-report design What Should Be Considered When Designing an Interview? | p. 186
* **Interview Schedule**: A list of questions the interviewer intends to ask, standardised to reduce interviewer bias. * **Environment**: Conduct interviews in a quiet, private space to make participants feel comfortable and increase openness. * **Rapport Building**: Start with neutral, easy questions to help the interviewee relax and establish rapport. * **Ethical Considerations**: Ensure participants know their answers will be kept confidential, especially for sensitive topics. * **Recording**: Interviews may be recorded for later analysis, or notes may be taken during the interview. | A01
151
# self-report design, Written good questions: Why Is Clarity Important When Writing Questions? | p. 187
Clarity ensures that respondents understand the question correctly, leading to more reliable and accurate data. | A01
152
# self-report design, Written good questions: What Is Jargon, and Why Should It Be Avoided in Questions? | p. 187 ## Footnote how do we avoid this?
Jargon refers to technical terms familiar only to people within a specific field. Example of Jargon in a Question: "Do you agree that maternal deprivation in infanthood inevitably leads to affectionless psychopathy in later life?" This question may confuse the general public because of the technical terms used. | A01 ## Footnote Use simple language to make questions easily understood by all participants.
153
# self-report design, Written good questions: What Are Emotive Language and Leading Questions? | p. 187 ## Footnote how do we avoid this?
* **Emotive Language**: When a question uses emotional words to influence the respondent’s opinion. *Example* : "Boxing is a barbaric sport and any sane person would want it banned." This phrasing leads the respondent to feel a certain way about boxing. * **Leading Questions**: Questions that suggest a particular answer. *Example* : "Isn't it obvious that student fees should be abolished?" This question pushes the respondent toward agreeing. | A01 ## Footnote Use neutral language to avoid bias and emotional influence.
154
# Self-report design, Written good questions: What Are Double-Barrelled Questions, and Why Are They Problematic? | p. 187
* A double-barrelled question contains **two separate questions within one**. *Example* : "Do you agree with this statement: Premier League footballers are overpaid and should have to give twenty per cent of their wages to charity?" **Problem**: Respondents may agree with one part but disagree with the other, making the response unclear. | A01
155
What Is the problem with Double Negatives in Questions? | p. 187 ## Footnote how do we avoid this?
**Example** of a Double Negative Question: "I am not unhappy in my job (agree/disagree)." **Problem**: Double negatives confuse the respondent, making it harder for them to understand the intended meaning. | A01 ## Footnote Phrase questions in a straightforward, positive way to improve clarity.
156
Correlation definition - | p. 188
A correlation is a mathematical technique used to investigate whether there is an association between two variables (called co-variables). Unlike experiments, correlations do not show cause and effect. | A01
157
Co-variables definition- | p. 188
**The variables being measured in a correlation**, E.g, height and weight. They are not referred to as the IV and DV because a correlation investigates the **association** between the variables, rather than trying to show a cause and effect relationship, AKA only examines relationships, **not causality**. | A01
158
Positive correlation definition- | p. 188
As one co-variable increases so does the other | A01 ## Footnote E.g, As the number of people in a room increases, noise level also increases.
159
Negative correlation definition- | p. 188
As one co-variable increases the other decreases. | A01 ## Footnote E.g, As the number of people in a room increases, amount of personal space decreases.
160
Zero correlation definition- | p. 188
When there is no relationship between the co-variables | A01 ## Footnote E.g, The number of people in a room in Manchester and the daily amount of rainfall in Peru.
161
How Is a Correlation Represented in Research? | p. 188
A correlation shows the strength and direction of the relationship between two or more co-variables. * Correlations are **visually represented using a scattergram**. On a scattergram: * One co-variable is plotted on the x-axis. * The other co-variable is plotted on the y-axis. * Each dot represents the pair of values (x, y) for a single participant or observation. Patterns in the dots can reveal positive, negative, or zero correlations. | A01
162
What Is the Key Difference Between a Correlation and an Experiment? | p. 188
Experiments: * The researcher manipulates the IV. * Measures the effect on the dependent variable (DV). * Causal relationships can be established (cause and effect). Correlations: * There is no manipulation of variables. * Measures the relationship between co-variables. * Cannot determine cause and effect. | A01
163
# Positive evaluations What Are the Strengths of Correlational Research? | p. 189
**Useful for preliminary research**: * Helps identify patterns or associations between variables. * Can guide future research by suggesting possible variables worth investigating further through experiments. **Precise and quantifiable**: * Provides a clear measure of the **strength and direction** of a relationship between variables. **Quick, economical, and efficient**: * No need for a lab setting or variable manipulation. * Can use **secondary data** (e.g., government statistics), saving time and resources. | A03
164
# Negative evaluations What Are the Limitations of Correlational Research? | p. 189
**Cannot show cause and effect**: * Only reveals associations, not which variable causes the other to change. *Example*: High caffeine and high anxiety — we can’t be sure which causes which. **Direction of effect is unclear**: * Known as the directionality problem — does A cause B, or does B cause A? **Third variable problem (intervening variables)**: * An unaccounted-for variable may be influencing both co-variables. *Example*: Job stress may lead to both more caffeine and higher anxiety. **Risk of misinterpretation**: * Correlations can be wrongly reported as causal, especially in media. *Example*: Link between single-parent families and crime may be due to factors like emotional hardship — not family structure alone. | A03
165
Qualitative data definition- | p. 190
Data expressed in words and so is in a non-numerical form. 📝 (It may later be converted into numbers for analysis.) | A01
166
Quantitative data definition- | p. 190
Data that can be counted/measured, usually presented as numbers. | A01
167
Primary data definition- | p. 190
Information collected firsthand by the researcher specifically for the current research project. 📌 Often gathered from Pp's via experiments, self-reports, or observations. | A01 ## Footnote Provides original, directly relevant data specific to the researcher’s question.
168
Secondary data definition- | p. 190
Secondary data - Information that has already been collected by someone else and so pre-dates the current research project. In psychology, such data might include the work of other psychologists, government statistics or public records. | A01 ## Footnote Saves time and resources for researchers and can be used to supplement primary data or offer a comparative basis.
169
Meta-analysis definition- | p. 190
A method that combines findings from multiple studies on the same topic to provide an overall conclusion. 🔍 Can involve a qualitative review and/or a quantitative analysis of the results producing an effect size. | A01 ## Footnote Effect Size: A measure of how big or small a difference/relationship is in a study.
170
What is Qu**ali**tative Data? | p. 190
Data that is expressed in words, rather than numbers. It can be gathered through interviews, observations, and open-ended questions. It typically reflects thoughts, feelings, and experiences, providing depth and rich detail about a Pp's experience, views, or emotions. | A01 ## Footnote Example: An interview or a personal diary entry.
171
What is Qu**ant**itative Data? | p. 190
Data that is expressed numerically and can be counted or measured. It is often collected in experiments or surveys and can be analysed statistically. It offers a measurable, objective means of assessing variables and allows for statistical analysis. | A01 ## Footnote Example: The number of words recalled in a memory test or the time it takes to complete a task.
172
Which one is best, Qualitative Data or Quantitative Data? | p. 190
**Neither is inherently better**: It *depends on the research question*. * Qu**ali**tative data provides deep insights into experiences, while qu**ant**itative data provides measurable results that can be statistically analyzed. **Combination of both**: Many researchers use both approaches. * E.g, qu**ant**itative data can be *supported* by qu**ali**tative interviews for deeper context. Similarly, qu**ali**tative data can sometimes be *converted* into numerical data for statistical analysis. | A01
173
# evaluation, + and - Evaluation of Qu**ali**tative Data: | p. 191
**Strengths**: * **Richness and depth**: Qualitative data offers more detailed, nuanced insights into Pps' thoughts, feelings, and experiences, which provides a fuller understanding of the topic. * **External validity**: Because of the open-ended nature, qualitative data tends to have greater external validity, reflecting real-life experiences and Pp worldviews. **Limitations**: * **Difficult to analyze**: It doesn't lend itself to statistical analysis, making it challenging to identify clear patterns or relationships. * **Subjective interpretation**: Conclusions drawn from qualitative data often depend on the researcher's interpretations, which can be biased, especially if they have preconceptions about the data. | A03
174
# evaluation, + and - Evaluation of Qu**ant**itative Data: | p. 191
**Strengths**: * **Easy to analyze**: Quantitative data can be analyzed using statistical methods, which allow for straightforward comparisons and trends. * **Objectivity**: The numerical nature of the data makes it less open to bias and more reliable when it comes to drawing conclusions. **Limitations**: * **Limited scope**: Quantitative data is often narrower and doesn't provide the rich, detailed understanding that qualitative data offers. * **Less real-life relevance**: Because it's simplified into numbers, it may fail to capture the complexity and depth of real-life experiences. | A03
175
# evaluation, + and - Evaluation of Primary Data: | p. 191
**Strengths**: * **Tailored to the research**: Primary data is authentic and collected specifically for the research at hand, ensuring that it's directly relevant to the study. * **Control over data quality**: The researcher has full control over the process, ensuring that the data collection is tailored to meet the study's specific needs. **Limitations**: * **Time and resource-intensive**: Collecting primary data requires significant time, effort, and resources, making it more challenging compared to secondary data. * **Planning and preparation**: It requires detailed planning and preparation (e.g., designing surveys, interviews, experiments), which can be a barrier in some research contexts. | A03
176
# evaluation, + and - Evaluation of Secondary Data: | p. 191
**Strengths**: * **Cost-effective and accessible**: Secondary data is often inexpensive and readily available, saving time and resources in the research process. * **No need for primary data collection**: Researchers can find valuable data without the need to conduct time-consuming fieldwork, which makes secondary data particularly useful in large-scale studies. **Limitations**: * **Quality concerns**: Secondary data may be outdated, incomplete, or not fully reliable. Researchers need to critically evaluate the accuracy of the data first. * **Misalignment with research needs**: The data may not perfectly fit the research question or specific needs, and finding precisely what is needed can be a challenge. | A03
177
# Levels of Measurement Nominal Data | Teacher's notes
Data is grouped into categories (e.g., "Tall" and "Short") with a frequency count for each. It is the simplest level of measurement, and the mode is the appropriate measure of central tendency. | A01
178
# Levels of Measurement Ordinal Data | Teacher's notes
Data is ordered or ranked (e.g., 1st, 2nd, 3rd in a race), but the intervals between ranks are not necessarily equal. * Measure of central tendency: Median * Measure of dispersion: Range | A01
179
# Levels of Measurement Interval Data | Teacher's notes
Interval data refers to a type of data where the intervals between values are consistent and equal, but there is no true zero point. This means that while you can measure the distance between points, the zero point does not represent the absence of the quantity being measured. | A01 ## Footnote E.g, 0°C does not mean "no temperature".
180
Descriptive statistics definition - | p. 192
The use of graphs, tables, and summary statistics to identify trends and analyze sets of data. It helps to summarize and describe the main features of a data set in a clear and understandable way. | A01
181
Measures of central tendency definition - | p. 192
The general term for any measure of the average value in a set of data. | A01
182
Mean definition- | p. 192
The arithmetic average calculated by adding up all the values in a set of data and dividing by the number of values there are. | A01 ## Footnote It's a common measure of central tendency.
183
Median definition- | p. 192
The central value in a set of data when values are arranged from lowest to highest. | A01 ## Footnote The median divides the data set into two equal halves.
184
Mode definition- | p. 192
The most frequently occurring value in a set of data. | A01
185
Talk about the mean and what distortion is: | p. 192
The mean is commonly known as the average. To calculate it, you add up all the numbers in a data set and then divide by how many numbers there are. It provides a general idea of the "typical" value in a data set. Example: For the data set: 5, 7, 7, 9, 10, 11, 12, 14, 15, 17 Step 1: Add all the numbers: 5 + 7 + 7 + 9 + 10 + 11 + 12 + 14 + 15 + 17 = 107 Step 2: Divide the sum by the number of values (10): 107 ÷ 10 = 10.7 While the mean is sensitive because it includes all data points, it can be skewed by extreme values (outliers). Example of distortion: If the number 17 in the data set is replaced with 98, the new mean would be: (5 + 7 + 7 + 9 + 10 + 11 + 12 + 14 + 15 + 98) ÷ 10 = 188 ÷ 10 = 18.8 This new mean is significantly higher and doesn’t seem to represent the data well anymore. | A01
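If you want to check the arithmetic yourself, here is a minimal Python sketch (not part of the specification) that reproduces the worked example above using the standard `statistics` module; the data values are the ones from this card.

```python
from statistics import mean

scores = [5, 7, 7, 9, 10, 11, 12, 14, 15, 17]
print(mean(scores))       # 10.7 -> matches the step-by-step calculation above

# Distortion by an extreme value: replacing 17 with 98 pulls the mean upwards
distorted = [5, 7, 7, 9, 10, 11, 12, 14, 15, 98]
print(mean(distorted))    # 18.8 -> no longer represents the data well
```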
186
Talk about the Median and outliers: | p. 192 ## Footnote mention + and -
The median is the middle value in a data set when the numbers are arranged from lowest to highest. If there is an odd number of values, the median is the one in the middle. If there is an even number of values, the median is the average of the two middle numbers. Example (even number of values): For the data set: 5, 7, 7, 9, 10, 11, 12, 14, 15, 17 Step 1: The middle values are 10 and 11. Step 2: Average them: (10 + 11) ÷ 2 = 10.5 Strength of the median: The median is not affected by extreme values (outliers). Even if 17 is replaced with 98, the median remains the same at 10.5. However, the median is less sensitive as it only takes the middle values into account. | A01
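A companion sketch for the median, again assuming Python's `statistics` module; it shows that the outlier from the previous card leaves the median untouched.

```python
from statistics import median

scores = [5, 7, 7, 9, 10, 11, 12, 14, 15, 17]
print(median(scores))     # 10.5 -> average of the two middle values (10 and 11)

# The median resists extreme values: replacing 17 with 98 changes nothing
print(median([5, 7, 7, 9, 10, 11, 12, 14, 15, 98]))   # still 10.5
```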
187
Talk about the mode and what unimodal and bimodal is: | p. 192 ## Footnote mention + and -
The mode is the most frequent value in a data set. There may be: One mode (unimodal) if one value occurs more frequently than the others. Two modes (bimodal) if two values occur with the same highest frequency. No mode if all values are unique. Example: For the data set: 5, 7, 7, 9, 10, 11, 12, 14, 15, 17 The mode is 7, as it appears most frequently. Strengths and limitations: The mode is easy to calculate, but it may not always represent the data well. In this case, the mode 7 is quite different from the mean (10.7) and median (10.5), so it doesn't accurately reflect the overall distribution of the data. However, in categorical data (e.g., favorite dessert), the mode is the only measure that can be used. | A01
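The same data set can be checked for the mode; `statistics.multimode` (Python 3.8+) is a convenient way to see unimodal and bimodal cases. This is just an illustrative sketch, not textbook material.

```python
from statistics import mode, multimode

scores = [5, 7, 7, 9, 10, 11, 12, 14, 15, 17]
print(mode(scores))                  # 7 -> unimodal: one most frequent value

print(multimode([3, 3, 5, 5, 8]))    # [3, 5] -> bimodal: two joint-highest values
print(multimode([1, 2, 3]))          # [1, 2, 3] -> no single mode (all values unique)
```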
188
Measures of dispersion definition - | p. 193
A general term for any method used to describe the spread or variability within a set of data scores. Examples include: * Range * Standard Deviation | A01
189
Range definition - | p. 193
A basic measure of dispersion calculated by subtracting the lowest score from the highest and adding 1 as a correction for rounding. | A01
190
Standard deviation definition - | p. 193
A precise measure of dispersion that shows how much scores differ from the mean. It’s calculated by finding the variance (average of squared differences from the mean) and taking its square root. | A01
191
Range | p. 193 ## Footnote mention + and -
The range is the difference between the highest and lowest values in a data set, often adding 1 for correction. Example: For data (5, 7, 7, 9, 10, 11, 12, 14, 15, 17), the range is: (17 - 5) + 1 = 13. Strength: It is easy to calculate. Limitation: It only considers the two extreme values, which may not represent the data accurately if there are outliers. | A01
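A one-line helper (the name `range_with_correction` is just illustrative) showing the card's formula, including the +1 correction.

```python
def range_with_correction(scores):
    """Range as defined on this card: (highest - lowest) + 1."""
    return (max(scores) - min(scores)) + 1

print(range_with_correction([5, 7, 7, 9, 10, 11, 12, 14, 15, 17]))   # (17 - 5) + 1 = 13
```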
192
Standard deviation | p. 193 ## Footnote mention + and -
A measure of the spread or dispersion of scores around the mean. The larger the standard deviation, the greater the spread of data. Example: If the standard deviation of test scores is large, it indicates diverse responses; if it's small, the data are tightly grouped around the mean, meaning people responded similarly towards the test. Strength: It takes into account all values in the data set, offering a precise measure of dispersion. Limitation: It can be distorted by extreme values, similar to how the mean can be affected. | A01
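A sketch of the calculation, assuming the scores are treated as the whole data set: `statistics.pstdev` averages the squared deviations from the mean and takes the square root, matching the definition on p. 193 (`statistics.stdev` would give the sample version instead).

```python
from statistics import pstdev

spread_out       = [5, 7, 7, 9, 10, 11, 12, 14, 15, 17]
tightly_grouped  = [10, 11, 10, 11, 10, 11, 10, 11, 10, 11]

print(round(pstdev(spread_out), 2))       # about 3.66 -> scores differ a lot from the mean
print(round(pstdev(tightly_grouped), 2))  # 0.5 -> scores cluster around the mean
```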
193
# Measures of central tendency, Evaluation, + and - Mean evaluation | Teacher's notes (p. 193) ## Footnote Mean is used for interval data
(+) More sensitive than the median, because it makes use of all the values of the data. (-) It can be misrepresentative if there is an extreme value. | A03
194
# Measures of central tendency, Evaluation, + and - Median evaluation | Teacher's notes (p. 193) ## Footnote Median is used for ordinal data
(+) It is not affected by extreme scores, so can give a representative value. (-) It is less sensitive than the mean, as it does not take into account all of the values. | A03
195
# Measures of central tendency, Evaluation, + and - Mode evaluation | Teacher's notes (p. 193) ## Footnote Mode is used for nominal data
(+) It is useful when the data are in categories. (-) It is not a useful way of describing data when there are several modes. | A03
196
# Measures of Dispersion, Evaluation, + and - Range Evaluation: | Teacher's notes (p. 193)
(+) Provides you with direct information that is easy to calculate. (-) Affected by extreme values. Doesn't take into account all of the numbers in the data set. | A03
197
# Measures of Dispersion, Evaluation, + and - Standard Deviation Evaluation | Teacher's notes (p. 193)
(+) More precise measure which takes all values into account. (-) May hide extreme values of data sets. | A03
198
# Tables and Graphs Bar charts: | p. 194
Used to represent discrete data, i.e. when the data is in categories. The categories are placed on the x-axis and the mean or frequency is on the y-axis. Columns do not touch and have equal width and spacing. | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.twinkl.co.uk%2Fteaching-wiki%2Fbar-chart&psig=AOvVaw21JWvTQ7RxWotzrO0rRvtW&ust=1746455706638000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCJDT1NGEio0DFQAAAAAdAAAAABAE
199
# Tables and Graphs Histograms: | p. 194
Used to represent data on a 'continuous' scale. Columns touch because each one represents a single score or interval on a continuous scale. Scores (intervals) are placed on the x-axis. The height of each column shows the frequency of values. | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fstatisticsbyjim.com%2Fbasics%2Fhistograms%2F&psig=AOvVaw3loItV53iR6shWO74Bycwx&ust=1746455980628000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCPjr49WFio0DFQAAAAAdAAAAABAE
200
# Tables and Graphs Line graphs: | p. 194
Line graphs represent continuous data and use points connected by lines to show how something changes in value, for instance, over time. Typically, the IV is plotted on the x-axis and the DV on the y-axis. | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FForgetting_curve&psig=AOvVaw32B9OHx3Qzxvn0YXXtG2N6&ust=1746456396957000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCIi71pyHio0DFQAAAAAdAAAAABAJ
201
# Tables and Graphs Summary table | p. 194
There are various ways of representing data; one of these is in the form of a summary table. It is important to note that when tables appear in the results section of a report they are not merely raw scores but have been converted to descriptive statistics. It is standard practice to include a summary paragraph beneath the table explaining the results. | A01 ## Footnote check page 194, first picture.
202
# Tables and Graphs Frequency tables | Teacher's notes (p. 194 in topic)
Uses raw data to count the frequency of particular items or scores. | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.teachoo.com%2F8445%2F2755%2FGrouped-Frequency-Distribution-Table%2Fcategory%2FFrequency-Distribution-of-Data%2F&psig=AOvVaw2qIUqaXpOEh0jQpp9juv_H&ust=1746456898731000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCKDhvouJio0DFQAAAAAdAAAAABAJ
203
# Tables and Graphs Raw data tables | Teacher's notes p. 32 in document, (p. 194 in topic)
Show scores prior to analysis. | A01
204
# Tables and Graphs Pie chart | Teacher's notes p. 34 in document, (p. 194 in topic)
Used to represent frequency data or proportions. Each slice is a proportion/fraction of the total. | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fdocs.tibco.com%2Fpub%2Fspotfire%2F6.5.3%2Fdoc%2Fhtml%2Fpie%2Fpie_what_is_a_pie_chart.htm&psig=AOvVaw3-EBePs2hsLDav9xR47lWU&ust=1746457270404000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCNDQxr2Kio0DFQAAAAAdAAAAABAE
205
# Tables and Graphs Scattergrams | p. 194
A scattergram is a graph used to display the relationship between two sets of data, known as co-variables, and helps identify patterns or correlations between them. X-axis: represents one co-variable. Y-axis: represents the other co-variable. Each point on the graph corresponds to a pair of values, one from each co-variable. Types of correlation: positive, negative and zero (no) correlation. While scattergrams can show correlations, they do not imply causation: a correlation indicates a relationship, but it doesn't mean one variable causes the change in the other. Scattergrams are used to visualise the strength and direction of the relationship between two variables, to identify trends, clusters, or outliers in the data, and to determine whether a linear relationship exists, which can be further analysed with a line of best fit. | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fplanyway.com%2Fblog%2Fhow-to-make-a-scatter-plot&psig=AOvVaw2ZF03m0KbH4adbTMr29Zn8&ust=1746457400477000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCNiFrPyKio0DFQAAAAAdAAAAABAJ
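If you want to see how a scattergram is built in practice, here is an illustrative sketch using made-up co-variable data and the matplotlib library (an assumption: matplotlib is not part of the course and would need to be installed).

```python
import matplotlib.pyplot as plt

# Made-up co-variables for illustration only
hours_revised = [1, 2, 3, 4, 5, 6, 7, 8]
test_score    = [35, 41, 44, 52, 58, 63, 70, 74]

plt.scatter(hours_revised, test_score)   # one dot per participant: an (x, y) pair
plt.xlabel("Hours revised (co-variable 1)")
plt.ylabel("Test score (co-variable 2)")
plt.title("Scattergram: positive correlation (association, not causation)")
plt.show()
```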
206
Normal distributions | p. 195
A symmetrical, bell-shaped curve that represents how certain variables (e.g., height, IQ) are distributed in a population. The mean, median and mode are all located at the highest peak. Left and right sides of the curve are mirror images. Frequency: Most individuals cluster around the central average. Fewer people appear at the extreme ends (the "tails"). The tails never touch the x-axis, meaning extreme scores are always possible, even if rare. Real-Life Example: Variables like height in a large population typically form a normal distribution. | A01 (example of it in page 134) ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fdietassessmentprimer.cancer.gov%2Flearn%2Fdistribution.html&psig=AOvVaw0W6_AXiF_MPiukBvt6bQ_O&ust=1746459452243000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCPj3hcySio0DFQAAAAAdAAAAABAE
207
Skewed distribution definition - | p. 195
A spread of frequency data that is not symmetrical, where the data clusters to one end. | A01
208
Positive Skewed Distributions | p. 195
A type of distribution in which the long tail is on the positive (right) side of the peak. Most of the distribution is concentrated on the left. The mean is pulled to the right by a few extreme high scores. Typical example: a very difficult test. Order of central tendency: Mode < Median < Mean | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.statisticshowto.com%2Fprobability-and-statistics%2Fskewed-distribution%2F&psig=AOvVaw2IZIMkVFH9bd5lxqQ_NbHT&ust=1746459308508000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCOjLn4iSio0DFQAAAAAdAAAAABAb
209
Negative Skewed Distributions | p. 195
A type of distribution in which the long tail is on the negative (left) side of the peak. Most of the distribution is concentrated on the right. The mean is pulled to the left by a few extreme low scores. Typical example: a very easy test. Order of central tendency: Mean < Median < Mode | A01 ## Footnote https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.statisticshowto.com%2Fprobability-and-statistics%2Fskewed-distribution%2F&psig=AOvVaw2IZIMkVFH9bd5lxqQ_NbHT&ust=1746459308508000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCOjLn4iSio0DFQAAAAAdAAAAABAb
210
Calculation of percentages: e.g There were 6 participants whose word score was higher for the SpeedUpp condition than the water condition out of a total of 10 participants. Calculate the percentage: | p. 196
To calculate the percentage we use the following formula: Number of Pp's who spoke more after SpeedUpp/Total number of Pp's * 100 = ...% so that would be, 6/10 * 100 = 60% | A01
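A tiny helper (the name `percentage` is just illustrative) that mirrors the formula on this card.

```python
def percentage(part, whole):
    """part / whole * 100, as in the formula above."""
    return part / whole * 100

# 6 of the 10 participants spoke more in the SpeedUpp condition
print(percentage(6, 10))   # 60.0 -> 60%
```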
211
Converting a percentage to a decimal e.g for 37% and 60% | p. 196
To convert a percentage to a decimal, remove the % sign and move the decimal point two places to the left. For example: 37% is 37.0; moving the decimal point two places to the left gives 0.37. So, for the percentage of participants who spoke more words in the SpeedUpp condition: 60% is 60.0; moving the decimal point two places to the left gives 0.60 (0.6). | A01
212
Converting a Decimal to a Fraction: E.g 0.6 0.81 0.275 | p. 196
* 1. Count the decimal places: E.g: 0.6 → 1 decimal place 0.81 → 2 decimal places 0.275 → 3 decimal places * 2. Use the correct denominator: 1 decimal place → divide by 10 → 0.6 = 6/10 2 decimal places → divide by 100 → 0.81 = 81/100 3 decimal places → divide by 1000 → 0.275 = 275/1000 * 3. Simplify the fraction: Find the highest common factor (HCF) of numerator and denominator. Divide both parts of the fraction by the HCF. Examples: 275/1000 → divide both by 25 → 11/40 6/10 → divide both by 2 → 3/5 | A01
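Python's built-in `fractions` module can check these conversions automatically; `limit_denominator()` returns the simplest matching fraction, which here is the same as dividing top and bottom by the HCF.

```python
from fractions import Fraction

for decimal in (0.6, 0.81, 0.275):
    print(decimal, "=", Fraction(decimal).limit_denominator())
# 0.6 = 3/5
# 0.81 = 81/100
# 0.275 = 11/40
```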
213
Using ratios: Part-to-Whole Ratio E.g 6 people (SpeedUpp) out of 10 total and Part-to-Part Ratio E.g 6 people (SpeedUpp) vs. 4 people (Water). | p. 196
You can express data relationships using ratios, which compare quantities either to the whole or to each other. ✅ Part-to-Whole Ratio : Compares one part to the total. E.g: 6 people (SpeedUpp) out of 10 total → 6:10 → simplify by dividing both by 2 → 3:5 🔁 Part-to-Part Ratio :Compares one part directly to another part. E.g: 6 people (SpeedUpp) vs. 4 people (Water) → 6:4 → simplify by dividing both by 2 → 3:2 🔧 Tip: Always simplify ratios like you would a fraction, by dividing both sides by the highest common factor (HCF). | A01
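A short sketch showing ratio simplification with the highest common factor, using `math.gcd`; the helper name `simplify_ratio` is illustrative.

```python
from math import gcd

def simplify_ratio(a, b):
    """Divide both sides of a ratio by their highest common factor (HCF)."""
    hcf = gcd(a, b)
    return a // hcf, b // hcf

print(simplify_ratio(6, 10))   # (3, 5) -> part-to-whole ratio 3:5
print(simplify_ratio(6, 4))    # (3, 2) -> part-to-part ratio 3:2
```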
214
Estimating Results E.g For the range of a data set where the highest number is 322 and the lowest is 57. | p. 196
Sometimes you’ll need to estimate a value like the mean or range from a data set when exact calculations aren’t required. To estimate the range: Range = Highest value − Lowest value E.g: Highest = 322 Lowest = 57 Estimated Range = 322 − 57 = 265 | A01
215
# Symbol name, Meaning / definition and Example: = | p. 197
equals sign, Equality, 3+1 = 4 | A01
216
# Symbol name, Meaning / definition and Example: > | p. 197
strict inequality, greater than, 3 > 2 | A01
217
# Symbol name, Meaning / definition and Example: < | p. 197
strict inequality, less than, 2 < 3 | A01
218
# Symbol name, Meaning / definition and Example: **>>** | p. 197
inequality, much greater than, 3000 >> 0.02 | A01
219
# Symbol name, Meaning / definition and Example: **<<** | p. 197
inequality, much less than, 0.02 << 3000 | A01
220
# Symbol name, Meaning / definition and Example: `∝` | p. 197
proportional to, proportional to, f(x) `∝` g(x) (e.g., the ratios 5:10 and 1/2 are equal) | A01 ## Footnote a proportion states that two ratios are equal
221
# Symbol name, Meaning / definition and Example: ≈ | p. 197
approximately equal, weak approximation, 11 ≈ 10 | A01 ## Footnote means that two quantities are close in value, but not necessarily equal
222
Null hypothesis definition - | Further knowledge (in relation to p. 198)
The null hypothesis suggests that there is no significant effect or no relationship between variables. It's the assumption that any difference or effect observed in the data is due to chance or random factors. | A01
223
Alternative hypothesis definition - | Further knowledge (in relation to p. 198)
The alternative hypothesis suggests that there is a significant effect/relationship between variables. It is what the researcher aims to support or prove. | A01
224
What is the accepted level of probability in psychology? | p. 197
A numerical measure of the likelihood that certain events will happen. * p ≤ 0.05 is the accepted level of probability in psychology, that is 0.05 (or 5%). This means there is a 5% (or less) probability that the results happened by chance, so we can be 95% confident in the results. * This is the standard significance level in psychology at which the researcher decides whether or not to accept the alternative hypothesis. * For higher-stakes research (e.g., drug trials), a stricter level like p ≤ 0.01 is sometimes used. | A01 ## Footnote ✅ If p ≤ 0.05, the result is statistically significant, so the researcher: Rejects the null hypothesis, and Accepts the alternative hypothesis, concluding the effect is probably real. ❌ If p > 0.05, the researcher does not have enough evidence to reject the null, so they do not accept the alternative hypothesis. Recap: The null hypothesis suggests that there is no significant effect or no relationship between variables. It's the assumption that any difference or effect observed in the data is due to chance or random factors. The alternative hypothesis suggests that there is a significant effect or a relationship between variables. It is what the researcher aims to support or prove.
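The decision rule in the footnote can be written as a few lines of Python; this is only a sketch of the logic (the function name `significance_decision` is made up), not a statistical test in itself.

```python
def significance_decision(p_value, alpha=0.05):
    """Apply the conventional p <= 0.05 rule described above."""
    if p_value <= alpha:
        return "Significant: reject the null hypothesis, accept the alternative"
    return "Not significant: retain the null hypothesis"

print(significance_decision(0.03))   # significant at the 0.05 level
print(significance_decision(0.20))   # not significant
```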
225
Using an Appropriate Number of Significant Figures: for E.g, 432,765 0.003245 π (pi) | p. 197
Significant figures help present data more clearly and appropriately - especially when dealing with long numbers. E.g: 432,765 → 430,000 (2 significant figures) 0.003245 → 0.0032 (2 significant figures) π (pi) → 3.142 (4 significant figures) Rounding rule: If the next digit is 5 or more, round up. If it’s less than 5, round down. Common mistake: Confusing 5% (0.05) with 0.5 (50%). ✔️ 5% = 0.05 ❌ Not 0.5 | A01 ## Footnote **The 0's rule**: https://www.google.com/url?sa=i&url=http%3A%2F%2Fwww.learningaboutelectronics.com%2FArticles%2FSignificant-figures-calculator.php&psig=AOvVaw2Ib7r0wHQjfwaTnJPpArm5&ust=1746464732187000&source=images&cd=vfe&opi=89978449&ved=0CBQQjRxqFwoTCNiasaSmio0DFQAAAAAdAAAAABAJ
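A small rounding-to-significant-figures helper (the name `round_sig` is illustrative, using the usual log10 approach) that reproduces the three examples above.

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(432765, 2))      # 430000
print(round_sig(0.003245, 2))    # 0.0032
print(round_sig(3.14159265, 4))  # 3.142
```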
226
Statistical testing definition - | p. 198
Provides a way of determining whether hypotheses should be accepted or rejected. In psychology, they tell us whether differences/relationships between variables are statistically significant or have occurred by chance. | A01
227
Sign test definition - | p. 198
The sign test is a statistical test used to analyse the difference in scores between related items (e.g. the same Pp tested twice). Data should be nominal or better. | A01 ## Footnote E.g: A psychologist tests whether people speak more words after drinking an energy drink. Participants: 10 people Condition A: Water Condition B: Energy drink Each participant is tested in both conditions (repeated measures). If most people speak more after the energy drink than after water, the Sign Test is used to check if this difference is statistically significant.
228
The concept of significance: Why do psychologists use statistical tests in research? | p. 198
Statistical tests determine whether differences found in data are significant (i.e., unlikely to have occurred by chance). Even if a difference in mean scores is found (e.g., after drinking SpeedUpp), we need a statistical test to confirm it's not just a coincidence. | A01
229
# The Sign Test – When & How What are the conditions for using the sign test in psychology? | p. 198
The sign test is used when: The study is testing for a difference, not an association. A repeated measures design was used. The data is nominal, or can be converted to nominal form (organised into categories), e.g., by assigning + or – based on the direction of change. | A01 ## Footnote so for the energy drink experiment, do this by subtracting the score for water from the score for SpeedUpp. If the answer is negative we record a minus sign; if the answer is positive we record a plus sign. Pp1: 'SpeedUpp 100' and 'Water 122' so it's - Pp2: 'SpeedUpp 59' and 'Water 45' so it's + (and so on)
230
What is the critical value in a statistical test and how is it used? | p. 198
After calculating a test statistic (e.g., using the sign test), it's compared to a critical value from a table to decide whether the result is significant or not. To find the critical value for your data, you need: The significance level (usually 0.05), The N value (number of participants), and Whether the hypothesis is directional (one-tailed) or non-directional (two-tailed (p. 166)). For the sign test, if the calculated value is equal to or lower than the critical value, the result is regarded as significant. | A01 ## Footnote Table of critical values example on page 198.
231
Go to page 199 to see a worked example of the Sign test | p. 199
You can put this to a 5 once you know exactly how you could do this yourself: Step 1: Convert Data to Nominal. Subtract the water score from the SpeedUpp score. Record a minus sign (-) if SpeedUpp < Water, or a plus sign (+) if SpeedUpp > Water. Step 2: Count Pluses and Minuses. Add up the number of pluses (+) and minuses (-) from the data. Example: 13 pluses and 7 minuses. Step 3: Find the Less Frequent Sign (S). Identify the less frequent sign (in this case, minuses = 7). The less frequent sign represents the calculated value (S). Step 4: Compare with Critical Value. Compare the calculated value (S) with the critical value from the table. For N = 20, significance level = 0.05, and a one-tailed test, the critical value is 5. If S ≤ critical value, the result is significant. In this case, S = 7 > critical value = 5, so the difference is not significant. | A01
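The four steps can be condensed into a short sketch; the ±1 values below simply stand in for the signs of the (SpeedUpp − Water) differences in the worked example, and the critical value of 5 is the one quoted on the card.

```python
def sign_test_S(differences):
    """S = the count of the less frequent sign (zero differences are ignored)."""
    pluses  = sum(1 for d in differences if d > 0)
    minuses = sum(1 for d in differences if d < 0)
    return min(pluses, minuses)

# Worked example: 13 pluses and 7 minuses
signs = [+1] * 13 + [-1] * 7
S = sign_test_S(signs)
critical_value = 5   # N = 20, p <= 0.05, one-tailed (from the table on p. 198)

print(S)                                                              # 7
print("significant" if S <= critical_value else "not significant")    # not significant
```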
232
Peer review definition - | p. 200
The evaluation of scientific research by other scientific experts in the same field to ensure the work is of high quality, valid, and suitable for publication. | A01
233
Economy definition - | p. 200 ## Footnote What does this have to do with Psychology?
The state of a country in terms of its production and consumption of goods and services. | A01 ## Footnote Psychological research can impact the economy by influencing public policies, healthcare costs, and workplace productivity.
234
The Role of Peer Review in Psychology | p. 200
Ensures research is accurate, valid, and of high quality before publication. Research is evaluated by 2-3 experts (peers) in the same field. Maintains scientific integrity, improves credibility, and filters out flawed or unsubstantiated studies. Research findings are mainly publicised through academic journals, conferences and textbooks. | A01
235
The Main Aims of Peer Review: | p. 200
**Allocate Funding**: * Helps decide which research proposals should receive financial support (called independent peer evaluation). * Often done by bodies like the Medical Research Council (a government-run funding organisation). **Validate Quality & Relevance**: * Reviews all aspects of the research: hypotheses, methods, stats, and conclusions. * Ensures work is scientifically sound and meaningful. **Suggest Improvements**: * May recommend revisions to strengthen the study. * Can reject work if it's flawed or unsuitable for publication. | A01
236
# Peer review + economy, Evaluation, + and - 🕵️‍ Anonymity in Peer Review Evaluation | p. 200
* (+) Anonymity encourages honest and unbiased critique of research. * (-) Can be misused - reviewers may unfairly criticise rival researchers, especially when competing for funding or recognition. * (+) For this reason, some journals now use open reviewing, where reviewers' names are revealed to increase accountability. | A03
237
# Peer review + economy, Evaluation, - Publication Bias: | p. 200
Editors and journals have a natural tendency to prefer publishing significant, 'headline-grabbing' findings or positive results to attract attention and boost their reputation. As a consequence, studies with non-significant or negative findings may be ignored (the file drawer problem, p. 191). If journals and editors are selective in what they publish, they distort the scientific record, giving a false impression of research progress. | A03
238
# Peer review + economy, Evaluation, - Burying ground-breaking research: | p. 200
The **peer review process can sometimes suppress innovative or contradictory research**, as reviewers may be overly critical of studies that challenge their own views. Additionally, journals and **publishers often select reviewers who are aligned with mainstream theories** and prefer publishing 'headline-grabbing' or positive findings due to a form of publication bias. As a result, research that questions the established order is less likely to be accepted. This bias toward conventional knowledge can slow scientific progress by **discouraging fresh perspectives and groundbreaking discoveries**. | A03
239
Implications of Psychological Research for the Economy | p. 201
* Psychological research has real-world applications that can impact economic productivity and policy. * Findings can influence how individuals function in society from parenting to workplace effectiveness. | A01
240
Attachment Research and the Role of the Father to do with the Economy: | p. 201
* Early views (e.g., Bowlby's monotropic theory) saw the mother as the sole primary caregiver. * More recent research has highlighted the importance of multiple attachments, most notably the role of the father. * Fathers are equally capable of providing the emotional support needed for a child's healthy development. * This knowledge allows both parents to work and contribute economically, and to share childcare responsibilities across the working week. * This means that modern parents are better equipped to maximise their income and contribute more effectively to the economy. | A01
241
Development of Treatments for Mental Illness and what it means for the economy: | p. 201
* Mental illness is a major cause of absence from work, costing the economy up to £15 billion a year. * Psychological research has enabled: Faster diagnosis. Effective treatment (e.g. CBT, SSRIs, systematic desensitisation). Individuals can also use self-help strategies based on similar methods to manage their conditions. * Quicker recovery leads to higher workplace productivity and reduced long-term healthcare costs. This promotes a healthier, more resilient workforce, improving overall economic output (thanks to psychological research). | A01
242
The 3 criteria that must be met in order to use a parametric test: | Teacher's notes
* Interval data – actual scores (not ranks) must be used. * Normal distribution – data should come from a population expected to show a normal distribution. * Homogeneity of variance – scores in each condition should have a similar spread (e.g., similar standard deviations). | A01
243
Type I error – | Teacher's notes
Occurs when the null hypothesis is wrongly rejected and the alternative is accepted. Known as a false positive or optimistic error. The researcher claims a significant result (e.g., difference/correlation) when none exists. | A01
244
Type II error – | Teacher's notes
Occurs when the null hypothesis is wrongly accepted and the alternative hypothesis is rejected, when it should have been the other way around. Known as a false negative. The researcher misses a real effect, failing to detect a significant result that actually exists. | A01
245
When are we more likely to make a Type 1 error? | Teacher's notes
We are more likely to make a type I error when the level of significance is too lenient (e.g., 0.1/ 10%). | A01
246
When are we more likely to make a Type 2 error? | Teacher's notes
A Type II error is more likely if the significance level is too stringent/low (e.g., 0.01 / 1%), as significant values may be missed. | A01
247
What significance level do psychologists favour to reduce the risk of making a Type I or Type II error? | Teacher's notes
Psychologists favour the 5% level of significance as it balances the risk of making a type I or type II error. | A01
248
What kind of data is used for the test of difference? | Teacher's notes
ordinal data | A01
249
What is the unrelated t-test? | Teacher's notes
The unrelated t-test is a test of difference between two sets of data. | A01
250
What kind of data is used for the unrelated t-test? | Teacher's notes
It is used with interval level data only | A01
251
When is Spearman's rho chosen? | Teacher's notes
Spearman’s is a test of correlation between two sets of values. Spearman’s is selected when one, or both, of the variables are ordinal, although it can also be used with interval data. | A01
252
What kind of data is used for Spearman's rho? | Teacher's notes
ordinal data | A01
253
# Stats tests – a summary: The sign test: | Teacher's notes
Test of difference. Related design – repeated measures design. Nominal data. For results to be significant, the calculated value of S must be equal to or less than the critical value. | A01
254
# Stats tests – a summary: Wilcoxon: | Teacher's notes
Test of difference. Related design. Ordinal data. For results to be significant, the calculated value of T must be equal to or less than the critical value. | A01
255
# Stats tests – a summary: Related t-test: | Teacher's notes
Parametric test of difference. Related design. Interval data. For result to be significant, the calculated value of t must be equal to or more than the critical value. | A01
256
# Stats tests – a summary: Pearson's r: | Teacher's notes
Test of correlation. Design is not an issue here due to it being a test of correlation. Interval data. For result to be significant, the calculated value of r must be equal to or more than the critical value. | A01
257
# Stats tests – a summary: Mann-Whitney: | Teacher's notes
Test of difference. Unrelated design – two independent groups involved. Ordinal data. For results to be significant, the calculated value of U must be equal to or less than the critical value. | A01
258
# Stats tests – a summary: Unrelated t-test: | Teacher's notes
Parametric test of difference. Unrelated design. Interval data. For result to be significant, the calculated value of t must be equal to or more than the critical value. | A01
259
# Stats tests – a summary: Spearman's rho: | Teacher's notes
Test of correlation. Design is not an issue here due to it being a test of correlation. Ordinal data. For result to be significant, the calculated value of rho must be equal to or more than the critical value. | A01
260
# Stats tests – a summary: Chi-squared: | Teacher's notes
Can be used as a test of association (and can also be used as a test of difference). Nominal data. For the result to be significant, the calculated value of χ² (chi-squared) must be equal to or more than the critical value. | A01
261
The sections you should include when writing a scientific psychological report: | Teacher's notes
* **Abstract**– 150–200 word summary of aims, methods, results & conclusions. Helps readers decide relevance. * **Introduction** – Literature review leading to study aims & hypotheses. * **Method** – Clear replication guide. Includes: Design (e.g., independent groups) Sample (number, demographics, method) Apparatus/Materials Procedure (step-by-step with standardised instructions) Ethics (how issues were addressed) * **Results** – Descriptive stats (tables/graphs), inferential stats (tests, significance, hypothesis decisions). Raw data in appendix. * **Discussion** – Summary in words, relates to intro, discusses limitations, future research & real-world applications. * **Referencing**– Full citations (books, articles, websites) in standard academic format. | A01
262
Thomas Kuhn – Paradigms & Psychology as a Science | Teacher's notes
* Kuhn (1962) said true sciences share a single paradigm – a set of accepted theories and methods. * Psychology lacks one unified paradigm, with many conflicting approaches (e.g., behaviourism, psychodynamic, cognitive). * Therefore, Kuhn saw psychology as a “pre-science” – not yet fully scientific like physics or biology. * Scientific progress occurs via paradigm shifts, where overwhelming evidence replaces the old model (e.g., Newton → Einstein). * Critics say psychology has shown paradigm shifts (e.g., from Wundt’s structuralism to modern cognitive neuroscience). * Also, many psychologists agree on a broad aim: “the study of mind and behaviour”. * Others argue no science is fully unified – conflict and debate exist even in natural sciences. So, Kuhn’s idea of science as fully ordered and unified may be unrealistic. | A03
263
The case for psychology as a science: | Teacher's notes
* Goes beyond commonsense; findings often counter-intuitive. * Uses a scientific model of enquiry, enhancing credibility. * Puts psychology on equal footing with natural sciences (despite Kuhn’s "pre-science" claim). * Leads to practical applications that improve lives and modify dysfunctional behaviour. | A03
264
The case against psychology as a science: | Teacher's notes
* Although many psychologists attempt to maintain objectivity within their research, some of the methods that psychologists use are subjective, non-standardised and unscientific. * Science is based on the assumption that it is possible to produce universal laws that can be generalised across time and space. However, this may not be possible in psychology as samples of participants used in studies are rarely truly representative and the conclusions drawn from findings may often be influenced by social and cultural norms. * Much of the subject matter in psychology cannot be directly observed and must be based on inference rather than objective measurement. | A03