Research Methods (paper 2) Flashcards

(113 cards)

1
Q

What is a research method?

A

Strategies/processes of collecting data for analysis

2
Q

What is a research aim?

A

A statement of what the researcher intends to find out; it should be stated before any study begins

3
Q

What are the two conditions (levels) of the independent variable?

A

1-control condition (baseline/no change)
2-experimental condition (change)

4
Q

What is a directional versus a non-directional hypothesis?

A

Directional (D) - one-tailed (→): predicts the direction of the results
Non-directional (ND) - two-tailed (↔): makes no prediction of direction

In the exam, clearly state an appropriate directional, operationalised hypothesis

5
Q

When is a non-directional hypothesis used?

A

When there is no pre-existing research, or when existing research is too contradictory

6
Q

What is a difference hypothesis?

A

Predicts a difference between conditions
(Experimental hypothesis - says ‘difference’)

7
Q

What is a relationship/correlation hypothesis?

A

Predicts the relationship between two variables
(Correlational hypothesis - says ‘relation’)

8
Q

What are the five things psychology research should be?

A

1-General: representative of many people (gender/race/lifestyle)
2-Reliable: up to date and repeatable
3-Applicable: works in the real world, to support a theory or to improve life
4-Valid: uses real-life tasks and applies to real life (measures what it is meant to)
5-Ethical: moral; participants have informed consent, can leave, and are not harmed (physically/mentally)

9
Q

What are operationalised variables?

A

Variables defined in a form that can be easily measured and tested.
(e.g. dependent variable = time or anxiety; operationalised variable = time in minutes/seconds/milliseconds on a stopwatch, or anxiety rated by the participant on a scale of 1-5)

You need to be specific!

10
Q

What are extraneous variables?

A

Any variable, other than the IV, that may have an effect on the DV if it is not controlled
(Eg noise/temperature/light/room size/mood/intelligence/age/anxiety/gender/concentration etc)

11
Q

What is a confounding variable?

A

An EV that varies systematically with the IV, so we cannot be sure of the true source of change in the DV

12
Q

What is a demand characteristic?

A

A cue from the researcher/research situation that could reveal the purpose of the investigation (leads to changing behaviour)

13
Q

What is the investigator effect?

A

Any effect of the investigator’s behaviour on the DV
(conscious/unconscious)

14
Q

How can you reduce the investigator effect?

A

1-Randomisation: use of chance to control for the effects of bias (random selection)
2-Standardisation: exact same procedure and instructions for every participant
3-Single blind procedure: participants don’t know which condition they are in
4-Double blind procedure: neither participants nor experimenter know which condition participants are in

*these reduce demand characteristics

15
Q

What are the three experimental designs?

A

1-independent groups design
2-repeated measures design
3-matched pairs design

*they are meant to represent our population as a whole

16
Q

What is independent group design?

A

-different participants used in each condition
-two levels of IV, experimental condition group and control group
-allocation of group should be random

17
Q

What are the strengths/limitations of independent group design?

A

+order effects avoided
+demand characteristics are avoided
-more participants needed (takes time)
-participant variables (differences between the groups)

18
Q

What is repeated measures design?

A

-same participants used in both conditions of the experiment

*to counterbalance order effects, half the participants do the conditions in one order (A then B) and the other half do the reverse (B then A)

19
Q

What are the strengths/limitations of repeated measures design?

A

+fewer participants required
+participant variables are controlled
-order effect is possible
-demand characteristics likely

20
Q

What is matched pairs design?

A

-participants are paired on key variables (such as age)
-one member of each pair is then placed in each of the two conditions

21
Q

What are the strengths/limitations of matched pairs design?

A

+participant variables are reduced
+order effects are avoided
+demand characteristics are reduced
-pairs can never be matched exactly, so some individual differences remain
-time consuming and expensive

22
Q

What are the types of experimental methods?

A

-laboratory experiments
-field experiments
-natural experiments
-quasi experiments

23
Q

What is a laboratory experiment?

A

-takes place in a controlled environment
-researcher manipulates the IV and records the effect on the DV
-maintaining strict control of extraneous variables

-has the highest level of control of IV but the lowest level of ecological validity
-high internal but low external validity
-low ethics issues
-high reliability but high demand characteristics

24
Q

What is a field experiment?

A

-an experiment that takes place in a natural environment
-the researcher manipulates the IV and records effect on the DV

-strikes a balance between control of the IV and ecological validity
-low internal but high external validity
-high ethics issues
-low reliability but low demand characteristics

25
What is a natural experiment?
-an experiment where the change to the IV would have occurred anyway; it is not manipulated by the researcher
-researcher records the effect on the DV
-has the lowest control of the IV, but the highest level of ecological validity
-low internal but mid external validity
-low ethics issues
-low reliability but low demand characteristics
26
What is a quasi experiment?
-a study that is almost an experiment, but the IV is not determined by anyone
-the variables simply exist (eg being old or young)
-the environment of the study is controlled
-mid internal but low external validity
-low ethics issues
-high reliability but high demand characteristics
27
What is ecological validity?
-high ecological validity indicates the findings can be generalised and applied to real life
-so people in real-life situations will experience the same things as in the study
28
What are the different sampling methods?
-random
-systematic
-stratified
-opportunity
-volunteer (or self-selection)
29
What is random sampling?
-each member of the population has an equal chance of being selected (lottery method used to select from an obtained list of the target population)
+free from researcher bias
-time consuming
-difficult to get a list of everyone
-could be unrepresentative (by chance)
-people may not want to take part
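As a rough, illustrative sketch of the lottery method described above (not part of the original cards; the population list and sample size are made-up assumptions):

```python
import random

# Minimal sketch of random sampling (the 'lottery method'): every member of the
# target population has an equal chance of being selected.
# target_population and the sample size of 10 are hypothetical examples.
target_population = [f"person_{i}" for i in range(1, 51)]  # a made-up list of 50 people
sample = random.sample(target_population, 10)              # draw 10 names 'out of a hat'
print(sample)
```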
30
What is systematic sampling?
-every nth member of the target population is selected from the sampling frame
+free from researcher bias
+fairly representative (usually)
-time consuming
-a list of everyone is difficult to get
-people may not want to take part
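A minimal sketch of the "every nth member" idea, assuming a made-up sampling frame and an interval of n = 10 (both illustrative, not from the cards):

```python
# Minimal sketch of systematic sampling: select every nth member of the sampling frame.
# sampling_frame and n are hypothetical examples.
sampling_frame = [f"participant_{i}" for i in range(1, 101)]  # made-up list of 100 people
n = 10                                                        # take every 10th member

sample = sampling_frame[n - 1::n]  # the 10th, 20th, 30th ... member
print(sample)                      # 10 participants drawn at a fixed interval
```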
31
What is stratified sampling?
-the composition of the sample reflects the proportions of people in certain subgroups (strata) within the target population
+free from researcher bias
+most representative
-time consuming
-complete representation of the target population is not always possible
-people may not want to take part
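A minimal sketch of proportional (stratified) selection, assuming a hypothetical population split into two strata; the subgroup names and sizes are illustrative, not from the cards:

```python
import random

# Minimal sketch of stratified sampling: the sample mirrors the proportion of each
# subgroup (stratum) in the target population, with random selection within each stratum.
population = {
    "smokers": [f"smoker_{i}" for i in range(40)],          # 40% of a population of 100
    "non_smokers": [f"non_smoker_{i}" for i in range(60)],  # 60% of the population
}
sample_size = 10
total = sum(len(members) for members in population.values())

sample = []
for stratum, members in population.items():
    k = round(sample_size * len(members) / total)  # proportional allocation for this stratum
    sample.extend(random.sample(members, k))       # random selection within the stratum

print(sample)  # roughly 4 smokers and 6 non-smokers
```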
32
What is opportunity sampling?
-selecting anyone who is willing/available to take part; the researcher simply asks whoever is around
+saves money and time
+quick and efficient
-unrepresentative (only the people there, and only some will say yes)
-researcher bias
33
What is volunteer sampling?
-an advert is produced and individuals select themselves to take part
+easy
+less time consuming
-volunteer bias occurs
-usually less representative
34
What are ethical issues in studies as outlined by the British Psychological Society (BPS) in their quasi-legal document?
-*informed consent*: the right to be given detailed information about the nature/purpose of the research and their role in it, so participants can make an informed decision about whether to take part
-*right to withdraw*: the right to leave the study for any reason, and to refuse permission for the data they produced to be used
-*confidentiality*: the right to have personal information protected (including the right to anonymity)
-*protection from harm*: during the study, participants should not experience any negative physical/psychological effects
-*deception*: deliberately misleading or withholding information from participants at any stage of the study (so informed consent cannot truly be given)
35
What is a cost-benefit analysis?
-ethics committees weigh up the costs vs the benefits of research proposals to decide if the study should go ahead
-benefits may include the value of the research
-possible costs may include damaging effects on individuals or on the reputation of psychology as a whole
36
How to deal with informed consent?
-give participants a detailed letter with all relevant information that may affect their decision to join the study, which they sign if they agree
-Presumptive consent: rather than asking the participants, a similar group of people are asked if the study is acceptable; if they agree, participant consent is presumed
-Prior general consent: participants give permission to take part in multiple different studies (including one that involves deception), so they are agreeing to be deceived
-Retrospective consent: participants are asked for consent during the debrief after the study
37
How to deal with deception, protection from harm and confidentiality?
-if deceived, participants should afterwards be informed of all details and the true intentions of the study, as well as what their data will be used for, so they can decide whether it may be used
-maintain anonymity and remind participants that they can withdraw their data
38
What are observations?
-study of *observable behaviour*
-non-experimental method as there is *no independent variable* (cannot establish cause and effect)
-can be *used within experiments*
39
What are naturalistic vs controlled observations?
Naturalistic- watching behaviour in the setting where it would normally occur
+ high external validity
+ low demand characteristics
- low reliability (difficult to replicate due to lack of control)
- low internal validity (extraneous variables may affect the DV)
Controlled- watching behaviour in a structured environment
+ high internal validity
+ high reliability (easily repeatable)
- high demand characteristics
- low external validity (findings less applicable to real life)
40
What are covert vs overt observations?
Covert- behaviour is watched without the knowledge or consent of participants
+ high internal validity (participants act naturally)
+ low demand characteristics
- high ethical issues (no informed consent and can't withdraw)
Overt- behaviour is watched with knowledge and consent
+ low ethical issues
- high demand characteristics
- low internal validity (participants act less naturally)
41
What are participant vs non participant observations?
Participant- researcher is a member of the group whose behaviour they're watching
+ high internal validity (more insight)
- low objectivity
Non-participant- researcher remains outside the group whose behaviour they're watching
+ high objectivity
- low internal validity (lose insight)
42
What are structured vs unstructured observations?
Structured
-used when there is *too much going on* for the researcher to record it all
-produces *quantitative* data (analysis is more straightforward)
Unstructured
-researcher *writes down everything they see*
-used on a *small scale* when there are *few participants*
-produces *qualitative* data, so it is difficult to analyse but more in depth
-*prone to bias* as the researcher may miss or purposely ignore things
Observer bias = the observer's expectations impact what they see and hear; this reduces the validity of observations
43
What are behavioural categories and why are they used?
-the *target behaviour* must be *clearly defined* before an observation
-a *predetermined list of behaviours* is used to quantify observations, put into a checklist
-the researcher should *include all realistic ways the target behaviour may occur*
-they make *data collection more structured and objective*
-the categories *must be clear and unambiguous* so *no interpretation is required* (as interpretation may differ between people)
-no overlap of categories, to avoid confusion over where a behaviour belongs
44
What are the two sampling methods used during observation?
Event sampling
-researcher *records every time the event*/target behaviour occurs (using a *tally chart*)
-used when the *event occurs infrequently*
Time sampling
-used when *lots of people are observed*
-researcher *records behaviour at fixed time intervals* (eg every 10 mins)
-important *behaviours may be missed* if they don't occur at the interval
45
What is inter-observer reliability and how is it gained?
The *agreement between multiple observers* involved in the observation of a behaviour; it should make the data objective/unbiased
-there should be 2 or more observers
-before the observation, *ensure they interpret the behavioural categories in the same way*
-*pilot test the categories* so they can practise using them and see if the categories need to be changed
-whilst doing the observations, *observers work separately*
-after the observation, their *observations are correlated to check for reliability*
-if there is a *positive correlation of +0.80 or above*, the observations are deemed *reliable*
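To make the correlation step concrete, here is a minimal sketch assuming two observers' made-up tallies for the same behavioural categories (the data are illustrative, and `statistics.correlation` requires Python 3.10+):

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical tallies per behavioural category from two observers (not real data).
observer_a = [12, 7, 3, 9, 5]
observer_b = [11, 8, 3, 10, 4]

r = correlation(observer_a, observer_b)
print(f"inter-observer correlation r = {r:.2f}")
print("reliable" if r >= 0.80 else "not reliable")  # +0.80 is the usual benchmark
```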
46
What are self report techniques?
Any method in which a person is asked to state or explain their own feelings, opinions, behaviours and/or experiences related to a given topic
There are two types:
-questionnaires
-interviews
47
What are open and closed questions?
Open = no fixed choice of response; people can answer how they wish (qualitative data)
Closed = fixed choice of response, determined by the question setter (quantitative data)
Open
-responses are more detailed/in depth
-difficult to collate/summarise data
-conclusions may be open to bias
Closed
-often involves ticking boxes/circling
-responses are easier to compare
-respondents cannot explain answers
48
What are some errors in question designs?
-use of jargon (technical language) may make questions difficult to understand and answer
-emotive language demonstrates the author's stance on a particular topic and may influence the respondent
-leading questions also guide the respondent to a particular answer
-double-barrelled questions have two questions in one; the respondent may only agree with one of the questions and not the other
-double negatives may make questions confusing to understand
49
What is validity?
The accuracy of a study
50
What is ecological validity?
A measure of how well test performance predicts behaviour in real-world settings; can the findings be generalised to other settings?
51
What is temporal validity?
a type of external validity that refers to the validity of the findings in relation to the progression of time
52
What is concurrent validity?
Where performance on one measure correlates highly with performance on another measure of the same variable
53
What is internal reliability and how do you assess for it?
How *consistent* a method is within itself
•*split-half method* - compare half the questions to the other half; checks they are at the same level of difficulty
•*inter-observer reliability* - compare the observations of each observer; checks that they are interpreting behaviour in the same way
54
What is external reliability and how do you assess it?
How consistent the method is over time
•*test-retest method* - if the same questionnaire/interview is conducted again, the same results should be obtained each time
•*replication* - an experiment should obtain the same results when repeated with the same standardised procedures
55
How to improve reliability in questionnaires, interviews, and observations?
Questionnaires
-some items may need to be rewritten or deselected
Interviews
-same interviewer used each time
-properly train the interviewer
-use structured interviews
Observations
-operationalised behavioural categories
-categories are measurable and self-evident, with no overlap
-trained observers
-agreement on the categories beforehand
56
What are case studies?
-an *in-depth investigation*
-often involves *analysis of unusual individuals or events*
-mostly *qualitative data*
-researchers use *many research methods*
-tends to be *longitudinal*
57
Evaluate case studies?
Strengths
-offer *detailed insights into atypical forms of behaviour*, so we gain understanding about human behaviour that otherwise could not be investigated (due to the ethics of manipulation)
-they can *generate hypotheses for future study*, which may lead to the development of new theories
Limitations
-tend to *involve single individuals*, so what is true for this person may not be the case for everyone
-*researchers may become too involved*, which reduces validity; case studies take place over long periods of time, which may lead to the researcher getting to know the family/person well, leading to *subjective selection* and interpretation of information
58
What is content analysis?
-*observational study*; behaviour is indirectly studied by *examining the communications* that people produce (including spoken interaction, written forms, or media)
-may involve quantitative and/or qualitative analysis
-the aim is to *summarise and describe data* in a systematic way so *overall conclusions* can be drawn
-*deductive (top-down) approach*: researchers already know what they are looking for and have created categories to reflect this
-a *tally may be created* for how many times each category appears in the data
-*themes will emerge* once the data has been analysed
59
What is coding?
-a *stage of content analysis*
-communication is *analysed by* identifying each instance of the *chosen categories*
60
What is thematic analysis?
-involves *identifying implicit or explicit ideas within the data*
-*data is then organised* according to these *themes*
-data is *transcribed where necessary*
-data is then *reviewed repeatedly* so that the researcher may *identify trends*
-themes identified are re-analysed so that they become more refined
-themes identified can be used to *support or challenge existing theories*
61
What are the types of data?
-*qualitative* (expressed in words, feelings or opinions; concerned with the interpretation of language)
-*quantitative* (expressed numerically; data is open to being analysed statistically and converted to graphs)
-*primary* (gathered first-hand, specifically for the research investigation; may be referred to as ‘field research’)
-*secondary* (data that already exists, collected by someone else; may be referred to as ‘desk research’)
62
What are the pros/cons of qualitative data?
Pros
-offers the researcher more *richness of detail*
-*broader in scope* and gives participants the opportunity to develop their thoughts, feelings and opinions
-tends to have *greater external validity*
Cons
-often *difficult to analyse*; patterns and trends within and between data are hard to find
-*conclusions may be subjective*
-this *reduces the validity and the reliability* of the data collected
63
What are the pros/cons of quantitative data?
Pros
-relatively *simple to analyse*; patterns and trends within and between data are more obvious
-*conclusions are more objective*
-this *increases the validity and the reliability* of the data collected
Cons
-*narrower in scope* and meaning; does not give participants the opportunity to develop their responses
-tends to have *lower external validity*
64
What are the pros/cons of primary data?
Pros
-*collected for a specific purpose* (questionnaires and interviews can be designed to *specifically target the information the researcher requires*), *increasing the internal validity* of the study
-*authenticity of the data can be checked during collection*; researchers are able to see if the data collected is genuine (which they are unable to do with secondary data), *increasing the internal validity* of the data collected
-*data collected is current*; because the data is collected first-hand it reflects the current time period, which *increases the external validity* of the research
-the *quality of the methods* used to collect the data *can be checked*; the researcher is aware of the procedure used (so it can be repeated) and knows the data collected is genuine and not “massaged”, which *increases both the validity and the reliability* of the findings
Cons
-producing it *requires time and effort*; the researcher has to *plan, prepare and resource the study*
65
What are the pros/cons of secondary data?
Pros
-*requires little time and effort*; takes no planning/resourcing/prep etc.
Cons
-*not collected for a specific purpose*; the study has not been designed to specifically target the information the researcher requires, which *decreases the internal validity* of the study
-*authenticity of the data cannot be checked during collection*; researchers are not able to see if the data collected is genuine, which *decreases the internal validity* of the data collected
-*data collected may be outdated*; as researchers are not collecting the data first-hand, it may not reflect the current time period, which *decreases the external validity* of the research
-the *quality of the methods* used to collect the data *cannot be checked*; the researcher is not aware of the actual procedure, so it cannot be repeated, and they don’t know whether the data collected is genuine and has not been “massaged”, which *decreases both the validity and the reliability* of the findings
66
What is meta analysis?
-uses *secondary data* from a large number of studies
-researchers *may discuss their findings and conclusions* (known as a qualitative analysis)
-researchers *may use a quantitative approach* and *statistically analyse the combined data* (this involves calculating an effect size)
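The cards do not say which effect size is calculated; one commonly used example (an assumption here, not taken from the cards) is Cohen's d, which standardises the difference between two condition means:

```latex
% Cohen's d: one common effect-size measure (illustrative choice, not specified by the cards)
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

For example, two conditions with means 10 and 8 and a pooled standard deviation of 4 would give d = 0.5, conventionally described as a medium effect.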
67
What are the pros/cons of meta analysis?
Pros
-the *validity of the data is known*, so we can view it with more confidence
-*data can be generalised to larger populations*, as multiple pieces of research are used, so the sample size is increased
Cons
-*prone to publication bias*; the researcher may not have used all relevant studies (leaving out ones with negative or insignificant results), so the *data is biased* and incorrect conclusions are drawn
68
What are descriptive statistics?
The use of graphs, tables and summary statistics to identify trends and analyse sets of data
Including:
-measures of central tendency - mean, median, mode
-measures of dispersion - range and standard deviation
69
Measures of central tendency evaluation
**mean**
+ representative as it includes all values
- easily distorted by extreme values
**median**
+ easy to calculate
+ not affected by extreme values
- less sensitive than the mean, as lower and higher values are ignored
**mode**
+ easy to calculate
+ can be used with nominal data (categories)
- there can be more than one mode
- not representative
70
Standard deviation analysis
The smaller the standard deviation, the tighter the dispersion within a data set; this means participants were affected similarly by the IV in the experiment (and dissimilarly for a greater SD)
71
Measures of dispersion evaluation
**range**
+ easy to calculate
- only takes the two most extreme values into account
**standard deviation**
+ more precise measure of dispersion
- easily distorted by extreme values
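A minimal sketch of these summary statistics using Python's statistics module; the scores list is a made-up example, not real data:

```python
from statistics import mean, median, mode, stdev

scores = [4, 7, 7, 8, 10, 12, 15]  # illustrative data set

print("mean:", mean(scores))                 # sum of values / number of values
print("median:", median(scores))             # middle value when the data are ordered
print("mode:", mode(scores))                 # most frequent value
print("range:", max(scores) - min(scores))   # highest value minus lowest value
print("standard deviation:", round(stdev(scores), 2))  # spread of scores around the mean
```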
72
What do bar charts look like and why are they used?
-*categories* are placed on the *x-axis*
-*frequency of the categories* is placed on the *y-axis*
-*space between the categories* on the x-axis
-a key is used if there's more than one set of data for each category
-used for data in *categories*
-categorical data is *discrete (independent)* and *not related*, so the bars should not touch
73
What do histograms look like and why are they used?
-*different “categories” are NOT separated* because they are related; they continue on, e.g. height, weight, test scores
-bars may be of *unequal width*; in a histogram it is the *area* of a bar, not its height, that tells us the frequency
-*frequency/frequency density* on the *y-axis*
-*continuous data* on the *x-axis*
-used when *data is related*
-frequency density = frequency ÷ class width
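A quick worked example of the frequency density formula above (the numbers are illustrative, not from the cards):

```latex
\text{frequency density} = \frac{\text{frequency}}{\text{class width}} = \frac{12}{4} = 3
\quad \text{for a class of width 4 containing 12 scores}
```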
74
What do line graphs look like and why are they used?
-*uses continuous data*
-*frequency* on the *y-axis*
-*continuous data* on the *x-axis*
-*dots are joined up* by a line
-allows *more than one set of continuous data to be shown* at one time
75
What do scattergrams look like and why are they used?
-for each individual two scores are obtained and one dot is plotted
-*one dot = one participant*
-it *does not matter* which co-variable goes on the *x or y axis*
-the *dots are not joined up*, but there *may be a line of best fit*
-the *closer the points are to the line of best fit*, the *stronger the correlation*
-*shows associations or relationships between co-variables* (co-variables are not assumed to be causally linked; the variables may or may not be related in some way)
76
What are the three measures of central tendencies (averages)?
Mean Median Mode
77
What are the measures of dispersion (spread of data)?
Range Standard deviation
78
What are distributions?
-a *normal* distribution is a *bell curve*, symmetrical
-most scores fall on/around the mean
-the mean/mode/median all fall on the same point
-the *larger the standard deviation*, the *greater the spread of data* from the mean
*Skewed distribution* - the data spread around the mean is not symmetrical (the *mean is no longer in the middle*); occurs *when there are extreme results*
79
What is a positive skew? When does it occur?
-*distribution* is concentrated *to the left* -more *low scores* in data
80
What is a negative skew? When does it occur?
-*distribution* concentrated *to the right* -more *high scores* in data
81
How are distribution graphs drawn? (Where is the mode/median/mean?)
-the *mode* is the highest *point*
-the *mean* is pulled *towards the extreme scores*
-the *median* sits *between the mode and the mean*
82
What is a correlation?
-a *mathematical technique* in which the *association between two variables* is investigated (co-variables)
83
What is a positive/negative/zero correlation?
Positive- as one co-variable increases, so does the other
Negative- as one co-variable increases, the other decreases
Zero- no association between the co-variables
84
What is a correlation co-efficient?
-shows *how closely related the co-variables are* and what *type of correlation it is*
-lies between -1 and +1; the number represents the *strength/direction* of the *co-variables' relationship*
-the *closer to 0, the weaker the relationship*
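As an illustrative sketch of what the coefficient expresses (the co-variables and values below are made up, and `statistics.correlation` requires Python 3.10+):

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical co-variables: hours revised vs test score (not real data).
hours_revised = [1, 2, 3, 4, 5, 6]
test_score = [40, 45, 55, 60, 62, 70]

r = correlation(hours_revised, test_score)
print(f"r = {r:.2f}")  # close to +1 = strong positive; close to 0 = weak; negative = inverse
```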
85
What are the strengths and weaknesses of using correlations ?
Strengths
-useful as a *preliminary research* technique; *allows a link to be identified* and further investigated
-relatively *quick and cost-effective* to carry out (*secondary data* can be used)
Weaknesses
-can be *misused or misinterpreted*
-*lack of experimental manipulation* and control, so studies can only tell us whether variables are linked, *not establish cause and effect*
-another, untested variable may be causing the relationship between the two co-variables
86
What are pilot studies?
*Small scale* version of an investigation that *takes place before real investigation* is conducted
87
Why are pilot studies used?
-check procedures, materials, measurement scales etc
-allows researchers to change their studies if issues occur
-findings in terms of data are irrelevant (as it is about the method)
88
What are the advantages and disadvantages of pilot studies?
+saves time and money
+opportunity to practise the method/procedure
+can correct mistakes in the method
+increases the validity of the research
-could lead to investigator bias (researcher changing the method to gain desired results)
-may take more time/money
-waste of time/money if no changes are required
-needs new participants for the final study, which may be difficult to get
89
What are single blind procedures? Why is it used?
-researchers *don't tell participants* whether they are in the test/treatment condition or the control condition
-used to *ensure participants don't bias the results* by acting in a certain way; helps *avoid the effects of demand characteristics*
90
What are double blind procedures? Why is it used?
-*neither the participants nor the person conducting the study know* the study's aims (or which condition participants are in)
-a third party (not the researcher) conducts the experiment
-used to *prevent investigator effects*, as the *researcher is not able to unconsciously influence participants or give cues* to the condition they are in (often used in drug trials)
91
What is the process of peer review?
1- the *researcher submits an article to a journal* of their choice (may be determined by audience/prestige)
2- the journal selects *two or more appropriate experts* (psychologists working in a similar field) to *anonymously* peer review the article *without payment*; peer reviewers *assess* the *methods/designs* used, the *originality* of the findings, the *validity* of the original research findings, and its *content, structure and language*
3- *feedback* from the reviewers *determines whether the article is accepted*; the article *may be accepted as it is, accepted with revisions, sent back for the author to revise and re-submit, or rejected without the possibility of re-submission*
4- the *editor makes the final decision* whether to *accept or reject* the research report *based on the reviewers' comments/recommendations*
92
What are the aims of peer review?
-To *allocate research funding*: research bodies and the government only fund worthwhile projects
-To *validate the quality and relevance of the research*: all elements of the research are assessed for quality/accuracy, to prevent the dissemination of irrelevant findings, deliberate fraud and unwarranted claims
-To *suggest amendments or improvements*: ensures the research is taken seriously and helps identify errors or weaknesses, because authors/researchers are less objective about their own work
93
What are the strengths and limitations of peer review?
Strengths
-can *establish the validity and accuracy of research* (prevents the dissemination of incorrect work)
-usually *anonymous*, which produces a more honest appraisal of the work
Limitations
-often *publication bias in journals* (editors want to publish significant findings to increase the credibility/circulation of their publication, so are more likely to publish positive results; other research is ignored, creating a false impression of the state of psychology)
-a *slow process*, which slows publication down
-reviewers might use it to *prevent competing researchers from publishing work*, or use anonymity to *criticise rival researchers* (likely, as many researchers are in direct competition for limited funding)
-may *suppress opposition to mainstream theories* (reviewers are established in their field and usually critical of research that contradicts their view; findings that fit with current opinion are more likely to be passed than new/innovative research challenging the status quo, which may slow the rate of change in psychology)
94
What is publication bias?
-the idea that *research with significant findings/positive results is more likely to be published* than research with insignificant or negative results
95
How is the internet affecting the idea of peer review?
-The internet means that a lot more research and academic comment is being published without official peer review than before
-Systems are evolving on the internet where everyone has a chance to offer their opinions and police the quality of research
96
What is falsification?
-the idea that *scientific statements* *can be proven wrong*
-*science aims to falsify the hypotheses* it forms, not verify them; to be a science, psychology must do this
-(Popper emphasised the idea that nothing can be proved, e.g. ‘All swans are white’ - you can find lots of white swans, but the statement cannot be proved because you can never see all swans)
97
What is reliability?
-*extent* to which a test or measurement can *produce consistent results*
98
What is a paradigm?
-A *set of shared assumptions and agreed methods* within a scientific discipline
-Kuhn argued against Popper's idea of science explained through induction/deduction; he felt people collected data that fitted with the accepted assumptions of science, thus creating a bias within research
-*Psychology* is argued to be *pre-scientific*, as there are *too many differences within each area* and no universally accepted paradigm (so it is pre-paradigmatic)
99
What is a paradigm shift?
-the *result of a scientific revolution*: a *significant change in the dominant unifying theory* within a scientific discipline
-Kuhn argued there are *two phases* in science that revolve around paradigm shifts
-Phase 1 = normal science (one theory is dominant; evidence against the dominant theory gathers until the original theory is falsified/overthrown)
-Phase 2 = revolutionary shift (a new theory becomes dominant)
100
What is validity?
-*extent* to which *results accurately measure what they are supposed to* measure.
101
What is objectivity?
-Being *uninfluenced* by personal opinions or past experiences; *being free from bias*
-Involves *keeping a critical distance* and making observations without bias
-To *lessen the bias of researchers* we have: standardised instructions, operationalised variables, physically defined measurements, double-blind procedures
-*Peer review* can also be used to *assess the objectivity* of researchers' work
102
What is subjectivity?
-Being *influenced by personal feelings, tastes, or opinions*
103
What is verification?
-The *establishment of the correctness* of a theory *or fact*
104
What is a theory?
-*collection of general principles* used to *explain specific observations and facts*
105
What is replicability?
-Being *able to repeat a study* to *check the validity and reliability* of the findings
-Involves *repeating research* to check its validity
-This *can only happen* if the *research has been carefully written up* so that *someone else can repeat it exactly*
106
What is the scientific process?
-A *means of acquiring knowledge* based on *observable and measurable evidence*
107
What is rigorous?
-extremely *thorough and careful*, so as to *strictly apply or adhere to a system*
108
What is empirical?
-*Relying on or derived* from *observation or experiment*
-*Science* is about *acquiring knowledge*
-This knowledge is acquired through empirical methods: it is derived from observations and experiments where the results can be seen
-A *theory can only claim to be scientific* if it can be *rigorously/empirically tested* and verified
109
What is bias?
-the process whereby the *scientist performing the research influences the results* in order to portray a certain outcome
110
What is hypothesis?
-*clear, precise, testable statement* that *states the relationship between the variables* to be investigated.
111
What is theory construction?
-the *first stage of the scientific process*
-Theories are *used to generate hypotheses*
-*Observation of phenomena* is the *first step in theory construction*: we can only have ideas about the world based on what we see
-*Theories are adjusted following the testing of hypotheses*, based on whether they have been falsified (proven wrong) or verified (proven correct)
112
What is Deductive Theory Construction (top down)?
-*observation leads* to the creation of a *theory*
-*From this* theory a *hypothesis is created*
-The *hypothesis is tested empirically*
-*Conclusions* are drawn *from the study*
-This *may lead to new* questions and new *hypotheses* being created
-New hypotheses are *then tested*
-The *theory* may then be *refined*
113
What is inductive Theory Construction (bottom up)?
-*observation* leads to a *hypothesis* being created. -The *hypothesis is tested empirically* -*Conclusions* are drawn from *study* -This *may lead to new* questions and new *hypotheses* being created -New hypothesis are *then tested* -*Eventually the data is used* to *construct a theory*