Research Methods Flashcards

1
Q

What is content analysis?

A
  • A method for analysing qualitative data, e.g. the content of communication between people.
  • The data may be coded into behavioural categories or themes (when themes are used, this is called thematic analysis).
  • Involves counting instances of each behavioural category in order to produce numbers and percentages.

The researcher has to make design decisions about the following:
-Choosing how to sample the data: if analysing books, do you look at every page, or just every 5th page (systematic sampling), use random sampling, etc.?
-Choosing how to code the data: use behavioural categories, then count the number of times each occurs.
-Choosing how to represent the data: data in each behavioural category can be recorded in two ways, qualitatively and quantitatively.
Count the number of instances (quantitative).
Describe the examples in each category (qualitative).

2
Q

Evaluate content analysis

A

Strengths:
-High ecological validity = based on observations of real behaviour or communications.
-Ethical – the data already exists in society, so no consent is needed.
-Replicable – others can access the same books etc., which enables researchers to check reliability.
-Flexible – can produce qualitative or quantitative data, depending on what the topic requires.
Weaknesses:
-Observer bias reduces objectivity and validity = different observers may make different interpretations.
-Culture bias – interpretations of verbal or written content are affected by the language and culture of the observer and the behavioural categories being used.

3
Q

Outline thematic analysis

A

A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data.

General principles:

  1. Look at the data several times.
  2. Break the data into small units that each convey meaning.
  3. Assign a label/code to each unit; these labels are the initial categories.
  4. Combine simple labels/codes into larger categories.
  5. Check the categories by accessing a new set of data and applying them.
4
Q

What are case studies?

A

In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).

5
Q

How are case studies carried out?

A

People may be:

  • Given questionnaires or interviews.
  • Observed during daily life.
  • Asked to complete psychometric tests (IQ, personality etc.).
  • Asked to take part in experiments to test what they can/cannot do.

Case studies are normally longitudinal: they follow the individual or group over an extended period of time.
6
Q

How can you organise the findings of a case study?

A
  • Using content analysis – organise the data into themes to represent the participant's emotions, abilities etc. (qualitative).
  • Log scores from psychometric tests or data from observations (quantitative).
7
Q

Evaluate case studies as a research method

A

Strengths:

  • Produces rich, in-depth information.
  • Provides insights into the complex interactions of many factors, so no areas are overlooked.
  • Can study instances that are rare.
  • Useful for generating ideas for further study, or for disproving a theory (a single contradictory case can be enough).

Weaknesses:

  • Difficult to generalise from a single case.
  • Often rely on retrospective accounts from the participant and their family, which are subjective and prone to social desirability bias and memory decay over time.
  • Ethical issues, e.g. confidentiality and consent.
8
Q

Outline what is meant by reliability

A
  • Refers to consistency.
  • How much we can depend on a measurement: if we repeat a study, measurement or test, we want to be sure of getting the same results.
  • If we get different results then the method is not reliable.
9
Q

How do you assess reliability in observation methods?

What if we are biased?

A
  • Have two (or more) observers record the same behaviour, or repeat the observation (e.g. watch the video again).
  • Compare their results.
  • The extent to which they agree is known as inter-observer reliability. This is calculated as a correlation coefficient for the pairs of scores; a result of .80 or more suggests good inter-observer reliability (see the sketch below).
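
As an illustration (not part of the original card): a minimal Python sketch of the .80 check, using made-up observer tallies.

  import numpy as np

  # Hypothetical tallies: how many times each observer recorded
  # each behavioural category during the same observation.
  observer_a = np.array([12, 7, 3, 9, 15, 4])
  observer_b = np.array([11, 8, 2, 9, 14, 5])

  # Pearson correlation between the two observers' pairs of scores.
  r = np.corrcoef(observer_a, observer_b)[0, 1]

  print(f"inter-observer reliability r = {r:.2f}")
  print("good agreement" if r >= 0.80 else "agreement too low")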
10
Q

How do you improve reliability in observation methods?

A
  • Operationalise variables.
  • Make the behavioural categories clear and non-overlapping = improves inter-observer reliability.
  • Give the observers practice time before the real observation.
11
Q

How do you assess reliability in self report methods?

A

Test-retest reliability

  • Give the questionnaire to the same group twice, with a gap in between.
  • As with inter-observer reliability, if the outcomes of the two tests are similar then we say the measure is reliable.

Inter-interviewer reliability

  • Interview the same person twice with a gap in between and compare the responses.
  • Or use two interviewers and assess the consistency of the responses they obtain.
12
Q

How do you improve reliability in self report methods?

A
  • Carry out a pilot study to check interpretation, and ensure questions are clear and not ambiguous in any way.
  • Use more closed questions (which are harder to misinterpret).
  • Use the same interviewer each time, properly trained and using a structured interview.
13
Q

How do you assess reliability in experimental methods?

A
  • Experiments often measure their DV using self-report or observation. When assessing reliability we therefore need to check whether the method for measuring the DV is consistent.
  • Standardised instructions and methods will help with this.
14
Q

How do you improve reliability in experimental methods?

A
  • Check the methods used are consistent.
  • The same procedure is often repeated with different pps, so it is important that it is done the same way each time.
  • If it isn't, then we cannot compare the responses.
15
Q

What is validity?

A

Whether something is true – measures what it sets out to measure.

16
Q

What is internal validity and what factors might affect it?

A

-Whether the results are due to the manipulation of the IV rather than other factors, such as extraneous variables or demand characteristics.

  • Investigator effects.
  • Demand characteristics – participants work out the aim and act differently.
  • Confounding variables – factors that vary with the IV mean we do not know what has really affected the DV.
  • Social desirability bias – participants tend to provide answers that do not reflect reality but instead portray themselves in a good light.
  • Poor behavioural categories – observers cannot record observations accurately because the categories are unclear or overlapping.
17
Q

What is external validity and give examples

A

Whether it is possible to generalise the results beyond the experimental setting.

Ecological validity – generalising results to real life / additional settings
Population validity – generalising results to other people
Temporal validity – generalising results to other historical periods

18
Q

How do we assess validity?

A

-Face Validity – does the measure look like it measures what it claims to?
Are the questions on the questionnaire or in the interview about the topic being studied? Are the behavioural categories about the behaviour you want to observe?
-Concurrent Validity – comparing the method to a previously validated one.
Participants are given both questionnaires (one we know is valid and the new one). If the new one is also valid they should gain similar scores (a high correlation coefficient).

19
Q

How do we improve validity?

A
  • Poor Face Validity – revise questions or behavioural categories to be more on topic.
  • Poor Concurrent Validity – remove questions which seem irrelevant; look for ways to make the test more similar to ones which have been validated.
  • Poor Internal/External Validity – improve the design by using a single/double blind procedure, using realistic tasks, keeping results anonymous, controlling for confounding and extraneous variables etc.
20
Q

How do you improve different research methods?

A

Experiments –
-A control group shows how the IV affects the experimental group by comparison.
-Standardised procedures stop participants reacting differently or experimenters treating participants differently.
-Single and double blind procedures reduce the chance of demand characteristics affecting participants.
Questionnaires –
-Using a lie scale controls for social desirability bias and checks consistency.
-Ensuring responses are anonymous also reduces social desirability bias.
Observations –
-Remaining covert means participants are not aware of the observation, so they act more naturally.
-Using behavioural categories that are not too broad, do not overlap and are not ambiguous makes data collection more accurate.

21
Q

Outline empirical methods as a key feature of a science

A
  • Gaining information via direct sensory experience (direct observation or experiment).
  • Scientists look for empirically based facts, empirical evidence.
  • Enables them to make claims about the truth of a theory.
22
Q

Outline objectivity as a key feature of a science

A
  • Unaffected by expectations of researcher (opinion or bias).
  • Increased by carefully controlled methods.
23
Q

Outline control as a key feature of a science

A
  • We aim to control many factors in research in order to be able to make cause and effect statements.
  • The more control we have over possible confounding variables, the more likely it is that only our IV is affecting our DV and that our study is valid.
24
Q

Outline replicability as a key feature of a science

A
  • One way to demonstrate the validity of a study is to replicate it: gaining the same outcome affirms the first result, so we can be more confident in findings that have been replicated.
  • Guards against fraud, and enables us to check whether a result was a one-off (chance) finding caused by something in the way the research was done.
  • Replication is especially important in psychology, as we often use small samples of people.
25
Q

Outline theory construction as a key feature of a science

A
  • Collection of general principles that explain observations and facts
  • Help us understand and predict the natural phenomena around us.
26
Q

Outline hypothesis testing as a key feature of a science

A

-When testing a hypothesis:
You will either find replicable supporting evidence and can assume your theory is valid,
or your theory will be shown to be incorrect – it needs amending or replacing.

27
Q

Outline the scientific method

A

-The method used to gain valid scientific information.
-Starts with a phenomenon:
-In the inductive model, observations lead to developing hypotheses, which can lead to new hypotheses; eventually we may build a theory from this.
-In the deductive model, theory construction comes at the start: hypotheses are derived from the theory and then tested against observations.

28
Q

Outline falsifiability as a feature of science

A
  • The possibility that a statement or hypothesis can be proved wrong.
  • In any good science we must create hypotheses that we try to falsify. Theories that survive repeated attempts at falsification are considered stronger; claims that cannot be falsified in principle are not scientific.
29
Q

Outline paradigms as a key feature of a science

A

-‘A shared set of assumptions about the subject matter of a discipline and the methods appropriate to its study’ (Kuhn, 1962)

30
Q

Outline probability
Null Hypothesis
Alternative hypothesis

A
  • Probability – a numerical measure of the likelihood or chance that certain events will happen.
  • Null hypothesis – a statement of no difference or relationship. How probable is the research result if there is actually nothing going on?
  • Alternative hypothesis – a testable statement about the relationship between two or more variables.
31
Q

What is meant by a type 1 or a type 2 error?

A
  • Type 1: a false positive – rejecting the null hypothesis (accepting the alternative/experimental hypothesis) when the null is actually true.
  • Type 2: a false negative – accepting the null hypothesis when it is actually false. (See the sketch below.)
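
As an illustration (not part of the original card): a minimal Python sketch using simulated data, showing that when the null hypothesis is true, roughly 5% of tests still come out significant at p ≤ 0.05 – these are Type 1 errors.

  import numpy as np
  from scipy.stats import ttest_ind

  rng = np.random.default_rng(42)
  n_experiments = 2000
  false_positives = 0

  for _ in range(n_experiments):
      # Both groups are drawn from the SAME population, so the null is true.
      group_a = rng.normal(loc=100, scale=15, size=30)
      group_b = rng.normal(loc=100, scale=15, size=30)
      _, p = ttest_ind(group_a, group_b)
      if p <= 0.05:
          false_positives += 1  # significant purely by chance = Type 1 error

  # Expect a rate close to 0.05.
  print(f"Type 1 error rate: {false_positives / n_experiments:.3f}")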
32
Q

Why in general do psychologists use a probability of ≤ 0.05?

Level of significance

A
  • It means the probability of obtaining the result by chance alone (i.e. if the null hypothesis were true) is equal to or less than 5%.
  • Therefore, there is at most a 5% risk of concluding that the result is due to the IV when it actually occurred by chance (a Type 1 error).
  • The level of significance is the level at which we accept that our findings are real and not due to chance.
33
Q

How do we choose a probability?

A
  • In general we use a 5% significance level (p ≤ 0.05). This means there is at most a 5% probability of our results occurring if the null hypothesis is true.
  • To be more certain, use a more stringent level of 1% (p ≤ 0.01) or less.
34
Q

Why do we prefer a parametric test?

A
  • They are more powerful.
  • They can detect significance in situations where non-parametric tests cannot.
  • However, in order to use them the study must meet stricter criteria.
35
Q

What are the criteria for a parametric test?

A
  • Interval data.
  • The target population should have a normal distribution (it is the population, not the sample, that must be normal).
  • The variances of the two samples should not be significantly different. (See the sketch below.)
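
As an illustration (not part of the original card): a minimal Python sketch, with made-up data, of how the normality and equal-variance criteria might be checked using standard scipy tests.

  import numpy as np
  from scipy.stats import shapiro, levene

  rng = np.random.default_rng(0)
  sample_a = rng.normal(loc=50, scale=10, size=40)
  sample_b = rng.normal(loc=55, scale=10, size=40)

  # Shapiro-Wilk: tests each sample for departure from normality
  # (a practical stand-in, since the population itself is unobservable).
  _, p_norm_a = shapiro(sample_a)
  _, p_norm_b = shapiro(sample_b)

  # Levene's test: are the two variances significantly different?
  _, p_var = levene(sample_a, sample_b)

  # p > .05 on each check = no significant violation of the criteria.
  print(f"normality p-values: {p_norm_a:.2f}, {p_norm_b:.2f}")
  print(f"equal-variance p-value: {p_var:.2f}")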
36
Q

Outline the three levels of measurement

A
  • Nominal data – data in categories; the data is discrete, and one item can only appear in one category.
  • Ordinal data – data is ordered, but the intervals between values are not equal; the data lack precision and are subjective.
  • Interval data – data is on a scale with equal units of precisely defined size, e.g. height, time, weight.
37
Q

Key terms to know about statistical tests

A
  • Test statistic (calculated/observed value) – the value calculated by a test; each test's statistic has its own name (e.g. S for the sign test).
  • To decide if a result is significant, the test statistic is compared to another number from a statistical table – known as the critical value.
  • To find the correct critical value you need to identify:
    • The significance level – normally p ≤ 0.05.
    • The kind of hypothesis used – one-tailed vs two-tailed.
    • The value of N – the number of participants; there may be two values of N for an independent groups test, and a chi-squared test uses degrees of freedom instead.
38
Q

What is the importance of R in distinguishing between significant and non significant tables of critical values

A

If the test name has an R in it (speaRmans Rho, chi squaRed) then the calculated value needs to be gReater than (or equal to) the critical value.
For tests without an R (Mann-Whitney, Wilcoxon, sign test) the calculated value needs to be less than (or equal to) the critical value. (See the worked sketch below.)
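
As an illustration (not part of the original card): a worked sign test in Python with made-up before/after ratings. The critical value of 1 is the standard two-tailed table value for N = 10 at p ≤ 0.05; the sign test has no R in its name, so S must be less than or equal to it.

  # Hypothetical ratings from 10 participants, before and after treatment.
  before = [3, 5, 4, 2, 5, 3, 4, 4, 2, 5]
  after  = [4, 6, 5, 4, 6, 4, 5, 6, 1, 6]

  # Differences, ignoring ties (zero differences).
  diffs = [post - pre for pre, post in zip(before, after) if post != pre]
  pluses = sum(1 for d in diffs if d > 0)
  minuses = sum(1 for d in diffs if d < 0)

  # S is the smaller of the two counts; N is the number of non-tied pairs.
  s = min(pluses, minuses)
  n = pluses + minuses

  critical_value = 1  # table value for N = 10, two-tailed, p <= 0.05
  print(f"S = {s}, N = {n}")
  print("significant" if s <= critical_value else "not significant")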

39
Q

What is the role of an abstract in a journal article?

A
  • Summary of the study covering aims, hypothesis, method (procedures), results and conclusions (including implications of current study).
  • Usually about 150-200 words in length
  • Allows reader to get a quick picture of the study and its results.
40
Q

What is the role of an Introduction in a journal article?

A
  • Intro begins with review of previous research (theories & studies), so reader knows what other research has been done and understands reasons for current study.
  • Focus of research review should logically lead to study to be conducted so reader is convinced of reasons for particular research.
  • Hypothesis could be stated too.
41
Q

What is the role of a method in a journal article?

A
  • Contains a detailed description of what the researcher did, providing enough information for replication of the study.
  • Design- e.g. repeated measures or covert observation; the choice should be justified.
  • Pps- info about sampling methods, how many pps took part and their details (age, job etc.).
  • Apparatus/materials- descriptions of any materials used.
  • Procedures- including standardised instructions, the testing environment, order of events and so on.
  • Ethics- significant ethical issues may be mentioned, as well as how they were dealt with.
42
Q

What is the role of Results in a journal article?

A
  • Details given about what researcher found:
    • Descriptive statistics- Tables, graphs showing frequencies and measures of central tendency and dispersion.
    • Inferential statistics reported, including calculated values and significance level
    • In the case of qualitative research, categories and themes are described along with examples within these categories.
43
Q

What is the role of a discussion in a journal article?

A
  • The researcher aims to interpret the results of the study and consider their implications, covering:
  • A summary of the results and how they relate to previous research.
  • Consideration of the methodology.
  • Implications for psychological theory and possible real-world applications.
  • Suggestions for future research.
44
Q

What is the role of references in a journal article?

A

-Full details of any journal articles or books mentioned in the research report.
-The format for journal articles is generally:
Author's name(s), date, title of article, journal title, volume (issue number), page numbers.
If it's a book: author's name(s), date, title of book, place of publication, publisher.