Research Methods Flashcards
What is content analysis?
- A method for analysing qualitative data, e.g. the content of communication between people.
- The units counted might be behavioural categories or themes (when themes are used, the method is called thematic analysis).
- Involves counting instances of such behavioural categories in order to produce numbers and percentages.
The researcher has to make design decisions about the following:
-Choosing how to sample the data: if analysing books, do you look at every page, or just every 5th page (systematic sampling), a random selection of pages (random sampling), etc.?
-Choosing how to code the data: Using behavioural categories, then count the times each occurs.
-Choosing how to represent the data: the data in each behavioural category can be recorded in two ways:
Count the number of instances (quantitative).
Describe the examples in each category (qualitative).
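The coding and counting steps above can be sketched in code. A minimal Python sketch, assuming hypothetical behavioural categories defined as keyword lists (a real coding scheme would come from the study's design, not from these made-up words):

```python
import re
from collections import Counter

# Hypothetical behavioural categories, each defined by keywords.
categories = {
    "aggression": ["shout", "hit", "argue"],
    "affection": ["hug", "smile", "praise"],
}

def code_text(text, categories):
    """Count instances of each behavioural category in a transcript."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for category, keywords in categories.items():
        counts[category] = sum(words.count(k) for k in keywords)
    return counts

transcript = "They argue and shout, then they hug, smile and praise each other."
counts = code_text(transcript, categories)            # quantitative record
total = sum(counts.values())
percentages = {c: 100 * n / total for c, n in counts.items()}
# counts → aggression: 2, affection: 3; percentages → 40% and 60%
```

The counts give the quantitative representation; keeping the matched examples themselves would give the qualitative one.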
Evaluate content analysis
Strengths:
-High ecological validity – based on observations of real behaviour or communications.
-Ethical – the data already exists in the public domain, so no consent is needed.
-Replicable – others can access the same books etc., enabling the researcher to check reliability.
-Flexible – can produce qualitative or quantitative data, depending on what the topic requires.
Weaknesses:
-Observer bias reduces objectivity and validity – different observers may interpret the content differently.
-May be culturally biased – interpretations of verbal or written content are affected by the language and culture of the observer and of the behavioural categories being used.
Outline thematic analysis
A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data
General principles:
- Read through the data several times.
- Break the data into units; each small unit should convey meaning.
- Assign a label/code to each unit, these labels are the initial categories.
- Combine simple labels/codes into larger categories.
- Check the categories by applying them to a new set of data.
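The labelling and combining steps can be illustrated with a toy sketch (the codes and categories below are hypothetical, purely for illustration):

```python
# Hypothetical initial codes assigned to data units (step: label each unit),
# and a scheme combining simple codes into larger categories (step: combine).
unit_codes = ["crying", "laughing", "shouting", "smiling"]
categories = {
    "negative emotion": {"crying", "shouting"},
    "positive emotion": {"laughing", "smiling"},
}

def categorise(code, categories):
    """Map an initial code to its larger category, if one exists."""
    for category, codes in categories.items():
        if code in codes:
            return category
    return "uncategorised"

grouped = {code: categorise(code, categories) for code in unit_codes}
# grouped maps e.g. "crying" → "negative emotion"
```

The final check in the list above would mean running the same category scheme over a fresh set of coded units.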
What are case studies?
In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).
How are case studies carried out?
People may be:
- Given questionnaires or interviews.
- Observed during daily life.
- Asked to complete psychometric tests (IQ, personality etc.).
- Asked to take part in experiments to test what they can and cannot do.
Case studies are normally longitudinal: they follow the individual or group over an extended period of time.
How can you organise the findings of a case study?
- Using content analysis – organise the data into themes representing the participant's emotions, abilities etc. (qualitative).
- Logging scores from psychometric tests or data from observations (quantitative).
Evaluate case studies as a research method
Strengths:
- Produces lots of rich, in-depth information.
- Provides insights into the complex interactions of many factors, therefore not overlooking any areas.
- Can study instances that are rare.
- Useful for generating ideas for further study, or for disproving a theory.
Weaknesses:
- Difficult to generalise from a single case.
- Often rely on accounts from the participant and their family, which are subjective and prone to social desirability bias and memory decay, as they are retrospective accounts given some time after the events.
- Ethical issues, e.g. confidentiality and consent.
Outline what is meant by reliability
- Refers to consistency
- How much we can depend on a measurement: we want to know whether, if we repeat a study, measurement or test, we can be sure of getting the same results.
- If we get different results, then the method is not reliable.
How do you assess reliability in observation methods?
What if the observer is biased?
- Use two observers, and repeat the observation (e.g. watch the video again).
- Compare their results.
- The extent to which they agree is known as inter-observer reliability. This is calculated as a correlation coefficient for the pairs of scores. A result of .80 or more suggests good inter-observer reliability.
How do you improve reliability in observation methods?
- Operationalise the variables.
- Make the behavioural categories clear and non-overlapping, which improves inter-observer reliability.
- Give the observers practice time using the categories.
How do you assess reliability in self report methods?
Test-retest reliability
- Give the questionnaire to a group, then give it to the same group again after a gap.
- As with inter-observer reliability, if the outcomes of both tests are similar then we say the measure is reliable.
Inter-interviewer reliability
- Interview the same person twice with a gap in between and compare responses.
- Or use two interviewers and assess the consistency of their findings.
How do you improve reliability in self report methods?
- Carry out a pilot study to check interpretation, as well as by ensuring questions are clear and not ambiguous in any way.
- More closed questions (which are harder to misinterpret).
- Use the same interviewer each time, properly trained and using a structured interview.
How do you assess reliability in experimental methods?
- Experiments often measure their DV using observations or self-report. Therefore, when assessing reliability, we need to check whether the method for measuring the DV is consistent.
- Standardised instructions and methods will help with this.
How do you improve reliability in experimental methods?
- Check methods used are consistent.
- The same procedure is often repeated with different participants, so it is important that this is done the same way each time.
- If it isn’t then we cannot compare the responses.
What is validity?
Whether something is true – measures what it sets out to measure.
What is internal validity and what factors might affect it?
- Whether the results are due to the manipulation of the IV rather than other factors, such as extraneous variables. Factors that might affect it:
- Investigator effects.
- Demand characteristics – participants work out the aim of the study and act differently.
- Confounding variables – factors that vary with the IV mean we do not know what has really affected the DV.
- Social desirability bias – participants have a tendency to provide answers that do not reflect reality but are instead designed to portray themselves in a good light.
- Poor behavioural categories – observers cannot record observations accurately because the categories are unclear or overlapping.
What is external validity and give examples
Whether it is possible to generalise the results beyond the experimental setting.
Ecological validity – generalising results to real life / additional settings
Population validity – generalising results to other people
Temporal validity – generalising results to other historical periods