Research methods Flashcards
(37 cards)
What is a case study?
To study a ‘case’ in psychology is to provide a detailed and in-depth analysis of an individual, group, institution or event.
What type of data is produced by a case study?
Generally, qualitative data is produced.
Generally, do case studies take a long time or a short time to conduct?
Generally they take place over a long period of time - longitudinal.
What is content analysis?
A research technique that enables the indirect study of behaviour by examining communications that people produce, for example, in texts, emails, TV, film and other media.
What is thematic analysis?
An inductive and qualitative approach to analysis that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.
What is a strength of case studies?
Case studies are able to offer rich, detailed insights that may shed light on very unusual and atypical forms of behaviour. This may be preferred to the more ‘superficial’ forms of data that might be collected from, say, an experiment or questionnaire.
What is a weakness of case studies?
Generalisation of findings is obviously an issue when dealing with such small sample sizes. Furthermore, the information that makes it into the final report is based on the subjective selection and interpretation of the researcher. Add to this the fact that personal accounts from the participants and their family and friends may be prone to inaccuracy and memory decay, especially if childhood stories are being told. This means that the evidence from case studies begins to look more than a little low in validity.
What is a strength of content analysis?
Content analysis is useful in that it can circumnavigate many of the ethical issues normally associated with psychological research. Much of the material that an analyst might want to study, such as TV adverts, films, personal ads in the newspaper or on the Internet, etc., may already exist within the public domain. Thus there are no issues with obtaining permission, for example. Communication of a more ‘dubious’ and sensitive nature, such as a conversation by text, still has the benefit of being high in external validity, provided the ‘authors’ consent to its use. We have also seen that content analysis is flexible in the sense that it may produce both quantitative and qualitative data depending on the aims of the research.
What is a weakness of content analysis?
People tend to be studied indirectly as part of content analysis so the communication they produce is usually analysed outside of the context within which it occurred. There is a danger that the researcher may attribute opinions and motivations to the speaker or writer that were not intended originally. To be fair, many modern analysts are clear about how their own biases and preconceptions influence the research process, and often make reference to these as a part of their final report. However, content analysis may still suffer from a lack of objectivity, especially when more descriptive forms of thematic analysis are employed.
What is reliability?
Refers to how consistent the findings from an investigation or measuring device are. A measuring device is said to be reliable if it produces consistent results every time it is used.
What is test-retest reliability?
A method of assessing the reliability of a questionnaire or psychological test by assessing the same person on two separate occasions. This shows to what extent the test produces the same answers, i.e., is consistent or reliable.
What is inter-observer reliability?
The extent to which there is agreement between two or more observers involved in observations of a behaviour. This is measured by correlating the observations of two or more observers. A general rule is that if (total number of agreements) / (total number of observations) > +.80, the data have high inter-observer reliability.
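The agreement rule on this card is simple arithmetic, so it can be sketched in a few lines of Python. The behavioural categories and tallies below are made up purely for illustration; only the agreements/observations formula and the .80 rule of thumb come from the card.

```python
# Sketch of the inter-observer reliability check described above.
# Two observers record a behavioural category for each observation
# interval; the ratio is (number of agreements) / (total observations).

def inter_observer_reliability(observer_a, observer_b):
    """Return the proportion of intervals on which two observers agree."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Observers must record the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return agreements / len(observer_a)

# Hypothetical records for ten observation intervals (categories invented).
obs_a = ["play", "fight", "play", "rest", "play", "rest", "fight", "play", "rest", "play"]
obs_b = ["play", "fight", "play", "rest", "play", "play", "fight", "play", "rest", "play"]

ratio = inter_observer_reliability(obs_a, obs_b)
print(ratio)         # 9 agreements out of 10 intervals -> 0.9
print(ratio > 0.80)  # meets the > .80 rule of thumb -> True
```

A ratio of agreements can never be negative, so in practice the check is simply whether the proportion exceeds .80.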
How do questionnaires relate to improving reliability?
As we have seen, the reliability of questionnaires over time should be measured using the test-retest method. Comparing the two sets of data should produce a correlation that exceeds +.80. A questionnaire that produces low test-retest reliability may require some of the items to be ‘deselected’ or rewritten. For example, if some questions are complex or ambiguous, they may be interpreted differently by the same person on different occasions. One solution might be to replace some of the open questions with closed, fixed-choice alternatives which may be less ambiguous.
How do interviews relate to improving reliability?
For interviews, probably the best way of ensuring reliability is to use the same interviewer each time. If this is not possible or practical, all interviewers must be properly trained so, for example, one particular interviewer is not asking questions that are too leading or ambiguous. This is more easily avoided in structured interviews where the interviewer’s behaviour is more controlled by the fixed questions. Interviews that are unstructured and more ‘free flowing’ are less likely to be reliable.
How do experiments relate to improving reliability?
Lab experiments are often described as being ‘reliable’ because the researcher can exert strict control over many aspects of the procedure, such as the instructions that participants receive and the conditions within which they are tested. Certainly such control is often more achievable in a lab than in the field. This is more about precise replication of a particular method, though, rather than demonstrating the reliability of a finding. That said, one thing that may affect the reliability of a finding is if the participants were tested under slightly different conditions each time they were tested.
How do observations relate to improving reliability?
The reliability of observations can be improved by making sure that behavioural categories have been properly operationalised, and that they are measurable and self-evident. Categories should not overlap and all possible behaviours should be covered on the checklist. If categories are not operationalised well, or are overlapping or absent, different observers have to make their own judgements about what to record where, and may well end up with differing and inconsistent records.
What is validity?
The extent to which an observed effect is genuine - does it measure what it was supposed to measure, and can it be generalised beyond the research setting within which it was found?
What is face validity?
A basic form of validity in which a measure is scrutinised to determine whether it appears to measure what it is supposed to measure - for instance, does a test of anxiety look like it measures anxiety?
What is concurrent validity?
The extent to which a psychological measure relates to an existing similar measure.
What is ecological validity?
The extent to which findings from a research study can be generalised to other settings and situations. A form of external validity.
What is temporal validity?
The extent to which findings from a research study can be generalised to other historical times and eras. A form of external validity.
What is internal validity?
Refers to whether the effects observed in an experiment are due to the manipulation of the independent variable and not some other factor.
What is external validity?
Relates more to factors outside of the investigation, such as generalising to other settings, other populations of people and other eras.
How would you assess validity?
One basic form of validity is face validity: whether a test, scale or measure appears ‘on the face of it’ to measure what it is supposed to measure. This can be determined by simply ‘eyeballing’ the measuring instrument or by passing it to an expert to check. The concurrent validity of a particular test or scale is demonstrated when the results obtained are very close to, or match, those obtained on another recognised and well-established test.