Practical Work: Further Cc Information. Flashcards

1
Q

Practical work 1: You need to be able to say the 3 reasons or criteria for each of the 5 tests:

test of difference / association

nominal / at least ordinal data

independent / repeated measures

A
  1. Mann Whitney:
    = test of difference.
    = at least ordinal data.
    = independent measures
  2. Wilcoxon:
    = test of difference.
    = at least ordinal data.
    = repeated measures.
  3. Chi Squared:
    = test of difference.
    = nominal data.
    = independent measures.
  4. Pearson's r:
    = test of association/ correlation.
    = interval data.
    = when testing for a correlation.
  5. Spearman’s Rho:
    = test of association.
    = at least ordinal data.
    = independent measures
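
The three criteria above fully determine which of the five tests to use. As a minimal sketch (the function name and strings are illustrative, not from any statistics library), the decision can be written as:

```python
def choose_test(purpose, data_level, design=None):
    """Map the three flashcard criteria to one of the five tests.

    purpose: 'difference' or 'association'
    data_level: 'nominal', 'ordinal', or 'interval'
    design: 'independent' or 'repeated' (difference tests only)
    """
    if purpose == "association":
        # Correlations: interval data -> Pearson's r, otherwise Spearman's rho
        return "Pearson's r" if data_level == "interval" else "Spearman's rho"
    if data_level == "nominal":
        return "Chi-squared"
    # Tests of difference on at least ordinal data
    return "Mann-Whitney" if design == "independent" else "Wilcoxon"

print(choose_test("difference", "ordinal", "repeated"))  # Wilcoxon
```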
2
Q

Practical work 2: Types of validity.

A

The concept of validity was formulated by Kelly (1927, p. 14), who stated that a test is valid if it measures what it claims to measure.
For example a test of intelligence should measure intelligence and not something else (such as memory).
A distinction can be made between internal and external validity. These types of validity are relevant to evaluating the validity of a research study / procedure.

What is internal and external validity in research?

Internal validity refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor.
In other words, there is a causal relationship between the independent and dependent variables.
Internal validity can be improved by controlling extraneous variables, using standardized instructions, counterbalancing, and eliminating demand characteristics and investigator effects.
External validity refers to the extent to which the results of a study can be generalized to other settings (ecological validity), other people (population validity) and over time (historical validity).
External validity can be improved by setting experiments in a more natural setting and using random sampling to select participants.

Assessing the Validity of Test

There are two main categories of validity used to assess the validity of a test (i.e. questionnaire, interview, IQ test etc.): content and criterion.

What is face validity in research?

Face validity is simply whether the test appears (at face value) to measure what it claims to. This is the least sophisticated measure of validity.
Tests wherein the purpose is clear, even to naïve respondents, are said to have high face validity. Accordingly, tests wherein the purpose is unclear have low face validity (Nevo, 1985).
A direct measurement of face validity is obtained by asking people to rate the validity of a test as it appears to them. This rater could use a Likert scale to assess face validity. For example:

the test is extremely suitable for a given purpose;

the test is very suitable for that purpose;

the test is adequate;

the test is inadequate;

the test is irrelevant and therefore unsuitable.

It is important to select suitable people to rate a test (e.g. questionnaire, interview, IQ test etc.). For example, individuals who actually take the test would be well placed to judge its face validity.

Also, people who work with the test could offer their opinion (e.g. university administrators, employers). Finally, the researcher could use members of the general public with an interest in the test (e.g. parents of testees, politicians, teachers etc.).
The face validity of a test can be considered a robust construct only if a reasonable level of agreement exists among raters.

It should be noted that the term face validity should be avoided when the rating is done by experts, as content validity is more appropriate.
Having face validity does not mean that a test really measures what the researcher intends to measure, only that in the judgment of raters it appears to do so. Consequently it is a crude and basic measure of validity.
A test item such as ‘I have recently thought of killing myself’ has obvious face validity as an item measuring suicidal cognitions, and may be useful when measuring symptoms of depression.
However, the implication of items on tests with clear face validity is that they are more vulnerable to social desirability bias. Individuals may manipulate their responses to deny or hide problems, or exaggerate behaviors to present a positive image of themselves.
It is possible for a test item to lack face validity but still have general validity and measure what it claims to measure. This is good because it reduces demand characteristics and makes it harder for respondents to manipulate their answers.
For example, the test item ‘I believe in the second coming of Christ’ would lack face validity as a measure of depression (as the purpose of the item is unclear).
This item appeared on the first version of The Minnesota Multiphasic Personality Inventory (MMPI) and loaded on the depression scale.
Because most of the original normative sample of the MMPI were good Christians, only a depressed Christian would think Christ is not coming back. Thus, for this particular religious sample the item does have general validity, but not face validity.

What is construct validity in research?

Construct validity was formulated by Cronbach and Meehl (1955). This type of validity refers to the extent to which a test captures a specific theoretical construct or trait, and it overlaps with some of the other aspects of validity.
Construct validity does not concern the simple, factual question of whether a test measures an attribute.
Instead it is about the complex question of whether test score interpretations are consistent with a nomological network involving theoretical and observational terms (Cronbach & Meehl, 1955).
To test for construct validity it must be demonstrated that the phenomenon being measured actually exists. So, the construct validity of a test for intelligence, for example, is dependent on a model or theory of intelligence.
Construct validity entails demonstrating the power of such a construct to explain a network of research findings and to predict further relationships.
The more evidence a researcher can demonstrate for a test’s construct validity the better. However, there is no single method of determining the construct validity of a test.
Instead, different methods and approaches are combined to present the overall construct validity of a test. For example, factor analysis and correlational methods can be used.

What is concurrent validity in research?

This is the degree to which a test corresponds to an external criterion that is known concurrently (i.e. occurring at the same time).
If the new test is validated by a comparison with a currently existing criterion, we have concurrent validity.
Very often, a new IQ or personality test might be compared with an older but similar test known to have good validity already.

What is predictive validity in research?

This is the degree to which a test accurately predicts a criterion that will occur in the future.
For example, a prediction may be made on the basis of a new intelligence test that high scorers at age 12 will be more likely to obtain university degrees several years later. If the prediction is borne out, then the test has predictive validity.

3
Q

Practical work 3: How to properly write a lab report.

A

A typical lab report would include the following sections: title, abstract, introduction, method, results and discussion.
Title page, abstract, references and appendices are started on separate pages (subsections from the main body of the report are not). Use double-line spacing of text, font size 12, and include page numbers.
The report should have a thread of argument linking the prediction in the introduction to the content in the discussion.

  1. Title Page:

This must indicate what the study is about. It must include the variables under investigation. It should not be written as a question.
Title pages should be formatted APA style.

  2. Abstract: (you write this last)

The abstract provides a concise and comprehensive summary of a research report. Your style should be brief, but not in note form. Look at examples in journal articles. It should aim to explain very briefly (about 150 words) the following:

• Start with a one/two sentence summary, providing the aim and rationale for the study.
• Describe participants and setting: who, when, where, how many, what groups?
• Describe the method: what design, what experimental treatment, what questionnaires, surveys or tests used.
• Describe the major findings, which may include a mention of the statistics used and the significance levels, or simply one sentence summing up the outcome.
• The final sentence(s) outline the study's 'contribution to knowledge' within the literature. What does it all mean? Mention implications of your findings if appropriate.

Advice

The abstract comes at the beginning of your report but is written at the end (as it summarises information from all the other sections of the report).

  3. Introduction:

The purpose of the introduction is to explain where your hypothesis comes from (i.e. it should provide a rationale for your research study).
Ideally, the introduction should have a funnel structure: start broad and then become more specific. The aims should not appear out of thin air; the preceding review of psychological literature should lead logically into the aims and hypothesis.

• Start with general theory, briefly introducing the topic. Define the important key terms.
• Explain the theoretical framework.
• Summarise and synthesise previous studies – What was the purpose? Who were the participants? What did they do? What did they find? What do these results mean? How do the results relate to the theoretical framework?

• Rationale: How does the current study address a gap in the literature? Perhaps it overcomes a limitation of previous research.
• Aims and hypothesis. Write a paragraph explaining what you plan to investigate and make a clear and concise prediction regarding the results you expect to find.

Advice

There should be a logical progression of ideas which aids the flow of the report. This means the studies outlined should lead logically into your aims and hypotheses.
Do be concise and selective, avoid the temptation to include anything in case it is relevant (i.e. don’t write a shopping list of studies).

  4. Method

USE THE FOLLOWING SUBHEADINGS:

Participants

How many participants were recruited?

Say how you obtained your sample (e.g. opportunity sample).

Give relevant demographic details, e.g. gender, ethnicity, age range, mean age, and standard deviation.

Design

State the experimental design.

What were the independent and dependent variables? Make sure the independent variable is labeled and name the different conditions/levels.
For example, if gender is the independent variable, then male and female are the levels/conditions/groups.

How were the IV and DV operationalised?

Identify any controls used, e.g. counterbalancing, control of extraneous variables.

Materials

List all the materials and measures (e.g., what was the title of the questionnaire? Was it adapted from a study?).

You do not need to include wholesale replication of materials – instead include a 'sensible' (illustrative) level of detail. For example, give examples of questionnaire items.

Include the reliability (e.g. alpha values) for the measure(s).
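
The alpha value mentioned above is usually Cronbach's alpha. As a minimal sketch of how it is computed (the function name and sample scores are illustrative; in practice SPSS or a statistics package reports this for you):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.

    items: one list of scores per questionnaire item, aligned by respondent.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Illustrative data: 3 items, 5 respondents
items = [
    [3, 4, 3, 5, 4],
    [2, 4, 3, 4, 4],
    [3, 5, 4, 5, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.84
```

Values above roughly .70 are conventionally reported as acceptable reliability.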

Procedure

Describe the precise procedure you followed when carrying out your research i.e. exactly what you did.

Describe in sufficient detail to allow for replication of findings.

Be concise in your description and omit extraneous / trivial details. E.g. you don’t need to include details regarding instructions, debrief, record sheets etc.

Advice

• Assume the reader has no knowledge of what you did and ensure that he/she would be able to replicate (i.e. copy) your study exactly by what you write in this section.
• Write in the past tense.
• Don’t justify or explain in the Method (e.g. why you chose a particular sampling method), just report what you did.
• Only give enough detail for someone to replicate experiment - be concise in your writing.

  5. Results:

The results section of a paper usually presents the descriptive statistics followed by inferential statistics.

Report the means, standard deviations and 95% confidence intervals (CIs) for each IV level. If you have four to 20 numbers to present, a well-presented table is best, APA style.

Name the statistical test being used.

Report appropriate statistics (e.g., t-scores, p-values).

Report the magnitude (e.g., are the results significant or not?) as well as the direction of the results (e.g., which group performed better?).

It is optional to report the effect size (this does not appear on the SPSS output).
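
As a minimal sketch of the descriptive statistics above (illustrative scores; the 95% CI here uses the large-sample normal approximation with z = 1.96, whereas SPSS uses the t distribution for small samples):

```python
import statistics

def describe(scores, z=1.96):
    """Mean, sample SD, and an approximate 95% CI for one IV level."""
    m = statistics.mean(scores)
    sd = statistics.stdev(scores)          # sample SD (n - 1 denominator)
    half = z * sd / len(scores) ** 0.5     # margin of error
    return m, sd, (m - half, m + half)

# Illustrative data for one condition
m, sd, (lo, hi) = describe([82, 91, 77, 88, 95, 84, 90, 79])
print(f"M = {m:.2f}, SD = {sd:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```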

Advice

• Avoid interpreting the results (save this for the discussion).
• Make sure the results are presented clearly and concisely. A table can be used to display descriptive statistics if this makes the data easier to understand.
• DO NOT include any raw data.
• Follow APA style.

Use APA Style

Numbers are reported to 2 d.p. (including a 0 before the decimal point, e.g. “0.51”). The exception to this rule is numbers which can never exceed 1.0 (e.g. p-values, r-values): report these to 3 d.p. and do not include a 0 before the decimal point, e.g. “.001”.

Percentages and degrees of freedom: report as whole numbers.

Statistical symbols that are not Greek letters should be italicised (e.g. M, SD, t, F, p, d).

Include spaces on either side of the equals sign.

When reporting 95% CIs (confidence intervals), upper and lower limits are given inside square brackets, e.g. “95% CI [73.37, 102.23]”
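
The decimal-place rules above can be sketched as a small formatting helper (the function name and flag are illustrative, not part of any APA tool):

```python
def apa_number(value, bounded_by_one=False):
    """Format a number per the APA rules above.

    Default: 2 d.p. with a leading zero (e.g. '0.51').
    bounded_by_one: for values that can never exceed 1 (p, r),
    use 3 d.p. with no leading zero (e.g. '.001').
    """
    if bounded_by_one:
        return f"{value:.3f}".lstrip("0")   # '.001', not '0.001'
    return f"{value:.2f}"

print(apa_number(0.51))                       # 0.51
print(apa_number(0.0012, bounded_by_one=True))  # .001
```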

  6. Discussion:

• Outline your findings in plain English (avoid statistical jargon) and relate your results to your hypothesis, e.g. is it supported or rejected?
• Compare your results to background materials from the introduction section. Are your results similar or different? Discuss why/why not.
• How confident can we be in the results? Acknowledge limitations, but only if they can explain the result obtained. If the study has found a reliable effect, be very careful about suggesting limitations, as you would be casting doubt on your own results. Unless you can think of a confounding variable that could explain the results instead of the IV, it is advisable to leave this out.
• Suggest constructive ways to improve your study if appropriate.
• What are the implications of your findings? Say what your findings mean for the way people behave in the real world.
• Suggest an idea for further research triggered by your study – something in the same area, but not simply an improved version of yours. Perhaps you could base this on a limitation of your study.
• Concluding paragraph – Finish with a statement of your findings and the key points of the discussion (e.g. interpretation and implications), in no more than 3 or 4 sentences.

  7. References:

The reference section is the list of all the sources cited in the essay (in alphabetical order). It is not a bibliography (a list of the books you used).
In simple terms every time you refer to a name (and date) of a psychologist you need to reference the original source of the information.
If you have been using textbooks this is easy as the references are usually at the back of the book and you can just copy them down. If you have been using websites then you may have a problem as they might not provide a reference section for you to copy.
References need to be set out APA style:
Books
Author, A. A. (year). Title of work. Location: Publisher.
