Reliability + Validity Flashcards

1
Q

What are 6 Reliability issues?

A
  1. There is only one researcher
  2. The research has not been replicated
  3. Instructions are not given in the same way
  4. Key terms are not clear
  5. Participants are not asked questions in the same way
  6. Items in the questionnaire are not answered in the same way
2
Q

How can you deal with said Reliability issues?

A
  1. Use more than one researcher
  2. Repeat the study
  3. Standardise procedures
  4. Operationalise terms
  5. (and 6.) Standardise questions and possible answers

3
Q

What are the 3 ways of assessing Reliability? (S.I.T)

A

(S.I.T):

-(S)plit half reliability

-(I)nter-rater reliability

-(T)est-retest reliability

4
Q

What is the Split-Halves test?

A

-A method of assessing internal reliability by comparing two halves of the same measure

-E.g. splitting a psychological test in two to see if both halves produce the same scores

-If the test is assessing the same thing in all of its questions, there should be a high correlation (close to 1) between the scores from the two halves of the test

5
Q

What is the Inter-rater test?

A

-Inter-rater reliability= the consistency between the recordings of two or more researchers.

-Used during observational research or when researchers have to rate or categorise behaviour.

-If the raters are assessing the same thing, there should be a high correlation (close to 1) in their ratings

6
Q

What is the test-retest?

A

-The test-retest= a method of assessing external reliability.

-It involves running the same test again to see whether the results are the same over time.

-If the test achieves the same results on both occasions, there will be a high correlation (close to 1)

7
Q

What are 5 Validity issues?

A
  1. The researcher is using an unrepresentative sample, which impacts generalisability
  2. The researchers are having an influence on the participants' responses- RESEARCHER BIAS
  3. The research setting is artificial
  4. The participants are trying to act according to the aim of the study (helping or hindering it)- DEMAND CHARACTERISTICS
  5. The participants are worried about the impression they are giving to others- SOCIAL DESIRABILITY BIAS
8
Q

How can you deal with said Validity issues (internal)?

A

Maintain a high level of control by:

-Eliminating extraneous variables

-Using single- or double-blind controls

-Standardising procedures

9
Q

How can you deal with said Validity issues (external)?

A

Ensure generalisability by:

-Including a wide range of participants

-Ensuring the study represents the real world and real life experience

10
Q

What are the 5 ways of assessing Internal Validity?

A

(P.C.C.F.C):

(P)redictive validity

(C)oncurrent validity

(C)onstruct validity

(F)ace validity

(C)ontent validity

11
Q

How can you investigate Predictive Validity?

A

-By checking if the measure can be associated with future behaviour

-This is done by following up our participants to see if future performance is similar to performance on our measure.

12
Q

How can you investigate Concurrent Validity?

A

-By checking if the measure agrees with existing measures

-This is done by testing participants with both the new test and an established test.

13
Q

How can you investigate Construct Validity?

A

-By checking if the method actually measures all parts of what we are aiming to test

-This is done by defining what it is we are aiming to measure, and making sure that all parts of that definition are being measured.

14
Q

How can you investigate Face Validity?

A

-By checking if the method used actually seems to measure what you intended

-This is done by asking participants what the study appears to be measuring and seeing if they agree

15
Q

How can you investigate Content Validity?

A

-By checking whether the content of the measure covers everything you intended to test

-This is done by a panel of experts assessing the measure for validity

16
Q

What is the difference between investigating Content/Face/Construct Validity?

A

-Content= Panel of experts assessing the measure

-Face= Participant agreement on what is measured

-Construct= Defining the measure and ensuring all aspects are fulfilled