Validity, Measurement, Reliability, Descriptive Research Flashcards

1
Q

Independent variable

A

manipulated variable (e.g. condition / congruence of colors and words presented)

2
Q

dependent variable

A

measured variable (e.g. time needed to read out words)

3
Q

control variable

A

variable that is being held constant on purpose (e.g. number of words)

4
Q

confounding variable

A

variable that changes systematically along with the independent variable (e.g. time point of testing, order of words)

5
Q

random error

A

noise during testing

6
Q

conceptual variable

A

syn: construct
= variable stated at an abstract (theoretical) level (e.g. anxiety is to be measured)

7
Q

operational variable

A

= operational definition
= specific way in which a construct is manipulated / measured in a study (e.g. test score for anxiety)

8
Q

What are the possible claims in experimentation?

A
  1. Frequency claims: focuses on 1 variable -> descriptive research
  2. Association claims: focuses on relationship between at least 2 variables
  3. Causal claims: focuses on causation, so a change in 1 variable is responsible for changing the value of another variable -> experimental research
9
Q

Validity

A

way of knowing whether the claims are good
= appropriateness of a claim

10
Q

complete the sentence: a valid claim is…

A
  1. reasonable
  2. accurate
  3. justifiable
11
Q

Construct validity - definition + question to be asked

A

= an indication of how well a conceptual variable is measured / manipulated in the study
Question to be asked: “How well is a construct operationalized?”

12
Q

Threats to construct validity

A
  • inadequate operational definition (e.g. does the test really measure anxiety, or just the absence of self-confidence?)
  • mono-operation bias
13
Q

mono-operation bias

A

Mono-operation bias occurs when a single measure or a single method is used to assess a complex theoretical construct.

14
Q

External validity - definition + question to be asked

A

= indication of how well the results of a study generalize to, or represent, individuals, settings, places, and times (contexts) besides those in the study itself
Question to be asked: “Is it possible to generalize?”

15
Q

Threats to external validity

A
  • selection biases
  • study setting is different from other settings
  • specific time at which the study is performed is different from other times
16
Q

How do you get external validity?

A
  1. drawing a random (representative) sample from the population of interest
  2. replicating the study across other settings, times, and populations
17
Q

Internal validity - definition + question to be asked

A

= the extent to which it is possible to rule out alternative explanations for a causal relationship between two variables
Question to be asked: “Can we rule out alternative explanations?”

18
Q

Threats to internal validity

A
  • maturation
  • history
  • testing
  • instrumentation
  • regression to the mean
  • attrition / experimental mortality
  • selection effects (e.g. Simpson’s paradox)
  • design confounds = the presence of another variable that unintentionally varies systematically with the independent variable
  • observer bias (single-/double-blind studies recommended!!)
  • demand characteristics
  • placebo effects
19
Q

maturation

A

Maturation refers to changes that occur naturally over time within the participants of a study. These changes may affect the dependent variable independently of the independent variable. For example, if you are studying the effects of an educational intervention on children’s reading skills, normal developmental changes in reading ability that occur with age could be a threat to internal validity.

20
Q

regression to the mean

A

Regression to the mean refers to the tendency for extreme scores on a variable to move closer to the average (mean) when measured again. If participants are selected based on extreme scores, it may appear that a treatment had an effect when it’s merely a statistical artifact.
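The statistical artifact can be made visible in a short simulation with made-up numbers: participants selected for extreme time-1 scores "improve" on retest even though nothing was done to them.

```python
import random

random.seed(0)

# Each score = stable true score + random measurement noise (toy values, assumed).
n = 10_000
true_scores = [random.gauss(100, 10) for _ in range(n)]
time1 = [t + random.gauss(0, 10) for t in true_scores]
time2 = [t + random.gauss(0, 10) for t in true_scores]

# Select participants with extreme time-1 scores.
extreme = [i for i in range(n) if time1[i] > 120]

mean1 = sum(time1[i] for i in extreme) / len(extreme)
mean2 = sum(time2[i] for i in extreme) / len(extreme)

# With no treatment at all, the selected group's retest mean falls
# back toward the population mean of 100.
print(mean2 < mean1)  # → True
```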

21
Q

attrition

A

Attrition occurs when participants drop out of a study before it is completed. If the reasons for dropout are related to the independent variable, it can bias the results. Experimental mortality is a similar concept, referring to participants who do not complete the study.

22
Q

demand characteristics

A

Demand characteristics refer to cues or hints that participants pick up during a study, which can lead them to guess what the researcher expects or wants. This can influence their behavior or responses in a way that doesn’t reflect their true reactions.

23
Q

Statistical validity - definition + question to be asked

A

= the extent to which a study’s statistical conclusions are accurate and reasonable
Question to be asked: “How well do the numbers support the claims?”

24
Q

Threats to statistical validity

A
  • violated assumptions of the test statistics
  • fishing and the error rate problem (seeing things that aren’t there)
  • low statistical power (missing the needle in the haystack, unreliability of measures)
25
Q

measurement

A

a systematic way to assign numbers or names to objects and their features

26
Q

measurement scales

A
  1. nominal scale -> just categories
  2. ordinal scale -> ranked categories
  3. interval scale -> adding / subtracting possible
  4. ratio scale -> has an absolute zero, therefore multiplying and dividing possible
27
Q

how do we calculate reliability

A

observed score = true score + error score
-> observed score = measured value
-> true score = real value
-> error score = difference between measured and real value
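The formula above can be sketched in a small simulation (toy numbers, assumed): each observed score scatters randomly around the true score, and because random error averages out, the mean of many observations approaches the true score.

```python
import random

random.seed(1)

TRUE_SCORE = 100.0  # hypothetical real value of the construct

# observed score = true score + error score (random noise here)
observations = [TRUE_SCORE + random.gauss(0, 5) for _ in range(1000)]

# Error score of a single measurement = observed - true.
single_error = observations[0] - TRUE_SCORE

# Averaging many observations cancels the random error term.
mean_observed = sum(observations) / len(observations)
print(abs(mean_observed - TRUE_SCORE) < 1.0)  # → True
```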

28
Q

types of reliability

A
  1. test reliability
  2. interrater reliability
  3. experimental reliability
29
Q

reliability

A

Reliability refers to the consistency and trustworthiness of a system, product, or process in delivering consistent and dependable results over time.

30
Q

Test reliability

A

Test reliability is the extent to which a particular assessment or measurement instrument produces consistent and stable results when administered to the same individuals on different occasions.

31
Q

Interrater reliability

A

Interrater reliability is the degree of agreement or consistency between different raters or observers when assessing or scoring the same data, typically used to assess the reliability of subjective judgments or evaluations.
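One common statistic for interrater agreement on categorical codes is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with toy ratings (data assumed):

```python
from collections import Counter

# Codes assigned to the same 10 observations by two raters (toy data).
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]

n = len(rater_a)

# Observed agreement: proportion of items coded identically.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement, from each rater's marginal frequencies.
count_a, count_b = Counter(rater_a), Counter(rater_b)
p_expected = sum(
    (count_a[c] / n) * (count_b[c] / n) for c in set(rater_a) | set(rater_b)
)

# Kappa: agreement beyond chance, scaled by the maximum possible.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))  # → 0.58
```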

32
Q

Experimental reliability

A

Experimental reliability pertains to the consistency and dependability of results obtained in a scientific experiment, ensuring that the same experiment conducted under similar conditions would yield similar outcomes.

33
Q

Types of test reliability

A
  1. test-retest reliability: same results at time points 1 and 2
  2. internal reliability
34
Q

Types of internal reliability

A
  1. Split-half reliability
  2. Cronbach’s alpha
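Split-half reliability can be sketched as follows: correlate the two halves of a test, then apply the Spearman-Brown correction to estimate the reliability of the full-length test (item scores are toy data, assumed).

```python
# Rows = participants, columns = test items (toy data).
scores = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 4, 5, 5],
    [3, 2, 3, 3, 2, 2],
    [1, 2, 1, 1, 2, 1],
]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Odd-even split: sum the odd items and the even items per participant.
half1 = [sum(row[0::2]) for row in scores]
half2 = [sum(row[1::2]) for row in scores]

r_half = pearson(half1, half2)
# Spearman-Brown correction for the full test: r_full = 2r / (1 + r)
r_full = 2 * r_half / (1 + r_half)
print(round(r_full, 2))  # → 0.98
```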
35
Q

Cronbach’s alpha

A

Cronbach’s alpha, often referred to simply as “alpha,” is a statistical measure used to assess the internal consistency or reliability of a set of items or questions in a research survey or test. It is named after its developer, Lee Cronbach, and is widely used in fields such as psychology, education, and social sciences to evaluate the consistency of responses within a questionnaire or scale.
The primary purpose of Cronbach’s alpha is to determine whether the items in a survey or test are measuring the same underlying construct or concept. In other words, it helps researchers assess whether the items are reliably measuring what they are intended to measure.
Cronbach’s alpha produces a score between 0 and 1, with higher values indicating greater internal consistency.
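The computation itself is short. A minimal sketch with toy questionnaire responses (data assumed), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Rows = participants, columns = items of one scale (toy data).
data = [
    [3, 4, 3, 3],
    [5, 4, 5, 5],
    [1, 2, 2, 1],
    [4, 4, 3, 4],
    [2, 3, 2, 2],
]

def variance(values):
    # Sample variance (n - 1 denominator).
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

k = len(data[0])                 # number of items
items = list(zip(*data))         # transpose: one tuple per item
item_vars = sum(variance(col) for col in items)
total_var = variance([sum(row) for row in data])

# Cronbach's alpha: high when items covary, i.e. measure the same construct.
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(round(alpha, 2))  # → 0.96
```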

36
Q

methods of descriptive research

A

observations, surveys, case studies

37
Q

problems of descriptive research

A
  • people are not always good at observations
  • low internal validity
  • threats to construct validity: observer bias, observer effects, reactivity (countered by a “wait it out” strategy or by unobtrusive observations / measures)
38
Q

problems of surveys

A
  • low internal validity
  • threats to construct validity: question wording, question order, use of response sets, trying to look good / bad, inability to report feelings accurately
39
Q

problems of frequency claims (e.g. study shows that 3 in 4 women are sad when doing xyz)

A
  • low internal validity
  • threats to external validity: biased sample, unrepresentativeness -> can be improved by random sample, replication, etc.
  • threats to statistical validity: samples too small
40
Q

what are important questions in descriptive research?

A
  1. how was each variable measured? -> construct validity
  2. how were the results generalized? -> external validity
  3. how do the numbers support the conclusions? -> statistical validity