Task 3 (Chapters 5, 8 and 9) Flashcards

(84 cards)

1
Q

How do you choose variables?

A

Research tradition

Choosing variables based on theory

Availability of new techniques/equipment

2
Q

What is the reliability of a measure?

A

Reliability of a measure concerns its ability to produce similar results when repeated measurements are made under identical conditions.

3
Q

What is the relation between variability and reliability?

A

The more variability in repeated measurements, the less reliable the measure

4
Q

How do you measure the reliability of a physical measure?

A

Height and weight are assessed by repeatedly measuring a fixed quantity of the variable

Precision represents the range of variation to be expected on repeated measurement (precise measurements show little range of variation)

5
Q

How do you measure the reliability of population estimates?

A

Measures of opinion, attitude, and similar psychological variables

It is problematic to estimate the average value of a variable in a given population based on a sample drawn from that population

The precision of the estimate is called the margin of error
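As an illustration (not from the chapters): under the common assumptions of a simple random sample and the 95% normal approximation, the margin of error of a sample mean can be sketched like this, using hypothetical rating data:

```python
import math

def margin_of_error(sample, confidence_z=1.96):
    """Approximate margin of error for a sample mean,
    using the normal approximation (z = 1.96 for ~95% confidence)."""
    n = len(sample)
    mean = sum(sample) / n
    # sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return confidence_z * sd / math.sqrt(n)

# Hypothetical ratings; larger samples yield a smaller margin
# of error, i.e. a more precise population estimate.
ratings = [3, 4, 5, 4, 3, 4, 5, 2, 4, 3]
print(round(margin_of_error(ratings), 2))  # → 0.59
```

Because the sample standard deviation is divided by the square root of the sample size, quadrupling the sample roughly halves the margin of error.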

6
Q

How do you measure the reliability of judgements or ratings by multiple observers?

A

Establish the degree of agreement among observers by using a statistical measure of interrater reliability.

     —>how much agreement is there between raters?
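The chapters do not name a specific statistic; one common measure of interrater reliability for two raters assigning categorical codes is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical ratings:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items:
    observed agreement corrected for chance agreement."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of each rater's marginal proportions
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two observers rating the same 8 behaviours
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(a, b), 2))  # → 0.5
```

A kappa of 1 means perfect agreement; 0 means agreement no better than chance.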
7
Q

How do you measure the reliability of psychological tests or measures? (e.g. intelligence, anxiety, etc.)

A

The basic strategy is to administer the assessment twice to a large group of individuals, and then determine the correlation between the two sets of scores.

High correlation = greater reliability (a highly reliable measure has an r of 0.95 or higher)
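The correlation meant here is typically the Pearson correlation coefficient. A minimal sketch, using hypothetical scores from two administrations of the same test to the same group:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two sets of scores,
    e.g. the same test administered twice to the same group."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical first and second administrations for six people
first = [98, 105, 110, 121, 130, 95]
second = [100, 104, 112, 119, 128, 97]
print(round(pearson_r(first, second), 3))  # → 0.995
```

Here r is about 0.995, above the 0.95 rule of thumb, so the measure would count as highly reliable.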

8
Q

What are the three ways to test the reliability of psychological tests or measures?

A

Test-retest reliability

Parallel forms reliability

Split-half reliability

9
Q

How does test-retest reliability work? What are its limitations? What is it best for assessing?

A
  • Administering the same test twice, separated by a long interval of time
  • Participants could respond in the same way because they recall their initial answers
  • It is best for assessing stable characteristics (such as intelligence)
10
Q

How does parallel (alternate) forms reliability work?

Limitations?

A

Same as test-retest, except that the form on the second testing is replaced by a parallel form containing items equivalent to those on the original

Differences in test performance can be due to nonequivalence of the two forms

11
Q

How does split half reliability work?

A

Two parallel forms of a test are intermingled in a single test and administered together in one testing

Responses from the two forms are separated and scored individually

The quantity being measured has no time to change
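The correlation between the two halves estimates the reliability of only a half-length test; the standard correction up to full test length (not named in the source) is the Spearman-Brown formula. A minimal sketch:

```python
def spearman_brown(half_test_r):
    """Step up a split-half correlation to estimate the
    reliability of the full-length test (Spearman-Brown formula)."""
    return (2 * half_test_r) / (1 + half_test_r)

# A correlation of 0.8 between the two halves implies a
# full-test reliability estimate of about 0.89.
print(round(spearman_brown(0.8), 2))  # → 0.89
```

The correction matters because longer tests are more reliable, so the raw half-to-half correlation understates the full test's reliability.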

12
Q

What is accuracy?

A

An accurate measure produces results that agree with a known standard

Individual values may not agree with the standard, so you average all the values; it is this average that must equal the standard
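The difference between this average and the standard (the bias) and the expected range of variation (the precision) can both be computed directly from repeated measurements. A minimal sketch with hypothetical scale readings of a known 100 g calibration weight:

```python
def bias_and_precision(measurements, standard):
    """Bias: how far the average of repeated measurements falls
    from the known standard (accuracy).
    Precision: the range of variation across the measurements."""
    average = sum(measurements) / len(measurements)
    bias = average - standard
    spread = max(measurements) - min(measurements)
    return bias, spread

# Ten hypothetical readings of a 100 g calibration weight
readings = [100.2, 99.8, 100.1, 100.3, 99.9, 100.2, 100.0, 100.1, 99.9, 100.5]
bias, spread = bias_and_precision(readings, 100.0)
print(round(bias, 2), round(spread, 2))
```

Here the scale reads 0.1 g high on average (bias) and its repeated readings span 0.7 g (precision): a measure can be precise yet biased, or unbiased yet imprecise.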

13
Q

What is the difference between the average and the standard called?

A

Bias

14
Q

What is precision?

A

The range of variation that is expected

15
Q

What is the validity of a measure ?

A

The extent to which a measure measures what you intend to measure

16
Q

What are the different types of validity?

A

Face validity

Content validity

Criterion related validity

Construct validity

17
Q

What is face validity?

A

How well a measurement instrument appears to measure what it was designed to measure (judging by appearance)

18
Q

What is content validity?

A

How adequately the content of a test samples the knowledge, skills or behaviors that the test is intended to measure

19
Q

What is criterion related validity?

A

How adequately a test score can be used to infer an individual's value on some criterion measure

20
Q

What two types of criterion-related validity are there?

A

Concurrent validity

Predictive validity

21
Q

What is concurrent validity?

A

When the test scores and the criterion scores are collected at about the same time

22
Q

What is predictive validity?

A

Comparing scores on your test with the value of a criterion measure observed at a later time

23
Q

What is construct validity?

A

When a test is designed to measure a “construct”: a variable that is not directly observable and that has been developed to explain behaviour on the basis of a theory (cognition, happiness, etc.)

24
Q

What are the differences between criterion and construct validity?

A

Construct validity concerns abstractions, while criterion-related validity concerns a single variable.

A construct is theoretical and cannot be directly observed, while a criterion is more general and already established.

25

Q

What is the sensitivity of a dependent measure?

A

How much your measure responds to your manipulation

26

Q

What are range effects?

A

Range effects occur when the values of a variable have an upper and a lower limit. If a bathroom scale measures only up to 100 kg and you place a 200 kg object on it, it will still show 100 kg. Similarly, a psychology questionnaire can be too hard or too easy for its participants.

27

Q

What are the two distinct cases of range effects?

A

Floor effects

Ceiling effects
28

Q

What are floor effects?

A

The variable reaches its lowest possible value

29

Q

What are ceiling effects?

A

The variable reaches its highest possible value
30

Q

What are behavioral measures?

A

Measures that record the actual behaviour of subjects

A good indicator of overt behaviour

31

Q

Behavioral measures: what does frequency of responding do?

A

Count the number of occurrences over a specified period

32

Q

Behavioral measures: what does latency do?

A

Measure the amount of time it takes for subjects to respond to a stimulus

33

Q

What are physiological measures?

A

Measures that record the participant's bodily functions

They provide fairly accurate information about the state of arousal within the participant's body

Psychological states must be inferred from physical states
34

Q

What are self-report measures?

A

Participants self-report variables

You can't be sure the participants are telling the truth, as they have a tendency to present themselves in a socially desirable manner

35

Q

Self-report measures: what is a rating scale, and what is Likert scaling?

A

Rating scale: rating from 1 to 10

Likert scaling: indicating the degree to which participants agree or disagree with a statement on a 5-point scale

36

Q

Self-report measures: what is Q-sort methodology?

A

A qualitative measuring technique: establishing evaluative categories and sorting items into those categories
37

Q

What are implicit measures?

A

They measure responses that are not under direct conscious control

E.g. the IAT (Implicit Association Test)

38

Q

How should we understand implicit measures and the role they play in an experiment?

A

The participant is not an object but a human being, and the experiment is a relation between the participant, their attitudes, and the experimental context. Participants assess you and the laboratory and draw inferences about what the experiment is about.

39

Q

Implicit measures: what are demand characteristics?

A

Cues provided by the researcher and the context that communicate to the participant the purpose of the study (or the responses expected of the participant)
40

Q

Implicit measures: what are role attitude cues?

A

Cues that may signal to the participant that a change in their attitudes is needed to conform to their role as a research participant

41

Q

Implicit measures: what is experimenter bias?

A

When the behaviour of the experimenter influences the results of the experiment

42

Q

Implicit measures: what are expectancy effects?

A

When an experimenter develops preconceived ideas about the capabilities of the participants
43

Q

How can we prevent experimenter bias?

A

Single-blind technique

Double-blind technique

44

Q

What is the single-blind technique?

A

Only the experimenter or only the participant does not know which experimental condition a subject has been assigned to

45

Q

What is the double-blind technique?

A

A technique to reduce experimenter bias in which neither the experimenter nor the participant knows at the time of testing which treatment the participants are receiving
46

Q

What does automating research mean?

A

Using technology to eliminate experimenter effects and increase the precision of measures

47

Q

What is a pilot study?

A

A small-scale version of a study used to establish the procedures, materials, and parameters to be used in the full study

48

Q

What is a manipulation check?

A

It tests whether or not your independent variables had the intended effects on your participants
49

Q

What are some ways of quantifying behavior in observational studies?

A

Frequency method

Duration method

Intervals method: a helpful method for observing multiple behaviours at the same time

Recording single events or behaviour sequences

50

Q

What are the different types of sampling?

A

Time sampling

Individual sampling

Event sampling

Recording

51

Q

Name the different types of observations

A

Naturalistic observation

Ethnography

Sociometry

Case history

Archival research

Content analysis
52

Q

What is naturalistic observation?

A

Observing subjects in their natural environments without making any attempt to control or manipulate variables

53

Q

How do you make naturalistic observations?

A

You have to make unobtrusive observations, so the subjects don't know they are being observed

54

Q

What are the advantages and disadvantages of naturalistic observation?

A

Advantages: gives insight into how behaviour occurs in the real world; the observations made are not tainted by laboratory settings (high external validity)

Disadvantages: only a description of the observed behaviour can be derived from this method; there is no investigation of the underlying causes of behaviour. It is also time consuming and expensive
55

Q

What is ethnography?

A

Becoming immersed in the behavioral or social system being studied

56

Q

What do we use ethnography for?

A

To study and describe the functioning of cultures through the study of social interactions and expressions between people and groups

57

Q

How do you perform ethnographic observations?

A

By conducting observations using participant observation (you act as a functioning member of the group) or non-participant observation (you observe as a non-member)

By deciding whether to conduct observations overtly (group members know) or covertly/undercover (group members are unaware)
58

Q

What is sociometry?

A

Identifying and measuring interpersonal relationships within a group

59

Q

Example of sociometry?

A

Having research participants evaluate each other along some dimension

60

Q

What are case history observations?

A

A descriptive technique in which you observe and report on a single case (or a few cases)

61

Q

Limitations of case history observations?

A

Purely descriptive
62

Q

What are archival research observations?

A

A non-experimental strategy that involves studying existing records

It requires having specific research questions in mind

63

Q

Limitations of archival research?

A

Purely descriptive; causal relationships cannot be established

64

Q

What are content analysis observations?

A

Content analysis is used to analyze a written or spoken record (or other meaningful matter) for the occurrence of specific categories of events, items, or behaviours
65

Q

What should a content analysis be like?

A

It should be objective

It should be systematic: including articles not in favor of your position as well

It should have generality: findings should fit within a theoretical, empirical, or applied context

66

Q

Limitations of content analysis?

A

Purely descriptive; centers on the durability of findings
67

Q

What is survey research?

A

Research in which you directly question your participants about their behaviour and its underlying attitudes, beliefs, and intentions

68

Q

What kind of study is survey research?

A

It is a correlational study

69

Q

Limitations of survey research?

A

It usually does not permit you to draw causal inferences from your data
70

Q

What are the steps in designing a questionnaire?

A

1. Clearly define the topic of your study
2. Demographics
3. Write the questionnaire items
4. Decide on the order of questions in the questionnaire

71

Q

What are demographics?

A

Characteristics of participants (age, sex, marital status, occupation, income, education)

72

Q

How should demographics be used?

A

They're used as predictor variables during analysis of the data, to determine whether participant characteristics correlate with or predict responses to other items in the survey (they should not be presented first in the questionnaire; the first question should be engaging)
73

Q

What are the different types of questionnaire items?

A

1. Open-ended items
2. Restricted items
3. Partially open-ended items
4. Rating scales

74

Q

How do open-ended items work?

A

They allow participants to respond in their own words

75

Q

How do restricted (close-ended) items work?

A

They provide a limited number of specific response alternatives

76

Q

How do partially open-ended items work?

A

They resemble restricted items but provide an additional “other” category: an opportunity to give an answer not listed among the specific alternatives
77

Q

How do rating scales work?

A

They are a variation on restricted items, using a rating scale rather than response alternatives

78

Q

How should the questions in a questionnaire be ordered? How is a questionnaire made more effective?

A

Sensitive questions should be towards the end

A questionnaire is more effective if its organization is coherent and the questions follow a logical order and relate to each other

79

Q

How can you administer a questionnaire?

A

- Mail survey
- Internet survey
- Telephone survey
- Group-administered survey
- Face-to-face interviews
- Mixed-mode survey
80

Q

What are the two ways to assess the reliability of a questionnaire?

A

Repeated administration

Single administration

81

Q

How do you assess the reliability of a questionnaire through repeated administration?

A

- Test-retest reliability
- Using parallel forms to avoid the problem with test-retest reliability
82

Q

How do you assess the reliability of a questionnaire through single administration?

A

Split-half reliability: splitting the questionnaire into equivalent halves and deriving a score from each half

Applying the Kuder-Richardson formula
83

Q

How do you apply the Kuder-Richardson formula?

A

It yields the average of all the split-half reliabilities that could be derived from the questionnaire

The resulting number lies somewhere between 0 and 1; the higher the number, the greater the reliability of the questionnaire
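For questionnaires with dichotomous (e.g. right/wrong, 0/1) items, the usual version of this is Kuder-Richardson formula 20 (KR-20). A minimal sketch with hypothetical 0/1 item responses (six respondents, five items):

```python
def kr20(item_responses):
    """Kuder-Richardson formula 20 for dichotomous (0/1) items.
    item_responses: one list per person, with a 0/1 entry per item."""
    n = len(item_responses)          # number of respondents
    k = len(item_responses[0])       # number of items
    totals = [sum(person) for person in item_responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # p = proportion answering the item correctly, q = 1 - p
    pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in item_responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical response matrix: rows are respondents, columns are items
answers = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1],
]
print(round(kr20(answers), 3))
```

The result lies between 0 and 1 as the card states; this toy data set yields a middling value, which in practice would call for more or better items.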
84

Q

How do we increase the reliability of a questionnaire?

A

- Increase the number of items on the questionnaire
- Standardize administration procedures (keep timing, lighting, ventilation, instructions to participants, and instructions to administrators constant)
- Score the questionnaire carefully
- Make sure the items on the questionnaire are clear, well written, and appropriate for the sample