Up to Exam 2 Flashcards

(124 cards)

1
Q

Four scales of measurements

A

Nominal scale : one object is different from another

Ordinal scale : one object is bigger or better or more of anything than another

Interval scale : one object is so many units (degrees, etc.) more than another

Ratio scale : Interval scale with an absolute zero

2
Q

Validity and Reliability of Measures

A

Validity = the extent to which a measurement instrument measures what it is intended to measure

Reliability = the consistency with which a measurement instrument yields a certain result when the entity being measured hasn’t changed

3
Q

what is cross-level analysis

A

researchers use data collected for one unit of analysis to make inferences about another unit of analysis

two reasons why one would use cross level analysis: cost and availability issues

4
Q

what is ecological inference

A

the goal of cross-level analysis = using aggregate data to study the behavior of individuals

Example: the relationship between a school's average test scores and the percentage of its children receiving subsidized lunches

5
Q

what is ecological fallacy

NOT IN BOOK

A

it is the use of information that shows a relationship for groups to infer that the same relationship exists for individuals when in fact there is no such relationship at the individual level

Ecological Fallacy = a flaw

example: group x has characteristic y. person 1 is in group x, so that person must have characteristic y.

6
Q

defining statements about concepts

six

A

concepts help us observe and understand aspects of our environment and help us communicate with others

a word or symbol that represents some idea

contributed to the identification and delineation of the scientific disciplines within which research is conducted

are developed through a process by which some human group (tribe, nation) agrees to give a phenomenon or a property a particular name

disappear from a group’s language when they are no longer needed, and new ones are invented as new phenomena are noticed that require names

we use concepts everyday to help cope with the complexity of reality by categorizing the things we encounter according to some of their properties that are relevant to us.

7
Q

a concept is

A

a word or symbol that represents some idea

8
Q

do we use concepts everyday?

A

yes

we use concepts everyday to help cope with the complexity of reality by categorizing the things we encounter according to some of their properties that are relevant to us

9
Q

concepts in social science and in everyday

A

Social science concepts serve the same purpose as everyday concepts

they point to the properties of objects (people, political systems) that are relevant to a particular inquiry. One observer might be interested in a person’s personality structure, another is interested in partisan identification, and a third focuses on the person’s level of political alienation

the person has all of these properties and many more but only certain of the properties are relevant to any given piece of research

all three observers are dealing with the same reality; they simply choose to organize their perceptions of it differently.

10
Q

Three Definitions: Concepts

Three things a concept must be

A

concepts help us to decide which of the many traits or attributes are important to our research

concepts, like theories, do not have a life of their own.

concepts are tools we create for specific purposes and cannot be labeled true or false, but only more or less useful.

Concepts must be
precise, accurate, informative

11
Q

what makes a concept useful

A

the concept must refer to phenomena that are at least potentially Observable

a concept must refer to something that can be measured with our ordinary senses

12
Q

examples of concepts

A

people simply do not have a class status in the way they have red hair, but if we know certain things about them (income, occupation) we can infer what their class status is

13
Q

question about concepts

A

can we devise a set of procedures for using our senses to gather information that will allow us to judge the presence, absence, or magnitude in the real world of the thing to which the concept refers?

14
Q

Empirical referents

A

if we can do this for a concept, it is said to have an empirical referent; it refers to something that is directly or indirectly observable

15
Q

why is precision important in concepts?

A

it tells us what to observe in order to see how a concept is manifested in any given case

16
Q

what is a theoretical import

A

a concept has theoretical import when it is related to enough other concepts in the theory that it plays an essential role in the explanation of observed events

17
Q

operationalization

A

deciding how to record empirical observations of the occurrence of an attribute or a behavior using numerals or scores

18
Q

conceptualization to operationalization

A

researchers must define the concepts they use in their hypotheses through Conceptualization. they also must decide how to measure the presence, absence, or the amount of these concepts in the real world.

Political scientists refer to this process as Operationalization

19
Q

operational definitions are…

A

seldom absolutely correct or incorrect

are evaluated according to how well they correspond to the concepts they are meant to measure

***Arriving at the operational definition is the last stage in the process of defining a concept precisely **

20
Q

measurement

A

the process by which phenomena are observed systematically and represented by scores or numerals

21
Q

test retest method of reliability

A

applying the same “test” to the same observations after a period of time and then comparing the results of the different measurements

a reliable measure yields the same score time after time

Difficulty arises when our measure involves interviewing people (as opposed to inanimate objects): if we repeat questions in a short time, interviewees may remember their first answer and, in an effort to be consistent, repeat that answer rather than respond truthfully to the question.

this can also be problematic because what is being measured may genuinely change over time, in which case differing results do not mean the measure is unreliable
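A minimal sketch, assuming the check is simply a Pearson correlation between the two administrations; the respondent scores and the choice of statistic are illustrative, not from the book:

```python
# Illustrative sketch only: test-retest reliability checked as the correlation
# between two administrations of the same measure to the same respondents.
# The scores below are hypothetical.
import numpy as np

time1 = np.array([4, 7, 6, 9, 5, 8, 3, 7])  # first administration
time2 = np.array([5, 7, 6, 8, 5, 9, 3, 6])  # same respondents, later administration

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest correlation: {r:.2f}")  # values near 1.0 suggest a reliable measure
```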

22
Q

alternative form method of reliability

A

measuring the same attribute more than once but uses two different measures of the same concept rather than the same measure.
using two sets of questions about the same topic and seeing if the answers are reliable

different forms of the measure are applied to the same group of cases, or the same measure is applied to different groups at the same time.

if we can assume that these conditions are met, the more alike the scores on the two measures, or the scores of the two groups, the more confidence we have in the reliability of our measure.

If we cannot come up with comparable measures or groups, we cannot use the method properly

23
Q

split-halves method of reliability

A

applying two measures of the same concept at the same time.

used with multi-item measures that can be split in two halves

in a ten-item survey measure, five questions represent one half and the other five represent the second half. if the scores on the two halves are similar, the ten-item measure is reliable

this method avoids the problem that the concept being measured may change between measurements. Often used when a multi-item measure can be split into two equivalent halves.

24
Q

validity

A

the degree of correspondence between the measure and concept it is thought to measure

Unlike reliability, which depends on whether repeated applications of the same or equivalent measures yield the same results, validity concerns whether the measure corresponds to the concept it is intended to capture.

voting example: turnout is consistently overestimated because it relies on self-reported voting
a measure is invalid if it measures a slightly or very different concept than intended
validity is more difficult to demonstrate empirically than reliability

25
face validity
asserted (not empirically demonstrated) when the measurement instrument appears to measure the concept it is supposed to measure. measurements lack face validity when there are good reasons to question the correspondence of the measure to the concept in question. it is essentially a matter of judgment; if there is no consensus, face validity is problematic. to assess the face validity of a measure, we need to know the meaning of the concept being measured and whether the information being collected is relevant to that concept
26
Content validity
involves determining the full domain, or meaning, of a particular concept and then making sure all of the components of that meaning are included in the measure. example, measuring democracy: two components, political rights and civil liberties, with eleven items in each. we must make sure that all eleven items for each component (twenty-two in total) are included in the measure. Content validity is similar to face validity but involves determining the full domain (whole content) or meaning of a particular concept and then making sure that ALL components of the meaning are included in the measure
27
Construct validity
Construct validation is achieved by inferring the validity of a measure from evidence of the extent to which actual relationships between scores on various measures are consistent with what we expect from the theory that has led us to use a given indicator. The key term is inferring. There are two ways to assess it empirically: convergent construct validity and discriminant construct validity
28
convergent construct validity
a measure of a concept is related to a measure of another concept with which the original concept is thought to be related. the two concepts ought to be related in a positive or negative manner. the researcher then develops a measure of each of the concepts and examines the relationship between them. if the measures are positively or negatively correlated as expected, each has convergent validity with respect to the other. if there is no relationship between the measures, either the measures or the theorized relationship is in error
29
discriminant construct validity
involves two measures that theoretically are expected to NOT be related; thus, the correlation between them is expected to be low or weak. If the measures do not correlate with one another, then discriminant construct validity is demonstrated
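A rough illustration, not from the book, of how both checks might look in practice: convergent validity expects a strong correlation with a theoretically related measure, discriminant validity a weak one with an unrelated measure. All variable names and scores are hypothetical assumptions.

```python
# Illustrative sketch only: convergent validity = strong correlation with a related
# measure; discriminant validity = weak correlation with an unrelated measure.
# All names and scores are hypothetical.
import numpy as np

political_interest = np.array([2, 5, 7, 8, 4, 6, 9, 3])
campaign_knowledge = np.array([3, 5, 6, 9, 4, 7, 8, 2])                  # expected to be related
height_cm          = np.array([170, 182, 165, 175, 190, 168, 177, 185])  # expected to be unrelated

print("convergent r:  ", round(np.corrcoef(political_interest, campaign_knowledge)[0, 1], 2))
print("discriminant r:", round(np.corrcoef(political_interest, height_cm)[0, 1], 2))
```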
30
interitem association
the type of validity test most often used by political scientists. it relies on the similarities of outcomes of more than one measure of a concept to demonstrate the validity of the entire measurement scheme, because just one measure is more prone to error or misclassification of a case
31
correlation matrix
shows how strongly related each of the items in the measurement scheme is to all the other items.
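A minimal sketch of what such a matrix looks like, assuming a four-item measure; the respondent scores are invented illustration data.

```python
# Illustrative sketch only: a correlation matrix for a four-item measure.
# Rows are respondents, columns are items; the scores are hypothetical.
import numpy as np

items = np.array([
    [4, 5, 4, 3],
    [2, 1, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 1, 2],
])

corr_matrix = np.corrcoef(items, rowvar=False)  # item-by-item correlations
print(np.round(corr_matrix, 2))
```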
32
random measurement error
an error in measurement that has no systematic direction or cause
33
what would happen if you had inaccurate measurements in your research?
inaccurate measurements may lead to erroneous conclusions, since they will interfere with our ability to observe the actual relationship between two or more variables
34
what are the two major threats to the accuracy of measurements?
measures may be unreliable or invalid
35
what does an unreliable measure produce?
an unreliable measure produces inconsistent results -- sometimes higher, sometimes lower
36
what is reliability?
describes the consistency of results from a procedure or measure in repeated tests or trials. in the context of measurement, a reliable measure is one that produces the same result each time the measure is used. Can we get the same value for any given case when we apply the measure several different times, or does each application result in the assignment of a different value to the case?
37
a measure may be reliable without being valid BUT
it cannot be valid without being reliable.
38
systematic and random error in validity and reliability
validity is challenged by both systematic and random error, but reliability is jeopardized only by random error. This means that if a measure has been convincingly validated in prior studies, we can use it without being worried about its reliability; it has to be reliable if it is valid.
39
how do we guard against unreliability?
preventing unreliability depends on our being aware of the various sources of random measurement error and doing what we can to control them. Catch it before it happens.
40
how do we determine whether or not a given measure is reliable?
PRETESTING: thinking through the actual measurement process and pretesting our measuring instruments to discover previously unrecognized causes of random error. a pretest uses a smaller sample size
41
why is it difficult to determine whether or not we have devised a reliable measure in the social sciences?
the true value of the variables with which we are concerned can change dramatically with time and circumstance -- people change their opinions. it is hard to distinguish the effects of random measurement error from genuine fluctuations in the concepts being measured. So, tests of reliability should be conducted over a short period of time.
42
Steps to the split halves method
1) administer the test to a large group of students (ideally, over 30)
2) randomly divide the test questions into two parts -- for example, separate the even and odd questions
3) score each half of the test for each student
4) find the correlation coefficient for the two halves (see the sketch below)
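A minimal sketch of the correlation step, assuming ten items scored right/wrong; the answer matrix is invented illustration data, not from the text.

```python
# Illustrative sketch only: score the odd- and even-numbered items separately
# for each student, then correlate the two half scores.
import numpy as np

# rows = students, columns = ten test items scored 0 (wrong) or 1 (right)
answers = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
])

odd_half = answers[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7, 9
even_half = answers[:, 1::2].sum(axis=1)  # items 2, 4, 6, 8, 10

r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"split-half correlation: {r:.2f}")
```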
43
Split halves NOT FROM TEXT BOOK
used for measuring the internal consistency of the test. it measures the extent to which all parts of the test contribute equally. a test or group is divided into two parts, and scores are collected from the two half tests or two parallel groups
44
Nominal measurement
provides the LEAST amount of information about a phenomenon. Nominal measurement is obtained by simply naming cases according to some predetermined scheme of classification. Nationality is typically measured at the nominal level by classifying people as Swiss, American, etc. That measurement neither tells us how much of the characteristic nationality different individuals have nor allows us to rank-order them. Using nominal measurement simply gives us a way of sorting cases into groups designated by the names used in a classificatory scheme
45
Ordinal measurement
provides more information because it allows us both to categorize and to order, or rank, phenomena. ordinal measurement allows us to associate a number with each case. That number tells us not only that the case is different from some other cases, and similar to still others (with respect to the variable being measured), but also how it relates to those other cases in terms of how much of a particular property it exhibits. With ordinal measurement we can say which cases have more or less of the measured quality than other cases, and we can rank cases in the order of how much of the quality they exhibit. That ranking gives us more detailed and precise information about the cases than we would get from a nominal measurement
46
Interval measurement NOT IN TEXTBOOK
Provides even more information. We can classify and rank-order cases when they have been measured at the interval level, and we can also tell HOW MUCH more (or less) of the measured property they contain than other cases. Ordinal measurement is NOT based on a standardized unit of the variable in question and does NOT allow us to tell how far cases are from one another in terms of that variable; it allows us ONLY to say that some have more or less of it than others. Interval measurement is based on the idea that there is some Standard Unit of the property being measured. Interval measurement provides information on the "distance" between cases
47
Levels of Measurement
measuring procedures provide a means of categorizing and ordering phenomena. Some procedures produce more precise and detailed distinctions between events than do others. Because of this, there are various Levels of Measurement. When we say a procedure produces a given level of measurement, we are classifying it according to how much information it gives us about the phenomena being measured and their relationship to one another
48
Sets needed to make nominal measurement useful NOT IN TEXTBOOK
to be useful, nominal measurement schemes must be based on sets that are Mutually Exclusive and Collectively Exhaustive. It must not be possible to assign any single case to more than one category, and the categories should be set up so that ALL cases can be assigned to some category
49
Nominal example: US voters NOT IN TEXTBOOK
if we want to classify voters in the US by use of a nominal measuring scheme, we cannot use the categories Democrat, Republican, liberal, and conservative successfully, because these categories are not mutually exclusive. Since US political parties each appeal to a broad spectrum of voters, it is possible for a person to be both a Democrat and a conservative or liberal, or both a Republican and a conservative or liberal, so the categories do not allow us to differentiate among voters in all cases. If we try to categorize voters by party affiliation using only two categories -- Republican and Democrat -- we will find that our categories are not collectively exhaustive, because some voters consider themselves independents or members of other parties
50
Ordinal Measurement example NOT IN TEXTBOOK
Example: social class is often measured at the ordinal level, with individuals being ranked as lower, middle, or upper class
51
Interval Measurement Example NOT IN TEXTBOOK
income is usually measured in units of currency. We cannot do that with ordinal measurement. If we measure income ordinally by dividing people into such income categories as under 10K and 10K to 19,999, we can say that one person has more income than another, but we cannot say exactly how far apart they are in income. The difference between a person in category 1 (under 10K) and a person in category 2 can be as little as one dollar or as much as 10K, depending on their exact incomes, but we cannot make this distinction from an ordinal measure.
52
Interval Measurement example 2 NOT IN TEXTBOOK
Interval measurement lets us make accurate statements about the relative differences between cases. Example: we can agree that a place of 50K people is twice as large as a place of 25K people, because we can speak meaningfully of a place that has no population. There is a zero point in true interval measures, and it is at least conceivably possible for a case to score zero on such measures. Because there is no meaningful zero point on an ordinal scale, we cannot say, for example, that upper-class people have twice as much class as lower-class people, because we don't know what it means to have no class standing
53
validity
a valid measure is one that measures what it is supposed to measure. unlike reliability, which depends on whether repeated applications of the same or equivalent measures yield the same result, validity refers to the degree of correspondence between the measure and the concept it is thought to measure
54
validity-voter turnout
many studies examine the factors that affect voter turnout and thus require an accurate measurement of voter turnout. one way of measuring voter turnout is to ask people if they voted in the last election -- self-reported voting. given the social desirability of voting in the US (wearing the "I voted" sticker or posting "I voted" on social media can bring social rewards), will nonvoters admit their failure to vote to an interviewer? Some nonvoters may claim in surveys to have voted, resulting in an invalid measure of voter turnout that overstates the number of voters. In fact, this is what usually happens: voter surveys commonly overestimate turnout by several percentage points
55
Validity- ideology
a measure can also be invalid if it measures a slightly or very different concept than intended. Example: a researcher proposes to measure ideology, conceptualized as an individual's political views on a continuum between Conservative, Moderate, and Liberal. if he asks "which do you feel closest to, the Democratic or the Republican Party?", this measure would be invalid because it fails to measure ideology as conceptualized. Partisan affinity, while often consistent with ideology, is not the same as ideology. This measure could be a valid measure of party ID, but not of ideology
56
validity -- indicators
we can seldom obtain direct measures of the concepts used in social science theories. such concepts as power, democracy, and representation cannot be quantified as simply as such concepts as length and weight. we have to use indicators that correspond only indirectly to the concepts they represent. there is always a chance that the indicators we choose will not adequately reflect the concepts we want to measure. validity is the term we use to refer to the extent to which our measures correspond to the concepts they are intended to reflect.
57
to be valid, a measure must be
appropriate and complete. achieving validity is often viewed as the basic problem of measurement in the social sciences. example: public education -- using the number of teachers in the schools as an indicator of the quality of educational services. This measure is inappropriate, because the number of personnel in school systems is determined largely by the number of students and the size of the city and may have little to do with the quality of education. **if we use the ratio of students to teachers as our indicator of educational services, we will have a more appropriate measure in that differences caused by city size will be reduced or eliminated.** this measure will still be incomplete, though: education involves more than teachers. it also involves school buildings, films, books, labs, etc. we cannot just look at one factor, because a school system might have a highly desirable student-teacher ratio but inadequate facilities and learning materials
58
If we are to achieve validity, we must
always strive to construct measures that are both appropriate and complete!
59
how can we create measures that are complete and appropriate?
begins with the OPERATIONALIZATION PROCESS. we can define validity as the extent to which differences in scores on a measure reflect only differences in the distribution of values on the variable we intend to measure. since we can probably never achieve complete and total validity, our goal should be to select measures that are susceptible to as few influences as possible other than differences in our target variable. this requires that we think carefully through the processes that surround our measures in search of possible causes of variations in scores. if we want to measure the relative influence of different interest groups in a state legislature, we may think of using newspaper reports of interest group appearances before legislative committees as our indicator. There are so many other means of exercising influence that a measure relying exclusively on the giving of testimony as an indicator of influence is incomplete. achieving appropriate and relatively complete operationalizations depends both on knowing a good deal about the subject of our study and on conducting a careful logical analysis of it
60
how can we tell whether we have succeeded in creating measures that are complete and appropriate?
we can check the validity of our measures, in order to determine whether or not we have developed sound measures, only after we have collected data. the process of evaluating the validity of our measures is referred to as validation
61
Pragmatic validation
involves assessing the validity of a measure from evidence of how well it works in allowing us to predict behaviors and events. It requires that there be some alternative indicator of the variables that we feel fairly certain is a valid reflection of them. We check our measures against this alternative, as we might check the accuracy of verbal reports of age against birth certificates.
62
construct validation
achieved by INFERRING the validity of a measure from evidence of the extent to which actual relationships between scores of various measures are consistent with what we expect from the theory that has led us to use a given indicator
63
Example of construct validation
consider a study of international alliances. We might create a measure of the strength of an alliance based on a content analysis of newspaper articles from the countries involved. Is what the newspapers of one nation say about another nation a valid indicator of the strength of the alliance between the two countries? We might get an idea of whether it is by reasoning as follows: our theory tells us that the stronger an alliance between two nations is, the more often they will vote together in the United Nations and the fewer restrictions they will place on trade with each other. Therefore, scores on a valid measure of strength of alliance should be negatively related to scores on measures of the number of trade barriers. we then proceed to do the data analysis necessary to see whether this expectation is supported by our observations. if the relationships are as expected, we will have greater confidence in the validity of our measure of strength of alliance; if they are not as expected, we will question whether we have a sound measure of this concept
64
External Validation
involves comparing scores on the measure being validated with scores on measures of other variables Efforts at external validation will produce convincing evidence about the validity of our measure of one variable only if we can have a high degree of confidence in the validity of the measures we use for the other variables
65
Internal (convergent) validation
this type of validation involves devising several measures of the same variable and comparing scores on these various measures. We reason that if each of the indicators provides a valid measure of the concept in question, the scores individual cases receive on the measures should be closely related. If A, B, and C are all valid measures of X, then any individual's scores on A, B, and C should be highly similar
66
Discriminant validation
asks whether using a measure as an indicator of a given concept allows us to distinguish that concept from other concepts. we might want to measure the concept "trust in public officials" through a series of questions in a survey. If we also have on the questionnaire a series of questions designed to measure trust in people (in general), we can compare the scores on the two measures to ask whether our first set of questions actually reflects simply another way of measuring trust in people. if the scores are highly similar, we say that the political trust measure does not have discriminant validity, because it does not permit us to distinguish the concept of trust in public officials from the concept of trust in people
67
sub-sample method
drawing one sample of cases and dividing it into several subsamples in such a way that each is highly similar to the others in composition. we then apply the same measure to all subsamples and use the similarity or difference of responses from subsample to subsample as an indicator of the reliability of the measure
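A minimal sketch of the idea, assuming a single sample split randomly into two halves and compared on the same measure; the respondent scores are hypothetical.

```python
# Illustrative sketch only: randomly split one sample into two comparable
# subsamples, apply the same measure to each, and compare the results.
import random
import statistics

random.seed(1)
scores = [3, 7, 5, 8, 6, 4, 9, 5, 6, 7, 4, 8]  # one measure applied to one sample
random.shuffle(scores)

half = len(scores) // 2
subsample_a, subsample_b = scores[:half], scores[half:]

# similar results across subsamples are taken as evidence of reliability
print("subsample A mean:", round(statistics.mean(subsample_a), 2))
print("subsample B mean:", round(statistics.mean(subsample_b), 2))
```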
68
data is
transient and ever changing; not absolute reality, but manifestations of reality; must meet certain criteria to be admitted to a study
69
primary and secondary data
primary: the closest layer to the truth | secondary: data one or more layers removed from the truth, derived from primary data
70
4 questions about the planning and collection of data
what data is needed? where is the data located? how will the data be obtained? how will the data be interpreted?
71
how do you identify appropriate measurement instruments?
pin down data by measuring it in some way. measurement instruments provide a basis on which the entire research effort rests. a research effort employing faulty measurement tools is of little value in solving the problem under investigation. in planning the research project, the nature of the measurement instruments should be clearly identified, and instrumentation should be described in explicit, concrete terms
72
define measurement
limiting the data of any phenomenon- substantial or insubstantial- so that those data may be interpreted and, ultimately, compared to a particular qualitative or quantitative standard
73
Substantial measurement
physical substance
74
insubstantial measurement
exist only as concepts, ideas, opinions, feelings, or other intangible entities
75
internal validity
the extent to which the design and data of a research study allow the researcher to draw accurate conclusions about the cause and effect and other relationships within the data
76
external validity
the extent to which the results of a research study apply to situations beyond the study itself, generalized beyond the study itself
77
the goal of sampling
to create a sample that is identical to the population in all characteristics except size. any difference between a population and the sample is defined as bias
78
informed consent means | six
subjects are given information about the research, its purposes, risks/anticipated benefits, alternative procedures (when therapy is involved), how subjects are selected, and the person responsible for the research.
79
what is a research design
a plan that shows how one intends to study an empirical question. indicates what specific theory or propositions will be tested, what the appropriate units of analysis are, what measurements and observations are needed, and which analytical and statistical procedures will be used. all parts of a research design should work to the same end: drawing sound conclusions supported by observable evidence
80
research design continued
a scheme that guides the process of collecting, analyzing and interpreting data. it is a logical model of proof that allows the making of valid causal inferences. if it doesn't make sense to you, it's not going to make sense to anyone else
81
exploratory research
intended to provide greater familiarity with the phenomena we want to investigate so that we can formulate more precise research questions and perhaps develop hypotheses. such studies can be essential when we are investigating new phenomena or old phenomena that have not been studied before
82
descriptive research
intended to provide an accurate representation of some phenomenon so that we can better formulate research questions and hypotheses. example: the frequency, geographic distribution, and sequence of events of some phenomenon. we need to know what other phenomena it tends to be associated with before we can begin to theorize about what might have caused it
83
causal hypotheses and explanatory research
if we can use the results of a study to argue that one thing causes another, we can begin to develop explanations of the second event. For that reason, hypothesis testing research may be described as explanatory research
84
exploratory research requires
flexibility more than precision; the design needs to provide only an opportunity to observe the phenomenon in question, ensure unbiased and reliable observation, and provide a basis for inferring the causal influence of one or more variables
85
descriptive research requires
accurate measurement of phenomena | the research design must ensure unbiased and reliable observations
86
what is a population?
any well-defined set of units of analysis, determined by the research question and consistent through all parts of a research project
87
what is a sample?
a subset of a population drawn through a systematic procedure called a sampling method
88
ethical issues in research | six
```
honesty with professional colleagues
institutional review board (IRB)
protection from harm
informed consent
right to privacy
professional code of ethics
```
89
to avoid tautologies, one must have..
clear definitions of the concepts of interest, which are important if we are to develop specific hypotheses and avoid tautologies. a concept must refer to one and ONLY ONE set of properties of some phenomenon; we must be able to know exactly what we are talking about when we use a concept to describe an object
90
a research design provides
a basis for causal inferences when it allows us to rule out any PLAUSIBLE explanations for observed results that represent alternatives to the causal hypothesis being tested.
91
6 steps (elements) of a research design
a statement of the purpose of the research
a statement of the hypothesis to be tested
a specification of the variables to be employed
a statement of how each variable is to be operationalized and measured
a detailed statement of how observations are to be organized and conducted
a general discussion of how the data collected will be analyzed (compare your work to others)
92
Feasibility
the project's feasibility or practicality. some designs may be unethical, while others may be impossible to implement for lack of data or insufficient time or money. balance what is possible to accomplish against what would ideally be done to investigate a particular hypothesis
93
purpose of the investigation
whether the research is intended to be exploratory, descriptive, or explanatory
94
Verifying causal assertions
let's say that all those who report viewing negative commercials tell us they did not vote, whereas all those who were not aware of these ads cast ballots. we might summarize these hypothetical results in a simple table. let X stand for whether or not people saw the campaign ads and Y for whether or not they voted. what this symbolizes is a relationship, or association, between X and Y
95
opinion research
involves an investigator observing behavior indirectly by asking people questions about what they believe and how they act. since we do not directly observe their actions, we can only take the respondents' word about whether or not they voted or saw attack ads.
96
spurious relationship
arises because two things are both affected by a third factor and thus appear to be related -- the two things are affected by the one common cause. once this additional factor has been identified and controlled for, the original relationship weakens or disappears altogether
97
Feasibility
the project's feasibility or practicality. some designs may be unethical, while others may be impossible to implement for lack of data or insufficient time or money. balance what is possible to accomplish against what would ideally be done to investigate a particular hypothesis
98
three things to a valid causal design
1) covariation 2) time order 3) elimination of possible alternative causes, sometimes termed "confounding factors"
99
covariation
demonstrates that the alleged cause (call it X) does in fact covary with the supposed effect (Y). covariational relationships indicate that two or more concepts tend to change together: as one increases (or decreases), the other increases (or decreases). covariational relationships tell us nothing about what causes the two concepts to change together
100
Verifying causal assertions
let's say that all those who report viewing negative commercials tell us they did not vote, whereas all those who were not aware of these ads cast ballots. we might summarize these hypothetical results in a simple table. let X stand for whether or not people saw the campaign ads and Y for whether or not they voted. what this symbolizes is a relationship, or association, between X and Y
101
distinguishing real causal relations from spurious ones
distinguishing real causal relations from spurious ones is an important part of scientific research. to explain phenomena fully, we must know how and why two things are connected, not simply that they are associated. thus, one of the major goals in designing research is to come up with a way to make valid causal inferences.
102
three things to a valid causal design
1) covariation 2) time order 3) elimination of possible alternative causes, sometimes termed "confounding factors"
103
three scales
Likert, Mokken, Guttman
104
likert scale
a multi item measure in which the items are selected based on their ability to discriminate between those scoring high and those scoring low on the measure.
105
Guttman scale
a multi-item measure in which respondents are presented with increasingly difficult measures of approval for an attitude
106
Mokken scale
a type of scaling procedure that assesses the extent to which there is order in the responses of respondents to multiple items
107
focus group
10 to 20 individuals, often used in market research to probe reactions to stimuli such as commercials, who meet in a single location and discuss the stimuli with a leader
108
comparative study
more likely to have explanatory power than a single case study provides the opportunity for replication
109
policy evaluation
sometimes called policy analysis | simply means objectively analyzing the economic, political, cultural, and social impacts of public policies
110
external validity
the extent to which the results of a study can be generalized across populations times and settings is the touchstone for natural and social scientists alike.
111
an experiment allows the researcher to...
control exposure to an *experimental variable* (often called a test stimulus, test factor, or independent variable) and the assignment of subjects to experimental and control groups
112
an experimenter starts by establishing what two groups
experimental group and control group
113
a subset of a population is a
sample
114
Opinion Research
involves an investigator observing behavior indirectly by asking people questions about what they believe and how they act.
115
Causal Relationships
exist when changes in one or more concepts lead to or cause changes in one or more other concepts.
116
Causal Relationships – Conditions
First – the postulated cause and effect must change together, or covary
Second – the cause must precede the effect
Third – we must be able to identify a causal linkage between the supposed cause and effect (meaning, we must be able to identify the process by which changes in one factor cause changes in another)
Fourth – the covariance of the cause-and-effect phenomena must not be due to their simultaneous relationship to some third factor
117
Probability Sampling
Probability samples: each element in the population has a known probability of inclusion in the sample. Probability samples are a better choice than non-probability samples when possible because they are more likely to be representative and unbiased.
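A minimal sketch of the simplest case, assuming a simple random sample in which every element has the same, known inclusion probability (n / N); the population and sample size are invented.

```python
# Illustrative sketch only: simple random sampling, where each element's
# probability of inclusion is known in advance (n / N).
import random

random.seed(42)
population = list(range(1, 1001))  # N = 1000 hypothetical units
n = 50                             # desired sample size

sample = random.sample(population, n)        # sampling without replacement
inclusion_probability = n / len(population)  # known in advance: 0.05
print("inclusion probability for each element:", inclusion_probability)
```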
118
Non-probability sampling
Non-probability samples: samples for which each element in the population has an unknown probability of inclusion in the sample. These sampling techniques, while less representative, are used to collect data when probability samples are not feasible.
119
Where we would like sampling to lead
The goal of statistical inference is to make supportable conclusions about the unknown characteristics, or parameters, of a population based on the known characteristics of a sample measured through sample statistics.
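A minimal sketch of the idea, assuming we use the sample mean as the statistic estimating the population mean; the data are simulated and purely illustrative.

```python
# Illustrative sketch only: a sample statistic (the sample mean) used to estimate
# an unknown population parameter (the population mean).
import random
import statistics

random.seed(7)
population = [random.gauss(50, 10) for _ in range(10_000)]  # unknown in practice
sample = random.sample(population, 100)

print("sample mean (statistic):    ", round(statistics.mean(sample), 2))
print("population mean (parameter):", round(statistics.mean(population), 2))
```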
120
five steps to theory
```
development of idea
hypothesis
data collection
interpretation and decision
modification and extension (repeat)
```
121
three things that could affect internal validity
history: events other than the experimental stimulus that occur between the pretest and the posttest
maturation: the development or change of a subject over time
test-subject interaction
122
experimental mortality
the differential loss of participants from comparison groups when subjects selectively drop out of a study
123
selection bias
when subjects are picked, intentionally or not, according to some criterion and not randomly; this often happens when people volunteer
124
demand characteristics
aspects of the research situation that cause participants to guess the investigator's goals and adjust their behavior or opinions accordingly