Introduction to Research in Psychology Flashcards

(4 cards)

Card 1

Q: What is the scientific method?
A: The scientific method is a systematic process for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge, involving steps such as observation, hypothesis formulation, experimentation, and validation.
Q: Define hypothesis.
A: A hypothesis is a specific, testable prediction about the expected outcome of a study, often formulated based on existing theories or prior research.

Q: What is operationalization?
A: Operationalization is the process of defining abstract concepts in measurable terms, allowing researchers to collect data and test hypotheses effectively.
Q: Explain the difference between reliability and validity in measurement.
A: Reliability refers to the consistency and stability of a measurement across time and different observers, while validity indicates how well a measurement truly reflects the concept it intends to measure.

Q: What are the key types of validity?
A: The key types of validity include content validity (how well the measure covers the construct), criterion-related validity (how well one measure predicts another), and construct validity (how well the measure relates to the theoretical construct it is intended to measure).
Q: What is a Type I error?
A: A Type I error occurs when a researcher incorrectly rejects the null hypothesis, concluding that there is an effect or difference when none exists.

Q: What is a Type II error?
A: A Type II error occurs when a researcher fails to reject the null hypothesis when it is false, concluding that there is no effect or difference when one actually exists.
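The link between the significance level α and the Type I error rate can be made concrete with a short simulation (a Python sketch, not part of the original deck; the z-test, sample sizes, and seed are illustrative assumptions). When both groups come from the same population the null hypothesis is true, so a test run at α = 0.05 should falsely reject about 5% of the time:

```python
import random
from statistics import NormalDist, mean

random.seed(42)

def z_test_p(sample_a, sample_b, sigma=1.0):
    """Two-sample z-test p-value, assuming a known sigma (illustrative only)."""
    n = len(sample_a)
    se = sigma * (2 / n) ** 0.5
    z = abs(mean(sample_a) - mean(sample_b)) / se
    return 2 * (1 - NormalDist().cdf(z))

# Both groups are drawn from the SAME population: the null hypothesis is true,
# so every rejection below is a Type I error.
alpha, trials = 0.05, 2000
false_rejections = sum(
    z_test_p([random.gauss(0, 1) for _ in range(50)],
             [random.gauss(0, 1) for _ in range(50)]) < alpha
    for _ in range(trials)
)
print(f"Type I error rate: {false_rejections / trials:.3f}")  # close to alpha
```

A Type II error is the mirror image: it would show up here as a failure to reject when the two groups really did come from different populations.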
Q: Define quantitative research.
A: Quantitative research involves the collection and analysis of numerical data to identify patterns, test theories, and make predictions about behavior or outcomes.

Q: What is the null hypothesis?
A: The null hypothesis is a statement that there is no effect or relationship between variables, serving as a default or starting point for statistical testing.
Q: How do confidence intervals relate to statistical analysis?
A: Confidence intervals provide a range of values within which we can be reasonably certain that the true population parameter lies, helping to assess the precision and reliability of sample estimates.
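A 95% confidence interval for a mean can be computed from a sample in a few lines (a sketch with made-up scores; it uses a normal critical value, whereas a t critical value would be slightly wider and more appropriate for a sample this small):

```python
from statistics import NormalDist, mean, stdev

scores = [72, 85, 78, 90, 66, 81, 75, 88, 70, 83]  # hypothetical sample
n = len(scores)
xbar, s = mean(scores), stdev(scores)
z = NormalDist().inv_cdf(0.975)      # about 1.96 for a 95% interval
margin = z * s / n ** 0.5            # z * standard error of the mean
lo, hi = xbar - margin, xbar + margin
print(f"95% CI for the mean: ({lo:.1f}, {hi:.1f})")
```

A wider interval signals a less precise estimate; collecting more data shrinks the standard error and tightens the interval.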
Q: What is the role of the literature review in research?
A: A literature review summarizes existing research on a topic, identifies gaps in knowledge, and provides a context for the current study, guiding the research questions and methodology.

Q: Explain the concept of empirical research.
A: Empirical research is based on observed and measured phenomena, relying on data collected through observation or experimentation to draw conclusions.

Q: What is the significance of statistical power in hypothesis testing?
A: Statistical power is the probability of correctly rejecting the null hypothesis when it is false, influencing the likelihood of detecting true effects in a study.
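Power depends on the effect size, the sample size, and α. The sketch below (an assumption-laden illustration, not a standard-library power routine) uses the normal approximation for a two-sided one-sample z-test to show how power climbs as the sample grows:

```python
from statistics import NormalDist

def power_z(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test for a
    standardized effect size d and sample size n."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    shift = d * n ** 0.5  # where the test statistic is centered when H0 is false
    # probability the statistic lands in either rejection region
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

for n in (20, 50, 80, 200):
    print(f"n={n:3d}  power={power_z(0.4, n):.2f}")
```

With a modest effect (d = 0.4), 20 participants give well under 50% power, which is why underpowered studies so often miss real effects (a Type II error).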
Q: Define the term ‘concept’ in psychological research.
A: A concept is an abstract idea or general notion that represents a class of phenomena or characteristics, such as intelligence, happiness, or anxiety.

Q: What is the purpose of using multiple indicators in research?
A: Using multiple indicators allows for a more comprehensive assessment of complex constructs, improving the reliability and validity of the measurements.

Q: How does one ensure that a measure has good reliability?
A: To ensure good reliability, researchers should use standardized procedures, clearly define constructs, and assess consistency through methods like test-retest or internal consistency checks.
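Test-retest reliability is usually quantified as the correlation between scores from two administrations of the same measure. A minimal sketch (with hypothetical scores; the Pearson formula is computed from its definition so nothing beyond the standard library is needed):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from the same scale administered twice, two weeks apart.
time1 = [12, 18, 25, 30, 22, 15, 28, 20]
time2 = [14, 17, 24, 31, 21, 16, 27, 22]
print(f"test-retest r = {pearson_r(time1, time2):.2f}")
```

A coefficient near 1 indicates that the measure ranks people almost identically on both occasions; values below roughly 0.7 are commonly taken as a warning sign.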
Q: What are the four levels of measurement?
A: The four levels of measurement are nominal (categorical), ordinal (ranked), interval (equal intervals without a true zero), and ratio (equal intervals with a true zero).

Q: What is meant by ‘data triangulation’?
A: Data triangulation involves using multiple data sources, methods, or investigators to enhance the credibility and validity of research findings.
Q: Define ‘empirical evidence.’
A: Empirical evidence refers to information acquired by observation or experimentation that can be verified and is used to support or refute a hypothesis.

Q: What is a construct in psychological measurement?
A: A construct is a theoretical concept that is being measured, such as intelligence or self-esteem, often operationalized through specific indicators or scales.

Q: How do researchers assess the validity of a study?
A: Researchers assess the validity of a study by evaluating whether the study accurately measures what it intends to measure, ensuring the results are generalizable and relevant.
Q: What is the difference between qualitative and quantitative research?
A: Qualitative research focuses on exploring and understanding complex phenomena through non-numerical data, while quantitative research emphasizes measuring and analyzing numerical data.

Q: What is the importance of ethical considerations in psychological research?
A: Ethical considerations ensure that research is conducted responsibly, protecting participants’ rights, welfare, and dignity, and maintaining the integrity of the scientific process.

Q: What does it mean for a measurement to be valid?
A: A measurement is valid if it accurately reflects the concept it is intended to measure, free from bias and distortion.
Card 2

Q: What is validity in the context of psychological research?
A: Validity refers to the extent to which a score or measurement accurately reflects the construct it is intended to measure. It encompasses how well the research findings align with theoretical expectations.

Q: What are the three broad categories of validity?
A: The three broad categories of validity are measurement validity, internal validity, and external validity. Measurement validity assesses whether the tool measures what it claims to measure, internal validity focuses on whether the observed effects are due to the manipulation of variables, and external validity concerns the generalizability of the findings to real-world settings.
Q: Define internal validity.
A: Internal validity is the degree to which a study can demonstrate a causal relationship between the independent variable and the dependent variable, ensuring that no other variables are influencing the results.

Q: What is external validity?
A: External validity refers to the extent to which the results of a study can be generalized to, or have relevance for, settings, people, times, and measures other than the ones used in the study.

Q: What is reliability in psychological research?
A: Reliability is the consistency of a measurement across time and across different observers. A reliable measure will yield the same results under consistent conditions.
Q: What is the difference between reliability and validity?
A: Reliability refers to the consistency of a measure, while validity refers to how well a measure assesses the construct it is intended to measure. A measure can be reliable but not valid.

Q: What are some common threats to internal validity?
A: Common threats to internal validity include selection bias, maturation, history effects, testing effects, instrumentation changes, and experimenter bias. These threats can distort the perceived relationship between variables.
Q: What is selection bias?
A: Selection bias occurs when non-random factors influence which participants are assigned to different groups in a study, potentially skewing the results.

Q: What is regression to the mean?
A: Regression to the mean is a statistical phenomenon where extreme scores on one occasion tend to be closer to the average on subsequent occasions. This can complicate the interpretation of results in psychological research.
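The phenomenon can be demonstrated by simulation (a sketch with invented parameters: each person has a stable true score, and each test adds independent noise). Participants selected for extreme first-test scores average closer to the population mean the second time, even though nothing about them changed:

```python
import random

random.seed(7)

# Each "participant" has a stable true score; each test adds independent noise.
true_scores = [random.gauss(100, 10) for _ in range(5000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the extreme scorers on the first occasion...
top = [i for i, s in enumerate(test1) if s > 120]
avg1 = sum(test1[i] for i in top) / len(top)
avg2 = sum(test2[i] for i in top) / len(top)
print(f"top group mean — test 1: {avg1:.1f}, test 2: {avg2:.1f}")
# ...their second-occasion average falls back toward the population mean of 100.
```

This is why a one-group pretest-posttest study of the "worst" scorers can show apparent improvement with no treatment effect at all.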
Q: What is ecological validity?
A: Ecological validity is a type of external validity that assesses whether research findings can be generalized to real-world settings. It considers whether the study environment reflects everyday life.

Q: How can researchers enhance internal validity?
A: Researchers can enhance internal validity by randomly assigning participants to groups, using control groups, maintaining consistency in procedures, minimizing dropout rates, and controlling for confounding variables.

Q: What is face validity?
A: Face validity refers to the extent to which a test appears to measure what it is supposed to measure, based on subjective judgment rather than statistical analysis.
Q: What is construct validity?
A: Construct validity is the degree to which a test or tool accurately measures the theoretical construct it is intended to measure. It encompasses both convergent and discriminant validity.

Q: What is the purpose of pilot testing in research?
A: Pilot testing is conducted to refine research instruments and procedures before the main study. It helps identify issues with the measurement tools and ensures clarity and effectiveness.

Q: What strategies can improve external validity?
A: To improve external validity, researchers can use diverse and representative samples, replicate studies in various settings, conduct field experiments, and check that findings apply across different times and contexts.
Q: Define convergent validity.
A: Convergent validity is a form of construct validity that assesses whether two measures that are supposed to be related are indeed related. It demonstrates that similar constructs yield similar results.

Q: What is discriminant validity?
A: Discriminant validity is a measure of how well a test distinguishes between different constructs, indicating that measures of different constructs should not correlate highly.

Q: What is the role of random assignment in research?
A: Random assignment is crucial in experimental research as it helps ensure that each participant has an equal chance of being assigned to any group, thus controlling for confounding variables and enhancing internal validity.
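In practice, random assignment can be as simple as shuffling the participant list and dealing it into conditions (a sketch; the participant IDs and group sizes are hypothetical):

```python
import random

random.seed(0)

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs
random.shuffle(participants)                        # every ordering equally likely

# Deal the shuffled list alternately into the two conditions.
treatment = participants[0::2]
control = participants[1::2]
print("treatment:", treatment)
print("control:  ", control)
```

Because assignment depends only on the shuffle, participant characteristics (motivation, ability, anxiety) are, on average, balanced across groups, which is exactly what controls for confounds.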
Q: How can experimenter bias affect research outcomes?
A: Experimenter bias occurs when the experimenter’s expectations or beliefs inadvertently influence the outcome of the study, potentially skewing results and affecting the validity of the conclusions drawn.
Card 3

Q: What is the primary aim of experimental research design?
A: The primary aim of experimental research design is to determine causal relationships between variables by manipulating independent variables and observing their effect on dependent variables.

Q: What is the difference between independent and dependent variables?
A: The independent variable (IV) is the factor that is manipulated by the researcher, while the dependent variable (DV) is the outcome that is measured to assess the effect of the IV.
Q: What is randomization in experimental design?
A: Randomization is the process of randomly assigning participants to different groups in an experiment to reduce biases and ensure that each group is equivalent at the start of the experiment.

Q: Define internal validity.
A: Internal validity refers to the extent to which a study accurately establishes a causal relationship between the independent and dependent variables, free from confounding factors.
Q: What is a control group?
A: A control group is a group of participants in an experiment that does not receive the experimental treatment or intervention, allowing researchers to compare results against the experimental group.

Q: What is the significance of replicability in experimental research?
A: Replicability ensures that experiments can be repeated under the same conditions to yield consistent results, thereby confirming the reliability of the findings.

Q: How do reliability and validity differ?
A: Reliability refers to the consistency of a measurement across time or different observers, while validity refers to the accuracy of a measurement in capturing what it is intended to measure.
Q: What are the four types of reliability?
A: The four types of reliability are test-retest reliability, inter-rater reliability, split-half reliability, and parallel-forms reliability.

Q: What is a one-group posttest-only design?
A: A one-group posttest-only design is a weak experimental design where a single group is given a treatment and then measured on the outcome, without a control group for comparison.

Q: What is the purpose of operationalization in research?
A: Operationalization involves defining how variables will be measured in a study, allowing for observable and quantifiable assessments of constructs.
Q: What does a mixed design in experimental research entail?
A: A mixed design combines elements of both between-subjects and within-subjects designs, allowing researchers to investigate interactions between different factors while using the same participants across some conditions.

Q: Explain the term ‘selection bias’.
A: Selection bias occurs when the participants included in a study are not representative of the general population, potentially affecting the study’s internal validity and the generalizability of the results.

Q: What is the Solomon four-group design?
A: The Solomon four-group design is an experimental design that includes two groups receiving a pretest and two groups not receiving a pretest, allowing researchers to assess the effects of pretesting on the dependent variable.
Q: What are the advantages of between-subjects designs?
A: Between-subjects designs are advantageous when studying irreversible changes, when using intact groups, and when treatments would have carryover effects that could bias a within-subjects design.

Q: What is the role of confounding variables in experimental research?
A: Confounding variables are extraneous factors that can influence the outcome of an experiment, potentially leading to inaccurate conclusions about the relationship between independent and dependent variables.

Q: Define external validity.
A: External validity refers to the extent to which the results of a study can be generalized to other settings, populations, or times, beyond the specific conditions of the study.
Q: What is the importance of clear instructions in an experiment?
A: Clear instructions are crucial in an experiment to ensure that participants understand the procedure, which helps minimize confusion and variability in responses, leading to more reliable results.

Q: What is a factorial design?
A: A factorial design is an experimental design that examines the influence of two or more independent variables simultaneously, providing insights into the interaction effects between these variables.
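The main effects and interaction in a 2×2 factorial design can be read off the four cell means. The sketch below uses invented cell means for a hypothetical noise-by-difficulty memory study (the factors, levels, and numbers are all illustrative):

```python
# Hypothetical cell means from a 2x2 factorial study:
# factor 0 = noise level (quiet/loud), factor 1 = task difficulty (easy/hard),
# outcome = mean recall score per cell.
cells = {
    ("quiet", "easy"): 18.0, ("quiet", "hard"): 14.0,
    ("loud", "easy"): 16.0, ("loud", "hard"): 7.0,
}

def level_mean(factor_index, level):
    """Marginal mean of the outcome across one level of one factor."""
    vals = [v for k, v in cells.items() if k[factor_index] == level]
    return sum(vals) / len(vals)

main_noise = level_mean(0, "quiet") - level_mean(0, "loud")
main_difficulty = level_mean(1, "easy") - level_mean(1, "hard")
# Interaction: does the noise effect differ across difficulty levels?
interaction = ((cells[("quiet", "hard")] - cells[("loud", "hard")])
               - (cells[("quiet", "easy")] - cells[("loud", "easy")]))
print(f"main effect of noise: {main_noise}")
print(f"main effect of difficulty: {main_difficulty}")
print(f"interaction: {interaction}")
```

Here noise hurts recall far more on hard tasks than easy ones, which is precisely the interaction effect a factorial design exists to detect and a one-factor study would miss.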
Q: How does one assess the validity of a study?
A: Assessing the validity of a study involves evaluating the study’s design, measurement tools, and the extent to which the findings can be accurately interpreted and generalized to broader contexts.
Card 4

Q: What is correlational research?
A: Correlational research examines the relationship between two or more variables without manipulating them. It identifies whether and how strongly pairs of variables are related.

Q: What does a correlation coefficient of r = 0.98 indicate?
A: An r value of 0.98 indicates a very strong positive correlation between the two variables, suggesting that as one variable increases, the other variable also tends to increase.
Q: Why is correlation not equivalent to causation?
A: Correlation does not imply causation because it does not show that one variable causes changes in another. Both variables may be influenced by a third variable, or the relationship may be coincidental.

Q: What are the types of scales of measurement used in psychological research?
A: The scales of measurement include nominal, ordinal, interval, and ratio, each providing different levels of information about the data.
Q: What is the significance of measurement error in research?
A: Measurement error affects the reliability and validity of research findings. It represents the discrepancy between the observed score and the true score.

Q: How can a control group improve the validity of an experiment?
A: A control group helps isolate the effect of the independent variable by providing a baseline for comparison, thus enhancing internal validity.

Q: What is the ‘third variable problem’?
A: The ‘third variable problem’ refers to the possibility that a third, unmeasured variable influences both variables being studied, which can create misleading correlations.
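The classic third-variable example can be simulated: temperature drives both ice-cream sales and drowning incidents, which then correlate strongly despite never influencing each other (a sketch; all coefficients, noise levels, and the seed are invented for illustration):

```python
import random

random.seed(3)

# Hypothetical example: outdoor temperature (Z) drives both ice-cream sales (X)
# and drowning incidents (Y); X and Y never influence each other.
temps = [random.gauss(20, 8) for _ in range(1000)]
ice_cream = [0.9 * z + random.gauss(0, 3) for z in temps]
drownings = [0.7 * z + random.gauss(0, 3) for z in temps]

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# X and Y correlate strongly even though neither causes the other.
print(f"r(ice cream, drownings) = {pearson_r(ice_cream, drownings):.2f}")
```

Statistically controlling for the third variable (here, temperature) would make the spurious association largely disappear, which is one way researchers probe such correlations.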
Q: Define internal validity.
A: Internal validity refers to the extent to which a study accurately establishes a causal relationship between the variables studied, free from the influence of confounding variables.

Q: What role does sampling play in correlational research?
A: Sampling is crucial in correlational research because it determines whether the sample accurately represents the population, which affects the generalizability of the findings.

Q: What are outliers, and how do they affect correlations?
A: Outliers are data points that deviate significantly from other observations. They can skew the results and distort the correlation coefficient, potentially leading to incorrect conclusions.
Q: What is the purpose of using questionnaires in correlational research?
A: Questionnaires are used to gather data from a large number of respondents, allowing researchers to explore relationships between variables efficiently and effectively.

Q: Explain what a positive correlation indicates.
A: A positive correlation indicates that as one variable increases, the other variable also increases. For example, higher levels of education are often associated with higher income.

Q: What is the difference between true experimental design and correlational research?
A: True experimental design involves manipulating one variable to observe its effect on another while controlling for extraneous variables, whereas correlational research observes relationships without manipulation.
Q: How can correlational research inform future experimental studies?
A: Correlational research can identify relationships and generate hypotheses that can be tested through experimental research, helping to establish causation.

Q: What is external validity?
A: External validity refers to the extent to which the results of a study can be generalized to and have relevance in settings outside the study itself.

Q: What is the significance of using a large sample size in correlational studies?
A: A larger sample size increases the likelihood of obtaining a more representative sample, which enhances the reliability and validity of the research findings.
Q: Describe the importance of operationalization in research.
A: Operationalization involves defining variables in measurable terms, which is essential for clarity in research, allowing constructs to be accurately assessed and compared.

Q: What can be inferred from a correlation coefficient of r = -0.75?
A: An r value of -0.75 indicates a strong negative correlation, suggesting that as one variable increases, the other variable tends to decrease.

Q: What are the limitations of correlational research?
A: Limitations include the inability to establish causation, the potential for confounding variables, and the reliance on the accuracy of self-reported data.