Health Sciences Flashcards

1
Q

What does rigor in qualitative research consist of?

A

Credibility, transferability, dependability and confirmability

2
Q

Credibility

A
  • Data triangulation, methodological triangulation, investigator triangulation and theory triangulation
  • Prolonged data collection
  • Member checking
3
Q

Transferability

A
  • Thick description
  • Explain sampling strategy
  • Discuss findings with existing literature from different settings
4
Q

Dependability

A
  • Data saturation
  • Iterative data collection
  • Iterative data analysis
  • Flexible emergent research design
5
Q

Confirmability

A
  • Search literature that disconfirms findings
  • Peer debriefing
  • Reflexivity
  • Audit trail
6
Q

Example of selection bias

A

In an RCT for a hypertension drug, participants are recruited through a fitness magazine, leading to a sample of health-conscious individuals. The sample isn’t representative of the broader hypertension population, potentially leading to misleading conclusions.

7
Q

Example of performance bias

A

In a double-blind pain relief medication trial, nurses know which group participants are in, potentially leading to performance bias. They may unintentionally offer more support to the experimental group, such as frequent check-ins and additional pain relief measures, even if the medication isn’t more effective.

8
Q

Example of attrition bias

A

In a long-term smoking cessation study, some participants who struggled the most with quitting drop out, creating attrition bias. This skews the results as those who dropped out had different experiences. Dropouts are related to the study’s outcome (quitting smoking), potentially overestimating the effectiveness.

9
Q

Example of detection bias

A

In a study comparing a new breast cancer screening test to the standard one, radiologists, aware of the research, may inadvertently interpret results more cautiously. This can lead to more false negatives in the new test group. Measurement differs between the groups due to radiologists’ awareness, potentially underestimating the new test’s accuracy.

10
Q

How do you deal with selection bias?

A
  • Random sequence generation
  • Allocation concealment
11
Q

How do you deal with performance bias?

A
  • Blinding of participants and personnel
12
Q

How do you deal with attrition bias?

A
  • Be transparent about incomplete outcome data
  • Intention-to-treat analysis
13
Q

How do you deal with detection bias?

A
  • Blinding of the outcome assessment
14
Q

What do you look at when assessing the quality of an RCT?

A
  • Criteria for internal validity: risk of bias (RoB)
    (randomization, blinding)
  • Criteria for external validity: generalizability
    (in- and exclusion criteria)
  • Criteria for precision: accuracy
    (sample size)
15
Q

True or false? A content analysis is deductive.

A

True

16
Q

True or false? A Grounded theory analysis is inductive.

A

True

17
Q

True or false? Absolute risk reduction (control risk - experimental risk) is the best way to show the risk in a study.

A

True

18
Q

Why should you be skeptical if only relative risk reductions are shown?

A

Because this could make the effect of a study look more positive (effective) than it really is.
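The contrast is easy to see with a quick calculation (all numbers below are hypothetical, purely for illustration):

```python
# Hypothetical trial: 20 events per 1000 in the control group,
# 10 per 1000 in the experimental group (illustrative numbers only).
control_risk = 20 / 1000       # 0.02
experimental_risk = 10 / 1000  # 0.01

arr = control_risk - experimental_risk  # absolute risk reduction: 1 percentage point
rrr = arr / control_risk                # relative risk reduction: 50%
nnt = 1 / arr                           # number needed to treat: 100

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

Reporting only the 50% RRR makes the drug sound far more impressive than the 1-percentage-point ARR, which is why the ARR (and the NNT derived from it) should be shown.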

19
Q

What does deductive reasoning look like?

A

Theory → hypothesis → data collection → confirmation, rejection, or modification

20
Q

What does inductive reasoning look like?

A

Data collection → analysis of patterns → hypothesis → theory

21
Q

What is participatory action research?

A

Minimizing the gap between research and society:
  • Minimize power differences
  • Increase participants’ knowledge
  • Promote social change

22
Q

Transdisciplinary research consists of:

A

Participatory action research combined with an interdisciplinary approach

23
Q

Definition and example of content validity

A

The extent to which the measurement covers all aspects of the concept being measured.
Example:
Suppose researchers want to create a math assessment test for elementary school students. To ensure the test covers all the relevant mathematical concepts and skills, they define the content domain. Other tools: expert review, pilot testing, and refinement.

24
Q

Definition and example of criterion validity

A

The extent to which the result of a measure corresponds to other valid measures of the same concept (gold standard).
Example:
Imagine a pharmaceutical company is conducting a study on a new method for measuring blood pressure in hypertensive patients. To establish criterion validity, they measure blood pressure with an established criterion measure (the gold standard) alongside the new method. If the new method’s results correspond closely to those of the gold standard, it can be considered to have good criterion validity for its intended purpose.

25
Q

Definition and example of construct validity

A

The adherence of a measure to existing theory and knowledge of the concept being measured.
Example:
Suppose researchers want to develop a questionnaire to measure psychological well-being. To establish construct validity, they develop a theoretical framework that defines psychological well-being, including key components. Researchers use factor analysis to determine whether the items group together in ways that align with the theoretical framework.

26
Q

Researchers ask participants to fill out the questionnaire today and again one week from now. What reliability measure is used, and do you recognize a bias?

A
  • Test-retest reliability
  • Since the interval is only one week, there is a risk of recall bias
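A minimal sketch of how test-retest reliability is typically quantified, as the correlation between the two administrations (the scores below are hypothetical):

```python
from math import sqrt

# Hypothetical questionnaire scores for six participants at two time points.
t1 = [12, 18, 9, 15, 20, 11]
t2 = [13, 17, 10, 14, 19, 12]

# Pearson correlation between the two administrations.
n = len(t1)
m1, m2 = sum(t1) / n, sum(t2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(t1, t2))
r = cov / sqrt(sum((a - m1) ** 2 for a in t1) * sum((b - m2) ** 2 for b in t2))
print(f"test-retest r = {r:.2f}")
```

An r close to 1 suggests stable scores, but with only a week between administrations, part of that stability may simply reflect participants remembering their earlier answers.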
27
Q

In a study on HRQoL, a questionnaire is used. The researchers state in their article that Cronbach’s alpha is 0.92. What is your opinion on this?

A
  • Cronbach’s alpha is a measure of interrelatedness (reliability): do the items more or less measure the same construct? 0.92 indicates high internal consistency.
  • It assumes a one-dimensional scale!
  • The researchers should also have reported the outcome of the factor analysis!
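For intuition, Cronbach’s alpha can be computed directly from raw item scores. A stdlib-only sketch with made-up data for a 4-item scale:

```python
from statistics import variance

# Hypothetical responses: rows = 5 respondents, columns = 4 items.
items = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]

k = len(items[0])                                   # number of items
item_vars = [variance(col) for col in zip(*items)]  # per-item sample variances
total_scores = [sum(row) for row in items]          # each respondent's total
alpha = (k / (k - 1)) * (1 - sum(item_vars) / variance(total_scores))
print(round(alpha, 2))
```

A high alpha alone does not establish that the scale is one-dimensional, which is exactly why the factor analysis should be reported alongside it.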
28
Q

When do you use interviews?

A

To understand experiences, opinions, attitudes, values and processes

29
Q

When do you use non-participant observations?

A

To provide data on phenomena (behavior), as well as people’s accounts of those phenomena

30
Q

When do you use ethnography and participant observation?

A

To understand cultural phenomena that reflect the knowledge and meanings that guide cultural groups

31
Q

When do you use focus groups?

A

To elicit information about the views of a group and to stimulate interaction and discussion

32
Q

When do you use visual research methods?

A

To interpret the world through the participants’ eyes and help provide insights into difficult issues

33
Q

When is respondent-assisted sampling useful? What are the risks?

A
  • For hard-to-reach populations
  • Bias if recruits share common characteristics
  • Limited generalizability
34
Q

What is publication bias? Give an example

A

When studies with statistically significant or positive results are more likely to be published, while studies with non-significant or negative results are often left unpublished (leading to overestimation of effects)
Example:
Imagine a pharmaceutical company conducts a series of clinical trials to test the effectiveness of a new drug for Alzheimer’s disease. It is found to have no significant impact. However, the company decides not to publish the results of the negative trials and only submits the positive trial results to journals. As a result, a systematic review conducted on the published literature might conclude that the drug is highly effective, even though unpublished data reveals a different picture.

35
Q

What is language bias? Give an example

A

Studies published in one or a few languages are favored over studies published in other languages. Valuable data from studies in these excluded languages may be missed, potentially leading to incomplete results.
Example:
RCTs published in Chinese tend to favor acupuncture. If these are not included, the results of a systematic review might underestimate the effect.

36
Q

What is location bias? Give an example

A

When researchers focus on studies conducted in specific geographic regions or countries while neglecting studies conducted in other locations.
Example:
A research team is conducting a systematic review on the dietary habits and their impact on heart disease. They primarily focus on studies conducted in Western countries, such as the US, Canada and EU, and include data from these studies in their review. However, they neglect to include studies from regions like Southeast Asia, where diets and lifestyle factors may significantly differ.

37
Q

Some studies in your systematic review have different study methodologies (2 studies have comparable groups at baseline and 3 don’t). What do you do?

A

You can find out whether this difference has an impact on the overall outcome by performing a sensitivity analysis.

38
Q

Why would you do a meta-analysis?

A

To increase power and to improve precision.
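One way to see both gains is a fixed-effect inverse-variance pooling sketch (the effect sizes and standard errors below are invented purely for illustration):

```python
import math

# Hypothetical effect estimates (e.g. mean differences) and standard
# errors from five small trials — illustrative numbers only.
effects = [0.30, 0.45, 0.20, 0.55, 0.35]
ses     = [0.20, 0.25, 0.15, 0.30, 0.18]

weights = [1 / se**2 for se in ses]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))  # smaller than any single trial's SE

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The pooled standard error is smaller than that of any single trial, i.e. the combined estimate is more precise, which in turn increases the power to detect an effect.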

39
Q

What could you do to explore heterogeneity?

A

Subgroup analysis