EPA Flashcards

(63 cards)

1
Q

Q1. In EPA’s baseline human health risk assessment (as outlined in RAGS Part A), which of the following lists all the fundamental steps of the process in the correct general order (after planning and scoping)?
A. Hazard identification, dose-response assessment, exposure assessment, risk characterization
B. Data collection and analysis, exposure assessment, toxicity assessment, risk characterization
C. Release assessment, exposure modeling, toxicity testing, risk management
D. Problem formulation, exposure evaluation, hazard control, risk communication

A

Correct Answer: B
Explanation: RAGS Part A (EPA’s Risk Assessment Guidance for Superfund) describes a four-step baseline risk assessment process conducted after initial planning. The steps are: data collection and analysis (including hazard identification), exposure assessment (estimating chemical intakes for receptors), toxicity assessment (evaluating dose-response and toxicity values like RfDs or slope factors), and risk characterization (integrating exposure and toxicity to characterize risk). Answer B correctly lists these core steps in order. Option A uses terminology from the NAS/NRC framework (hazard ID, dose-response, etc.), which is conceptually similar, but RAGS Part A explicitly splits “hazard identification” into the data collection/analysis and toxicity assessment steps. Options C and D are incorrect – they include terms such as “hazard control” and “risk management,” which belong to risk management rather than risk assessment.

2
Q

Q2. Under EPA risk assessment guidelines, what does a Reference Dose (RfD) represent?
A. A precise threshold below which no adverse effects occur in any individual
B. An estimate of a daily exposure to the human population that is likely to be without appreciable risk of adverse effects over a lifetime
C. The dose that produces a 50% response in animal studies, adjusted by uncertainty factors
D. A dose that should never be exceeded, derived from the lowest observed adverse effect level (LOAEL) without any uncertainty factors applied

A

Correct Answer: B
Explanation: The RfD is defined by EPA as an estimate (with uncertainty spanning perhaps an order of magnitude) of a daily exposure for humans (including sensitive subgroups) that is likely to be without appreciable risk of deleterious effects over a lifetime. It is usually derived from a point of departure (such as a NOAEL, LOAEL, or benchmark dose level) divided by appropriate uncertainty factors, accounting for interspecies differences, intraspecies variability, etc. Thus, it is not a precise hard threshold (A is incorrect), but a health-protective estimate. Option C confuses the RfD with an ED₅₀; while toxicologists may use ED₅₀ in studies, the RfD is not simply an ED₅₀ but often based on NOAEL/LOAEL or BMD with uncertainty factors. Option D is wrong because the RfD is not derived from a LOAEL without UFs; on the contrary, uncertainty factors are applied (especially if a LOAEL is used, an extra uncertainty factor is included). Therefore, B best describes the RfD.
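The derivation described above can be sketched numerically. This is a hedged illustration with a hypothetical NOAEL and the default 10× factors, not values for any specific chemical:

```python
# Illustrative only: RfD = point of departure / product of uncertainty
# factors. The POD and factor choices below are hypothetical.

def derive_rfd(pod, uf_interspecies=10, uf_intraspecies=10,
               uf_loael=1, uf_subchronic=1, uf_database=1):
    """Return an RfD in the POD's units (e.g., mg/kg-day)."""
    total_uf = (uf_interspecies * uf_intraspecies * uf_loael
                * uf_subchronic * uf_database)
    return pod / total_uf

# Hypothetical NOAEL of 10 mg/kg-day with the two default 10x factors:
print(derive_rfd(10.0))  # 0.1 mg/kg-day

# If only a LOAEL were available, an extra 10x factor would apply:
print(derive_rfd(10.0, uf_loael=10))  # 0.01 mg/kg-day
```

The second call shows why answer D is wrong: a LOAEL-based RfD gets an *additional* uncertainty factor, not zero factors.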

3
Q

Q3. In Superfund risk assessments, why is the 95% upper confidence limit (95% UCL) of the mean concentration often used as the exposure point concentration for contaminants in soil or water?
A. To account for potential future increases in contamination levels
B. To avoid underestimating the true average exposure concentration by ensuring with 95% confidence that the selected value is at least as high as the true mean
C. Because the 95th percentile concentration is assumed to be the dose that all individuals are exposed to
D. To lower the risk estimates by using a concentration that is usually much lower than the observed sample mean

A

Correct Answer: B
Explanation: The 95% UCL of the mean is used as a conservative estimate of the average concentration that an individual might be exposed to. By taking the 95% UCL, assessors ensure (with 95% confidence) that the true mean concentration in the exposure medium is not underestimated. This approach addresses uncertainty in the sampling data and is health-protective – it chooses a slightly higher-than-observed mean concentration so risk is not understated. It does not assume contaminant levels will increase in the future (A is unrelated to the statistical reason). It’s also not the 95th percentile of individual samples (which would be much higher than the mean) – it’s a confidence limit on the mean, not an extreme percentile of data (thus C is incorrect). Finally, using the 95% UCL typically raises the estimated exposure concentration (relative to the sample mean) or is roughly similar to a high-end mean, rather than “much lower” (eliminating D). Therefore, the main purpose is captured by B: to ensure the average exposure estimate is conservatively high enough.
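As a hedged sketch of the statistics involved, the simplest version (the Student’s-t UCL, one of several methods implemented in EPA’s ProUCL software) can be computed as below; the concentration data are hypothetical:

```python
import statistics

def t_ucl95(data, t_crit):
    """One-sided 95% UCL on the mean: mean + t * s / sqrt(n).
    t_crit is the one-sided 95% Student's t value for n-1 degrees
    of freedom, taken from a t-table."""
    n = len(data)
    return statistics.mean(data) + t_crit * statistics.stdev(data) / n ** 0.5

# Hypothetical soil concentrations (mg/kg); for n=10, t(0.95, df=9) = 1.833
conc = [1.2, 0.8, 2.5, 1.9, 1.1, 3.0, 0.7, 1.4, 2.2, 1.6]
print(round(statistics.mean(conc), 2))   # sample mean: 1.64
print(round(t_ucl95(conc, 1.833), 2))    # UCL95: 2.08 -> higher than the mean
```

Note the UCL (2.08) sits above the sample mean (1.64), illustrating how the method guards against underestimating the true mean from limited samples.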

4
Q

Q4. Hazard Quotient (HQ) is a key concept for non-cancer risk characterization. How is the hazard quotient calculated, and what is its interpretation?
A. HQ = (Exposure dose or concentration) ÷ (Reference value like RfD or RfC); HQ > 1 suggests potential concern for non-cancer effects
B. HQ = (Carcinogenic risk) × (exposure duration in years); HQ above 1 indicates cancer risk above the safe level
C. HQ = (Animal LD₅₀ dose) ÷ (estimated human dose); HQ < 1 means the exposure is lethal
D. HQ = (Exposure dose) ÷ (Cancer slope factor); HQ > 1 indicates a cancer risk greater than 1 in a million

A

Correct Answer: A
Explanation: For non-cancer effects, the hazard quotient is defined as the ratio of the estimated exposure (dose or air concentration) to a reference value (such as an RfD for oral exposures or an RfC for inhalation). Mathematically, HQ = Exposure / RfD (or Exposure / RfC). If HQ > 1, it means the exposure exceeds the level that is considered “safe” (the reference value) and thus potential for adverse non-cancer health effects cannot be ruled out – it’s a level of concern. If HQ ≤ 1, the exposure is at or below the reference dose, suggesting it’s unlikely to pose appreciable risk. Option B is incorrect – HQ is not defined for cancer that way; cancer risk is typically a probability (e.g., 1×10^-6), not compared to an RfD. Option C is wrong and nonsensical in risk assessment terms (LD₅₀ is a lethal dose for 50% of animals, not used in calculating HQ). Option D confuses HQ with cancer risk calculation; dividing exposure by a slope factor would give a risk, not a unitless HQ. Therefore, A correctly states how HQ is calculated and interpreted.
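A minimal numeric sketch of that ratio (the doses here are hypothetical):

```python
# Illustrative only: HQ = exposure dose / reference dose (unitless).

def hazard_quotient(exposure, rfd):
    return exposure / rfd

# Hypothetical chronic daily intake of 0.5 mg/kg-day vs. an RfD of 0.25:
hq = hazard_quotient(0.5, 0.25)
print(hq)       # 2.0 -> exceeds 1, so non-cancer effects cannot be ruled out
print(hq <= 1)  # False
```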

5
Q

Q5. When combining multiple non-carcinogenic chemicals or exposure pathways that affect the same target organ or system, risk assessors often use a Hazard Index (HI). What does an HI represent?
A. The sum of hazard quotients for all relevant exposures, indicating the overall potential for non-cancer harm to a target organ/system
B. A probabilistic estimate of cancer risk from multiple chemicals
C. The product of all individual hazard quotients, used for synergistic effects
D. A qualitative ranking of hazard from 0 (no hazard) to 10 (extreme hazard) used in Superfund decisions

A

Correct Answer: A
Explanation: The Hazard Index (HI) is used to evaluate combined non-cancer hazard from multiple substances or exposure pathways. It is calculated as the sum of the hazard quotients for those chemicals that affect the same target organ or organ system. For example, if three chemicals each have HQs of 0.3 (and they all affect the liver), the HI for liver effects would be 0.3+0.3+0.3 = 0.9. An HI > 1 indicates that, collectively, the exposures may exceed safe levels for that organ system (potential concern for adverse effects). Option B is incorrect because HI is not used for cancer (cancer risks are summed as probabilities, not via HQ/HI). Option C is wrong – we add HQs (assuming additivity of effect), not multiply them. Option D is also incorrect – HI is a calculated value (which can exceed 10 or be fractional) and not a qualitative 0–10 ranking. Thus, A accurately describes HI’s definition and purpose.
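The liver example above, as a short sketch (the chemical names and HQ values are hypothetical):

```python
# Illustrative only: HI = sum of HQs for chemicals sharing a target organ.

liver_hqs = {"chem_a": 0.3, "chem_b": 0.3, "chem_c": 0.3}
hi_liver = sum(liver_hqs.values())
print(round(hi_liver, 1))  # 0.9 -> HI < 1, no combined exceedance indicated
```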

6
Q

Q6. The EPA’s Guidelines for Carcinogen Risk Assessment (2005) introduced standardized weight-of-evidence descriptors. Which of the following is NOT one of the five standard cancer hazard descriptors recommended in the 2005 EPA guidelines?
A. Carcinogenic to Humans
B. Likely to Be Carcinogenic to Humans
C. Possibly Carcinogenic to Humans
D. Suggestive Evidence of Carcinogenic Potential

A

Correct Answer: C
Explanation: The five standard descriptors from the EPA 2005 cancer guidelines are: “Carcinogenic to Humans,” “Likely to Be Carcinogenic to Humans,” “Suggestive Evidence of Carcinogenic Potential,” “Inadequate Information to Assess Carcinogenic Potential,” and “Not Likely to Be Carcinogenic to Humans.” The term “Possibly Carcinogenic to Humans” (Option C) is not one of the 2005 descriptors – it was part of the older 1986 classification system (Group C was “Possible Human Carcinogen”). The 2005 guidelines replaced those letter categories with the new descriptor phrases. Therefore, option C is the correct choice: it is the only option that is not a 2005 descriptor. Options A, B, and D are all among the current standard descriptors.

7
Q

Q7. According to EPA’s cancer risk assessment guidelines, what is the default approach for low-dose extrapolation for a chemical that is a known mutagenic carcinogen with no identifiable threshold?
A. Use a linear extrapolation from the point of departure (e.g., draw a straight line from the POD to zero dose/zero risk) because even low doses are assumed to pose some risk
B. Assume a threshold and use a reference dose approach, because even mutagens have safe doses at low levels
C. Skip dose-response assessment and just describe the hazard qualitatively
D. Apply a safety factor of 10 to the tumor dose instead of extrapolating, to account for uncertainty at low doses

A

Correct Answer: A
Explanation: For carcinogens that act via a presumed non-threshold mechanism (such as direct DNA mutagens), EPA’s default is to use linear extrapolation at low doses. This means starting from a point of departure (often the LED10 or a benchmark dose lower confidence limit for, say, 10% tumor incidence) and drawing a straight line to the origin (zero incremental risk at zero dose). The slope of this line gives the cancer slope factor, implying risk is proportional to dose even at low exposures. Option B is contrary to guidance for mutagenic carcinogens – a threshold (non-linear) approach is generally not assumed unless there is strong evidence of a non-linear mode of action. Option C is incorrect because dose-response quantification is a key part of risk assessment; one wouldn’t omit it for known carcinogens. Option D is not how low-dose cancer risk is handled – safety (uncertainty) factors are typically for non-cancer RfDs or used in deriving a reference value for a threshold carcinogen, not a substitute for modeling in linear extrapolation. Thus, A is the appropriate approach per EPA guidelines for a mutagenic carcinogen.
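A hedged sketch of the arithmetic (the LED10 value is hypothetical):

```python
# Illustrative only: linear low-dose extrapolation. The slope factor is
# the benchmark response divided by the POD (a straight line through the
# origin), and low-dose risk is then slope * dose.

def slope_factor(bmr, pod):
    """e.g., BMR = 0.10 extra risk at the LED10 (the POD)."""
    return bmr / pod

sf = slope_factor(0.10, 2.0)   # hypothetical LED10 of 2 mg/kg-day
print(sf)                      # 0.05 per mg/kg-day
print(sf * 0.001)              # risk at a 0.001 mg/kg-day exposure
```

The second print illustrates the "risk proportional to dose" implication: halve the dose, halve the estimated risk.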

8
Q

Q8. Benchmark Dose (BMD) modeling is often preferred over the older NOAEL approach for determining the point of departure in dose-response assessment. Which of the following is an advantage of using a Benchmark Dose instead of a NOAEL?
A. The BMD approach uses all the dose-response data to model a curve, providing a BMDL (lower confidence limit) that is a more statistically robust point of departure than a single NOAEL point
B. A BMD is always higher than the NOAEL, ensuring a more protective assessment
C. The BMD eliminates the need for any uncertainty factors in deriving an RfD
D. BMD modeling can only be applied to cancer endpoints, not non-cancer endpoints

A

Correct Answer: A
Explanation: Benchmark Dose modeling fits a dose-response curve to all the data, thereby utilizing the full range of information rather than relying on one “no effect” dose. It identifies the dose that produces a predefined benchmark response (e.g., 10% extra risk or 10% response rate) and then typically uses the benchmark dose lower confidence limit (BMDL) as the point of departure. This BMDL, being a lower bound on the dose causing the effect, inherently accounts for data variability and provides a more stable basis for extrapolation than a single NOAEL from one dose group. Thus, A is correct: BMD uses all data and gives a statistically informed POD. Option B is not necessarily true; a BMD could be lower, higher, or similar to the NOAEL depending on data – the key is it’s more statistically robust, not that it’s always higher or more conservative. Option C is incorrect because even with a BMDL, uncertainty factors (for interspecies, intraspecies, etc.) are still generally applied to derive RfDs or other reference values. Option D is false – BMD modeling can be applied to any dose-response relationship, including non-cancer endpoints (indeed, it’s commonly used to derive RfDs for non-cancer effects by modeling, say, incidence of a specific toxic effect). Thus, the key advantage of BMD (Answer A) is its use of the full dataset and derivation of a POD with known confidence bounds.
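To make the idea concrete, here is a hedged sketch using an assumed one-hit model P(d) = 1 − exp(−b·d). The fitted parameter is hypothetical; a real assessment would fit several candidate models with EPA’s BMDS software and use the BMDL, not the central estimate:

```python
import math

# Illustrative only: invert an assumed one-hit model to find the dose
# producing a 10% benchmark response (extra risk), i.e., the BMD.

def extra_risk(d, b):
    """Extra risk over background for P(d) = 1 - exp(-b*d), with P(0) = 0."""
    return 1 - math.exp(-b * d)

def bmd(bmr, b):
    """Solve extra_risk(d, b) = bmr for d: BMD = -ln(1 - bmr) / b."""
    return -math.log(1 - bmr) / b

b = 0.02                              # hypothetical fitted slope parameter
d10 = bmd(0.10, b)                    # dose at 10% extra risk
print(round(d10, 2))                  # 5.27 (in the study's dose units)
print(round(extra_risk(d10, b), 3))   # 0.1 -> check: exactly 10% extra risk
```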

9
Q

Q9. In human health risk assessment, what is the distinction between “variability” and “uncertainty”?
A. Variability refers to true differences in a population or environment (heterogeneity), whereas uncertainty refers to lack of knowledge or precision in our measurements or models
B. Variability is always controllable by collecting more data, whereas uncertainty is always uncontrollable
C. They are essentially the same – both terms describe any kind of error or spread in risk estimates
D. Uncertainty only applies to cancer risk, and variability only applies to non-cancer risk

A

Correct Answer: A
Explanation: In risk assessment, variability and uncertainty are distinct concepts. Variability is the natural heterogeneity or diversity in a parameter across time, space, or individuals – for example, people have different body weights, drinking water intake rates, or susceptibilities. This reflects real differences that cannot be reduced by more measurement, only better characterized. Uncertainty, on the other hand, arises from lack of knowledge, limited data, or imprecision in measuring or modeling something. Uncertainty can potentially be reduced with additional research or better data. For instance, uncertainty exists in extrapolating animal data to humans or in estimating an upper percentile of exposure with limited samples. Option A correctly captures these definitions. Option B is backwards – typically, variability is inherent (not fully controllable), whereas some types of uncertainty can be reduced by more data. Option C is wrong because it ignores the important conceptual difference: they are not the same (treating them identically can mislead risk management decisions). Option D is false – both uncertainty and variability apply to all risk assessments (cancer and non-cancer). Understanding which factors are variability vs uncertainty helps in choosing modeling approaches (e.g., probabilistic analyses for variability) and conveys confidence in risk estimates.

10
Q

Q10. What is the Superfund acceptable risk range for lifetime excess cancer risk, as generally cited in EPA’s risk management guidelines (e.g., the National Contingency Plan for site cleanup decisions)?
A. 1 to 10 cancers per 100 people (1–10%)
B. 1×10⁻⁶ to 1×10⁻⁴ (one-in-a-million to one-in-ten-thousand risk)
C. Hazard Index between 0.1 and 1.0
D. Any detectable cancer risk is considered unacceptable under Superfund

A

Correct Answer: B
Explanation: In Superfund site decision-making, EPA typically uses an acceptable risk range of 10⁻⁴ to 10⁻⁶ for lifetime excess cancer risk. This means a calculated risk of one-in-a-million (1×10⁻⁶) is the point of departure for remediation goals (very protective), and up to one-in-ten-thousand (1×10⁻⁴) may be deemed acceptable considering feasibility and site specifics. This risk range is codified in the National Oil and Hazardous Substances Pollution Contingency Plan (NCP) and related guidance. Option A (1–10% risk) is enormously higher and not acceptable; regulators aim for much lower risks. Option C (hazard index) is about non-cancer effects, not how cancer risk acceptability is defined. Option D is incorrect because, while we strive for no added risk, in practice a de minimis risk like 10⁻⁶ is considered essentially negligible; “any detectable risk” is not the criterion – it is the risk range in B that guides decisions. Thus, B correctly states EPA’s acceptable risk range for carcinogens in Superfund.
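The range logic can be sketched as a simple classifier; the labels here are informal shorthand, not regulatory language:

```python
# Illustrative only: compare a lifetime excess cancer risk estimate
# against the Superfund 1e-6 to 1e-4 risk range.

def classify_cancer_risk(risk):
    if risk < 1e-6:
        return "generally negligible"
    elif risk <= 1e-4:
        return "within the risk range (site-specific judgment)"
    return "typically unacceptable"

print(classify_cancer_risk(3e-5))  # within the risk range (site-specific judgment)
print(classify_cancer_risk(2e-4))  # typically unacceptable
```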

11
Q
  1. Which of the following is one of the four primary steps in an EPA baseline human health risk assessment (as described in RAGS Part A)?
    * A. Selecting a remedial action for the site
    * B. Exposure assessment
    * C. Establishing cleanup levels (preliminary remediation goals)
    * D. Conducting an uncertainty analysis of risk estimates
A

Correct Answer: B. Exposure assessment is a key step of the baseline risk assessment process, along with data collection & analysis, toxicity assessment, and risk characterization. (Risk management decisions, like selecting remedies or setting cleanup goals, are not part of the risk assessment itself.)

12
Q
  1. Which of the following is not one of the five standard weight-of-evidence descriptors in EPA’s 2005 Guidelines for Carcinogen Risk Assessment?
    * A. Likely to be Carcinogenic to Humans
    * B. Suggestive Evidence of Carcinogenic Potential
    * C. Possible Human Carcinogen
    * D. Carcinogenic to Humans
A

Correct Answer: C. Possible Human Carcinogen is an outdated term (the old Group C classification) and is not used in the 2005 guidelines. The current standard descriptors are: Carcinogenic to Humans, Likely to be Carcinogenic to Humans, Suggestive Evidence of Carcinogenic Potential, Inadequate Information to Assess Carcinogenic Potential, and Not Likely to Be Carcinogenic to Humans.

13
Q
  1. Data-Derived Extrapolation Factors (DDEFs) are intended to:
    * A. Replace default interspecies and intraspecies uncertainty factors with factors based on chemical-specific toxicokinetic and toxicodynamic data.
    * B. Add extra conservative safety buffers on top of existing uncertainty factors.
    * C. Eliminate the need for extrapolation in risk assessment entirely.
    * D. Address unrelated data gaps (e.g., missing toxicity studies) with arbitrary values.
A

Correct Answer: A. DDEFs are chemical-specific adjustments that use quantitative data to substitute for default 10× factors used for animal-to-human extrapolation and human variability. By using actual toxicokinetic (TK) and toxicodynamic (TD) data, DDEFs make the extrapolation more scientifically grounded, rather than simply applying the default uncertainty factors.

14
Q
  1. Which statement correctly describes the non-cancer hazard quotient (HQ)?
    * A. It represents the probability (chance) of an individual developing an adverse effect.
    * B. It is calculated by multiplying the exposure dose by a reference value.
    * C. It is the ratio of the estimated exposure level to an established reference dose (RfD) or reference concentration (RfC) for that substance.
    * D. An HQ below 1.0 is interpreted as a significant risk of harm.
A

Correct Answer: C. The hazard quotient (HQ) is defined as the exposure dose (or concentration) divided by the reference dose (or reference concentration) for the chemical. It is a unitless ratio that compares exposure to a level considered safe. An HQ of 1.0 indicates exposure equal to the RfD; HQ < 1 suggests the exposure is below the safe level, and HQ > 1 indicates the exposure exceeds the safe level (potential concern). Importantly, an HQ is not a probability of effect.

15
Q
  1. A single mouse study showed a small increase in liver tumors at high doses, but other studies are inconclusive or negative. According to EPA’s cancer guidelines, what weight-of-evidence descriptor is most appropriate for this situation?
    * A. Suggestive Evidence of Carcinogenic Potential
    * B. Likely to be Carcinogenic to Humans
    * C. Carcinogenic to Humans
    * D. Not Likely to Be Carcinogenic to Humans
A

Correct Answer: A. Suggestive Evidence of Carcinogenic Potential is used when one or a few studies show a marginal or significant effect but the evidence is not sufficient to conclude “likely” carcinogenicity. In this scenario (a single study with a tumor increase and otherwise weak or inconsistent data), EPA would typically use the “Suggestive” descriptor. Notably, when evidence is only suggestive, EPA generally does not derive a quantitative cancer risk estimate for the chemical.

16
Q
  1. EPA often conceptually splits the default 10× interspecies uncertainty factor into separate components. These two components are intended to account for differences in:
    * A. Acute versus chronic toxicity.
    * B. Oral versus inhalation exposure routes.
    * C. Laboratory conditions versus environmental conditions.
    * D. Toxicokinetics (TK) and toxicodynamics (TD) between animals and humans.
A

Correct Answer: D. The default 10-fold animal-to-human uncertainty factor is commonly thought of as comprising a ~3.16× factor for toxicokinetic differences and a ~3.16× factor for toxicodynamic differences (3.16 × 3.16 ≈ 10). This subdivision recognizes that species differences in how a chemical is processed (TK) and in how target tissues respond (TD) both contribute to overall interspecies uncertainty.

17
Q
  1. Two chemicals that both target the liver have hazard quotients of 0.4 and 0.7, respectively, for a particular exposure scenario. What is the combined non-cancer hazard index (HI), and what does it imply?
    * A. 0.28, indicating negligible hazard since HI < 1.
    * B. 0.4, since only the larger HQ is considered for risk.
    * C. 1.1, meaning a 110% probability of liver damage.
    * D. 1.1, indicating the combined exposure modestly exceeds the level considered acceptable (HI > 1).
A

Correct Answer: D. The hazard index (HI) is the sum of hazard quotients for multiple substances (assuming those substances affect the same target organ or effect). Here, 0.4 + 0.7 = 1.1. An HI of 1.1 is slightly above 1, suggesting that the combined exposure is slightly above the reference level and thus may pose a concern (it “modestly exceeds” the safe threshold of 1). Note that HI is not a probability – an HI of 1.1 does not mean 110% chance of effect. Rather, HI > 1 signals that the exposure is above the level considered without appreciable risk, warranting further attention.

18
Q
  1. EPA sometimes assigns different weight-of-evidence descriptors for different exposure routes of the same chemical. For example, an agent might be “Carcinogenic to Humans” by the inhalation route but “Not Likely to Be Carcinogenic” by the oral route. What could justify classifying a chemical as Not Likely to Be Carcinogenic to Humans for a particular exposure route?
    * A. EPA policy does not allow dual classifications; this situation would be an error.
    * B. One descriptor applies to animals and another to humans (route is irrelevant).
    * C. Low doses are more potent than high doses for that route (inverse dose-response).
    * D. The agent is not delivered to or does not reach the target tissue by that exposure route (e.g., not absorbed via that route).
A

Correct Answer: D. EPA can determine that a substance is Not Likely to Be Carcinogenic to Humans by a certain route of exposure if the data show no meaningful risk via that route. A common example is when a chemical causes tumors by one route of exposure but is not absorbed or biologically effective by another route. In such cases, EPA’s guidelines allow different descriptors for different routes (e.g., carcinogenic via inhalation, but not likely via oral exposure if oral uptake is negligible). This is not an error – it reflects scientific understanding of route-specific risk.

19
Q
  1. Which of the following correctly distinguishes toxicokinetics (TK) from toxicodynamics (TD) in risk assessment?
    * A. TK refers to a chemical’s effects on the body, while TD refers to the body’s effect on the chemical.
    * B. TK deals with long-term chronic effects; TD deals with short-term acute effects.
    * C. TK describes “what the body does to the chemical” (absorption, distribution, metabolism, excretion), whereas TD describes “what the chemical does to the body” (interaction with biological targets and resulting toxic effects).
    * D. TK occurs in animals; TD occurs in humans (each species has one or the other).
A

Correct Answer: C. Toxicokinetics (TK) is the study of how a chemical is absorbed, distributed, metabolized, and excreted by an organism – essentially the ADME processes that determine internal doses over time. Toxicodynamics (TD) refers to the biological effects of the chemical – for example, how it interacts with receptors or DNA and the cascade of events leading to toxicity. In short, TK is “the body’s handling of the toxin,” while TD is “the toxin’s action on the body.” These concepts apply to both animals and humans (they are not species-specific) and are critical in extrapolating dose-response data.

20
Q
  1. EPA generally considers which range of incremental lifetime cancer risk to be acceptable or tolerable for Superfund site remediation decisions?
    * A. 1 to 10 (unitless)
    * B. 10^-2 to 10^-3 (1 in 100 to 1 in 1,000)
    * C. 10^-8 to 10^-6 (1 in 100,000,000 to 1 in 1,000,000)
    * D. 10^-6 to 10^-4 (one in a million to one in ten thousand)
A

Correct Answer: D. The EPA’s generally acceptable risk range for individual lifetime cancer risk is 1×10^-6 to 1×10^-4. This corresponds to an added risk of between one in a million and one in ten thousand over a lifetime. Risks below 10^-6 are usually considered negligible, while risks above 10^-4 are typically regarded as unacceptable in the Superfund program, absent exceptional circumstances. (For comparison, options A and B are far too high to be acceptable, and option C is overly strict.)

21
Q
  1. Under EPA’s 2005 cancer guidelines, if a chemical’s mode of action is unknown or not understood, what default approach is used for extrapolating cancer risk to low doses?
    * A. Assume a linear dose-response at low doses (no threshold), extrapolating a straight line from the point of departure through the origin.
    * B. Assume a nonlinear (threshold) dose-response at low dose.
    * C. Do not estimate risk at all until mode of action data are available.
    * D. Use a biologically motivated model in all cases (even without mode-of-action information).
A

Correct Answer: A. The EPA’s long-standing default is to assume low-dose linearity for carcinogens when credible mode-of-action information is lacking. In practice, this means drawing a straight line from the observed point of departure (POD) down to zero dose, implying even the smallest dose has some risk. This health-protective default is used unless there is sufficient evidence to support a different (nonlinear) extrapolation. (Choice B would apply only if a threshold mode of action is established; without such evidence, EPA does not assume a safe threshold.)

22
Q
  1. In the context of uncertainty and variability, which statement best differentiates “variability” from “uncertainty” in risk assessment?
    * A. Variability is just another term for uncertainty – they are synonymous.
    * B. Variability refers to lack of knowledge, while uncertainty refers to true differences among individuals.
    * C. Variability refers to real differences in exposure or sensitivity among organisms (e.g. differences between species or among humans), whereas uncertainty refers to lack of knowledge about factors or processes (what we don’t know or have limited data on).
    * D. In risk assessment, only variability is considered; uncertainty is generally ignored.
A

Correct Answer: C. Variability and uncertainty are distinct concepts. Variability is the natural heterogeneity in populations or environments – for example, people vary in body weight, genetics, or exposure levels. Uncertainty refers to limitations in our knowledge – for instance, not knowing the true human response at low doses or having to extrapolate from animals to humans introduces uncertainty. Default uncertainty factors in risk assessment are meant to account for both variability (e.g. human variability, species differences) and uncertainty (e.g. gaps in data). DDEFs attempt to reduce uncertainty by using data to better characterize variability.

23
Q
  1. Which of the following correctly contrasts cancer risk estimates with non-cancer hazard metrics in EPA risk characterization?
    * A. Both cancer risk and hazard index are interpreted as probabilities of harm.
    * B. Cancer risk is an estimated probability of an individual developing cancer (e.g., 1×10^-6 means a one-in-a-million chance), whereas a hazard index is a unitless ratio (exposure/reference) indicating non-cancer safety margin.
    * C. Cancer risk is a ratio of exposure to toxicity, while hazard index is the product of dose and potency.
    * D. Cancer risk is typically considered “acceptable” if below 1.0, whereas hazard index is acceptable if below 10^-6.
A

Correct Answer: B. Cancer risk is expressed as a probability (or chance) of an individual developing cancer over a lifetime – for example, 1×10^-5 corresponds to a one in 100,000 chance. In contrast, non-cancer hazards are expressed via the hazard quotient or hazard index, which is a ratio of exposure to a reference dose/concentration (HQ or summed HI). An HQ/HI < 1 suggests the exposure is below the level expected to cause harm (acceptable), whereas an HQ/HI > 1 suggests potential risk. These are fundamentally different metrics: risk is a probability (unitless but often written in scientific notation), whereas hazard index is also unitless but indicates a threshold exceedance rather than a probability.

24
Q
  1. Which scenario would most likely lead to the descriptor “Carcinogenic to Humans” under EPA’s weight-of-evidence guidelines?
    * A. Multiple, high-quality epidemiological studies in humans show a causal association between the chemical and cancer.
    * B. No human data, but two animal studies show tumors (with no other evidence).
    * C. A single animal study shows a marginal increase in tumors, others show no effect.
    * D. Conflicting human studies, but clear positive results in a couple of rodent studies.
A

Correct Answer: A. Carcinogenic to Humans is reserved for situations with strong evidence of human carcinogenicity. Typically this means there are epidemiological studies demonstrating a causal relationship between exposure and cancer in humans, or an extremely robust combination of human and animal evidence. Options B and D (strong animal evidence without conclusive human data, or conflicting human data) more fittingly correspond to “Likely to be Carcinogenic to Humans.” Option C (a single marginal study) would fall under “Suggestive…”. Thus, convincing human data (and/or a combination of human, animal, and mechanistic evidence that is unequivocal) is needed for the Carcinogenic to Humans descriptor.

25
15. If chemical-specific data are available to replace the toxicokinetic part of an interspecies extrapolation but not the toxicodynamic part, how does EPA’s DDEF guidance recommend proceeding?
    * A. Only use DDEF if both TK and TD data are available; otherwise stick to defaults.
    * B. Apply the data-derived factor and set all other uncertainty factors to 1.
    * C. Ignore the partial data and use the full default 10× interspecies factor.
    * D. Use a data-derived factor for the TK component and retain an appropriate default factor for the TD component (e.g., √10 ≈ 3.16 for TD).
Correct Answer: D. EPA’s guidance allows for partial use of data in extrapolation. If you have good data on toxicokinetics (TK) but not on toxicodynamics (TD), you can derive a data-informed factor for the TK differences, and still use a default factor for TD differences. In practice, for an oral exposure, one might use a chemical-specific adjustment (for example, based on PBPK modeling or comparative pharmacokinetics) in place of the default TK portion, while using the default ~3.16× for TD. You would not ignore valid data (C) nor insist that you must have both kinds of data to use any (A). The goal is to incorporate available data to the extent possible, rather than all-or-nothing.
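This partial-replacement logic can be sketched numerically. A minimal sketch (the 1.5× TK value is hypothetical; √10 ≈ 3.16 is the conventional default half-factor for each component):

```python
import math

DEFAULT_HALF_FACTOR = math.sqrt(10)  # ~3.16, the default TK or TD half of the 10x UF

def interspecies_uf(ddef_tk=None, ddef_td=None):
    """Combine TK and TD components of the interspecies factor;
    any component lacking chemical-specific data falls back to the default."""
    tk = ddef_tk if ddef_tk is not None else DEFAULT_HALF_FACTOR
    td = ddef_td if ddef_td is not None else DEFAULT_HALF_FACTOR
    return tk * td

uf_all_default = interspecies_uf()         # ~10: no chemical-specific data
uf_partial = interspecies_uf(ddef_tk=1.5)  # ~4.7: TK data used, default TD retained
```

With TK data in hand, the composite factor drops from ~10 to ~4.7 rather than all the way to 1.5 – the TD default stays in place.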
26
16. How is a reference dose (RfD) best described?
    * A. An estimate of a daily exposure level to the human population (including sensitive subgroups) that is likely to be without appreciable risk of adverse effects over a lifetime.
    * B. The dose that causes a predefined effect in 50% of test animals (median toxic dose).
    * C. A regulatory limit above which effects are certain to occur.
    * D. The dose at which no humans will ever experience any effect, guaranteed.
Correct Answer: A. An RfD (reference dose) is an estimate of a continuous daily exposure that is likely to be without appreciable risk of deleterious effects during a lifetime, for even sensitive people. It is typically derived from experimental data (e.g., a NOAEL or BMDL) with uncertainty factors applied. It is not a sharp threshold between safe and harmful (and it’s certainly not an effect-causing dose like LD50). Essentially, RfD is a benchmark of “acceptable” chronic exposure; staying at or below the RfD is expected to be safe, while exposures above the RfD progressively reduce the margin of safety.
27
17. Under what circumstance will EPA derive a reference dose (RfD) or reference concentration (RfC) for a carcinogenic chemical?
    * A. For all carcinogens, in addition to a cancer slope factor.
    * B. When the chemical’s mode of action indicates a nonlinear (threshold) carcinogenic process, so a safe dose exists (then a threshold approach is used instead of a low-dose linear extrapolation).
    * C. Only if the calculated cancer risk at environmental levels is below 1×10^-6.
    * D. Never – EPA always uses only slope factors for carcinogens.
Correct Answer: B. If a carcinogen is believed to act via a nonlinear, threshold mode of action – meaning there is some dose below which it does not induce tumors – EPA may treat it similarly to a non-carcinogen by deriving an RfD or RfC (using the POD with uncertainty factors). In other words, instead of extrapolating linearly to zero risk, they assume no risk below a certain dose and establish that dose (with safety factors) as an RfD. This is done, for example, for chemicals like chloroform that cause cancer only above a toxicity threshold. By contrast, for carcinogens with no known threshold (e.g., mutagens), EPA uses a slope factor and does not provide an RfD for cancer effects. (Thus, options A and D are incorrect generalizations.)
28
18. For intraspecies (within human) variability, which type of data could justify using a data-derived factor smaller than the default 10-fold factor?
    * A. A high acute toxicity (low LD₅₀) in animal studies (since humans might be assumed equally sensitive).
    * B. Measured variability in human susceptibility, such as a known distribution of metabolic rates or genetic differences showing that sensitive individuals are at most, say, 3-fold more sensitive than average (allowing a factor less than 10).
    * C. An absence of cancer in a small human study (which addresses only cancer, not overall variability).
    * D. None – the intraspecies UF is always 10 by default regardless of new data.
Correct Answer: B. The default 10× factor for human variability (UFH) can be reduced if there are robust data characterizing how much more sensitive the most vulnerable people are compared to the general population. For instance, if human pharmacokinetic data (or genetic polymorphism data) indicate that the 90th or 95th percentile individual is only about 3-fold different from the median, one might use a factor near 3 instead of 10. Essentially, reliable data on human variability in either TK or TD can justify a smaller UFH. (Option A is about animal data and doesn’t directly quantify human variability; option C is about carcinogenicity outcome, not the magnitude of human variability in response.)
29
19. Which of the following is not typically included as part of the risk characterization step in a Superfund human health risk assessment?
    * A. Discussion of the major uncertainties and assumptions in the assessment and their potential impact on results.
    * B. Identification of the key risk drivers (contaminants, pathways) and comparison of estimated risks to regulatory benchmarks (e.g., acceptable risk range).
    * C. Summary of the quantitative risk estimates (e.g., cancer risk probabilities, hazard indices) for the exposure scenarios evaluated.
    * D. Recommendation of specific remedial actions to reduce the calculated risks to acceptable levels.
Correct Answer: D. Risk characterization in the context of risk assessment involves synthesizing the findings – presenting the risk estimates and discussing their significance, including uncertainties, assumptions, and the contributors to risk. It does not include deciding or recommending how to manage those risks (that is part of risk management, which comes later). For example, the risk characterization will document the magnitude of risk (e.g., cancer risk of 2×10^-4, HI of 3) and whether additional action may be needed, but it will not itself select a remedy or specify how to reduce the risk. Option D (choosing remedial measures) is a risk management function, not part of the risk assessment characterization.
30
20. For carcinogens with a mutagenic mode of action, what adjustment does EPA’s Supplemental Guidance recommend for cancer risk estimates from early-life exposures?
    * A. No special adjustment; treat all ages the same for all carcinogens.
    * B. Apply a universal 10× factor to all exposures regardless of age.
    * C. Use a 3× factor for infants and a 10× for older children (switching the usual order).
    * D. Apply age-dependent adjustment factors (ADAFs): a 10-fold adjustment for exposures from birth to <2 years old, a 3-fold adjustment for ages 2 to <16, and no adjustment for exposures at 16 and above.
Correct Answer: D. For chemicals that act via a mutagenic mode of action (direct DNA mutagenicity) and lack chemical-specific age susceptibility data, EPA recommends applying ADAFs to account for increased early-life susceptibility. Specifically, exposures during the first 2 years of life are weighted 10 times higher, exposures from 2 to <16 years are weighted 3 times higher, and exposures at 16+ years use no additional factor. These factors reflect evidence that young children can be significantly more sensitive to mutagen-induced carcinogenesis. The combined effect is that the same exposure in childhood contributes more to lifetime risk than an equivalent exposure in adulthood. (Options A and B are incorrect; EPA does treat early-life exposures differently for mutagens. Option C has the factors reversed.)
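The ADAF weighting can be illustrated with a small calculation (a sketch assuming a constant daily dose over a 70-year lifetime; the dose and slope factor values are hypothetical):

```python
def adaf(age_years):
    """Default age-dependent adjustment factors for mutagenic-MOA carcinogens."""
    if age_years < 2:
        return 10.0
    if age_years < 16:
        return 3.0
    return 1.0

def adjusted_lifetime_risk(daily_dose, csf, lifetime_years=70):
    """Sum ADAF-weighted annual risk increments across a lifetime of exposure."""
    return sum(adaf(age) * daily_dose * csf / lifetime_years
               for age in range(lifetime_years))

risk = adjusted_lifetime_risk(daily_dose=1e-4, csf=0.5)
```

The weighted years sum to 2×10 + 14×3 + 54×1 = 116, so a constant lifetime exposure carries about 116/70 ≈ 1.66 times the unadjusted risk.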
31
21. For cross-species scaling of oral doses, what default method does EPA recommend to derive a human-equivalent dose from animal data?
    * A. Scale the dose by body weight to the 3/4 power (BW^¾), an allometric adjustment to account for species metabolic rate differences.
    * B. Use a direct 1:1 conversion on a mg/kg basis (no adjustment for body size).
    * C. Adjust by body surface area (body weight^⅔).
    * D. Assume humans require the same absolute dose as the test animal.
Correct Answer: A. EPA’s default interspecies dosimetric adjustment for oral systemic doses is to scale doses according to body weight^¾. Because toxicologically equivalent absolute doses (mg/day) are assumed to scale as BW^¾, the equivalent human dose on a mg/kg-day basis is the animal dose multiplied by (BodyWeight_animal/BodyWeight_human)^¼. This approach is based on well-established allometric relationships showing metabolic rate scales roughly to the 3/4 power of body mass. Using BW^¾ scaling is more scientifically justified than using body weight directly (BW^1) or surface area (BW^⅔) for many systemic toxicants, and it brings animal doses to a human-equivalent basis for risk assessment. (Choices B, C, and D are not current EPA defaults for oral chronic dosing.)
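On a mg/kg-day basis, BW^¾ scaling works out to multiplying the animal dose by (BW_animal/BW_human)^¼. A sketch with illustrative body weights (0.25 kg rat, 70 kg human):

```python
def human_equivalent_dose(animal_dose, bw_animal_kg, bw_human_kg=70.0):
    """Oral HED (mg/kg-day) under BW^(3/4) allometric scaling:
    HED = animal_dose * (BW_animal / BW_human)^(1/4)."""
    return animal_dose * (bw_animal_kg / bw_human_kg) ** 0.25

hed = human_equivalent_dose(10.0, bw_animal_kg=0.25)  # ~2.4 mg/kg-day
```

A 10 mg/kg-day rat dose corresponds to roughly a quarter of that on a human mg/kg-day basis, because the smaller animal clears the chemical faster relative to its size.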
32
22. In Superfund human health risk assessments, why is the 95% upper confidence limit (UCL) of the mean often used as the exposure point concentration (EPC) for chronic exposure calculations instead of the simple mean or maximum detected concentration?
    * A. Because the 95% UCL is always lower than the arithmetic mean.
    * B. Because regulations specifically forbid using arithmetic means.
    * C. Because using the maximum concentration is not conservative enough.
    * D. Because the 95% UCL provides a statistically robust, health-protective estimate of the average concentration—ensuring with 95% confidence that the true mean is not underestimated.
Correct Answer: D. The 95% UCL of the mean is used to estimate the reasonable maximum exposure concentration. This statistic accounts for sampling uncertainty and ensures that there is only a 5% chance that the true average concentration exceeds the value chosen. In practical terms, using the 95% UCL means the EPC is a conservative (upper-end) estimate of the average exposure concentration. It’s more realistic than the maximum (which could be a one-time hot-spot reading) yet still protective compared to the plain mean (which might understate chronic exposure if data are limited). The goal is to avoid underestimating the mean concentration a person contacts over time. (Option A is incorrect—the 95% UCL is usually higher than the sample mean. Option C is also off; using the max is typically overly conservative, not insufficiently so.)
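As one simple illustration, here is the normal-theory one-sided 95% UCL (mean + t·s/√n). EPA’s ProUCL software chooses among several UCL formulas depending on the data distribution, so treat this as the textbook case only; the concentrations and the t-value (1.833, the one-tailed 95% Student-t for 9 degrees of freedom) are illustrative.

```python
import statistics

def ucl95_normal(samples, t_crit):
    """One-sided 95% UCL on the mean, assuming approximate normality:
    UCL = mean + t_crit * stdev / sqrt(n)."""
    n = len(samples)
    return statistics.mean(samples) + t_crit * statistics.stdev(samples) / n ** 0.5

conc = [1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.4, 0.7, 1.8, 1.6]  # hypothetical mg/L
epc = ucl95_normal(conc, t_crit=1.833)  # ~1.94, vs. a sample mean of 1.5
```

The EPC lands above the sample mean but well below the maximum detection (3.0), which is the intended middle ground.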
33
23. A chemical caused statistically significant tumor increases in two well-conducted rodent studies (different species), but there are no human data. Under EPA’s 2005 guidelines, which descriptor likely applies?
    * A. Carcinogenic to Humans
    * B. Likely to Be Carcinogenic to Humans
    * C. Suggestive Evidence of Carcinogenic Potential
    * D. Inadequate Information to Assess Carcinogenic Potential
Correct Answer: B. When there is sufficient evidence of carcinogenicity in animals (e.g., tumors in two species or two independent studies) and little or no human data, EPA typically uses Likely to be Carcinogenic to Humans. “Likely” indicates a strong presumption of human carcinogenic potential based on substantial animal evidence. It does not require human data if the animal evidence is compelling. (By contrast, Carcinogenic to Humans usually requires human evidence; Suggestive would be used if the animal evidence was weaker or only one study; Inadequate would be if even the animal data were insufficient or conflicting.)
34
24. Data show that humans are more sensitive to a certain chemical’s toxic effects than the test animals were – for instance, the human equivalent dose needed to cause the effect is half the animal dose. How might this affect the interspecies uncertainty factor in the risk assessment?
    * A. It wouldn’t – EPA’s interspecies factor is always 10 regardless of data.
    * B. It suggests using a larger uncertainty factor than 10 (to be more protective), since the default 10× might not be enough to cover humans’ greater sensitivity.
    * C. It allows using a smaller factor than 10, because animals were less sensitive.
    * D. It allows dropping the factor entirely.
Correct Answer: B. If evidence shows humans are more sensitive than the test animal, the risk assessor might need an even higher factor than the default 10 to protect humans. The default 10× for interspecies assumes animals and humans could differ by about one order of magnitude on average. But if data indicate, say, humans get the same internal dose at half the external dose of animals (or humans show effects at lower doses), then the default may underestimate risk. In such cases, EPA could choose a larger adjustment factor to adequately protect humans. (Conversely, if animals are more sensitive, one might use a smaller factor or none at all for interspecies – but that’s the opposite scenario.) The key is that DDEFs can go either direction: they may be <10 or >10 depending on what the data show. It’s not fixed at 10 in the face of clear evidence.
35
25. Why does EPA often present both a Reasonable Maximum Exposure (RME) scenario and a central tendency (average) exposure scenario in risk assessments?
    * A. To provide a range of risk estimates – an upper-bound estimate (RME) that is health-protective and a more typical estimate (central tendency) – thereby informing risk managers about both high-end and average risk levels.
    * B. EPA is undecided on which approach is better, so it includes both by default.
    * C. The law requires two estimates for every risk calculation.
    * D. Central tendency is included only to show that the RME is always higher.
Correct Answer: A. EPA’s guidance recommends including an RME scenario (intended to represent a high-end yet reasonable exposure – roughly the 90th to 95th percentile) and an average or central tendency scenario. This dual presentation gives risk managers context: the RME risk estimate shows a conservative “worst-case” individual risk, while the central tendency shows a more typical risk. Both are useful – the RME is used for making protective decisions (it’s the basis for most risk conclusions), and the average exposure estimate provides perspective on what a typical person’s risk might be. This practice is about transparency and understanding variability, not an indication of indecision or legal mandate per se.
36
26. If no adequate data exist on a chemical’s carcinogenicity (no human or relevant animal studies), which weight-of-evidence descriptor applies under the 2005 guidelines?
    * A. Likely to be Carcinogenic to Humans
    * B. Inadequate Information to Assess Carcinogenic Potential
    * C. Not Likely to be Carcinogenic to Humans
    * D. Suggestive Evidence of Carcinogenic Potential
Correct Answer: B. Inadequate Information to Assess Carcinogenic Potential is used when available data are insufficient for an informed evaluation of carcinogenicity. This typically means there’s a lack of studies or the studies available are not useful (e.g., major flaws or only irrelevant data). In such cases, EPA cannot conclude anything about carcinogenic hazard. It’s essentially a placeholder indicating “we don’t know”. (By contrast, “Not Likely” implies data show an absence of carcinogenic effect; “Suggestive” implies some hint of effect; “Likely” requires substantial evidence. None of those fit a data gap scenario.)
37
27. The concept of Data-Derived Extrapolation Factors (DDEFs) in EPA’s 2014 guidance is essentially equivalent to which term used by the World Health Organization (IPCS)?
    * A. Uncertainty Factors (UFs)
    * B. Benchmark Dose Adjustments (BDA)
    * C. Default Safety Factors
    * D. Chemical-Specific Adjustment Factors (CSAFs)
Correct Answer: D. The IPCS (International Programme on Chemical Safety) uses the term Chemical-Specific Adjustment Factors (CSAFs) for essentially the same idea as EPA’s DDEFs. Both involve replacing default uncertainty factors with data-derived values for interspecies and intraspecies differences when chemical-specific information is available. In other words, a CSAF for kinetic or dynamic differences is the same concept as a DDEF for TK or TD. (UFs or “safety factors” are the generic defaults; a DDEF/CSAF is a refined value that plugs into those slots based on data.)
38
28. Why do risk assessors sometimes calculate a hazard index specifically for a target organ or system (e.g., a “liver HI” for multiple hepatotoxins) instead of just a total hazard index for all chemicals?
    * A. EPA regulations require organ-specific hazard indices exclusively.
    * B. It usually makes the HI exactly equal to 1.0.
    * C. Because combining hazard quotients for chemicals that affect different organs can be misleading – summing only those that affect the same organ/system provides a more meaningful indicator of potential risk to that organ.
    * D. To avoid having a total HI ever exceed 1 (organ-specific grouping mathematically lowers the index).
Correct Answer: C. The hazard index is often more informative when calculated for chemicals that share a common target organ or critical effect. Adding HQs for disparate effects (say, neurotoxicity and liver toxicity) into one lump sum could overstate risk, because reaching an HI > 1 in that case doesn’t mean a single organ is overexposed – it’s mixing apples and oranges. Therefore, risk assessors will sometimes group chemicals by the organ they affect (or by effect) and report separate HIs (e.g., “liver HI,” “kidney HI”). This approach, recommended by EPA and others, acknowledges that an HI > 1 is most relevant when the effects are additive on the same organ or endpoint. (It’s true that an organ-specific HI will never exceed the total HI, but the reason is scientific clarity, not to force the value lower.)
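A toy calculation shows why the grouping matters (all HQ values hypothetical):

```python
def hazard_index(hqs):
    """Hazard index: the sum of the individual hazard quotients."""
    return sum(hqs)

liver_hqs = [0.4, 0.5]  # two hepatotoxins
kidney_hqs = [0.3]      # one nephrotoxin

total_hi = hazard_index(liver_hqs + kidney_hqs)  # 1.2 - looks like a concern
liver_hi = hazard_index(liver_hqs)               # 0.9 - below 1 for the liver
kidney_hi = hazard_index(kidney_hqs)             # 0.3 - below 1 for the kidney
```

The total HI exceeds 1 only because effects on different organs were lumped together; each organ-specific HI stays below 1.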
39
29. In EPA’s cancer guidelines, a “nonlinear” dose-response approach refers to what assumption?
    * A. Any dose-response curve that isn’t a straight line, even if it extrapolates to some risk at zero dose.
    * B. A model that assumes a threshold or zero slope at dose zero – i.e., there is some dose below which no increase in cancer risk is expected.
    * C. A quadratic or higher-order polynomial fit to the dose-response data.
    * D. Extrapolating risk in a curved line instead of a straight line, but still assuming no dose is risk-free.
Correct Answer: B. In the context of EPA’s cancer risk guidelines, “nonlinear” specifically means not linear at low dose, implying a threshold. In other words, at sufficiently low doses the extrapolated risk is essentially zero (the dose-response curve’s slope is zero at the origin). This is a narrower definition than just any curved model – it denotes the existence of a dose level with no response. By contrast, a “low-dose linear” model has a non-zero slope at dose zero (no threshold). So, EPA uses “nonlinear” to indicate a threshold model in cancer assessment. (Choice D describes a curved model that still assigns some risk at every nonzero dose – because it assumes no risk-free dose, EPA would still treat it as low-dose linear, not nonlinear.)
40
30. All of the following are examples of toxicokinetic data that could be used in developing a DDEF, except:
    * A. Blood or plasma concentration–time profiles of the chemical in animals and humans.
    * B. Enzyme metabolic rates (e.g., liver microsomal clearance) measured for animals vs. humans.
    * C. Tumor incidence data from a chronic rodent bioassay.
    * D. Urinary excretion fractions of the chemical or its metabolites in animals vs. humans.
Correct Answer: C. Toxicokinetic data pertain to the absorption, distribution, metabolism, and excretion of a substance (how the body handles the chemical). Options A, B, and D are all TK data: they involve blood concentration curves, metabolic rates, and excretion – all relevant for comparing internal doses across species. Tumor incidence data (option C), however, are toxicodynamic outcomes (an endpoint of toxicity) and are not themselves TK data. While tumor data are critical for dose-response assessment, they don’t inform TK differences like how fast an organism clears the chemical or what internal dose is achieved. Instead, tumor incidence would be used to determine a point of departure on the dose-response curve, but not to adjust interspecies or intraspecies uncertainty factors directly.
41
31. Which of the following factors is not typically used in calculating a chronic daily exposure dose (such as the CDI – Chronic Daily Intake) in EPA’s exposure assessment equations?
    * A. Exposure frequency (days per year) and exposure duration (years).
    * B. Contaminant concentration in the medium (e.g., soil or water).
    * C. Toxicity reference value (e.g., Reference Dose).
    * D. Body weight and averaging time.
Correct Answer: C. The exposure dose calculation (for instance, CDI – Chronic Daily Intake in mg/kg-day) involves factors like contaminant concentration (C), contact rate (ingestion rate, inhalation volume, etc.), exposure frequency and duration, body weight, and averaging time. Toxicity reference values (RfDs, slope factors, etc.) are not part of the exposure calculation – rather, they are used afterward in risk characterization (to compute HQs or risk). In other words, you first calculate the dose an individual receives (using exposure parameters) and then compare that dose to toxicity criteria. Options A, B, and D are all fundamental exposure variables. The Reference Dose comes into play later, when calculating a hazard quotient = Dose / RfD, but it’s not a factor in the dose equation itself.
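The standard intake equation can be written out directly (a sketch; the tap-water inputs below are illustrative, not EPA defaults):

```python
def chronic_daily_intake(c, ir, ef, ed, bw, at):
    """CDI (mg/kg-day) = (C * IR * EF * ED) / (BW * AT), where c is the
    concentration (mg/L), ir the intake rate (L/day), ef the exposure
    frequency (days/year), ed the exposure duration (years), bw the body
    weight (kg), and at the averaging time (days)."""
    return (c * ir * ef * ed) / (bw * at)

# Hypothetical residential tap-water scenario, non-cancer averaging time
cdi = chronic_daily_intake(c=0.005, ir=2.0, ef=350, ed=30, bw=70.0, at=30 * 365)
```

Note that no RfD appears anywhere in the equation; the RfD enters only afterward, as HQ = CDI / RfD.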
42
32. If a carcinogen’s mode of action is uncertain, but there are hints it might operate via a threshold mechanism as well as a linear mechanism, how might EPA address the dose-response extrapolation?
    * A. Default to a linear model only and ignore the possibility of a threshold.
    * B. Use the default linear extrapolation, but consider presenting an alternative nonlinear analysis as well, if supported by some data (to illustrate the range of risk estimates).
    * C. Switch to a purely nonlinear (threshold) approach by default, even without full mode-of-action clarity.
    * D. Average the results of linear and nonlinear extrapolations to get a midpoint risk.
Correct Answer: B. When mode-of-action information is incomplete or suggestive of complexity, EPA’s primary approach is still to apply the default linear extrapolation for the sake of public health protection. However, the guidelines acknowledge that assessors may present an additional analysis using a nonlinear approach if there is some biological support for it. This means they might show, for example, that if one assumed a threshold, the risk would be X, whereas under the default linear it’s Y, thereby providing perspective. The default remains linear unless/until a nonlinear MOA is firmly established (option C would require stronger evidence than “hints”). They wouldn’t average the two (D); instead, they’d typically emphasize the linear result but discuss the plausible alternative.
43
33. Under what condition might EPA reduce the default 10-fold intraspecies (UFH) uncertainty factor to 1 (essentially eliminating it)?
    * A. When the chemical’s toxicity is deemed trivial.
    * B. When human data (clinical or epidemiological) directly demonstrate the threshold for adverse effect even in sensitive subpopulations, leaving little uncertainty in human variability.
    * C. Whenever an animal study is used, since UFH is only for human studies.
    * D. Never – the intraspecies factor is always at least 3 or higher.
Correct Answer: B. The intraspecies UF (UFH) is intended to protect sensitive individuals in the absence of data. If we have robust human data that already include sensitive populations or otherwise pinpoint the dose that causes no harm in even the most sensitive group, then there is essentially no uncertainty left about human variability for that effect. In such cases, EPA can set UFH = 1 (no additional safety factor). For example, if an epidemiological study identified a clear NOAEL in the most sensitive subgroup, you wouldn’t need the usual 10× factor. This is rare, but possible. Normally, UFH = 10 by default (or sometimes 3 if data partially address variability), but it isn’t inviolable – it can be reduced to 1 if justified.
44
34. Why are different averaging times (AT) used when calculating exposures for cancer risk vs. non-cancer hazard?
    * A. No particular reason – it’s arbitrary.
    * B. They actually use the same averaging time for both.
    * C. Because cancer risk is assessed over a lifetime (70 years is typically used as AT for carcinogens), whereas non-cancer effects are assumed not to “accumulate” beyond the exposure duration, so the AT for non-cancer is just the exposure period (not a full lifetime).
    * D. Because cancer risk calculations require more data points for averaging.
Correct Answer: C. For carcinogens, EPA assumes risk is proportional to the lifetime average dose. Therefore, even if exposure lasts only, say, 10 years, the dose is averaged over a 70-year lifetime for risk calculations. This effectively prorates shorter exposures over a lifetime, reflecting the idea that the cancer process can span decades. In contrast, non-cancer effects (assumed to have thresholds) are evaluated over the actual exposure duration – if someone is exposed for 10 years, the dose is averaged over those 10 years (or expressed as a daily dose during that period). The rationale is that once exposure stops, non-cancer effects presumably stop progressing. Thus, AT for cancer = 70 years (approx. 25,550 days), and AT for non-cancer = exposure duration (in days). This difference ensures cancer risk is a lifetime probability, while hazard index reflects the period of exposure.
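The difference is easiest to see side by side (a hypothetical 10-year exposure at 0.002 mg/kg-day):

```python
ED_YEARS = 10
DOSE_WHILE_EXPOSED = 0.002  # mg/kg-day during the exposure period (hypothetical)

AT_NONCANCER = ED_YEARS * 365  # average over the exposure period itself
AT_CANCER = 70 * 365           # average over a 70-year lifetime

dose_noncancer = DOSE_WHILE_EXPOSED * ED_YEARS * 365 / AT_NONCANCER  # unchanged: 0.002
ladd = DOSE_WHILE_EXPOSED * ED_YEARS * 365 / AT_CANCER               # prorated: ~0.00029
```

The non-cancer dose is unchanged because AT equals the exposure period, while the lifetime average daily dose (LADD) used for cancer is prorated by 10/70.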
45
35. What is a cancer slope factor (CSF)?
    * A. The dose that causes tumors in 50% of test animals (median cancer dose).
    * B. The slope of the dose-response curve at high experimental doses.
    * C. A value representing the potency of a carcinogen’s non-cancer effects.
    * D. An upper-bound estimate of the incremental cancer risk per unit dose of a carcinogen (often expressed in units of (mg/kg-day)^−1).
Correct Answer: D. A cancer slope factor is essentially the risk per unit dose – it’s derived from the dose-response data as the slope of the line that extrapolates risk at low dose. It is typically an upper 95% confidence estimate (to be health-protective). For example, with a CSF of 2 (mg/kg-day)^−1, a lifetime exposure of 0.1 mg/kg-day would be estimated to confer about a 0.2 (20%) excess risk; the linear risk = dose × CSF relationship is only meaningful when the product is well below 1, which is why slope factors are applied at much lower environmental doses. The slope factor is usually calculated from the point of departure (like a BMDL) and has units of inverse dose. It’s not tied to 50% response (that’s the LD50/TD50 concept) and is unrelated to non-cancer effects.
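The arithmetic is just a multiplication, with the caveat that the linear form only holds while the product stays well below 1 (the inputs here are hypothetical):

```python
def cancer_risk(ladd, csf):
    """Linear low-dose estimate: risk = LADD (mg/kg-day) * CSF ((mg/kg-day)^-1)."""
    return ladd * csf

risk = cancer_risk(ladd=2e-5, csf=0.5)  # 1e-5, i.e. a one-in-100,000 chance
```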
46
36. Which of the following types of information would generally be least useful for developing a data-derived extrapolation factor (DDEF) for interspecies or intraspecies differences?
    * A. Comparative pharmacokinetic data (e.g., blood AUC or clearance in animals vs. humans).
    * B. Human variability data (e.g., distribution of enzyme activity or receptor sensitivity in the population).
    * C. In vitro studies comparing animal and human target tissue responses to the chemical.
    * D. An acute LD₅₀ toxicity value in rats.
Correct Answer: D. An acute LD₅₀ (lethal dose for 50% of animals) provides relatively little insight for refining chronic extrapolation factors. DDEFs focus on quantifying differences in dose delivered (TK) and response (TD) between species or among humans. Useful data include kinetic comparisons (how quickly a chemical is metabolized or what internal dose results) and dynamic comparisons (how sensitive target sites are), as well as distributions of these in humans. An LD₅₀ is a crude end-point (death) usually from an acute study, and doesn’t inform the specific factors contributing to chronic risk differences. It also doesn’t tell us how humans compare to animals except in a very general sense. Therefore, among the options, the rat LD₅₀ is least applicable to calculating a precise DDEF. The other options (A, B, C) directly address either TK or TD differences and would be valuable if available.
47
37. An estimated excess lifetime cancer risk of 1×10^-5 (1e-5) means:
    * A. Exactly one out of 100,000 exposed people will get cancer, no more no less.
    * B. Approximately a one in 100,000 chance that an individual will develop cancer over a lifetime due to the exposure.
    * C. A hazard index of 1×10^-5.
    * D. The risk is considered de minimis because 10^-5 is below 10%.
Correct Answer: B. A risk of 1×10^-5 is interpreted as a 1 in 100,000 incremental probability of an individual developing cancer over a 70-year lifetime from the specified exposure. In other words, if 100,000 people had that exposure, on average one extra cancer case might occur (statistically) among them. It’s an estimate of probability for an average individual, not a certain outcome for a set number of people (so A is phrased too absolutely). It is not a hazard index (C is mixing concepts). As for acceptability (D), 1×10^-5 is actually within EPA’s risk management range (1×10^-6 to 1×10^-4), often considered acceptable if appropriately justified. But the key understanding is that 1e-5 is a probabilistic risk (0.001% chance for an individual).
48
38. EPA’s approach to cancer risk assessment is intentionally conservative. Which of the following is an example of a default assumption or practice that tends to overestimate (rather than underestimate) true risk?
    * A. Deriving the slope factor from average results of human studies that showed no cancer increase.
    * B. Using an upper-bound cancer potency estimate from data on a very sensitive animal strain.
    * C. Assuming a threshold (safe dose) for all carcinogens by default.
    * D. Using central (mean) estimates of risk rather than upper confidence bounds.
Correct Answer: B. EPA’s risk assessments generally err on the side of health protection. For cancer, one example is that they often derive the slope factor using the most sensitive relevant data – for instance, if multiple animal studies exist, they might pick the one with the strongest response (or use the most sensitive species/strain). That yields a higher (more conservative) slope factor. Additionally, they use upper 95% confidence bounds on risk rather than best estimates. So option B – basing the assessment on sensitive animal data – is a clear instance of a practice that could overestimate actual human risk (since humans might not be as sensitive as the most sensitive lab strain). In contrast, assuming a threshold for all carcinogens (C) would underestimate risk for true non-threshold carcinogens, and using central estimates (D) would be less conservative than using upper-bounds. Option A (using a null human study’s average) would likely underestimate risk if there really is a risk (a null result could mean the study wasn’t sensitive enough, not that risk is zero).
49
39. All of the following could be considered toxicodynamic data for extrapolation purposes, except: * A. In vitro studies comparing how a chemical binds to a target receptor or enzyme in human cells vs. animal cells. * B. Dose-response data for a biochemical effect (e.g., enzyme inhibition) in humans and animals. * C. Histopathological changes in animal organs at a given internal dose (indicative of tissue response). * D. Plasma concentration vs. time profiles of the chemical in humans.
Correct Answer: D. Toxicodynamic (TD) data relate to the interaction of the chemical with the organism’s biological systems and the resulting effects. Options A, B, and C all deal with effects or responses: (A) is about receptor/enzyme binding affinity (how strongly the chemical affects a biological target) in different species, (B) is a direct measure of effect magnitude at different doses in humans vs animals, and (C) is observing tissue-level effects (like cell damage or precancerous changes) at given doses – all of these inform differences in sensitivity or effect (TD differences). Option D, however, is a classic toxicokinetic dataset – plasma concentration over time is about how the chemical moves through the body (ADME), not the effect it has. So (D) is not toxicodynamic; it’s toxicokinetic data. In summary: TD = effect; TK = concentration. Plasma concentration profiles are TK.
50
40. Which equation is used to estimate non-cancer hazard for chronic exposure? * A. Risk = LADD × CSF * B. HQ = Dose (or exposure level) ÷ Reference Dose * C. HQ = Dose × Reference Dose * D. Hazard Index = (Dose)^2 × (Reference Dose)^-1
Correct Answer: B. The hazard quotient (HQ) is calculated as the ratio of the exposure dose to the reference dose: HQ = Exposure ÷ RfD (for inhalation, similarly, concentration ÷ RfC). If multiple substances or pathways are involved, their HQs are summed to get a hazard index (HI). Choice A is the equation for cancer risk (Lifetime Average Daily Dose × Cancer Slope Factor). Choices C and D are incorrect manipulations. So, for example, if an exposure is 0.0005 mg/kg-day and the RfD is 0.001 mg/kg-day, HQ = 0.0005/0.001 = 0.5. An HQ of 0.5 indicates the dose is 50% of the RfD (no significant risk expected). An HQ of 2 would mean the dose is twice the RfD (potential concern).
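The HQ/HI arithmetic from this card, as a minimal sketch (Python; the three HQs summed into the HI are hypothetical):

```python
# Hazard quotient: ratio of exposure dose to reference dose.
def hazard_quotient(dose, rfd):
    """HQ = dose / RfD, both in mg/kg-day (or concentration / RfC)."""
    return dose / rfd

# The card's example: 0.0005 mg/kg-day against an RfD of 0.001 mg/kg-day.
hq = hazard_quotient(0.0005, 0.001)
print(hq)  # 0.5 -> dose is 50% of the RfD

# Multiple chemicals or pathways: sum the HQs into a hazard index (HI).
hqs = [0.5, 0.25, 0.5]   # hypothetical HQs for three chemicals
hi = sum(hqs)
print(hi)  # 1.25 -> HI > 1 flags potential non-cancer concern
```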
51
41. EPA’s cancer slope factors are generally derived as upper-bound estimates on risk for an average individual. Which of the following is true regarding the implication of this for highly susceptible individuals? * A. The slope factor accounts for the most sensitive individuals, so it overestimates risk for everyone else. * B. Highly susceptible individuals (e.g., with genetic vulnerabilities) could experience higher risk than predicted by the slope factor, because the slope factor typically does not specifically cover the extreme end of human sensitivity. * C. The slope factor underestimates risk for the average person. * D. Slope factors are designed to protect the 100th percentile individual explicitly.
Correct Answer: B. EPA slope factors are usually developed to be protective of the general population – they are often an upper 95% confidence limit on the risk for an “average” individual, given the data. This means they are intended to not underestimate the risk for the typical person. However, someone who is highly susceptible (due to genetics, pre-existing conditions, etc.) might have a greater response than average. The slope factor doesn’t explicitly account for the most extreme sensitivity (it’s not a “worst-case individual” estimate; it’s more like an upper-bound for the population average risk). Therefore, option B is correct: a very sensitive person could theoretically have higher risk than the slope-factor-based estimate (which is one reason additional safety factors like ADAFs or UFH may be considered). Option A is not true because slope factors are not explicitly set to cover the absolute worst case (and certainly not every individual). Option D is false – that would require additional adjustments beyond the slope factor itself.
52
42. Which tool or method can be used to improve the toxicokinetic extrapolation from animals to humans (potentially supporting a DDEF for TK)? * A. A physiologically based pharmacokinetic (PBPK) model that predicts human tissue doses from animal data (allowing species-specific dosimetry). * B. Structure–activity relationship (SAR) modeling of carcinogenic potential. * C. The Ames mutagenicity test. * D. A high-dose acute toxicity test in rodents.
Correct Answer: A. PBPK models are a powerful tool for extrapolating across species. They incorporate species-specific physiology and biochemical parameters to predict how a chemical distributes and is eliminated in humans based on animal data. By doing so, they can estimate the human-equivalent dose (HED) corresponding to a given animal dose, or compare internal dose metrics (like blood AUC) between species. This directly informs the TK part of extrapolation and can justify a data-derived factor. Options B, C, and D do not address interspecies kinetic differences: SAR models predict hazard qualitatively, the Ames test checks mutagenicity (mode of action info), and acute tests at high doses don’t inform chronic kinetic scaling. A PBPK model, however, is tailored for quantitative TK extrapolation.
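When no PBPK model is available, EPA’s default for oral dosimetry is allometric body-weight scaling (BW^3/4). A minimal sketch of the resulting human-equivalent dose, with illustrative body weights (this default method is separate from PBPK modeling, which would replace it when data allow):

```python
# Default cross-species oral dose scaling by body weight to the 3/4 power
# (used absent chemical-specific PBPK data). Body weights are illustrative.
def human_equivalent_dose(animal_dose, bw_animal, bw_human=70.0):
    """HED (mg/kg-day) = animal dose x (BW_animal / BW_human)^(1/4)."""
    return animal_dose * (bw_animal / bw_human) ** 0.25

# A 10 mg/kg-day dose in a 0.25 kg rat maps to a smaller per-kg human dose,
# because small animals clear chemicals faster per unit body weight:
hed = human_equivalent_dose(10.0, bw_animal=0.25)
print(round(hed, 2))  # 2.44
```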
53
43. Put the major components of a baseline human health risk assessment in order: 1. Risk Characterization 2. Exposure Assessment 3. Toxicity Assessment 4. Data Collection and Analysis * A. 4 → 2 → 3 → 1 * B. 4 → 3 → 2 → 1 * C. 2 → 3 → 4 → 1 * D. 1 → 2 → 3 → 4
Correct Answer: A. The four main steps (after planning) are: Data Collection and Analysis; Exposure Assessment; Toxicity Assessment; Risk Characterization. In practice: (4) gather and analyze site data (contaminant concentrations, etc.), (2) estimate how people are exposed and how much they take in, (3) compile toxicity values and determine dose-response info (RfDs, slope factors), and finally (1) characterize the risk by combining exposure and toxicity information and discussing uncertainty. So the correct sequence is 4 → 2 → 3 → 1.
54
44. Why is understanding a chemical’s mode of action (MOA) important in EPA’s cancer risk assessment guidelines? * A. Because MOA informs whether low-dose extrapolation should assume a linear no-threshold model or can consider a nonlinear (threshold) approach, which fundamentally affects how risk is estimated. * B. It isn’t important; EPA uses the same approach for all carcinogens. * C. MOA only matters for non-cancer effects, not for cancer. * D. MOA is used to determine the chemical’s half-life in the body.
Correct Answer: A. The mode of action – the sequence of key events by which a chemical causes cancer – is crucial in deciding how to extrapolate to doses below the experimental range. For example, a genotoxic mutagen (direct DNA-damager) is generally assumed to have no safe dose (so a linear model is used). On the other hand, a carcinogen that operates via a threshold mechanism (like causing cell injury that leads to proliferation) might be assessed with a nonlinear approach (possibly even an RfD concept if a threshold can be identified). Thus, MOA drives the choice between linear extrapolation vs. establishing a reference dose for cancer. EPA’s guidelines put heavy emphasis on analyzing MOA. It definitely matters for cancer (option C is wrong), and while MOA considerations can also apply to non-cancer, the question context is clearly cancer guidelines. Option D is mixing it up with TK (half-life is a kinetic concept, not MOA).
55
45. The EPA’s 2014 DDEF guidance specifically focuses on replacing which default uncertainty factors in risk assessment? * A. The database completeness and subchronic-to-chronic factors. * B. The LOAEL-to-NOAEL adjustment factor. * C. The water exposure and food exposure factors. * D. The interspecies (animal-to-human) factor and the intraspecies (within-human) variability factor.
Correct Answer: D. The DDEF guidance is about developing data-derived values for the two areas historically covered by default 10× factors: interspecies differences (UFA) and intraspecies human variability (UFH). These are the factors where TK and TD data can be used to get chemical-specific numbers. The guidance does not address other uncertainty factors like those for incomplete data (database UF) or subchronic-to-chronic or LOAEL-to-NOAEL – those remain separate issues. So the correct answer is the animal-to-human and human-to-human extrapolation factors. (Option B refers to the LOAEL-to-NOAEL UF, which is not the focus of DDEF; A and C are irrelevant here.)
56
46. Which of the following is not required for a complete exposure pathway to exist in a risk assessment? * A. A source of contamination. * B. A transport medium and exposure point (where people contact the contamination). * C. An exposure route (e.g., ingestion, inhalation) and an exposed receptor population. * D. A remediation system in place to address the contamination.
Correct Answer: D. A complete exposure pathway requires: a contaminant source, a mechanism or medium of contaminant transport to an exposure point, a route by which the contaminant can enter the body (ingestion, inhalation, dermal), and the presence of receptors (people) who can be exposed. If all these elements are in place and connected, the pathway is considered complete and can lead to risk. A remediation system (cleanup) is not part of an exposure pathway – in fact, the assumption in baseline risk assessment is usually that no remediation is in place (“no action” scenario). If a pathway is incomplete (missing any of A, B, or C), then exposure (and hence risk via that pathway) will not occur.
57
47. EPA generally does not derive a numerical cancer risk estimate (slope factor) when a chemical is classified as: * A. Carcinogenic to Humans. * B. Likely to be Carcinogenic to Humans. * C. Inadequate Information to Assess Carcinogenic Potential. * D. Suggestive Evidence of Carcinogenic Potential.
Correct Answer: D. When the evidence is in the “Suggestive” category, EPA’s 2005 guidelines state that typically no quantitative risk assessment is conducted. In other words, if the data only suggest a potential for carcinogenicity (e.g., a single small positive finding), EPA generally will not develop a slope factor or unit risk because the evidence is too limited or unreliable for a confident dose-response analysis. For chemicals classified as Carcinogenic or Likely, EPA does quantify risk (slope factors, unit risks are developed). If data are Inadequate, there’s nothing to quantify either – but “Suggestive” is the descriptor that explicitly indicates an observed effect but not enough to quantify risk. Therefore “Suggestive Evidence of Carcinogenic Potential” is the correct answer.
58
48. To derive a data-derived toxicokinetic factor for interspecies extrapolation, which of the following data would be most relevant? * A. Acute toxicity (LD₅₀) values in mice and rats. * B. Histopathology comparison of organ damage in animals vs. humans. * C. Measurements of internal dose metrics (e.g., blood AUC or peak concentration) in both the test animal and humans at equivalent external exposures. * D. In vitro screening assays for mutagenicity.
Correct Answer: C. For a chemical-specific adjustment of the toxicokinetic portion of interspecies differences, you want data that compare how animals and humans process the chemical. The ideal is something like blood concentration-time data or total exposure (AUC) in animals vs. humans. If, for instance, humans have a higher or lower internal dose than animals at the same external dose, that ratio can be used to adjust the interspecies factor. Option C describes exactly such data. Option A (LD₅₀s) and D (mutagenicity assays) don’t provide quantitative info on dosimetry differences. Option B (organ damage comparison) is more of a toxicodynamic outcome; while informative for hazard, it doesn’t directly give a ratio for kinetic extrapolation. So internal dose measurements (TK data) are most useful for TK DDEF.
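A data-derived TK factor from such internal-dose data might be sketched as a simple ratio (hypothetical AUC values; the direction of the ratio shown here is an illustrative assumption – in practice the guidance specifies which species belongs in the numerator for a given dose metric):

```python
# Data-derived interspecies TK factor from internal-dose (AUC) data,
# measured at equivalent external doses. Values are hypothetical.
def ddef_tk(auc_human, auc_animal):
    """Ratio > 1 here means humans reach a higher internal dose than the
    test animal at the same external dose (greater human exposure)."""
    return auc_human / auc_animal

factor = ddef_tk(auc_human=120.0, auc_animal=40.0)  # AUCs in mg*h/L
print(factor)  # 3.0
```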
59
49. The baseline risk assessment evaluates risks under which assumption about site conditions? * A. That cleanup measures have already been implemented (post-remediation scenario). * B. That exposure is occurring only in the present, not the future. * C. That no remedial actions or controls are in place – i.e., it examines the “no action” scenario with current/future exposures as if the site were left untreated. * D. That all possible pathways are active even if they are not plausible.
Correct Answer: C. The baseline human health risk assessment is conducted prior to remediation and considers the site in its existing condition (and reasonably anticipated future conditions) without any cleanup or controls in effect. It essentially asks, “What is the risk if we do nothing and people are exposed?” This provides the basis for determining if remediation is needed. It is not done assuming cleanup is done (A is wrong). It typically considers both current and potential future exposures (so B is wrong; future use like residential use may be included if relevant). Only complete and plausible exposure pathways are evaluated (they don’t force all possible pathways if they’re not applicable, so D is wrong). In summary, baseline risk = no-action scenario risk.
60
50. Which situation would likely lead EPA to classify a chemical as “Not Likely to Be Carcinogenic to Humans”? * A. Robust evidence indicates the chemical is not carcinogenic in humans or animals, or it causes tumors only at doses or by modes of action not relevant to human exposures. * B. One high-dose animal study showed tumors, but human relevance is unclear. * C. Suggestive animal evidence of carcinogenicity is present, but not definitive. * D. The chemical is genotoxic in assays (mutagenic).
Correct Answer: A. The descriptor Not Likely to Be Carcinogenic to Humans is used when the evidence strongly indicates no carcinogenic hazard for humans. This could be because well-conducted studies are negative for carcinogenicity, or because the chemical causes tumors only through a species-specific mechanism or at doses that are not relevant to real-world human exposure. For instance, if a chemical induces tumors in rodents only by overwhelming a detoxification pathway that humans don’t have, or at extremely high doses producing a mode of action not operative in humans, EPA might say “Not Likely” for typical exposures. Options B and C describe limited positive evidence – those would more likely fall under “Suggestive” or “Likely” depending on strength. Option D (genotoxic) would generally push toward assuming risk, not “Not Likely.” The key for “Not Likely” is a robust dataset indicating negligible carcinogenic risk to humans.
61
51. What is the primary goal of implementing data-derived extrapolation factors (DDEFs) in chemical risk assessment? * A. To add additional layers of conservatism beyond default uncertainty factors. * B. To simplify risk assessment by avoiding complex data analysis. * C. To maximize the use of available empirical data and thereby improve the scientific support for risk estimates – in essence, to replace generic default factors with more accurate, chemical-specific values when possible. * D. To ensure all risk assessments use exactly the same factors regardless of new data.
Correct Answer: C. The DDEF approach is fundamentally about using the best available science to inform extrapolations. By employing actual data on toxicokinetics or toxicodynamics, the goal is to make risk assessments more accurate and reduce uncertainty. This aligns with recommendations to move away from rigid defaults when good data exist. In practice, this means potentially lowering an uncertainty factor if data show the default is overly conservative, or confirming a factor if data support it (or even raising it if data suggest greater differences). The focus is on scientific justification, not simply adding more conservatism or ignoring data. Thus, DDEFs improve risk estimates’ credibility and relevance.
62
52. Data-Derived Extrapolation Factors – the term DDEF – stands for: * A. Detailed Data Evaluation Framework. * B. Dose-Derived Extrapolation Function. * C. Developmental Dose Enhancement Factor. * D. Data-Derived Extrapolation Factors (for interspecies and intraspecies extrapolation).
Correct Answer: D. DDEF is an acronym introduced by EPA to refer to Data-Derived Extrapolation Factors. These are chemical-specific factors replacing default uncertainty factors for animal-to-human differences and human variability. In the EPA’s 2014 guidance, this term is used exactly in that context. (It does not mean the other phrases listed.)
63
53. EPA generally considers a non-cancer Hazard Index (HI) at or below 1.0 to indicate what? * A. That even sensitive individuals are unlikely to experience adverse health effects (the exposure is at or below the reference level). * B. That approximately 1% of the population will be affected. * C. That the risk of cancer is one in a million. * D. The point at which remedial action is automatically required.
Correct Answer: A. A Hazard Index ≤ 1.0 suggests that the exposure does not exceed the reference level (RfD/RfC) for the chemicals evaluated. In general, if HI ≤ 1, the exposure is considered within the “safe” range, even for sensitive sub-populations. It implies a low likelihood of non-cancer health effects. (HI is not a probability, so option B is incorrect; HI pertains to non-cancer effects and is unrelated to cancer probabilities like 10^-6, eliminating C. While risk managers use an HI > 1 as a flag for further evaluation, HI ≤ 1 doesn’t automatically trigger action – it usually indicates acceptable non-cancer risk.)