Path7.Mod1.d - Responsible AI Dashboard - Model Performance and Fairness Flashcards

Augmented learning from: https://learn.microsoft.com/en-us/azure/machine-learning/concept-fairness-ml?view=azureml-api-2

1
Q

A QoS

Two types of AI-caused Harms

A

Harm of Allocation: when the system extends or withholds opportunities, resources, or information from certain cohorts.

Harm of Quality-of-Service: when the system doesn’t work as well for one group as it does for another.

Cohort means “subgroup or grouping of people with one or more commonalities”

2
Q

Describe Group Fairness and Sensitive Features

A

Group Fairness is the process of assessing and identifying cohorts that are at risk of experiencing harm.

Sensitive Features are the Features that system designers need to consider when assessing Group Fairness.

Cohort means “subgroup or grouping of people with one or more commonalities”

3
Q

MP SR

  • Two Disparity Metrics classes in the RAI Dashboard used to quantify Fairness
  • Define Selection Rate

Disparity means “Difference”

A
  • Disparity in Model Performance: disparity in the values of the selected metric across cohorts. This could be disparity in Accuracy Rate, Error Rate, Precision, Recall, MAE, etc.
  • Disparity in Selection Rate: the difference in favorable (positive) predictions among cohorts, i.e. how likely the model is to predict a positive for a given cohort. ex. Asians positively identified as “good at math”.
  • Selection Rate: the fraction of data points that are identified/selected/predicted positively by a model.

Cohort means “subgroup or grouping of people with one or more commonalities”
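Selection Rate and its disparity can be sketched in plain Python. This is a minimal illustration with made-up predictions and cohort labels; the Fairlearn library exposes an equivalent `selection_rate` metric that does this for you.

```python
# Sketch: Selection Rate per cohort, and the Disparity in Selection Rate.
# y_pred holds a model's binary predictions; cohort holds each data point's group.
# (Both are invented for illustration.)
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
cohort = ["A", "A", "A", "A", "B", "B", "B", "B"]

def selection_rate(preds):
    """Fraction of data points predicted positively (selected) by the model."""
    return sum(preds) / len(preds)

rates = {}
for group in set(cohort):
    group_preds = [p for p, g in zip(y_pred, cohort) if g == group]
    rates[group] = selection_rate(group_preds)

# Disparity in Selection Rate: gap between the highest and lowest cohort rate.
# Here rates == {"A": 0.75, "B": 0.25}, so disparity == 0.5.
disparity = max(rates.values()) - min(rates.values())
```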

4
Q

The Fairlearn open source project helps you with both quantitative and qualitative assessment of your ML Models (T/F)

A

False. Fairlearn has functionality to compute quantitative metrics. But you still have to do the qualitative assessment (ex. determining which Features are Sensitive)

Quantitative means “anything that can be measured or counted”.
Qualitative means “anything that can be described or observed, but not measured”
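To make the split concrete, here is a from-scratch sketch of the quantitative part with invented data: per-cohort accuracy and its disparity. Fairlearn's `MetricFrame` performs this kind of per-group computation; deciding that `cohort` is the sensitive feature remains the qualitative step.

```python
# Sketch: Disparity in Model Performance (accuracy, here) across cohorts.
# Treating `cohort` as the Sensitive Feature is a qualitative judgment the
# tooling cannot make for you. All data below is invented for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
cohort = ["A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy(truth, preds):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(truth, preds)) / len(truth)

by_group = {}
for group in set(cohort):
    idx = [i for i, g in enumerate(cohort) if g == group]
    by_group[group] = accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx])

# by_group == {"A": 0.75, "B": 0.5}: the model performs worse for cohort B,
# a Quality-of-Service concern. The disparity is 0.25.
disparity = max(by_group.values()) - min(by_group.values())
```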

5
Q

The Fairlearn project can provide strategies to reduce unfairness, but they don’t necessarily eliminate unfairness. (T/F)

A

True. Fairlearn can suggest mitigation strategies, but it is still up to devs to determine if the suggested strategy sufficiently reduces unfairness. As you’ll learn in later decks, there will be instances where you still have to choose the final Model…

6
Q

Define Parity Constraints

A

Constraints or criteria that enforce equality across cohorts.

Parity means “Equality or the state of being equal”

7
Q

Define each Parity Constraint’s Purpose (Mitigate or Diagnose) and its corresponding ML Task (Binary Classification or Regression)
- Demographic Parity
- Equalized Odds (Conditional Demographic Parity)
- Equal Opportunity
- Bounded Group Loss

A

Demographic Parity:
Mitigate allocation harms | Binary Classification, Regression

Equalized Odds, Equal Opportunity:
Diagnose allocation and Quality of Service harms | Binary Classification

Bounded Group Loss:
Mitigate Quality-of-Service harms | Regression
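As a sketch of what diagnosing one of these constraints looks like, the following checks Equalized Odds for binary classification by comparing true-positive and false-positive rates across cohorts. The data is invented for illustration; Fairlearn provides ready-made equalized-odds metrics and mitigations.

```python
# Sketch: diagnosing Equalized Odds — TPR and FPR should be (near) equal
# across cohorts. All data below is made up for illustration.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
cohort = ["A", "A", "A", "A", "B", "B", "B", "B"]

def rates_for(group):
    """True-positive and false-positive rates for one cohort."""
    truth = [t for t, g in zip(y_true, cohort) if g == group]
    preds = [p for p, g in zip(y_pred, cohort) if g == group]
    tpr = sum(p for t, p in zip(truth, preds) if t == 1) / truth.count(1)
    fpr = sum(p for t, p in zip(truth, preds) if t == 0) / truth.count(0)
    return tpr, fpr

tpr_a, fpr_a = rates_for("A")
tpr_b, fpr_b = rates_for("B")

# Equalized Odds holds when both gaps are (near) zero. Here cohort A gets
# TPR 0.5 / FPR 0.5 while cohort B gets TPR 1.0 / FPR 0.0, so the model
# works much better for B — both an allocation and Quality-of-Service signal.
eo_difference = max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
```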
