ECM 1407 Offline Social Issues Flashcards

1
Q

What is a model in the context of computer science?

A
  • A computer model (or simulation) is a system designed to predict the behaviour or outcome of a real-world phenomenon
  • All models are wrong, but some are useful!
2
Q

What must be queried when being introduced to a model?

A

1) How well do the modelers understand the underlying science or theory of the system they are studying?

2) What are the assumptions and simplifications in the model?

3) How closely do the model predictions correspond with results from real experience?
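Question 3 can be checked empirically by comparing predictions against observed outcomes. A minimal sketch, using a hypothetical toy `predict` function as a stand-in for any model under review:

```python
# Minimal sketch: validating a model against real experience (question 3).
# `predict` is a hypothetical stand-in for the model being evaluated.

def predict(x):
    # Toy linear model; a real case would call the model under review.
    return 2 * x + 1

# (input, real-world outcome) pairs gathered from experience.
observed = [(0, 1.2), (1, 2.9), (2, 5.3)]

# Mean absolute error between predictions and reality.
mae = sum(abs(predict(x) - y) for x, y in observed) / len(observed)
print(f"mean absolute error: {mae:.2f}")
```

A small error on data like this builds confidence; a large one signals that the model's assumptions or underlying theory need revisiting.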

3
Q

Use of algorithms in healthcare

A

Prioritization of patients
  • A great idea: if we could optimize the use of resources in a hospital, we could save money!
  • Developing an algorithm to determine which patients need follow-up care
  • Such an algorithm should identify the patients with the greatest medical need
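The prioritization step above can be sketched as ranking patients by a predicted need score and offering follow-up care to the top of the list. The scores and names here are hypothetical; in a real system, how that score is computed is exactly where bias can enter:

```python
# Minimal sketch of patient prioritization, assuming each patient already has
# a (hypothetical) medical-need score; real systems must justify that score.

patients = [
    {"name": "A", "need_score": 0.35},
    {"name": "B", "need_score": 0.90},
    {"name": "C", "need_score": 0.62},
]

# Offer follow-up care to the patients with the greatest predicted need.
capacity = 2
prioritized = sorted(patients, key=lambda p: p["need_score"], reverse=True)
follow_up = [p["name"] for p in prioritized[:capacity]]
print(follow_up)
```

The ranking itself is trivial; the social question is whether `need_score` actually measures medical need rather than some cheaper proxy.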

4
Q

Use of algorithms in the judicial system

A

Predicting recidivism
  • A great idea: if we could optimize the judicial system, we would improve its fairness and save money!
COMPAS
  • Correctional Offender Management Profiling for Alternative Sanctions
Risk assessments: they can be used at every stage of the criminal justice system

5
Q

The limits of predictions

A
  • Uncertainty
  • Phenomenon
  • Data
6
Q

What is explainable AI?

A

Explainable AI consists of a set of tools/frameworks that enable us to understand and interpret the predictions made by models
  • Example: COMPAS, a closed box with 180+ factors predicting recidivism
  • Example: Prototypical structures
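One common model-agnostic explanation idea (not stated in the card, so an assumption here) is to perturb one input feature at a time and measure how much the prediction moves. A minimal sketch, using a toy stand-in model rather than COMPAS, whose 180+ factors are not public:

```python
# Minimal sketch of a model-agnostic explanation: perturb one feature at a
# time and measure how much the prediction changes. The model is a toy
# stand-in; the explainer below treats it as opaque.

def model(features):
    # Toy black box: the explainer never reads these weights.
    return 3.0 * features["age"] + 0.5 * features["priors"]

baseline = {"age": 1.0, "priors": 4.0}
base_pred = model(baseline)

# Sensitivity of the prediction to each feature, probed purely from outside.
sensitivity = {}
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] += 1.0
    sensitivity[name] = model(perturbed) - base_pred

print(sensitivity)  # -> {'age': 3.0, 'priors': 0.5}
```

Even without opening the box, the probe reveals which factors the prediction depends on most, which is the kind of interpretability these tools aim for.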

7
Q

Difference between black box and closed-source

A

Black box: the model is not human-readable; we test outputs against inputs without looking at its inner processes, either as a deliberate testing strategy or because the model is too complex to inspect. Closed source: the model is human-readable, but access to it is limited to a small party.

Large, deep neural networks are black boxes: they consist of many matrices of weights that resist explanation.
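Black-box testing can be sketched as checking a model purely through input/output pairs. The `risk_model` below is a hypothetical opaque callable invented for illustration; the point is that the checks never inspect its internals:

```python
# Minimal sketch of black-box testing: we only check outputs against inputs,
# never inspecting the model's internals. `risk_model` is a hypothetical
# opaque callable standing in for any model under test.

def risk_model(age, priors):
    # Pretend this is compiled/opaque; the checks below never read it.
    return min(10, priors * 2 + (1 if age < 25 else 0))

# Behavioural checks expressed purely as input/output pairs.
cases = [
    ((30, 0), 0),    # no priors, older defendant -> minimal score
    ((20, 0), 1),    # youth alone should shift the score only slightly
    ((40, 6), 10),   # score must stay within the 0-10 scale
]
for (age, priors), expected in cases:
    assert risk_model(age, priors) == expected
print("all black-box checks passed")
```

This is exactly the testing posture the card describes: we can validate behaviour this way, but we learn nothing about why the model produces a given score.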
