Responsible AI Flashcards

1
Q

HIC

A

High-Income Countries

2
Q

LMIC

A

Low- and Middle-Income Countries

3
Q

Artificial Intelligence (AI)

A

The ability of algorithms encoded in technology to learn from data so that they can perform automated tasks without every step in the process having to be programmed explicitly by a human.

4
Q

6 key ethical principles for the use of AI for health

A
  • Protecting human autonomy
  • Promoting human well-being and safety and the public interest
  • Ensuring transparency, explainability and intelligibility
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity
  • Promoting AI that is responsive and sustainable
5
Q

Protecting human autonomy

A

One of the 6 key ethical principles for the use of AI for health that stipulates that:

the use of AI or other computational systems does not undermine human autonomy - i.e., that humans remain in control of health care systems and medical decisions.

providers have the information necessary to make safe, effective use of AI systems and that people understand the role that
such systems play in their care.

there is protection of privacy and confidentiality and obtaining valid informed consent through appropriate legal frameworks for data protection.

6
Q

Promoting human well-being and safety and the public interest

A

One of the 6 key ethical principles for the use of AI for health that stipulates that:

AI should not harm people nor result in mental or physical harm that could be avoided by use of an alternative practice or approach.

7
Q

Ensuring transparency, explainability and intelligibility

A

One of the 6 key ethical principles for the use of AI for health that stipulates that:

AI technologies should be intelligible or understandable to developers, medical professionals, patients, users and regulators.

Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology and that such information facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

AI technologies should be explainable according to the capacity of those to whom they are explained.

8
Q

Fostering responsibility and accountability

A

One of the 6 key ethical principles for the use of AI for health that stipulates that:

AI stakeholders are responsible for ensuring that AI can perform its tasks and that AI is used under appropriate conditions and by appropriately trained people.

Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies. Human warranty requires application of regulatory principles upstream and downstream of the algorithm by establishing points of human supervision.

If something goes wrong with an AI technology, there should be accountability. Appropriate mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

9
Q

Ensuring inclusiveness and equity

A

One of the 6 key ethical principles for the use of AI for health that stipulates that:

AI for health be designed to encourage the widest possible appropriate, equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes. AI technologies should:

be available to meet needs in both HIC and LMIC.

avoid biases to the disadvantage of identifiable groups, especially groups that are already marginalized.

minimize inevitable disparities in power that arise between providers and patients, between policy-makers and people and between companies and governments that create and deploy AI technologies and those that use or rely on them.

be monitored and evaluated to identify disproportionate effects on specific groups of people.

10
Q

Promoting AI that is responsive and sustainable

A

One of the 6 key ethical principles for the use of AI for health that stipulates that:

designers, developers and users continuously, systematically and transparently assess AI applications during actual use, to determine whether AI responds adequately and appropriately to communicated, legitimate expectations and requirements.

AI systems should be designed to minimize their environmental consequences and increase energy efficiency.

11
Q

Who are the primary stakeholders for responsible AI?

A

The development, adoption and use of AI require an integrated, coordinated approach among these stakeholders:

Gov’t health agencies - determine how to introduce, integrate and harness these technologies for the
public good while restricting or prohibiting inappropriate use

Gov’t Regulatory agencies - validate and define whether, when and how such technologies are to be used

Gov’t Educational agencies - teach current and future health-care workforces how such technologies function and are to be integrated into everyday practice

Gov’t Information Technology - facilitate the appropriate collection and use of health data and narrow the digital divide

Government Legal systems - ensure that people harmed by AI technologies can seek redress

Non Gov’t medical researchers, scientists, health-care workers and, especially, patients.

Technologists and software developers

Companies, universities, medical associations and international organizations

12
Q

What are some examples where AI can improve the delivery of health care?

A

Prevention
Diagnosis and treatment of Disease
Augment the ability of health-care providers to improve patient care
Optimize treatment plans
Support pandemic preparedness and response
Inform the decisions of health policy-makers or allocate resources within health systems
Empower patients and communities to assume control of their own health care and better understand their evolving needs
Enable resource-poor countries, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.

13
Q

supervised learning

A

A subcategory of Machine Learning (ML) where data used to train the model are labelled (the outcome variable is known), and the model infers a function from the data that can be used for predicting outputs from different inputs.
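The definition above can be sketched in a few lines of Python. This is an illustrative toy only, not from the source: a 1-nearest-neighbour classifier that infers a prediction rule from labelled (input, label) pairs; the heart-rate numbers and "low"/"high" labels are made up.

```python
# Supervised learning sketch: the training data are labelled
# (the outcome is known), and the model infers a function that
# predicts outputs for new, unseen inputs.

def nearest_neighbour(train, x):
    """Predict the label of x from labelled training pairs (input, label)."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Hypothetical labelled data: resting heart rate -> risk label
train = [(60, "low"), (65, "low"), (95, "high"), (100, "high")]

print(nearest_neighbour(train, 62))  # -> "low"
print(nearest_neighbour(train, 98))  # -> "high"
```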

14
Q

Unsupervised learning

A

A subcategory of Machine Learning (ML) that does not involve labelling data (like with supervised learning) but involves identification of hidden patterns in the data by a machine
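A minimal sketch of the idea (illustrative only; the data and two-cluster setup are invented for the example): a toy one-dimensional k-means that finds hidden groupings in unlabelled data.

```python
# Unsupervised learning sketch: no labels are given; the algorithm
# discovers hidden structure (here, two clusters) on its own.

def kmeans_1d(data, c1, c2, iters=10):
    """Cluster 1-D points around two centroids, starting from c1 and c2."""
    for _ in range(iters):
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        c1 = sum(g1) / len(g1)  # move each centroid to its group's mean
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

# Unlabelled measurements that happen to fall into two groups
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans_1d(data, c1=0.0, c2=5.0))  # centroids near 1.0 and 10.1
```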

15
Q

Reinforcement learning

A

A subset of Machine Learning (ML) that involves machine learning by trial and error to achieve an objective for which the machine is “rewarded” or “penalized”, depending on whether its inferences reach or hinder achievement of an objective
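A toy sketch of the trial-and-error loop (illustrative only; the two-action "bandit" setup, reward values and learning rate are invented): the agent's value estimates move toward the rewarded action and away from the penalized one.

```python
# Reinforcement learning sketch: the machine learns by trial and error,
# being "rewarded" (+1) or "penalized" (-1) for its actions and updating
# its estimate of each action's value accordingly.

def learn(rewards, episodes=20, lr=0.5):
    """Estimate the value of each action from the reward signal."""
    values = [0.0] * len(rewards)
    for step in range(episodes):
        action = step % len(rewards)           # try every action in turn
        r = rewards[action]                    # reward or penalty received
        values[action] += lr * (r - values[action])
    return values

# Action 0 is penalized (-1), action 1 is rewarded (+1)
values = learn(rewards=[-1, 1])
print(values.index(max(values)))  # -> 1: the agent prefers the rewarded action
```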

16
Q

Deep learning or Deep structured learning

A

A subcategory of Machine Learning (ML) that is based on the use of multi-layered models to progressively extract features from data. Deep learning can be supervised, unsupervised or semi-supervised. Deep learning generally requires large amounts of data to be fed into the model.
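A minimal sketch of a multi-layered model (illustrative only; the weights are hand-set here, whereas real deep learning learns them from large amounts of data): each layer transforms the previous layer's output, which is the "progressive feature extraction" described above.

```python
# Deep learning sketch: stacked layers progressively transform the input.

def relu(v):
    # Non-linear activation: negative sums are clipped to zero
    return [max(0.0, x) for x in v]

def layer(weights, inputs):
    """One dense layer: each output is a weighted sum of all inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

x = [1.0, 2.0]
h = relu(layer([[1.0, -1.0], [0.5, 0.5]], x))  # layer 1: low-level features
y = layer([[1.0, 2.0]], h)                     # layer 2: combines features
print(y)  # -> [3.0]
```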

17
Q

What are the dimensions of big data?

A

volume - the sheer size of the data (terabytes to petabytes)
velocity - the speed at which data are created, and at which they must be stored and analyzed
variety - a form of scalability that refers to the diversity of the data, which come in different forms (structured, unstructured, etc.)

veracity - the quality of the data
variability - the meaning of data is constantly changing. For example, language processing by computers is exceedingly difficult because words often have several meanings; data scientists must account for this variability by creating sophisticated programs that understand context and meaning.
valence - the connectedness of the data: the ratio of actually connected data items to the possible number of connections that could occur within the collection
value - the main purpose behind collecting, storing and analyzing Big Data is to extract value from it
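The valence ratio described above can be computed directly (toy numbers, illustrative only): with n items, the possible number of pairwise connections is n(n-1)/2.

```python
# Valence sketch: ratio of actual connections to possible connections.

def valence(actual_connections, n_items):
    possible = n_items * (n_items - 1) / 2  # every pair of items could connect
    return actual_connections / possible

# 5 items allow 10 possible pairs; 4 pairs are actually connected
print(valence(4, 5))  # -> 0.4
```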

18
Q

How might AI be used in Diagnosis and prediction-based diagnosis?

A

Currently, AI is being evaluated for use in radiological diagnosis in oncology (thoracic imaging, abdominal and pelvic imaging, colonoscopy, mammography, brain imaging and dose optimization for radiological treatment), in non-radiological applications
(dermatology, pathology), in diagnosis of diabetic retinopathy, in ophthalmology and for RNA and DNA sequencing to guide immunotherapy.

In LMIC, AI may be used to improve detection of tuberculosis in a support system for interpreting staining images (12) or for scanning X-rays for signs of tuberculosis, COVID-19 or 27 other conditions.

AI might be used to predict illness or major health events before they occur. For example, an AI technology could be adapted to assess the relative risk of disease, which could be used for prevention of lifestyle diseases such as cardiovascular disease and diabetes.

AI prediction could identify individuals with tuberculosis in LMIC who are not reached by the health system and therefore do not know their status.

Predictive analytics could avert other causes of unnecessary morbidity and mortality in LMIC, such as birth asphyxia. An expert system used in LMIC is 77% sensitive and 95% specific for predicting the need for resuscitation.

19
Q

How might AI be used in Clinical Care?

A

Clinicians might use AI to integrate patient records during consultations, identify patients at risk and vulnerable groups, as an aid in difficult treatment decisions and to catch clinical errors.

In LMIC, for example, AI could be used in the management of antiretroviral therapy by predicting resistance to HIV drugs and disease progression, to help physicians optimize therapy

AI could eventually change how patients self-manage their own medical conditions, especially chronic diseases such as cardiovascular diseases, diabetes and mental health problems, through conversational agents (e.g. “chatbots”), health monitoring and risk prediction tools, and technologies designed specifically for individuals with disabilities

Telemedicine is part of a larger shift from hospital- to home-based care, with use of AI technologies to facilitate the shift. They include remote monitoring systems, such as video-observed therapy for tuberculosis and virtual assistants to support patient care.

Wearables will create more opportunities to monitor a person’s health and to capture more data to predict health risks, often with greater efficiency and in a timelier manner. This could generate data to predict or detect health risks or improve a person’s treatment when necessary

AI is being considered for use to assist in decision-making about prioritization or allocation of scarce resources. An AI version of the Sequential Organ Failure Assessment score, “DeepSOFA”, has been developed for this purpose.

It has been suggested that machine-learning algorithms could be trained and used to assist in decisions to ration supplies, identify which individuals should receive critical care or when to discontinue certain interventions, especially ventilator support

20
Q

What are some applications of AI for health research?

A

An important area of health research with AI is based on use of data generated for electronic health records for biomedical research, quality improvement and optimization of clinical care

AI can help to identify clinical best practices before the customary pathway of scientific publication, guideline development and clinical support tools.

AI can also assist in analyzing clinical practice patterns derived from electronic health records to develop new clinical practice models

AI is expected to play an important role in genomics. In health research, for example, AI could improve human understanding of disease or identify new disease biomarkers

21
Q

What are some applications of AI in drug development?

A

AI could change drug discovery from a labor-intensive to a capital- and data-intensive process with the use of robotics and models of genetic targets, drugs, organs, diseases and their progression, pharmacokinetics, safety and efficacy.

AI could be used in drug discovery and throughout drug development to shorten the process and make it less expensive and more effective. AI was used to identify potential treatments for Ebola virus disease, although, as in all drug development, identification of a lead compound may not result in a safe, effective therapy

22
Q

What are some applications of AI in health systems management and planning?

A

AI can be used to assist personnel in complex logistical tasks, such as optimization of the medical supply chain, to assume mundane, repetitive tasks or to support complex decision-making.

Some possible functions of AI for health systems management include: identifying and eliminating fraud or waste, scheduling patients, predicting which patients are unlikely to attend a scheduled appointment and assisting in identification of staffing requirements

For example, researchers in South Africa applied machine-learning models to administrative data to predict the length of stay of health workers in underserved communities

23
Q

What are some applications of AI in public health and public health surveillance?

A

improve identification of disease outbreaks and support surveillance

AI can be used for health promotion or to identify target populations or locations with “high-risk” behavior and populations that would benefit from health communication and messaging (micro-targeting)

AI has also been used to address the underlying causes of poor health outcomes, such as risks related to environmental or occupational health.

surveillance itself is changing, especially real-time surveillance. For example, researchers could detect a surge in cases of severe pulmonary disease associated with the use of electronic cigarettes by mining disparate online sources of information and using HealthMap, an online data-mining tool

The possible uses of AI for different aspects of outbreak response have also expanded during the COVID-19 pandemic.

24
Q

What is an Ethical Principle for the application of AI for health?

A

An ethical principle is a statement of a duty or a responsibility in the context of the development, deployment and continuing assessment of AI technologies for health. Ethical principles are grounded in basic ethical requirements that apply to all persons and that are considered noncontroversial.

25
Q

What are the 4 Ethical Principles for the application of AI for health?

A
  • Avoid harming others (sometimes called “Do no harm” or non-maleficence).
  • Promote the well-being of others when possible (sometimes called “beneficence”). Risks of harm should be minimized, while maximizing benefits. Expected risks should be balanced against expected benefits.
  • Ensure that all persons are treated fairly, which includes the requirement to ensure that no person or group is subject to discrimination, neglect, manipulation, domination or abuse (sometimes called “justice” or “fairness”).
  • Deal with persons in ways that respect their interests in making decisions about their lives and their person, including health-care decisions, according to informed understanding of the nature of the choice to be made, its significance, the person’s interests and the likely consequences of the alternatives (sometimes called “respect for persons” or “autonomy”)
26
Q

What are human rights?

A

Human rights are intended to capture a basic set of moral and legal requirements for conduct to which every person is entitled regardless of race, sex, nationality, ethnicity, language, religion or any other feature. These rights include human dignity, equality, non-discrimination, privacy, freedom, participation, solidarity and accountability.

human rights are legally binding and provide a powerful framework by which governments, international organizations and private actors are obligated to abide.

27
Q

What are some areas of ethical challenges to use of AI for health care?

A

–Assessing Whether AI should be used

–AI and the Digital Divide

–Data Collection and Use

–Accountability and responsibility for decision-making with AI

–Bias and Discrimination with AI

–Risks of AI to safety and cybersecurity

–Impacts of AI on labor and employment

–Commercialization of AI in Health Care

–AI and climate change

28
Q

What is the “digital divide”?

A

Uneven distribution of access to, use of or effect of information and communication technologies among any number of distinct groups.

In the USA, for example, millions of people in rural areas and in cities still lack access to high-speed broadband services, and 60% of health-care facilities outside metropolitan areas also lack broadband

29
Q

What are some areas of ethical challenges related to assessing whether artificial intelligence should be used

A

Overstating what AI can accomplish, unrealistic estimates of what could be achieved as AI evolves and uptake of unproven products and services that have not been subjected to rigorous evaluation for safety and efficacy

30
Q

What are some areas of ethical challenges related to AI and the Digital Divide?

A

Although the cost of digital technologies is falling, access has not become more equitable. For example, 1.2 billion women (327 million fewer women than men) in LMIC do not use mobile Internet services because they cannot afford to or do not trust the technology, even though the cost of the devices should continue to fall

The human and technical resources required to realize the benefits of digital technologies fully are also unequally distributed, and the infrastructure needed to operate digital technologies may be limited or nonexistent.

31
Q

What are some areas of ethical challenges related to AI and data collection and use?

A

Danger that poor-quality data will be collected for AI training, which may result in models that predict artefacts in the data instead of actual clinical outcomes

training data will always have one or more systemic biases because of under-representation of a gender, age, race, sexual orientation or other characteristic.

safeguarding individual privacy. The collection, use, analysis and sharing of health data have consistently raised broad concern about individual privacy, because lack of privacy may either harm an individual (such as future discrimination on the basis of one’s health status) or cause a wrong, such as affecting a person’s dignity if sensitive health data are shared or broadcast to others

health data collected by technology providers may exceed what is required and that such excess data, so-called “behavioral data surplus”, is repurposed for uses that raise serious ethical, legal and human rights concerns.

—-Such data may include “mundane” data that were not originally characterized as “health data”; however, machine learning can elicit sensitive details from such ordinary personal data and thus transform them into a special category of sensitive data (136) that may require protection

—-Concerns about the commercialization of health data include: loss of individual autonomy (a principle stated in section 5); loss of control over the data (with no explicit consent to such secondary use); how such data (or outcomes generated by such data) may be used by the company or a third party, with concern that companies are allowed to profit from the use of such data; and concern about privacy, as companies may not meet the duty of confidentiality, whether purposefully or inadvertently (for example, due to a data breach)

Data colonialism - data are used for commercial or non-commercial purposes without due
respect for consent, privacy or autonomy

—-biomedical big data is that it may foster a divide between those who accumulate, acquire, analyze and control such data and those who provide the data but have little control over their use. This is especially true with respect to data collected from underrepresented groups

Mechanisms for safeguarding privacy may not be sufficient protection

  • —The scale and complexity of biomedical big data make it impossible to keep track of and make meaningful decisions about all uses of personal data
  • —Anonymization may not be possible during health data collection. For example, in predictive AI, time-course data must be collected from a single individual at several times, obviating anonymization until data at all time points are collected. Furthermore, while anonymization may minimize the risks of (re-)identification of a person, it can reduce the positive benefits of health data
32
Q

What are some areas of ethical challenges related to accountability and responsibility for decision-making
with AI?

A

There may not be appropriate or enforceable regulations, stakeholder participation or oversight, all of which are required to ensure that ethical and legal concerns can be addressed and human rights are not violated

There may be enough ethical concern about a use case or a specific AI technology, even if it provides accurate, useful information and insights, to discourage a particular use. An AI technology that can predict which individuals are likely to develop type 2 diabetes or HIV infection could provide benefits to an at-risk individual or community, but could also: give rise to unnecessary stigmatization of individuals or communities, whose choices and behavior are questioned or even criminalized; result in over-medicalization of otherwise healthy individuals; create unnecessary stress and anxiety; and expose individuals to aggressive marketing by pharmaceutical companies and other for-profit health-care services.

AI “control problem” - developers and designers of AI may not be held responsible, as AI-guided systems function independently of their developers and may evolve in ways that the developer could claim were not foreseeable. This creates a responsibility gap, which could place an undue burden on a victim of harm or on the clinician or health-care worker who uses the technology but was not involved in its development or design

If AI programming is automated (e.g. BigML, Google AutoML and DataRobot), the checks and balances provided by the involvement of a human developer to ensure safety and identify errors would also be automated, and the control problem is abstracted one step further away from the patient.

“many hands problem” or the “‘traceability” of harm - Diffusion of responsibility may mean that an individual is not compensated for the harm he or she suffers, the harm itself and its cause are not fully detected, the harm is not addressed and societal trust in such technologies may be diminished if it appears that none of the developers or users of such technologies can be held responsible

The issuance of ethics guidance by technology companies, separately or jointly may be just “ethics washing”

—-such guidelines tend to apply to the prospective behavior of companies for the technologies they design and deploy (role responsibility) and not historic responsibility for any harms for which responsibility should be allocated. This creates a responsibility gap, as it does not address causal responsibility or retrospective harm

—-monitoring of whether companies are complying with their own guidance tends to be done internally, with little to no transparency, and without enforcement by institutions or mechanisms empowered to act independently to evaluate whether the commitments are being met. Finally, these commitments are not legally enforceable if violated

Accountability for AI-related errors and harm - clinicians and physicians may be held accountable for AI errors when the developers or testers may be more appropriately accountable

33
Q

What are some areas of ethical challenges related to autonomous decision-making with AI?

A

“peer disagreement” between two competent experts – an AI machine and a doctor – for which there are no clear rules for determining who is right

ethically challenging for doctors to rely on the judgement of AI, as they have to accept decisions based on black-box algorithms. AI should therefore be transparent and explainable

Loss of human control by assigning decision-making to AI-guided technologies could affect various aspects of clinical care

  • —the patient, the clinician–patient relationship (and whether it interrupts communication between them), the relation of the health-care system to technology providers and the choices that societies should make about standards of care
  • —As more personal data are collected by such technologies and used by clinicians, patients might increasingly be excluded from shared decision-making and left unable to exercise agency or autonomy in decisions about their health
  • —individuals may feel unable to refuse treatment, partly also because the patient cannot speak with or challenge the recommendation of an AI-guided technology (e.g. a notion that the “computer knows best”) or is not given enough information or a rationale for providing informed consent
  • —There is no precedent for seeking the consent of patients to use technologies for diagnosis or treatment.
  • —Physicians who are left out of decision-making between a patient and an AI health technology may also feel loss of control
  • —a risk of surrendering decision-making to an AI technology, which may be more likely if the technology is presented to the patient as providing better insight into their health status and prognosis than a physician
  • — if an AI technology reduces contact between a provider and a patient, it could reduce the opportunities for clinicians to offer health promotion interventions to the patient and undermine general supportive care, such as the benefits of human–human interaction when people are often at their most vulnerable

Risks of using AI for resource allocation and prioritization include

  • —managing conflicts between human and machine predictions,
  • —difficulty in assessing the quality and fitness for purpose of software,
  • —identifying appropriate users and the novel situation in which a decision for a patient is guided by a machine analysis of other patients’ outcomes
  • —bias leading to allocation of resources that discriminates against, for example, people of color; decisions related to gender, ethnicity or socioeconomic status might similarly be biased
  • —bias at the population level could encourage use of resources for people who will have the greatest net benefit, e.g. younger, healthier individuals, and divert resources and time from costly procedures intended for the elderly.

Use of AI for predictive analytics in health care
—-Prediction technologies could be inaccurate because an AI technology bases its recommendations on an inference that optimizes markers of health rather than identifying an underlying patient need. For example, an algorithm that predicts mortality from training data may have learnt that a patient who visits a chaplain is at increased risk of death.

—-efficacy and accuracy in long-term predictions may be more difficult or impossible to achieve. The risk of harm therefore increases dramatically, as predictions of limited reliability could affect an individual’s health and well-being and result in unnecessary expenditure of scarce resources

—-risks of bias and discrimination for individuals because of a predisposition to certain health conditions (183), which could manifest itself in the workplace, health insurance or access to health-care resources

—-use of AI, combined for example with “nudging”, could transform an application for promoting healthy behavior into a technology that could exert powerful control over the choices people make in their daily lives. If AI predicts that an individual is at high risk of a certain disease, will that individual still have the right to engage in behavior that increases the likelihood of the disease?

Use of AI for prediction in drug discovery and clinical development could allow pharmaceutical companies to take “regulatory shortcuts” and conduct fewer clinical trials and with fewer patient data

34
Q

What are some areas of ethical challenges related to Bias and discrimination associated with AI?

A

data sets used to train AI models are biased, as many exclude girls and women, ethnic minorities, elderly people, rural communities and disadvantaged groups so that, in unequal societies, AI may be biased towards the majority
and place a minority population at a disadvantage

Existing bias and established discrimination in health-care provision and the structures and practices of health care are captured in the data with which machine-learning models are trained and manifest in the recommendations made by AI-guided technologies.

If an AI technology is based on a racially homogenous dataset, biomarkers that an AI technology identifies and that are responsive to a therapy may be appropriate only for the race or gender of the dataset and not for a more diverse population

Bias from the digital divide - e.g. women in LMIC are less likely to have mobile or internet access and therefore women not only contribute fewer data to data sets used to train AI but are less likely to benefit from services

Biases often depend on who funds and who designs an AI technology. AI-based technologies have tended to be developed by one demographic group and gender, increasing the likelihood of certain biases in the design

Bias can also arise from insufficient diversity of the people who label data or validate an algorithm

Bias may also be due to the origin of the data with which AI is designed and trained. It may not be possible to collect representative data if an AI technology is initially trained with data from local populations that have a different health profile from the populations in which the AI technology is used.

Bias can also be introduced during implementation of systems in real-world settings. If the diversity of the populations that may require use of an AI system, due to variations in age, disability, co-morbidities or poverty, has not been considered, an AI technology will discriminate against or work improperly for these populations.

35
Q

What are some areas of ethical challenges related to risks of AI technologies to safety and cybersecurity?

A

Patient safety could be at risk from use of AI that may not be foreseen during regulatory review of the technology for approval.

  • —Errors in AI systems, such as incorrect recommendations or recommendations based on false-negative or false-positive results
  • —AI errors can affect patients at large scale in short periods of time
  • —Human programming mistakes (bugs, defects) can provide wrong guidance

Cybersecurity

  • —AI systems may be targeted by malicious attacks and hacking in order to shut down certain systems, to manipulate the data used to train the algorithm (thereby changing its performance and recommendations) or to “kidnap” data for ransom
  • —An algorithm, especially one that runs independently of human oversight, could be hacked to generate revenue for certain recipients.
36
Q

What are some areas of ethical challenges related to the impact of AI technologies on labor and employment in health and medicine?

A

Using AI to augment and possibly replace the daily tasks of health-care workers and physicians could also remove the need to maintain certain skills, such as the ability to read an X-ray.

Dependence on AI systems could erode independent human judgement and, in the worst case, could leave providers and patients unable to act if an AI system fails or is compromised.

AI could automate many of the jobs and tasks of health-care personnel, resulting in significant loss of jobs in nearly every part of the health workforce, including certain types of doctors.

If patients interact more frequently and directly with AI, doctors may spend less time in direct contact with patients and more time administering technology, analyzing data and learning how to use new technologies.

37
Q

What are some areas of ethical challenges in commercialization of AI for health care?

A

A general problem is lack of transparency. Practices remain hidden, partly because of commercial secrecy agreements and partly because of the absence of general obligations for transparency, including about the role these firms play in health care and about the data that are collected and used to train and validate an AI algorithm.

The overall business model of the largest technology firms includes both aggressive collection and use of data to make their technologies effective and the use of surplus data for commercial practices, described by Professor Shoshana Zuboff as “surveillance capitalism”.

There is growing concern about the power that some companies may exert over the development, deployment and use of AI for health (including drug development), and about the extent to which corporations exert influence over individuals and governments and over both AI technology and the health-care market. Monopoly power can concentrate decision-making in the hands of a few individuals and companies, which can act as gatekeepers of certain products and services (221), and can reduce competition, which could eventually translate into higher prices for goods and services, less consumer protection or less innovation.

38
Q

What are some areas of ethical challenges with AI and climate change?

A

Extending the use of AI for health and in other sectors of the global economy could contribute directly to dangerous climate change and poor health outcomes, especially for marginalized populations.

Researchers at the University of Massachusetts Amherst, USA, found that the emissions associated with training a single “big language” model were equal to approximately 300 000 kg of carbon dioxide, or 125 round-trip flights between New York City and Beijing.
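As a quick sanity check on those figures (simple arithmetic only; the per-flight value is derived here, not stated in the source):

```python
# Back-of-envelope check of the figures cited above: 300 000 kg of CO2
# for training one large language model, stated to equal 125 round-trip
# flights between New York City and Beijing.
total_training_kg_co2 = 300_000
equivalent_round_trips = 125

# Implied emissions per passenger round trip (a derived figure,
# not given in the source).
kg_per_round_trip = total_training_kg_co2 / equivalent_round_trips
print(f"≈ {kg_per_round_trip:.0f} kg CO2 per round-trip flight")
```

The implied ~2 400 kg of CO2 per passenger round trip is in the ballpark of published long-haul flight estimates, so the two numbers in the card are mutually consistent.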

39
Q

What is “design for values”?

A

“Design for values” is the explicit transposition of moral and social values into context-dependent design requirements. It is an umbrella term for several pioneering methods, such as value-sensitive design, values in design and participatory design.

Design for values presents a roadmap for stakeholders to translate human rights into context-dependent design requirements through a structured, inclusive, transparent process: abstract values are first translated into norms (properties that a technology should have to ensure certain values), and the norms then become socio-technical design requirements.

The process of identifying design requirements permits all stakeholders, including individuals affected by the technology, users, engineers, field experts and legal practitioners, to debate design choices and identify the advantages and shortcomings of each choice.

40
Q

What are 3 approaches for promoting inclusivity?

A

Citizen science - Citizen science is defined by the Alan Turing Institute as the direct contribution of non-professional scientists to scientific research, for instance by contributing data or performing tasks.

Open-source software - Transparency and participation can be increased by the use of open-source software for the underlying design of an AI technology or making the source code of the software publicly available.

Increased diversity - Too often, efforts to increase the diversity of AI technologies involve increasing the diversity of the data on which they are based. Although this is necessary, it is not sufficient and might even amplify any biases inherent in the design. Minimizing and identifying potential biases requires greater involvement of people who are familiar with the nature of potential biases, contexts and regulations throughout software development, from its design to consultation with stakeholders, labelling of data, testing and deployment

41
Q

How can not-for-profit AI developers benefit responsible AI?

A

Not-for-profit developers, who are not constrained by internal or external revenue targets, can adhere to ethical principles and values more readily than private developers. Not-for-profit developers may include treatment providers, hospital systems and charities.

42
Q

What are the WHO’s 5 recommendations for ethical transparent design?

A
  1. Potential end-users and all direct and indirect stakeholders should be engaged from the early stages of AI development in structured, inclusive, transparent design and given opportunities to raise ethical issues, voice concerns and provide input for the AI application under consideration. Relevant ethical considerations should inform the design and translation of moral values into specific context-dependent design requirements.
  2. Designers and other stakeholders should ensure that AI systems are designed to perform well-defined tasks with the accuracy and reliability necessary to improve the capacity of health systems and advance patient interests. Designers and other stakeholders should also be able to predict and understand potential secondary outcomes.
  3. Designers should ensure that stakeholders have sufficient understanding of the task that an AI system is designed to perform, the conditions necessary to ensure that it can perform that task safely and effectively, and the conditions that might degrade system performance.
  4. The procedures that designers use to “design for values” should be informed and updated by the consensus principles stated in this report, best practices (e.g. privacy preserving technologies and techniques), standards of ethics by design and evolving professional norms (transparency of access to codes, processes that allow verification and inclusion).
  5. Continuing education and training programs should be available to designers and developers to ensure that they integrate evolving ethical considerations into design processes and choices. The establishment of formal accreditation procedures could ensure that designers and developers abide by ethical principles similar to those required of health-care workers.