Practices For Responsible AI At Microsoft Flashcards

1
Q

At Microsoft we have established our own governing practices for responsible AI. This includes forming a governing body, developing our policies, and providing engineering teams with actionable guidance and tools. Finally, we aim to empower people and organisations to use AI to improve lives around the world.

A
2
Q

Understanding the Microsoft governance model: Aether + Office of Responsible AI

A
3
Q

Microsoft responsible AI governance structure:
Our governance structure today uses a hub-and-spoke model to provide the accountability and authority to drive initiatives, while also enabling responsible AI policies to be implemented at scale.

A
4
Q

Centralised governance:
The senior leadership team is ultimately accountable for the company's direction on responsible AI, setting the company's AI principles, values, and human rights commitments. Building off our culture of integrity and trust, this group is the final decision maker on the most sensitive, novel, and significant AI development and deployment matters.

A
5
Q

The Office of Responsible AI has four key functions:
Internal policy
Enablement
Case management
Public policy

A
6
Q

Internal policy: Setting the company-wide rules for enacting responsible AI, as well as defining roles and responsibilities for teams involved in this effort

A
7
Q

Enablement:
Readiness to adopt responsible AI practices, both within our company and among our customers and partners

A
8
Q

Case management:
Review of sensitive use cases to help ensure that our AI principles are upheld in our development and deployment work

A
9
Q

Public policy:
Helping to shape new laws, norms, and standards that will be needed to ensure that the promise of AI technology is realised for the benefit of society at large

A
10
Q

The Aether Committee serves an advisory role to the senior leadership team and the Office of Responsible AI on questions, challenges, and opportunities with the development and fielding of AI technologies.
The committee also provides guidance to teams across the company to ensure that our products and services align with our AI principles.
The committee brings together top talent in technology, ethics, law, and policy from across Microsoft to formulate recommendations on policies, processes, and best practices.

A
11
Q

The Aether Committee has six working groups that focus on specific topics, grounded in our AI principles.
The working groups play a key role in developing tools, best practices, and tailored implementation guidance related to their respective areas of expertise.

A
12
Q

Decentralised governance

A
13
Q

Policies and procedures:
To help every Microsoft employee live up to our commitment to developing and deploying responsible AI, we have created AI principles, a sensitive use framework, and company-wide rules to help employees develop a better understanding of the company's commitments with respect to AI development and deployment.

A
14
Q

We expect every Microsoft employee to:
Develop a general understanding of our AI principles
Report and escalate sensitive uses
Contact their Responsible AI Champ when they need guidance on responsible AI

A
15
Q

Responsible AI Standard:
In order to implement responsible AI practices, the policy requirements, procedures, and tools need to be tightly embedded within the AI development lifecycle an organisation already uses.

A
16
Q

Responsible AI frameworks in action:
As discussed in the previous unit, Microsoft has been developing and refining its own internal processes to govern AI responsibly.

A
17
Q

Microsoft sensitive use case framework:
For responsible AI governance documentation, a development or deployment scenario is considered a sensitive use if it falls into one or more of the following categories:

A

Denial of consequential services
Risk of harm
Infringement on human rights

18
Q

Denial of consequential services:
The scenario involves the use of AI in a way that may directly result in the denial of consequential services or support to an individual (for example, financial, housing, insurance, education, employment, or health care services).

A
19
Q

Risk of harm:

A
20
Q

Infringement on human rights

A
21
Q

Microsoft sensitive use case review process:
Identification
Assessment
Mitigation

A
22
Q

Identification

A
23
Q

Assessment

A
24
Q

Mitigation

A
25
Q

Establishing design principles and guidelines in engineering:
If you are developing, implementing, or managing AI internally, you may want to consider how to honour your organisation's guiding principles at every step of the AI lifecycle.
To empower technical employees to do so, we have found that it helps to translate your principles into actionable guidance.

A
26
Q

Principles and guidelines

A
27
Q

Security and privacy:
Security and privacy are key pillars of trust. There are a number of emerging tools to help protect security and privacy in AI systems. Techniques like homomorphic encryption, multi-party computation, and differential privacy make it possible to train AI models using private data without sharing it (see the sketch below).

A

Once the AI system is built, you can use technologies like Counterfit to conduct security risk assessments.
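
To make the differential privacy idea above concrete, here is a minimal, illustrative sketch of the Laplace mechanism applied to an aggregate counting query. It is not Microsoft tooling; the function name and the counts are hypothetical, and a production system would also track a privacy budget across queries.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially-private estimate of a numeric query.

    Laplace noise scaled to sensitivity / epsilon hides any single person's
    contribution to the released statistic.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting queries have sensitivity 1: adding or removing one individual
# changes the count by at most 1.
true_opt_ins = 4213                                   # hypothetical private count
noisy_opt_ins = laplace_mechanism(true_opt_ins, sensitivity=1.0, epsilon=0.5)
print(f"Differentially private estimate: {noisy_opt_ins:.1f}")
```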

28
Q

Resource type:
Guidelines
Technology tools
Third-party tools

A
29
Q

Guidelines

A
30
Q

Technology tools

A
31
Q

Third-party tools

A
32
Q

Fairness:
AI systems should treat everyone fairly and avoid affecting similarly-situated groups of people in different ways.
Fairness is a fundamentally socio-technical challenge, so fairness classification tools are not be-all and end-all solutions. However, there are two key steps for reducing unfairness, illustrated in the sketch below:
assessment and mitigation

A
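
As an illustration of the assessment step, here is a minimal sketch that disaggregates a model's accuracy by a sensitive feature to surface performance gaps between groups. The helper function and data are hypothetical; Microsoft's open-source Fairlearn library provides this kind of disaggregated metric, plus mitigation algorithms, out of the box.

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, sensitive_feature):
    """Compute accuracy separately for each group of the sensitive feature."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    groups = np.asarray(sensitive_feature)
    return {
        g: float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }

# Hypothetical labels, predictions, and a sensitive feature (an age band).
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["18-30", "18-30", "30+", "30+", "18-30", "30+", "30+", "18-30"]

per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)                                                  # {'18-30': 1.0, '30+': 0.5}
print("gap:", max(per_group.values()) - min(per_group.values()))  # gap: 0.5
```
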
33
Q

Implementing fairness in your organisation:
Resource type:
Guidelines
Technology tools
Third-party tools

A
34
Q

Guidelines:

A
35
Q

Technology tools

A
36
Q

Third-party tools

A
37
Q

Inclusiveness:
Inclusive design practices help ensure that AI models perform well for all users

A

Resource type:
Guidelines:
Reference Microsoft's inclusive design practices, the Inclusive Design toolkit, and the algorithmic greenlining paper.

38
Q

Reliability and safety

A
39
Q

Transparency

A
40
Q

Accountability

A
41
Q

Implementing accountability in your organisation:
Guidelines
Management tools
Technology tools

A
42
Q

Guideline

A
43
Q

Management tools

A
44
Q

Technology tools

A
45
Q

Engaging externally: AI for Good

A
46
Q

Contributing solutions to societal challenges

A
47
Q

AI for Accessibility

A
48
Q

AI for Earth

A
49
Q

AI for Humanitarian Action

A
50
Q

AI for Cultural Heritage

A
51
Q

AI for Health

A