W3: Knowledge Clips Flashcards
(31 cards)
Triple bottom line
Not only measuring success by financial gains, but also by social and environmental impact
Moral and ethical dilemmas
Involve deeply ingrained societal norms, cultural differences, and philosophical debates. These complexities make it difficult for AI to provide meaningful solutions
Behavioural ethics
Refers to individual behaviour that is judged according to socially accepted moral norms
Trolley problem
Do nothing and the trolley hits 5 people, or intervene and it hits 1 person
Utilitarianism
Consequences are what matters; the greatest good for the greatest number of people
Deontology
Judges morality based on whether somebody adheres to rules, duties, or expectations. It looks at motives rather than consequences
Footbridge dilemma
A trolley is coming down the track towards 5 people; 1 person is standing on a footbridge above. Pushing that person off the bridge would stop the trolley and save the 5. Here, people often get more uncomfortable and start to question utilitarianism. This shows how important context is in guiding our behaviour based on our ethical or moral approach
Machine learning
Helps us get from place to place, gives us suggestions, translates stuff, even understands what you say to it. Computers learn the solution by finding patterns in the data. However, just because something is automatic, doesn’t mean it’s neutral. It’s impossible to separate ourselves from our human biases, so they become part of the technology we create
Traditional programming
People hand code the solution to a problem step by step
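The contrast between the two cards above can be sketched in a few lines of Python. This is a hypothetical "hot temperature" classifier; the data, threshold, and function names are illustrative assumptions, not taken from the clips:

```python
# Traditional programming: a person hand-codes the rule step by step.
def is_hot_rule(temp):
    return temp > 25  # threshold chosen by the programmer

# Machine learning (minimal sketch): the rule is instead derived
# from labelled examples by finding a boundary in the data.
data = [(10, False), (15, False), (20, False), (30, True), (35, True)]
warmest_cold = max(t for t, hot in data if not hot)  # 20
coldest_hot = min(t for t, hot in data if hot)       # 30
learned_threshold = (warmest_cold + coldest_hot) / 2  # midpoint: 25.0

def is_hot_learned(temp):
    return temp > learned_threshold
```

Note that the learned rule is only as good as the data it was trained on, which is exactly where the bias cards below come in: skewed data produces a skewed rule.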
Interaction bias
E.g. people were asked to draw shoes. As more people interacted with the game, the computer learned from the most common drawings and stopped recognising other types of shoes
Latent bias
Training a computer to recognise what a physicist looks like using past pictures leads to a latent bias skewed towards men
Selection bias
E.g. you’re training a model to recognise faces. Are you making sure to select photos that represent everyone?
Robustness
Can you assure end users that nobody can hack such an AI model in a way that disadvantages some people or benefits one person over another?
“Social”
Meaning people
“Holistically”
There are three major things to think about: people, process or governance, and tooling
People
The culture of the organisation, the diversity of your teams
“Wisdom of crowds”
The more diverse your group of people, the lower the chance of error
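A minimal sketch of the idea behind this card, using an invented estimation task (all numbers are made up for illustration): averaging many independent, diverse guesses tends to land closer to the true value than most individual guesses do.

```python
# "Wisdom of crowds" sketch: the crowd's average estimate is often
# closer to the truth than most single estimates.
true_value = 100
guesses = [80, 95, 120, 105, 90, 115, 85, 110]  # invented individual estimates

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_value)
individual_errors = [abs(g - true_value) for g in guesses]

# In this toy example the individual over- and underestimates cancel out,
# so the crowd's error is smaller than even the best individual guess.
```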
Tooling
What are the tools, engineering methods, or frameworks that you can use to ensure the five pillars?
Process or governance
What are you going to promise employees and the market in terms of standards for the AI model for fairness, explainability, accountability, etc.?
Moral machine experiment
Looks at who autonomous cars should save. The goal was to open the discussion to the public. How do people’s culture and background affect the decisions they make? It became a viral website
Anthropomorphism
When humans attribute human characteristics, behaviours, and emotions to non-human entities. It can be a way of making sense of and relating to the world around us. It is often used in storytelling, design, and interaction with technology, as it helps to explain certain phenomena. It also serves as a form of psychological projection, where humans project their own feelings and experiences onto non-human entities
Artificial General Intelligence (AGI)
Can potentially become as intelligent as humans. It would possess the ability to understand, learn, and apply knowledge to solve unfamiliar problems in diverse contexts. This approach is anthropomorphic, taking human intelligence as the gold standard of ultimate intelligence
Intelligence
Broadly defined as the ability to acquire and apply knowledge, solve problems, adapt to new situations, and learn from experience. This includes thinking of creative solutions, flexibly using contextual and background information, being able to think and understand, and taking emotions into account in ethical considerations
Spatial awareness
Involves understanding and mentally manipulating objects in space