W3: Rodgers et al. (2023) Flashcards

(45 cards)

1
Q

Article’s purpose

A

Focuses on the integration of ethical considerations into algorithmic decision-making within HRM processes. It addresses the increasing use of AI in HRM and the necessity for a framework that ensures ethical accountability in these AI-driven processes. This study proposes grounding AI-driven HR in six ethical frameworks to create more accountable systems. It introduces a throughput model and provides tools for auditing AI decisions in pay equity, diversity initiatives, and performance management, offering HR professionals a blueprint for ethical AI implementation that protects both organisational goals and employee rights

2
Q

Current HRM AI implementations

A

Often prioritise efficiency over ethical considerations, particularly in gig economy applications where algorithmic management replaces human oversight

3
Q

Throughput Model

A

Integrates the principles of six ethical frameworks into AI design, enabling organisations to balance automation’s benefits with fairness. It incorporates insights from cognitive and social psychology into a descriptive model of how human constituents make decisions within organisations. In the first stage, both perception (P) and information (I) influence judgement (J); in the second stage, P and J influence the decision choice (D). It provides a broad conceptual framework for examining the interconnected processes that influence decision choices within organisations, and offers a framework for communicating and understanding AI-driven HRM decisions
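A minimal sketch of the two-stage structure described above (not from the article; the 0–1 scores, simple averaging, and threshold are hypothetical placeholders):

```python
# Hypothetical sketch of the throughput model's two stages:
# stage 1: perception (P) and information (I) shape judgement (J);
# stage 2: P and J drive the decision choice (D).

def judgement(perception: float, information: float) -> float:
    # Stage 1: placeholder combination of P and I.
    return (perception + information) / 2

def decision(perception: float, judgement_score: float, threshold: float = 0.5) -> bool:
    # Stage 2: placeholder combination of P and J against a cut-off.
    return (perception + judgement_score) / 2 >= threshold

p, i = 0.7, 0.4          # hypothetical perception and information scores
j = judgement(p, i)      # stage 1
print(f"J = {j:.2f}, decision taken: {decision(p, j)}")  # stage 2
```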

4
Q

“Environmental variables”

A

The internal environment, encompassing natural, social, and economic factors. An organisation’s intent is shaped by its internal environment. Incorporating these variables into HRM algorithms when integrating AI technology provides an opportunity for post-decision evaluation through root cause analysis (RCA)

5
Q

Solutionism

A

The failure to recognise that the optimal solution to a problem may not always involve technology

6
Q

The ripple effect

A

The failure to fully understand how the incorporation of technology into an existing social system alters the behaviours and embedded values of that system

7
Q

Formalism

A

The inability to account for the overall connotation of social concepts, such as fairness, which are procedural, contextual, and contestable, and thus cannot be fully captured through mathematical formalisms

8
Q

Portability

A

The failure to comprehend that algorithmic solutions developed for one social context may be misleading, erroneous, or detrimental when applied to a different context

9
Q

Framing

A

The failure to model the complete system, including the social criteria, such as fairness, that will be enforced

10
Q

AI

A

A technology that seeks to simulate human reasoning in computers and other machines

11
Q

Algorithms

A

Sets of unambiguous specifications for performing tasks such as calculations, data processing, and automated reasoning. They are fundamental to AI

12
Q

Time pressure decisions

A

The cost of unhurried decisions is high (speed being essential)

13
Q

Accuracy

A

The cost of wrong decision choices must be minimised

14
Q

Allocation of resources

A

The data size is too large for manual analysis or traditional algorithms

15
Q

Decision accountability framework

A

The study indicates a need to introduce such a framework, giving HRM practitioners a pathway to consider and account for components of the organisational environment, employee engagement, and ethics when incorporating AI decision-making to help achieve organisational goals

16
Q

Anomaly detection

A

Identify items, events, or observations that do not conform to an expected pattern or to other items, for example in a pool of job applicants
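One hedged illustration of this idea (not the article’s method; the feature names and values are made up, and scikit-learn is assumed to be available):

```python
# Flag applicant records that deviate from the rest of the pool.
from sklearn.ensemble import IsolationForest

# hypothetical applicant features: [years_experience, num_prior_roles, skills_matched]
applicants = [
    [5, 2, 8], [6, 3, 7], [4, 2, 9], [5, 3, 8],
    [30, 1, 1],   # an atypical profile
]
model = IsolationForest(random_state=0).fit(applicants)
print(model.predict(applicants))  # -1 marks observations that do not fit the pattern
```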

17
Q

Background verification

A

Machine learning-powered predictive models can extract meaning and highlight issues based on structured and unstructured data points from applicants’ resumes

18
Q

Employee attrition

A

Find employees who are at high risk of attrition, enabling HR to proactively engage with and retain them
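An illustrative sketch of this use case (not from the article; the features, labels, and threshold logic are hypothetical):

```python
# Score attrition risk from hypothetical employee features with a simple classifier.
from sklearn.linear_model import LogisticRegression

# hypothetical features: [tenure_years, weekly_overtime_hours, engagement_score]
X = [[1, 12, 2], [8, 2, 9], [2, 10, 3], [7, 3, 8], [1, 15, 1], [6, 1, 9]]
y = [1, 0, 1, 0, 1, 0]  # 1 = left the organisation, 0 = stayed

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[2, 11, 2]])[0][1]  # estimated probability of attrition
print(f"Attrition risk: {risk:.2f}")  # HR could prioritise outreach above a chosen threshold
```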

19
Q

Content personalisation

A

Provide a more personalised employee experience by using predictive analytics to recommend career paths, professional development programs, or optimise a workplace environment based on prior employee actions

20
Q

Deep learning

A

A branch of machine learning that trains a computer to learn from large amounts of data through neural network architecture. It is a more advanced form of machine learning that breaks down data into layers of abstraction
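A minimal sketch of “layers of abstraction” (not from the article; PyTorch is assumed to be installed, and the layer sizes and data are hypothetical):

```python
import torch
import torch.nn as nn

# A small feed-forward network: each stacked layer transforms the previous
# layer's output into a more abstract representation.
model = nn.Sequential(
    nn.Linear(10, 32),  # raw features -> first layer of abstraction
    nn.ReLU(),
    nn.Linear(32, 16),  # intermediate abstraction
    nn.ReLU(),
    nn.Linear(16, 1),   # final score (e.g. a hypothetical suitability estimate)
)
x = torch.randn(4, 10)  # four hypothetical records with ten features each
print(model(x).shape)   # torch.Size([4, 1])
```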

21
Q

Image and video recognition

A

Can classify candidates based on objective data and predict fraudulent behaviour using behavioural analytics. HRM practitioners need to incorporate ethical frameworks when using real-time AI psychological profiling systems that analyse non-verbal behaviour

22
Q

Speech recognition

A

Enables virtual assistants and speech analytics software for compliance, fraud detection, and communication review. However, the data collected may contain sensitive information, necessitating ethical guidelines for its collection, processing, and storage

23
Q

Chatbots

A

Utilising natural language processing (NLP), they are becoming crucial for automating HRM service delivery

24
Q

Recommendation engines

A

In digital learning, personalise learning pathways and provide managers with training suggestions. However, reliance on AI decision-making in performance management may lead to a replication of human-machine contact and a deferral of responsibility, potentially resulting in a negative employee experience

25
Perception (P)
Shaped by experience, education, and biases. It is a process of individuals framing their problem-solving set or view of the world
26
Information (I)
Continuously updates perception, similar to Bayesian learning, including the set of technical, managerial, economic, political, social, and environmental information available to a decision maker for problem-solving
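A worked single-step illustration of this Bayesian-learning analogy (hypothetical numbers, not from the article):

```python
# One Bayesian update: new information revises a prior belief.
prior = 0.30                  # prior belief that a candidate is a strong fit
likelihood_if_fit = 0.80      # P(positive assessment | strong fit)
likelihood_if_not = 0.20      # P(positive assessment | not a strong fit)

evidence = prior * likelihood_if_fit + (1 - prior) * likelihood_if_not
posterior = prior * likelihood_if_fit / evidence
print(f"Belief after the new information: {posterior:.2f}")  # ~0.63, up from 0.30
```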
27
Judgement (J)
Weighs and analyses inputs, containing the process by which individuals implement and analyse incoming information and the influences from their perception
28
Decision (D)
The final decision taken (or not taken)
29
P <--> I relationship
Functions like a neural network, where perception adjusts based on new data, enabling AI systems to learn iteratively
30
Preference-based (ethical egoism)
Decisions maximise self-interest, ignoring external information. This theory posits that individuals should act in their own self-interest. In an AI-driven HRM context, this could translate to algorithms that prioritise decisions that benefit the organisation, potentially at the expense of individual employees. P -> D
31
Algorithmic pathways
Six pathways can be used by a decision maker. They influence a decision choice and reflect the problem statement in the introduction: the modelling process may help arrest problems of transmitting and receiving HRM knowledge and information that arise when organisations seek different and comparative ethical solutions to a problem (the six pathways are summarised in the sketch after these cards).
32
Rule-based (deontology)
Decisions follow moral rules (e.g. laws, policies). This rule-based theory emphasises adherence to moral duties and obligations. In AI-driven HRM, this would involve algorithms that follow pre-defined ethical rules, such as fairness and respect for employee rights. P -> J -> D
33
Principles-based (utilitarianism)
Decisions maximise societal good using objective data. This principle-based theory advocates for actions that maximise overall happiness or well-being. AI-driven HRM systems based on it would aim to make decisions that benefit the greatest number of employees, even if some individuals are negatively affected. I -> J -> D
34
Relativism-based
Ethics depend on situational/cultural context. This theory suggests that ethical standards are relative to individual cultures or societies. In AI-driven HRM, this could lead to algorithms that adapt to different cultural norms, potentially resulting in inconsistent ethical standards across different contexts. I -> P -> D
35
Virtue ethics-based
Decisions reflect moral character (e.g. integrity, fairness). This theory focuses on developing good character traits. AI-driven HRM systems based on virtue ethics would prioritise decisions that reflect virtues such as fairness, honesty, and compassion. P -> I -> J -> D
36
Ethics of care-based (stakeholders)
Decisions prioritise stakeholder well-being (e.g. empathy, equity). This stakeholder-centred theory emphasises the importance of relationships and caring for the well-being of others. In AI-driven HRM, this would involve algorithms that consider the impact of decisions on all stakeholders, including employees, employers, and the broader community, prioritising the needs of the most vulnerable. I -> P -> J -> D
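The six pathway orderings above can be summarised compactly; only the orderings come from the cards, the Python representation itself is illustrative:

```python
# The six algorithmic pathways of the throughput model, as listed in the cards above.
PATHWAYS = {
    "preference-based (ethical egoism)":   "P -> D",
    "rule-based (deontology)":             "P -> J -> D",
    "principles-based (utilitarianism)":   "I -> J -> D",
    "relativism-based":                    "I -> P -> D",
    "virtue ethics-based":                 "P -> I -> J -> D",
    "ethics of care-based (stakeholders)": "I -> P -> J -> D",
}
for framework, pathway in PATHWAYS.items():
    print(f"{framework:38s} {pathway}")
```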
37
Type 1 errors
False positive. May fuel inefficiencies and increase transaction costs, and can stem from inadequate algorithms in an AI system
38
Type 2 error
False negative. May result in inappropriate individuals in the workforce receiving opportunities
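A worked example with hypothetical counts, to make the two error types concrete:

```python
# Type 1 and type 2 error rates from a hypothetical screening confusion matrix.
tp, fp, fn, tn = 40, 10, 5, 45   # made-up counts for 100 screened applicants

type1_rate = fp / (fp + tn)      # false-positive rate: 10 / 55 ~= 0.18
type2_rate = fn / (fn + tp)      # false-negative rate: 5 / 45 ~= 0.11
print(f"Type 1 (false positive) rate: {type1_rate:.2f}")
print(f"Type 2 (false negative) rate: {type2_rate:.2f}")
```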
39
Decision-making
Involves assessing action initiation and comparing expected versus actual results based on the decision maker's intent. Accountability for AI decisions is crucial, requiring understanding organisational hierarchies and the influence of organisational culture on ethical AI use
40
AI opacity
Because of this, HRM needs to mediate the level of human involvement to maintain accountability. The interaction between people and organisations is key, and the throughput (TP) model can analyse AI decision-making at the organisational level
41
Human involvement
Depends on understanding objectives within the organisational hierarchy and external environment
42
AI neural networks
Learn continuously
43
AI algorithmic ethics framework
Fundamental to minimising bias; human intervention combined with AI can foster unbiased HRM practices
44
Root-Cause Analysis (RCA)
Helps assess the level of human involvement in and detachment from AI decisions. For example, where software is adopted under time pressure without adequate staff training, unanticipated negative outcomes can arise, underlining the need for HRM to evaluate decision pathways holistically
45
Decision Dashboard
Ultimately enables a continuous, iterative process to refine AI integration in alignment with ethical considerations and organisational goals