W1: Faraj et al. (2018) Flashcards

(31 cards)

1
Q

Learning algorithms

A

AI systems that mimic human decision-making. They generate responses, classifications, or predictions resembling human knowledge work. They represent a transformative shift in work and organising. They enable unprecedented performance in tasks requiring human judgement. Their rapid deployment raises critical societal and organisational questions, e.g. about decision-making authority and ethical implications. The term refers to an emergent family of technologies that build on machine learning, computation, and statistical techniques, and rely on large data sets

2
Q

Article’s purpose

A

Explores how learning algorithms are reshaping work, organisations, and society, highlighting their unique challenges and ethical implications

3
Q

Traditional algorithms

A

Have fixed instructions specified in advance by their programmers, in contrast to learning algorithms, whose behaviour is derived from training data
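
As a minimal, illustrative contrast (not from the article; the loan-approval data and names below are hypothetical), the first function has its decision logic fixed in advance by a programmer, while the second derives numeric weights from historical examples:

# Traditional algorithm: the decision rule is written out explicitly.
def approve_loan_fixed(income, debt):
    return income > 2 * debt  # hand-chosen rule, never changes

# Learning algorithm: a tiny perceptron whose "logic" is a set of weights
# adjusted to fit whatever pattern the historical examples contain.
def train_perceptron(examples, epochs=20, lr=0.1):
    w_income, w_debt, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for income, debt, approved in examples:
            predicted = (w_income * income + w_debt * debt + bias) > 0
            error = (1 if approved else -1) - (1 if predicted else -1)
            # Weights shift toward the training data, not toward any stated rule.
            w_income += lr * error * income
            w_debt += lr * error * debt
            bias += lr * error
    return w_income, w_debt, bias

history = [(3.0, 1.0, True), (1.0, 2.0, False), (4.0, 1.5, True), (0.5, 1.0, False)]
print("Fixed rule:", approve_loan_fixed(3.0, 1.0))
print("Learned weights:", train_perceptron(history))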

4
Q

Learning algorithms debate

A

Centres on the extent to which algorithms can take over many aspects of human work. What will matter is the capacity of contemporary workers to adapt their ways of knowing and working and to embrace novel technologies for their augmentative effects

5
Q

Black-boxed performance

A

Learning algorithms are often opaque, inaccessible, and unmodifiable, with decision-making processes that cannot easily be explained, redressed, or adjusted, even by their developers

6
Q

Proprietary secrecy

A

Algorithms like Google’s search or Amazon’s recommendations are protected as intellectual property. They are the “secret sauce” that is crucial to business success

7
Q

Complexity

A

Machine learning models refine weights and connections dynamically, making their logic difficult to trace. Understanding them is limited to a select professional class of knowledge workers with highly specialised skills and the technical training needed to comprehend code of immense size and logical complexity

8
Q

Comprehensive digitisation

A

The performance and effectiveness of learning algorithms are fundamentally dependent on access to large, diverse, accurate datasets. A critical aspect is the integration of comprehensive data extracted from multiple contexts and sources of varying nature. This includes digitised markers of the physical environment, e.g. GPS location data, outputs from IoT devices, records of economic transactions, traces of human interactions, and mobile device activity logs; this is called “the digitisation of just about everything”

9
Q

“The digitisation of just about everything”

A

Enables algorithms to work with an unprecedentedly rich set of inputs to generate more accurate predictions, e.g. Facebook combines user activity with third-party data for targeted advertising, or Amazon uses digitised social and professional data to refine recommendations

10
Q

Bias amplification

A

Algorithms may replicate societal biases (e.g. racist autocomplete suggestions), owing to the learning algorithm's sensitivity to historical search activities
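
A minimal sketch of the mechanism, using hypothetical hiring records rather than the search example: a model that learns purely from historical frequencies reproduces whatever skew those records contain.

from collections import Counter

# Hypothetical historical records, skewed against group "B" for reasons
# unrelated to ability; the labels encode past human decisions, not merit.
history = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20
    + [("B", "hired")] * 30 + [("B", "rejected")] * 70
)

def train_rate_model(records):
    # Learn P(hired | group) purely from historical frequencies.
    pair_counts = Counter(records)
    group_counts = Counter(group for group, _ in records)
    return {g: pair_counts[(g, "hired")] / group_counts[g] for g in group_counts}

model = train_rate_model(history)
print(model)  # {'A': 0.8, 'B': 0.3} -- the historical skew becomes the prediction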

11
Q

Fake news propagation

A

The logic of sharing information that aligns with users’ expressed opinions and preferences (inferred from their tracked behaviour on social media) can narrow intellectual horizons and propagate the phenomenon of “fake news”

12
Q

Anticipatory quantification

A

Refers to how learning algorithms predict outcomes by analysing vast amounts of digital trace data. As data collection becomes cheaper and more widespread, nearly every aspect of life can be digitally measured and represented, with an emphasis on correlations. However, these predictive models are reductionist: they simplify people into measurable traits without accounting for their growth, intentions, or context
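
A toy illustration (hypothetical traces and weights, not from the article) of the reductionism the card describes: a person is collapsed into a vector of measurable digital markers and scored with correlation-style weights.

# A person, represented only by measurable digital traces.
person = {
    "logins_per_week": 4,
    "late_night_activity": 12,
    "messages_sent": 85,
}

# The weights stand in for correlations mined from past trace data; they
# encode what co-occurred historically, not intentions, growth, or context.
weights = {"logins_per_week": 0.5, "late_night_activity": -0.2, "messages_sent": 0.01}

risk_score = sum(weights[k] * person[k] for k in weights)
print(round(risk_score, 2))  # the person, summarised as a single number: 0.45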

13
Q

Reductionism (stereotypical actions)

A

People are sorted into categories based on surface traits (e.g. parole decisions based on crime-likelihood models rather than actual behaviour)

14
Q

Hidden politics

A

Algorithms reflect designers’ values, beliefs, and dataset biases. The decisions that go into algorithm design, like which criteria to include or how to weigh different factors, are often made informally and individually rather than through collective or ethical review. This means that algorithms can silently influence power and visibility, highlighting some people, ideas, or events over others. Beyond designers’ intentions, algorithms may have political consequences

15
Q

Politics

A

Politics also play out in the classification, selection, and pre-processing of the data used to train algorithms

16
Q

Design choices

A

Search engines prioritise certain content, e.g. Google favouring its own services

17
Q

Data selection

A

Credit-scoring algorithms may discriminate if trained on biased historical data; fake-news algorithms, for example, struggle to assess veracity objectively, highlighting the non-neutrality of data

18
Q

Automation of tacit tasks

A

Algorithms now perform tasks like image recognition with human-level accuracy. This encroachment into traditionally human domains challenges the value of professional expertise

19
Q

Occupational resistance

A

Professionals may resist algorithmic encroachment (e.g. salespeople relying on relational knowledge vs. predictive analytics) and may struggle to adapt if they cannot understand or manage algorithmic outputs

20
Q

Training challenges

A

Reduced hands-on experience threatens skill development. If routine tasks are outsourced to machines, trainees may lose opportunities to learn through hands-on experiences

21
Q

Accountability gaps

A

If an algorithm makes a wrong diagnosis or legal prediction, who is responsible? Despite their capabilities, algorithms cannot fully replicate human judgement, especially in nuanced or exceptional cases. Human oversight remains essential to ensure ethical, legal, and contextual appropriateness

22
Q

Role evolution

A

Jobs may shift toward oversight. Professionals then may reallocate their attention toward tasks requiring deeper interpretation, creativity, or empathy

23
Q

Identity redefinitions

A

Professionals adapt, e.g. librarians transitioning from search providers to “knowledge connectors”

24
Q

Algorithmic management

A

Learning algorithms are also creating new systems of organisational control, particularly through anticipatory quantification. In many workplaces, employee performance is now tracked and evaluated using algorithm-generated data

25
Q

Gaming strategies

A

Employees manipulate systems, e.g. journalists write clickbait to "reset" performance scores

26
Q

Participation limits

A

Workers may self-censor to avoid unfavourable algorithmic assessments

27
Q

Task allocation

A

Learning algorithms are transforming how work is coordinated, particularly in digital and distributed environments

28
Q

Team assembly

A

Tools like Slack use AI to determine which messages are most important, connect people with relevant expertise, and track employee participation in key discussions. This automation reduces coordination costs, accelerates communication, and improves task allocation. However, such tightly structured coordination can also limit creativity and innovation. Algorithms that prioritise relevance may narrow workers' exposure to diverse perspectives, reducing opportunities for serendipitous discoveries or knowledge recombination

29
Q

Decision authority

A

When should algorithms override human judgement?

30
Q

Morality

A

Ethical frameworks are needed for technologies like self-driving cars

31
Q

Digital iron cage

A

Ubiquitous quantification may constrain autonomy, akin to Weberian bureaucracy