week 1 + 2: intro + AI history Flashcards
(34 cards)
what is Artificial intelligence
- ‘The science and engineering of making intelligent machines’
- The study of how to make computers do things at which, at the moment, people are better
- Examples:
○ Siri
○ Recommender systems
○ Object recognition
○ Autonomous cars
AI summers and winters
This diagram shows the boom-and-bust cycle of AI (in notes)
AI Summers =
Times of hype, big investments, and progress
AI Winters =
Times of disappointment and funding cuts when AI fails to meet expectations
The message:
Right now (2025), AI is booming — but we don’t know what’s next:
It might become super powerful and everywhere (green line),
Or just be useful in some areas (orange line),
Or crash again in another AI winter (red line).
GOFAI vs. machine learning
This diagram shows the key difference between GOFAI and Machine Learning in a very simple way (in notes):
GOFAI (Good Old-Fashioned AI):
You give the input + rules (function) → it gives the output
The intelligence is hand-coded by humans
Example: If X, then do Y
🟦 follows fixed logic – no learning involved
🧠 Machine Learning:
You give the input + output examples → the system learns the function (rules) by itself
It figures out patterns from data instead of being told the rules
🟩 learns from examples, not explicit instructions
📌 Simple Summary:
GOFAI = Rules → Answers
ML = Answers → Learns the rules ✅
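A minimal Python sketch of this difference (a toy spam-filter example; the words and example data are made up for illustration):

# GOFAI: a human writes the rule (the function) by hand.
def gofai_is_spam(message):
    return "free money" in message.lower() or "winner" in message.lower()

# Machine learning: the system derives the rule from labelled examples.
examples = [("free money now", True), ("meeting at 10", False),
            ("you are a winner", True), ("lunch tomorrow?", False)]

def learn_spam_words(examples):
    # Collect which words appear in spam vs. non-spam messages.
    spam, ham = set(), set()
    for text, is_spam in examples:
        (spam if is_spam else ham).update(text.lower().split())
    # Words seen only in spam become the learned "rule".
    return spam - ham

learned_spam_words = learn_spam_words(examples)

def ml_is_spam(message):
    return any(word in learned_spam_words for word in message.lower().split())

print(gofai_is_spam("Claim your free money"))  # True: rule written by a human
print(ml_is_spam("winner winner"))             # True: rule inferred from the examples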
machine learning vs. deep learning
Machine Learning:
1. Input → (e.g. image of kitten)
2. Manual step: Humans extract features (e.g. ears, eyes, fur)
3. Then the model learns to classify (e.g. kitten, puppy, none)
🟠 Human does feature extraction, model does classification
🧠 Deep Learning:
1. Input → goes straight into a neural network
2. The system automatically does feature extraction + classification
3. No need for humans to pick features — it learns everything on its own
🟢 Fully automatic — learns directly from raw data
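A minimal sketch of the two pipelines (toy random data; assumes NumPy and scikit-learn are available, which the notes do not mention):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
raw_images = rng.random((100, 64))                               # toy "raw pixel" inputs
labels = (raw_images[:, :32].mean(axis=1) > 0.5).astype(int)     # toy kitten/puppy labels

# Classic ML: a human decides which features matter (here: mean brightness of two halves).
def extract_features(images):
    return np.column_stack([images[:, :32].mean(axis=1), images[:, 32:].mean(axis=1)])

classic_model = LogisticRegression().fit(extract_features(raw_images), labels)

# Deep learning: the network is fed the raw pixels and learns its own features.
deep_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(raw_images, labels)

print(classic_model.score(extract_features(raw_images), labels))
print(deep_model.score(raw_images, labels))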
Generative AI
What:
- Text (ChatGPT)
- Audio (Eurovision AI song contest)
- Image
- Code
📱 Example autocomplete on Phone:
1. Open your smartphone keyboard in a text field
2. Tap one of the auto-complete suggestions
3. Repeat this about 10 times
4. You’ll have a full sentence that sounds like you wrote it
🤖 How It Works:
Autocomplete checks how often certain words follow others in what you usually type — and generates predictions based on that.
💬 Simple Summary:
Your keyboard’s autocomplete is a basic example of generative AI. It predicts what you might say based on past patterns — just like ChatGPT, but simpler.
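A minimal Python sketch of this idea (a tiny bigram counter over a made-up typing history):

from collections import Counter, defaultdict

# Count which word tends to follow which in past messages (made-up history).
history = "see you tomorrow . see you later . talk to you tomorrow".split()
followers = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    followers[current][nxt] += 1

def suggest(word):
    # The "autocomplete suggestion": the most frequent follower of the last word.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Tap the suggestion repeatedly, as in the phone exercise above.
sentence = ["see"]
for _ in range(4):
    nxt = suggest(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))   # e.g. "see you tomorrow . see"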
differences between AI and human intelligence
- AI models don’t ‘understand’ the world as we do
- Mainly based on a lot of data and stats
Weaknesses/issues of modern AI
Risk of hallucination
○ The AI makes something up that sounds correct — but it’s false, misleading, or not based on reality.
Sometimes contains serious biases:
e.g. associating 'good scientist' with a white male
Not (yet) very good at logical reasoning
other ethical issues:
○ Huge carbon footprint
○ Monopoly of big tech companies
○ Lack of transparency on algorithms and data collection
○ Requires human workers
AI + law and history slides
Law & Computers – Two Research Directions
This slide explains two ways law and technology relate:
- Law about Computers:
How should the law deal with technology?
Computers raise new legal questions (e.g., privacy, cybercrime)
So:
Computers → create new legal challenges
- Computers for Law:
Can computers help the legal field?
Can we use AI to improve how the law works?
So:
Law → can be shaped using computers
AI & Law – The History
This slide gives a quick history of the AI & Law field:
Started in the 1970s (very early research)
Grew in the 1980s:
First big conference in 1981 (Italy)
Big conference in 1987 (Boston)
Key books and research from 1986–1987
Since 1988, yearly international conferences
By 1992, they launched the AI & Law journal
AI back in 1995/2006
- “When one of the authors once told a chemist he was working in the field of IT and Law, the first reaction was: ‘Is there any connection between the two at all?’”
○ This was back in 1995.
- “The influence of IT and in particular the Internet on law has become ever greater since, and also the use of IT and in particular the Internet by lawyers (the side of the IT and Law diptych this book focuses on) has increased significantly.
- Currently there is indeed a connection between IT and Law that is also clear to people outside the field, viz. IT plays a central role in law, legal practice, and legal research”
AI Developments - from the early days to today
- Legal expert systems (e.g. Susskind 1987)
○ Rule-based
- 1990s: some neural network experiments
- From mid 2000s shift in AI from rule-based to data driven
- From 2013 onwards major progress in (deep) learning algorithms
- Two important drivers:
○ Huge availability and generation of data
○ Ever increasing computing power
AI systems: rules and reasoning
reasoning
- Both deep learning models and rule-based systems have input data and output data
- Deep learning models do NOT reason
○ Deep learning models just correlate input data to output data
○ In reasoning input data is connected to output data based on arguments (premises and conclusions)
rules
- Rules are only guiding,
○ terms are open-textured,
○ there can be more answers,
○ and things can change.
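A minimal sketch of a rule that is "only guiding" (a default rule plus an exception; the contract facts here are invented for illustration):

# Default rule: IF a party breaches a contract THEN that party is liable.
# Exception: not liable if performance was impossible due to force majeure.
def liable_for_damages(facts):
    conclusion = facts.get("breach_of_contract", False)
    if facts.get("force_majeure", False):   # the exception defeats the default conclusion
        conclusion = False
    return conclusion

print(liable_for_damages({"breach_of_contract": True}))                          # True
print(liable_for_damages({"breach_of_contract": True, "force_majeure": True}))   # False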
AI Examples from 1940s, 1950s, 1960s, 1970s, 1980s, 1990s, 2000s, 2010s, 2020s
1940s : Lee Loevinger (1949), Jurimetrics
- Statistics and mathematics could enhance understanding of law and help in predicting legal outcomes
1950s:
○ 1956: a computer was better than humans (students)
- 'Retarded child' → 'exceptional child'
1960s:
○ Layman Allen, "Logic-Language-Law." In Computers and the Law: An Introductory Handbook, edited by R. P. Bigelow. New York: Commerce Clearing House, 1966.
○ Towards computerized legal texts and multiple interpretations
1980s:
○ Nuances and exceptions made rule-based modelling of law difficult
1990s:
○ CBR (case-based reasoning)
○ Rule-based, argumentation, dialogue models
○ Default reasoning (with exceptions)
2000s:
○ BEST project (2005-2011)
- BATNA Establishment using Semantic web Technologies
- Between 2004 and 2008, Lodder and Van Harmelen worked on the BEST project
□ Aiming to utilize semantic web technology to present tort cases to laymen
□ Project aimed to make legal info more accessible and understandable to non-experts by leveraging capabilities of semantic web
2010s:
○ more and more deep learning applications
○ Introduction of simple tools into legal practice
2020s:
○ LLMs, generative AI
○ They do something, and nobody understands exactly what or why
Law, IT, and AI – Early Relationship
○ 1980s and 1990s: many lawyers viewed IT as irrelevant/risky for legal practice
▪ Lawyers seen as cautious, traditional, and slow to adopt technology
▪ Typewriters and Dictaphones still commonly used well into 1980s
○ Scepticism persisted despite growing reliance on databases and word processors
○ IT was initially perceived as error-prone and less user-friendly than older technologies
Two Perspectives on IT and Law
- Information Technology Law (IT Law)
▪ Legal field addressing regulation of technology (e.g., data protection, cybercrime)
▪ Involves adapting legal systems to new digital phenomena
▪ Lawyers in this domain mainly focus on embedding technology in traditional legal frameworks
- IT for Lawyers (Legal Informatics / AI and Law)
▪ Technological approach: using IT tools to support legal work
▪ Includes document automation, legal databases, and expert systems
▪ Seen as interdisciplinary – requires cooperation between lawyers, computer scientists, and other specialists
▪ Faced slow adoption due to lawyers’ discomfort and time constraints
- Tools
○ Word Processing (long been used)
○ Document information systems
○ Flow management systems
○ Databases
○ Case law and legislation become easier to find; lawyers can find what they need
- Tools
→ this abundance of information and confusion highlights the enduring importance of lawyers
Evolution of AI in Legal Contexts
1950s–1970s: Information Retrieval
▪ AI applications began with using computers to index and search legal texts
▪ Legal domain helped pioneer text-based computer applications
1980s–1990s: Knowledge-Based Systems & Expert Systems
▪ Early legal expert systems modeled after rule-based systems
▪ Examples: decision aids for welfare and tax law
▪ Ambitions included possibly replacing judges with AI – met with resistance
▪ AI seen as overpromising and underdelivering
Basic AI Techniques in Legal Contexts
- Case-Based Reasoning (CBR)
▪ Uses precedent cases to solve new legal issues
□ Precedents are key factors
□ A new case is compared with precedents to find similarities or differences
▪ Limitations:
□ domain-specific
□ sometimes too simplistic
- KBS (Knowledge-Based Systems)
▪ Three parts:
□ 1. Knowledge base in which knowledge is represented as IF-THEN rules (production rules)
□ 2. Inference mechanism that makes it possible to reason with the rules
- Forward chaining: the system starts with the conditions of the rules; if the conditions of a rule are satisfied, the system can chain to its conclusion. Helpful if the outcome of a case is not known
- Backward chaining: helpful to underpin a possible outcome; the conclusions of the rules are looked at first
- Most systems combine forward and backward chaining
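A minimal forward-chaining sketch over IF-THEN rules (the welfare-benefit rules and fact names are invented for illustration):

# Each rule: (set of conditions, conclusion).
rules = [
    ({"low_income", "resident"}, "eligible_for_benefit"),
    ({"eligible_for_benefit", "application_filed"}, "benefit_granted"),
]

def forward_chain(facts, rules):
    # Keep firing rules whose conditions are satisfied until nothing new can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"low_income", "resident", "application_filed"}, rules))
# -> also contains "eligible_for_benefit" and "benefit_granted" (chained conclusions)

Backward chaining would run the other way: start from a possible conclusion such as "benefit_granted" and check whether its conditions can be established.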
- Neural Networks
▪ Learn from examples (training data) without predefined rules
▪ Perform well in specific domains, but lack transparency (can’t explain decisions)
Taxonomies of Legal Technology
○ Three-tier model:
▪ Office Automation – word processors, templates.
▪ Databases – legislation, case law.
▪ KBS – legal reasoning tools.
○ Another taxonomy based on data flow: (1) writing phase (2) case management systems (3) storing data (4) transporting data (internet, email) (5) applying data
○ A more modern view includes communication and collaboration tools (e.g., shared files, remote access).
Knowledge management: 3 taxonomies together
- IT pushes legal organisations to think more efficiently
For effective knowledge management, you need to find out what knowledge you need for what task.
Applications of AI in Legal Practice
○ Drafting systems to improve legal writing and searchability.
○ Legal expert systems to offer advice or predict outcomes.
○ Online legal resources empower clients but still need expert interpretation.
○ AI can standardize decisions, but raises concerns about flexibility and discretion.
Key Legal AI Challenges from Early Views
○ Difficulty modelling nuanced legal concepts and human judgment.
○ Resistance to automation in sensitive areas (e.g., criminal sentencing).
○ Importance of validation – ensuring AI tools do not distort legal interpretation or reduce justice.
Impact of AI and IT on Legal Education and Practice
○ Law schools now increasingly include IT and AI training.
○ Shift from fearing replacement to collaboration with tech tools.
○ Emphasis on hybrid expertise: tech-literate lawyers and legally informed developers.
new perspective on AI (reading)
Introduction: From Guillotine to Robot Judges
○ Historical example: The guillotine was once seen as a humane innovation
○ Today’s equivalent: AI technologies (e.g. self-driving cars, automated legal decisions)
→ Real-world case: a drunk driver asleep in a self-driving car raises legal and ethical issues about autonomy and responsibility
○ AI progress is impressive but raises concerns:
▪ privacy violations, fake news, biased decision-making
▪ France banned using data to evaluate judge behavior (Article 33, 2019), emphasizing ethical boundaries
Legal Technology Today
○ Widespread government use of legal tech
▪ (e.g. online access to laws, court rulings, fines)
▪ Dutch tax system: user-friendly, semi-automated — seen as a form of old-school AI
○ Legal tech categories (CodeX/Stanford):
▪ Marketplace
▪ Document automation
▪ Practice management
▪ Legal research
▪ Legal education
▪ Online dispute resolution
▪ E-discovery
▪ Analytics
▪ Compliance
Most legal tech isn’t “AI” in the modern machine-learning sense — it’s rule-based or knowledge-based
Why AI & Law is Difficult
○ Five metaphors for AI:
1. AI as mathematics (formal systems)
2. AI as technology (system design)
3. AI as psychology (modeling minds)
4. AI as sociology (multi-agent systems)
5. AI as law (hybrid critical discussion systems)
AI should mirror how law works: weighing arguments, justifying outcomes, embracing complexity
Key Areas in AI (with a Legal Lens)
- Reasoning
▪ Emphasis on argumentation and defeasibility (arguments can be overridden)
□ Example: Dispute over bike ownership resolved via argument logic (Mary vs. John)
▪ Tools like ArguMed model arguments, counterarguments, and legal reasoning structures
▪ Critique of Dung’s abstract argumentation: too formal, lacks practical relevance for lawyers
▪ Verheij’s case model formalism attempts to ground reasoning in legal practice
- Knowledge
▪ Legal reasoning depends on structured knowledge:
□ Argumentation schemes (e.g. witness testimony)
□ Legal norms (deontic logic, rights/duties)
□ Ontologies (structured categories of legal concepts)
▪ Commonsense knowledge is still a major challenge.
▪ Legal ontologies must capture subtle distinctions (e.g., types of juristic facts)
▪ Example: Contract signing process modelled from act → legal bond → obligation → duty
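A minimal sketch of how such an ontology-style chain could look in code (the class and attribute names are illustrative only, not a standard legal ontology):

from dataclasses import dataclass

@dataclass
class Act:                 # e.g. signing a contract
    description: str

@dataclass
class LegalBond:           # the legal relation created by the act
    created_by: Act

@dataclass
class Obligation:          # what the bond requires
    arises_from: LegalBond
    content: str

@dataclass
class Duty:                # the concrete duty owed by a party
    based_on: Obligation
    owed_by: str

signing = Act("buyer and seller sign a sales contract")
bond = LegalBond(created_by=signing)
obligation = Obligation(arises_from=bond, content="deliver the goods")
duty = Duty(based_on=obligation, owed_by="seller")
print(duty.owed_by, "must", duty.based_on.content)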
- Learning
▪ Machine learning lacks transparency and justification — crucial in legal contexts.
▪ Mock example: prediction machine always says “guilty” – accurate but unjustified.
▪ Real cases:
□ US Supreme Court: 70% prediction accuracy (Katz et al., 2017).
□ European Court of Human Rights: 79% accuracy using text (Aletras et al., 2016).
▪ Learning systems must integrate legal reasoning, not just outcomes.
▪ Hybrid systems combining rules, cases, and arguments are more promising.
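Looking back at the mock example above, a minimal sketch of the "always guilty" point (the 70/30 caseload split is a made-up number, not from the readings):

cases = ["guilty"] * 70 + ["not_guilty"] * 30    # hypothetical outcome distribution

def always_guilty(_case):
    return "guilty"                              # no reasoning, no justification

correct = sum(always_guilty(c) == c for c in cases)
print(f"accuracy: {correct / len(cases):.0%}")   # 70% "accurate", yet legally worthless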
- Language
▪ NLP tools like IBM Watson can identify legal terms (e.g., “eminent domain”).
▪ But success often comes from structured, human-curated sources (Wikipedia, procon.org).
▪ Argument mining is still in early stages; full understanding of legal text remains elusive.
▪ Hybrid systems that connect language to knowledge representation (e.g., Bayesian networks) are key to progress.