4: AI & Liability Flashcards
(10 cards)
How can liability steer human behavior?
The risk of facing liability makes producers internalize potential harms and take more care when developing products
If the risk of liability is too high: no more products are developed
Difference between civil liability and criminal liability?
- Criminal liability: no damage is required (Ex. a speeding ticket) => concerns behavior that society considers so important that it wants to penalize it
- Civil liability: damage is required
In what can civil liability be divided?
- Contract law: risks are allocated in a contract
- Tort law
What is tort law?
Tort law: no contract, no allocated risks
Divide in:
1) Fault liability: the victim has to prove the fault that caused the damage
Ex. You could not get a loan from a bank and you suspect that the model they used discriminated against you; you then have to prove that the model is outdated or that it was built on biased data
2) Strict liability: e.g. product liability: the producer is liable for damage caused by a defective product => the producer does not have to commit a fault; the fact that the product is defective and causes damage is sufficient => fault NOT required
3) Vicarious liability: liable for the acts of someone or something else
Ex. Parents are liable for the acts of their children; owners for their dogs; employers for their employees => the employer does not have to commit a fault, but if their employees do, the employer is liable
=> fault liability requires a fault; tort law does not necessarily (strict and vicarious liability do not)
What are the general challenges in tort law and AI?
1) Who can be held liable? And who should be held liable?
* Cheapest cost avoider: the person who can avoid the accident at the least cost?
=> Who is that in an AI context? The producer, the developer, the operator (e.g. the driver of a self-driving car)?
* Or the one with the most financial means?
2) Tort law is national and AI is transnational: there are no harmonized EU liability rules
=> tort liability is different in every country: requirement of fault can be different
==> harmonization is challenging
3) Procedural elements: the burden of proof for damage involving AI
=> the victim has to show the fault of the AI provider/developer/…
=> the victim has to show a defect in the AI system
=> the victim has to show that the AI caused the damage
==> Possible solutions:
* Reverse burden of proof: the weaker party does not have to prove the fault; the stronger party has to prove that it did nothing wrong
=> there was a proposed AI Liability Directive, but it was recently withdrawn
* Strict liability: fault is no longer required
Ex. the provider (or producer) of a high-risk AI system is always liable -> this was not adopted, but it is a possible solution to lessen the burden of proof for the weaker party
* Legal personality for AI/robots: for highly automated AI systems, legal personality could be introduced
=> you could then blame the robot and hold it liable
Explain: The more autonomous a system becomes, the less liability there is for the user
Categories for human machine interaction & Fault-based liability:
1) No automation (control): the human has full control and decides everything => the human user remains responsible for everything, because it is the human who commits a fault and can be held liable
3) Full automation (no control): the human user has no control, so it is not possible to hold the user liable
2) Decision made by the system, but the human user is required to intervene (supervision): this is the interesting category for fault liability
=> QUESTIONS:
* When do you commit a fault (by failing to take over, or by taking over too soon/too late)?
* Who is the driver?
* Attribution to the driver: if a self-driving car runs a red light and the human did not intervene, the violation is committed by the car; the user merely failed to prevent the car from violating the legal rule of conduct
=> can you attribute the act of the machine to the user for something the user did not do?
=> criminal law: you cannot be held liable for something you did not do
* Force majeure: an unforeseen event
Ex. a heart attack behind the wheel
=> Could you see a fault of a self-driving car as force majeure? (the answer is probably no)
=> 2 requirements for a force majeure:
1) The event must be unforeseeable for the driver
2) And it must be impossible for the driver to intervene
* Negligent user: was the damage foreseeable? Autonomy increases risky behavior
=> proving negligence is challenging
===> category 2 (supervision) is the problematic one
What is the definition of a driver?
2 criteria to define a driver:
1) Puts the vehicle into motion
2) Is responsible for the movement (the motion) of the vehicle
The driver does not have to be in the car; the manufacturer can also claim responsibility
=> someone just has to claim that responsibility
What is a product in product liability?
All movables + electricity
=> Is software a product? Software embedded in a car: yes, it is a product, because the car is a movable
However, is software as such a product?
Debate:
* Yes, because of the “+ electricity” (so intangibles can be products)
* No, “+ electricity” is too narrow; software is not tangible
Now: software is a product if it is provided in a commercial context
Explain expectations in terms of liability
Presentation of the product, including commercials: if your advertising claims that self-driving cars will be safer and will decrease car accidents, safety expectations rise, and with them the risk of liability
=> from a liability perspective you would have to say that your system is not that good, but from a commercial perspective that is not the best thing to do
=> liability turns on expectations
What is the defense for the producers with regard to product liability?
Producers cannot be held liable for damage caused by a defective product when the product was not defective at the moment it was put into circulation
=> challenges for AI?
What about self-learning systems and updates?
==> Revised product liability rules: the economic operator is not exempted from liability where the defectiveness of the product is due to:
* a related service
* software, including updates
* a lack of the software updates needed to maintain safety
=> however, these elements must be within the manufacturer's control