14 Concurrency Control Techniques Flashcards
(22 cards)
What is a lock in a database?
A label placed on a data item X to show it's in use, written as Lock(X).
What is the role of the lock manager?
Tracks all locks, controls locking and unlocking, and ensures safe, consistent access.
What are the two states in binary locking?
- Locked (1): item is in use
- Unlocked (0): item is free
What happens if a transaction tries to access a locked item?
It must wait in a queue until the lock is released.
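A minimal sketch of how a binary lock with a wait queue could look in code (single-process Python; the `BinaryLock` name and the threading-based waiting are illustrative, not any particular DBMS's implementation):

```python
import threading
from collections import deque

class BinaryLock:
    """One bit per item plus a FIFO queue of waiters."""
    def __init__(self):
        self._locked = False            # 0 = unlocked (free), 1 = locked (in use)
        self._mutex = threading.Lock()  # protects the fields below
        self._waiters = deque()         # blocked requests, in arrival order

    def lock(self):
        with self._mutex:
            if not self._locked:
                self._locked = True     # item was free: grant immediately
                return
            event = threading.Event()
            self._waiters.append(event) # item in use: wait in the queue
        event.wait()                    # sleeps until unlock() wakes us

    def unlock(self):
        with self._mutex:
            if self._waiters:
                self._waiters.popleft().set()  # pass the lock to the next waiter
            else:
                self._locked = False           # no one waiting: item is free
```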
What is a shared lock (read-lock)?
Allows multiple transactions to read X, but no one can write X.
What is an exclusive lock (write-lock)?
Only one transaction can read/write X, others must wait.
What are the system rules for multiple-mode locks?
- Lock before read/write
- Unlock after done
- No double locking
- Only unlock if you hold the lock
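The rules above can be sketched as a small bookkeeping class (hypothetical `SharedExclusiveLock`, single-threaded; a request that must wait simply returns False, and lock upgrades are covered on a later card):

```python
class SharedExclusiveLock:
    """Read/write lock state for one item X. Returns False
    when the requesting transaction must wait."""
    def __init__(self):
        self.readers = set()   # transactions holding a shared lock on X
        self.writer = None     # transaction holding the exclusive lock, if any

    def read_lock(self, txn):
        if txn in self.readers or txn == self.writer:
            raise RuntimeError("no double locking")   # rule: no double locking
        if self.writer is not None:
            return False                # someone is writing X: must wait
        self.readers.add(txn)           # many transactions may read X together
        return True

    def write_lock(self, txn):
        if txn in self.readers or txn == self.writer:
            raise RuntimeError("no double locking")
        if self.writer is not None or self.readers:
            return False                # X is in use in any mode: must wait
        self.writer = txn               # exactly one writer at a time
        return True

    def unlock(self, txn):
        if txn == self.writer:
            self.writer = None
        elif txn in self.readers:
            self.readers.remove(txn)
        else:                           # rule: only unlock if you hold the lock
            raise RuntimeError("transaction does not hold this lock")
```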
What is a deadlock?
When two transactions wait on each other’s locks, causing a freeze.
Can locking alone guarantee serializability?
Not always. A transaction can release a lock and then lock another item, which lets non-serializable interleavings through; a locking protocol such as 2PL is needed.
What is Basic 2-Phase Locking (2PL)?
- Growing phase: acquire locks only
- Shrinking phase: release locks only
- Guarantees serializability, but does not prevent deadlocks
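A minimal sketch of enforcing the two phases, assuming a `manager` object that exposes `lock(item)` and `unlock(item)` (for example, a table of the per-item locks sketched earlier):

```python
class TwoPhaseTxn:
    """Wraps a lock manager so a transaction obeys basic 2PL: after its
    first unlock (shrinking phase) it may never acquire another lock."""
    def __init__(self, manager):
        self.manager = manager   # assumed to expose lock(item) / unlock(item)
        self.held = set()
        self.shrinking = False   # flips to True on the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock requested in shrinking phase")
        self.manager.lock(item)  # growing phase: acquiring only
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True    # now in the shrinking phase: releasing only
        self.manager.unlock(item)
        self.held.discard(item)
```

Strict 2PL is then the special case where `unlock` is only ever called at commit/abort.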
What is a lock upgrade and downgrade?
- Upgrade: shared → exclusive
- Downgrade: exclusive → shared
What is Conservative (Static) 2PL?
Locks all the items it needs before it starts executing (waiting if any is unavailable); this prevents deadlocks, though not cascading rollbacks.
What is Strict 2PL?
Holds all its exclusive (write) locks until commit/abort; prevents cascading rollbacks but not deadlocks.
What is serializability in concurrency control?
Ensures the final DB state is the same as if the transactions had run one after another (serially).
What is cascade rollback prevention?
Stops one transaction's failure from forcing others to roll back because they read its uncommitted data.
What are the three deadlock control techniques?
- Avoidance (pre-check)
- Detection (periodically check a wait-for graph for cycles; sketched below)
- Prevention (lock all items at start)
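Detection is typically a cycle check on a wait-for graph, where an edge T1 → T2 means T1 is waiting for a lock T2 holds; a self-contained sketch:

```python
def has_deadlock(wait_for):
    """Cycle check on a wait-for graph given as {txn: set of txns it waits on}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(t):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:   # back edge: a cycle, i.e. deadlock
                return True
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color.get(t, WHITE) == WHITE and visit(t) for t in wait_for)

# T1 waits for T2 while T2 waits for T1: the classic two-transaction freeze.
assert has_deadlock({"T1": {"T2"}, "T2": {"T1"}})
assert not has_deadlock({"T1": {"T2"}, "T2": set()})
```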
What is a livelock?
A transaction is stuck retrying without progressing while others run.
How to fix livelock?
Use fair policies such as first-come, first-served queuing, or raise a waiting transaction's priority the longer it waits (aging).
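One way aging could look (hypothetical scheme and numbers): each waiter's effective priority grows with how long it has waited, so even a low-priority transaction is eventually granted the lock:

```python
def next_to_grant(waiters, now):
    """Pick which waiting request to grant next. Effective priority grows
    with time spent waiting (aging), so no transaction starves forever.
    Each waiter is (arrival_time, base_priority, txn_id)."""
    def effective(w):
        arrival, base, _ = w
        return base + (now - arrival)   # the longer the wait, the higher it ranks
    return max(waiters, key=effective)[2]

# T1 is low priority but has waited since t=0; by t=10 it outranks T2.
print(next_to_grant([(0, 1, "T1"), (9, 5, "T2")], now=10))  # -> T1
```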
What is granularity of data items?
The size of data items (row, field, block, file, DB) a transaction interacts with.
What is the tradeoff between small and large granularity?
- Small = more concurrency, more overhead
- Large = less concurrency, less overhead
When is small granularity better?
When transactions use few, isolated records.
When is large granularity better?
When many records from the same area are accessed.