chapter 29: lock-based concurrent data structures Flashcards


Flashcards in this deck (14):

how to make a data structure thread safe?

add locks to it


which 4 data structures are covered?

counters, linked lists, queues, hash tables


how to make counter concurrent?

simply add a single lock: lock - action - unlock
actions: increment, decrement, get
acquire the lock whenever the value is changed; the update is the critical section
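A minimal sketch of this locked counter in C with pthreads, following the pattern described in the chapter (names like `counter_t` are illustrative):

```c
#include <pthread.h>

// Simple concurrent counter: one lock protects every operation.
typedef struct {
    int value;
    pthread_mutex_t lock;
} counter_t;

void counter_init(counter_t *c) {
    c->value = 0;
    pthread_mutex_init(&c->lock, NULL);
}

void counter_increment(counter_t *c) {
    pthread_mutex_lock(&c->lock);   // enter critical section
    c->value++;
    pthread_mutex_unlock(&c->lock); // leave critical section
}

void counter_decrement(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    c->value--;
    pthread_mutex_unlock(&c->lock);
}

int counter_get(counter_t *c) {
    pthread_mutex_lock(&c->lock);   // even reads take the lock
    int rc = c->value;
    pthread_mutex_unlock(&c->lock);
    return rc;
}
```

Every operation takes the single lock, so the structure is trivially correct but serializes all threads.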


what's wrong with the approach above?

Performance. having 2 threads each update the counter 1 million times concurrently leads to a MASSIVE slowdown compared to a single thread doing the same total work, and it gets worse as more threads are added.


what is perfect scaling?

threads complete just as quickly on multiple processors as the single thread does on one, even though the total work done is many times greater.


how can we create scalable counting?

the sloppy counter (an approximate counter).


how does the sloppy counter work, and how does the size of the threshold S affect scalability?

represent a single logical counter via numerous local physical counters (one per CPU core) plus a single global counter, with a lock for each local counter and a lock for the global counter.
how it works:
each thread updates its own CPU's local counter; because the local counters are separate, threads on different cores don't contend, so there is no slowdown.
local values are periodically transferred to the global counter by acquiring the global lock and adding the local value to it; the local counter is then reset to 0.
how often this transfer happens is determined by the threshold S:
smaller S: more global-lock traffic, so less scalable (approaches the single-lock counter)
bigger S: more scalable, but the global value lags further behind the true count
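The update path above can be sketched in C as follows (a sketch assuming a fixed `NUMCPUS` and that a thread's ID maps it to a CPU, as in the chapter's example; names are illustrative):

```c
#include <pthread.h>

#define NUMCPUS 4

// Sloppy (approximate) counter: per-CPU local counters plus one global.
typedef struct {
    int global;                      // global count
    pthread_mutex_t glock;           // global lock
    int local[NUMCPUS];              // per-CPU counts
    pthread_mutex_t llock[NUMCPUS];  // per-CPU locks
    int threshold;                   // the threshold S
} sloppy_t;

void sloppy_init(sloppy_t *c, int threshold) {
    c->threshold = threshold;
    c->global = 0;
    pthread_mutex_init(&c->glock, NULL);
    for (int i = 0; i < NUMCPUS; i++) {
        c->local[i] = 0;
        pthread_mutex_init(&c->llock[i], NULL);
    }
}

// Called by a thread; threadID % NUMCPUS picks its local counter.
void sloppy_update(sloppy_t *c, int threadID, int amt) {
    int cpu = threadID % NUMCPUS;
    pthread_mutex_lock(&c->llock[cpu]);
    c->local[cpu] += amt;
    if (c->local[cpu] >= c->threshold) { // time to transfer to global
        pthread_mutex_lock(&c->glock);
        c->global += c->local[cpu];
        pthread_mutex_unlock(&c->glock);
        c->local[cpu] = 0;               // reset local counter
    }
    pthread_mutex_unlock(&c->llock[cpu]);
}

// Returns the global value, which may lag the true count by up to
// (NUMCPUS - 1) * (threshold - 1).
int sloppy_get(sloppy_t *c) {
    pthread_mutex_lock(&c->glock);
    int val = c->global;
    pthread_mutex_unlock(&c->glock);
    return val;
}
```

Note how the global lock is only taken once every `threshold` updates per CPU, which is exactly why a bigger S scales better but lags more.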


how well does sloppy counter work in terms of scalability?

excellent: with a large enough S, the time with many threads is hardly higher than with one thread on one processor (near-perfect scaling).


key features of locking linked list?

lock around the critical section, and remember to unlock on the error path too (e.g., if malloc fails during insert).
lock is called once.
unlock is called in two places: once on the error path, once on the normal path.
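A sketch of the insert routine in C, showing the one lock call and the two unlock calls (error path and normal path); names are illustrative:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct node {
    int key;
    struct node *next;
} node_t;

typedef struct {
    node_t *head;
    pthread_mutex_t lock;   // one lock for the whole list
} list_t;

void list_init(list_t *L) {
    L->head = NULL;
    pthread_mutex_init(&L->lock, NULL);
}

// Returns 0 on success, -1 on malloc failure.
int list_insert(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);        // lock: called once
    node_t *n = malloc(sizeof(node_t));
    if (n == NULL) {
        pthread_mutex_unlock(&L->lock);  // unlock #1: the error path
        return -1;
    }
    n->key = key;
    n->next = L->head;
    L->head = n;
    pthread_mutex_unlock(&L->lock);      // unlock #2: the normal path
    return 0;
}

// Returns 1 if key is in the list, 0 otherwise.
int list_lookup(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    for (node_t *cur = L->head; cur != NULL; cur = cur->next) {
        if (cur->key == key) {
            pthread_mutex_unlock(&L->lock);
            return 1;
        }
    }
    pthread_mutex_unlock(&L->lock);
    return 0;
}
```

The chapter's refined version moves `malloc()` outside the lock, shrinking the critical section so a single unlock call suffices and the error path never holds the lock.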


how to scale linked list?

hand-over-hand locking


what is hand-over-hand locking?

add a lock per node of the list.
when traversing the list, first grab the next node's lock and only then release the current node's lock.
high degree of concurrency in theory, but low performance in practice:
acquiring and releasing a lock for every node is so costly that it is often slower than a single list lock.
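A sketch of a hand-over-hand lookup in C (illustrative names; the nodes are assumed to already form a valid list):

```c
#include <pthread.h>
#include <stddef.h>

// Each node carries its own lock.
typedef struct hnode {
    int key;
    struct hnode *next;
    pthread_mutex_t lock;
} hnode_t;

// Returns 1 if key is found, 0 otherwise.
int hoh_lookup(hnode_t *head, int key) {
    if (head == NULL)
        return 0;
    pthread_mutex_lock(&head->lock);         // grab the first node's lock
    hnode_t *cur = head;
    while (cur != NULL) {
        if (cur->key == key) {
            pthread_mutex_unlock(&cur->lock);
            return 1;
        }
        hnode_t *next = cur->next;
        if (next != NULL)
            pthread_mutex_lock(&next->lock); // grab next node's lock first...
        pthread_mutex_unlock(&cur->lock);    // ...then release the current one
        cur = next;
    }
    return 0;
}
```

Because a traversal holds at most two node locks at a time, other threads can be traversing other parts of the list concurrently; the cost is a lock/unlock pair per node visited.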


Michael and scott concurrent queue approach

2 locks:
a head lock for dequeue
a tail lock for enqueue
a dummy node separates the head and tail operations, so an enqueue and a dequeue can proceed concurrently.
note: a queue like this is often paired with condition variables, so threads can wait when the queue is empty (or full).
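A sketch of the two-lock queue in C, in the spirit of Michael and Scott's design as presented in the chapter (names are illustrative):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct qnode {
    int value;
    struct qnode *next;
} qnode_t;

typedef struct {
    qnode_t *head;              // dequeue end, guarded by head_lock
    qnode_t *tail;              // enqueue end, guarded by tail_lock
    pthread_mutex_t head_lock;
    pthread_mutex_t tail_lock;
} queue_t;

void queue_init(queue_t *q) {
    // The dummy node is what lets head and tail be locked independently.
    qnode_t *dummy = malloc(sizeof(qnode_t));
    dummy->next = NULL;
    q->head = q->tail = dummy;
    pthread_mutex_init(&q->head_lock, NULL);
    pthread_mutex_init(&q->tail_lock, NULL);
}

void queue_enqueue(queue_t *q, int value) {
    qnode_t *n = malloc(sizeof(qnode_t));
    n->value = value;
    n->next = NULL;
    pthread_mutex_lock(&q->tail_lock);   // only the tail lock is needed
    q->tail->next = n;
    q->tail = n;
    pthread_mutex_unlock(&q->tail_lock);
}

// Returns 0 and fills *value on success, -1 if the queue is empty.
int queue_dequeue(queue_t *q, int *value) {
    pthread_mutex_lock(&q->head_lock);   // only the head lock is needed
    qnode_t *old_dummy = q->head;
    qnode_t *new_head = old_dummy->next;
    if (new_head == NULL) {              // empty: only the dummy remains
        pthread_mutex_unlock(&q->head_lock);
        return -1;
    }
    *value = new_head->value;
    q->head = new_head;                  // new_head becomes the new dummy
    pthread_mutex_unlock(&q->head_lock);
    free(old_dummy);
    return 0;
}
```

Because enqueue touches only the tail and dequeue only the head (the dummy node keeps them from ever pointing at the same live element), the two operations never contend for the same lock.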


concurrent hashtable - simple hashtable that does not resize

use a lock per hash bucket (each bucket is a small concurrent linked list) instead of one lock for the whole table.
the concurrent hashtable scales magnificently!
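A sketch of the per-bucket locking scheme in C (a fixed number of buckets, no resizing, as in the chapter; names and the bucket count are illustrative):

```c
#include <pthread.h>
#include <stdlib.h>

#define BUCKETS 101

typedef struct hentry {
    int key;
    struct hentry *next;
} hentry_t;

typedef struct {
    hentry_t *head[BUCKETS];
    pthread_mutex_t lock[BUCKETS];  // one lock per bucket, not one big lock
} hash_t;

void hash_init(hash_t *h) {
    for (int i = 0; i < BUCKETS; i++) {
        h->head[i] = NULL;
        pthread_mutex_init(&h->lock[i], NULL);
    }
}

// Returns 0 on success, -1 on malloc failure.
int hash_insert(hash_t *h, int key) {
    int b = (unsigned)key % BUCKETS;
    hentry_t *e = malloc(sizeof(hentry_t));
    if (e == NULL)
        return -1;
    e->key = key;
    pthread_mutex_lock(&h->lock[b]);  // lock only this key's bucket
    e->next = h->head[b];
    h->head[b] = e;
    pthread_mutex_unlock(&h->lock[b]);
    return 0;
}

// Returns 1 if key is present, 0 otherwise.
int hash_lookup(hash_t *h, int key) {
    int b = (unsigned)key % BUCKETS;
    int found = 0;
    pthread_mutex_lock(&h->lock[b]);
    for (hentry_t *e = h->head[b]; e != NULL; e = e->next) {
        if (e->key == key) { found = 1; break; }
    }
    pthread_mutex_unlock(&h->lock[b]);
    return found;
}
```

Operations on keys that hash to different buckets never contend, which is why this structure scales so much better than a single-lock list.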


correctness first, performance after: build a correct locked structure first, then refine it for performance only if it is actually too slow.