slides29 Flashcards
(7 cards)
Resource Acquisition Is Initialisation (RAII)
It is a good way of ensuring that resource leaks cannot occur, with no burden on the programmer to remember the deallocation
(from Wikipedia)
Typical uses
The RAII design is often used for controlling mutex locks in multi-threaded applications. In that use, the object releases the lock when destroyed. Without RAII in this scenario the potential for deadlock would be high and the logic to lock the mutex would be far from the logic to unlock it. With RAII, the code that locks the mutex essentially includes the logic that the lock will be released when execution leaves the scope of the RAII object.
Another typical example is interacting with files: We could have an object that represents a file that is open for writing, wherein the file is opened in the constructor and closed when execution leaves the object’s scope. In both cases, RAII ensures only that the resource in question is released appropriately; care must still be taken to maintain exception safety. If the code modifying the data structure or file is not exception-safe, the mutex could be unlocked or the file closed with the data structure or file corrupted.
Ownership of dynamically allocated objects (memory allocated with new in C++) can also be controlled with RAII, such that the object is released when the RAII (stack-based) object is destroyed. For this purpose, the C++11 standard library defines the smart pointer classes std::unique_ptr for single-owned objects and std::shared_ptr for objects with shared ownership. Similar classes are also available through std::auto_ptr in C++98, and boost::shared_ptr in the Boost libraries.
Exercise. The Rust compiler guarantees that a mutable (writable) memory location can never be accessed by more than one thread at a time. How might the compiler use this knowledge to optimise operations on that memory location?
Streams and Iteration in a Single Assignment Language
implicit parallelism
It distinguishes carefully between loops whose body computations are independent (thus parallelisable; they call these for-loops) and those where the computations are not independent (they call these iterations)
A spreadsheet is a simple example of the concept: change the value in a cell and this triggers various (re)computations
Strand
declarative
There is a single, shared global namespace and threads communicate by writing and reading variables
If a thread tries to read a variable before it is set, that thread will block
Thus we get both message passing and synchronisation, and so variables are also a bit like single-use channels
If one expression does not depend on another, the two can be run in parallel
Again allowing automatic parallelism
parallel rules
If a rule is selected, then new processes evaluate the body
Swift Parallel +
Grand Central Dispatch (GCD) is a queue-based API that executes closures on worker pools in first-in, first-out (FIFO) order
Apart from the main queue, the system provides several global concurrent queues. When sending tasks to the global concurrent queues, you specify a Quality of Service (QoS) class property.
In the past, GCD provided high, default, low, and background global concurrent queues for prioritising work.
Rust Parallel +
Rayon is a data-parallelism library for Rust. It is extremely lightweight and makes it easy to convert a sequential computation into a parallel one. It also guarantees data-race freedom.
PROMISE: You may have heard that parallel execution can produce all kinds of crazy bugs. Well, rest easy. Rayon’s APIs all guarantee data-race freedom, which generally rules out most parallel bugs (though not all). In other words, if your code compiles, it typically does the same thing it did before.
Go Parallel +
In Go, concurrency is achieved by using goroutines. Goroutines are functions or methods that run concurrently with other functions and methods. They are similar to threads in Java, but lightweight, and the cost of creating them is very low.
ADVANTAGES:
- Goroutines have a faster startup time than threads.
- Goroutines come with built-in primitives, called channels, for communicating safely between themselves (covered later).
- Goroutines are extremely cheap compared to threads. They start at only a few KB of stack, and the stack can grow and shrink according to the needs of the application, whereas a thread's stack size must be specified up front and is fixed.
Channels provide a way for goroutines to communicate with one another and synchronize their execution.
Buffered channels can be created by passing an additional capacity parameter to the make function which specifies the size of the buffer.
If writes to a channel exceed its capacity, further writes block until one of the goroutines reads from the channel; once a value has been read, a blocked write can proceed and place its new value in the buffer.