C191-Terms-Chapter-3 Flashcards

1
Q

process

A

An instance of a program being executed by an OS. Ex: When a user opens a new application like a web browser or text editor, the OS creates a new process.

2
Q

process control block (PCB)

A

The OS keeps track of each process using a process control block (PCB): A data structure that holds information for a process, including the current instruction address, the execution stack, the set of resources used by the process, and the program being executed.

The PCB is the concrete representation of a process.

3
Q

new state

A

A newly created process is placed into the new state before the process is allowed to compete for the CPU.

Ex: The OS may want to regulate the number of processes competing for the CPU.

4
Q

terminated state

A

A process is placed into the terminated state when execution can no longer continue but before the PCB is deleted.

Ex: The OS may want to examine the final state of a process that committed a fatal error.

5
Q

suspended state

A

A process may be placed into the suspended state even though the CPU and all resources are available.

Ex: The OS may want to stop a process to allow debugging or to regulate performance.

6
Q

context switch

A

The CPU is always running one process at a time. Multiple processes can share a single CPU by taking turns.

A context switch is the transfer of control from one process to another. Each time a process stops running to allow another one to resume, the OS must save all information about the stopped process. This information is restored when the process gets to run again.

The information that needs to be saved is the CPU state, which consists of all values held in any CPU registers and hardware flags.

One of the registers is the program counter (PC), which determines the next instruction to execute after the process is restored.

7
Q

physical CPU

A

A real hardware instance of a CPU

8
Q

virtual CPU

A

A CPU that the process assumes is available only to itself.

9
Q

Structuring an application as processes allows independence from the:

A

Number of CPUs: A physical CPU is a real hardware instance of a CPU. Multiple processes may run on one physical CPU using a technique known as time sharing. Each process is given a virtual CPU: A CPU that the process assumes is available only to itself.

Type of CPU: A virtual CPU can be just an abstraction of the physical CPU or it can be software that emulates the behavior of a different CPU.

10
Q

Benefits of virtual CPUs

A

Independence from the number and type of CPUs provides several crucial benefits:

Multi-user support: Multiple users, each represented by one or more separate processes, can share the same machine without being aware of each other.

Multi-CPU transparency: An app written to use multiple CPUs will run correctly, though perhaps more slowly, if only one CPU is available.

Portability: An application compiled for one type of CPU can run on a different CPU without being modified or even recompiled.

11
Q

Using multiple cooperating processes instead of one has several important advantages:

A

The interfaces between the processes are simple and easy to understand.

Each process can be designed and studied in isolation.

The implementation reduces idle time by overlapping the execution of multiple processes.

Different processes can utilize separate CPUs, if available, thus speeding up the execution.

12
Q

Two ways exist to organize all PCBs:

A

An array of structures. The PCBs are marked as free or allocated, which eliminates the need for any dynamic memory management. The main drawback is a lot of wasted memory space to maintain a sufficient number of PCB slots.

An array of pointers to dynamically allocated PCBs. The pointer array wastes little space and can be made much larger than the array of structures. The drawback is the overhead of dynamic memory management to allocate each new PCB and to free the memory when the process terminates.
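The two organizations can be sketched in a few lines (a toy illustration; the slot fields, MAX_PROCS, and function names are invented for the example):

```python
# Sketch of two ways an OS might organize its PCBs.
# Field names and MAX_PROCS are illustrative, not from any real kernel.

MAX_PROCS = 8

# Option 1: a static array of PCB structures, each marked free/allocated.
# No dynamic memory management, but every slot exists up front even when unused.
static_table = [{"free": True, "pid": None, "state": None} for _ in range(MAX_PROCS)]

def create_static(pid):
    for slot in static_table:
        if slot["free"]:                     # find an unused slot
            slot.update(free=False, pid=pid, state="new")
            return slot
    raise RuntimeError("PCB table full")

# Option 2: an array of pointers (None = empty); PCBs allocated on demand.
# Little wasted space, but each create/destroy pays allocation overhead.
pointer_table = [None] * MAX_PROCS

def create_dynamic(pid):
    for i, entry in enumerate(pointer_table):
        if entry is None:
            pointer_table[i] = {"pid": pid, "state": "new"}  # allocate a new PCB
            return pointer_table[i]
    raise RuntimeError("PCB table full")
```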

13
Q

waiting list

A

A waiting list is associated with every resource and contains all processes blocked on that resource because the resource is not available.

14
Q

ready list (RL)

A

A list containing all processes that are in the ready state and thus are able to run on the CPU.

The RL also includes the currently running process. The RL maintains all processes sorted by their importance, which is expressed by an integer value called priority.

The RL can be a linked list where the priority of a process is the current position in the list. The RL can also maintain processes in different lists sorted by priorities.

15
Q

process creation hierarchy

A

A graphical representation of the dynamically changing parent-child relationships among all processes. The process creation hierarchy changes each time a process is created or destroyed.

16
Q

create process function

A

Allocates a new PCB, fills the PCB entries with initial values, and links the PCB to other data structures in the system.

17
Q

destroy process function

A

Destroys a process by freeing the PCB data structure and removing any references to the PCB from the system.

Depending on the OS, the destroy function may also destroy all of the process’s descendants to prevent having “orphan” processes in the system.

The destruction of the entire hierarchy of descendants is accomplished by calling destroy(c) recursively on all children c of p.

18
Q

The destroy() function performs the following steps:

A
  1. After calling destroy(c) on all child processes, remove p from either the RL or from the waiting list of a resource.
  2. Remove p from the list of the calling process’s children.
  3. Release all memory and other resources, close all files, and deallocate the PCB.
  4. Call the scheduler to choose the next process to run. The call must be made outside of the destroy(p) function to ensure that the scheduler executes only once, after the entire hierarchy of processes has been destroyed.
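The recursive destruction of a subtree can be sketched as follows (a toy model; the pcbs dictionary and function names are invented, and the scheduler is a placeholder):

```python
# Toy process hierarchy: destroy(p) removes p and all descendants,
# and the scheduler runs only once, after the whole subtree is gone.

pcbs = {}  # pid -> {"parent": pid or None, "children": [pids]}

def create(pid, parent=None):
    pcbs[pid] = {"parent": parent, "children": []}
    if parent is not None:
        pcbs[parent]["children"].append(pid)

def destroy(pid):
    for child in list(pcbs[pid]["children"]):   # destroy all descendants first
        destroy(child)
    parent = pcbs[pid]["parent"]
    if parent is not None and parent in pcbs:   # unlink from the parent's child list
        pcbs[parent]["children"].remove(pid)
    del pcbs[pid]                               # free the PCB itself

def scheduler():
    pass  # placeholder: would choose the next process to run

def kill_subtree(pid):
    destroy(pid)
    scheduler()   # called once, outside destroy(), after the hierarchy is destroyed
```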
19
Q

resource control block (RCB)

A

A data structure that represents a resource. Its fields include resource_description, state, and waiting_list.

20
Q

request resource function

A

Allocates a resource r to a process p or blocks p if r is currently allocated to another process.

If r is currently free, the state of r is changed to allocated and a pointer to r is inserted into the list of other_resources of p.

If r is not free, the calling process p is blocked. p’s PCB is moved from the RL to the waiting_list of r. Since the calling process, p, is now blocked, the scheduler must be called to select another process to run.

21
Q

release resource function

A

Allocates the resource r to the next process on r’s waiting list. If the waiting list is empty, r is marked as free.

If r’s waiting_list has no processes then the state of r is changed to free and p continues executing.

If the waiting list of r is not empty then the process q at the head of the list is allocated r, the state of q is changed to ready, and q is moved from the waiting_list to RL.

Since a new process (q) is now on RL, the scheduler must be called to decide which process (p or q) should continue to run.
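The request and release logic can be sketched for a single-unit resource, with simple Python containers standing in for the real PCB queues (all names are illustrative):

```python
# Toy request/release on one resource. The ready list and RCB are
# deliberately minimal; real PCB and RCB structures hold much more.
from collections import deque

ready_list = deque(["p1"])                     # head = currently running process
rcb = {"state": "free", "holder": None, "waiting_list": deque()}

def request(proc):
    if rcb["state"] == "free":
        rcb["state"] = "allocated"             # r is free: allocate it to proc
        rcb["holder"] = proc
    else:
        ready_list.remove(proc)                # r is busy: block proc by moving
        rcb["waiting_list"].append(proc)       # its PCB from the RL to r's list

def release():
    if rcb["waiting_list"]:
        q = rcb["waiting_list"].popleft()      # allocate r to the head waiter
        rcb["holder"] = q
        ready_list.append(q)                   # q becomes ready again
    else:
        rcb["state"] = "free"                  # nobody waiting: mark r free
```

In the real OS, both paths end by calling the scheduler; that call is omitted here to keep the sketch focused on the list manipulation.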

22
Q

scheduler function

A

Determines which process should run next and starts the process. The scheduler function is called at the end of each of the process and resource management functions: create, destroy, request, and release.

Assuming the RL is implemented as a priority list, the scheduler() function performs the following tasks:

  1. Find the highest priority process q on the RL.
  2. Perform a context switch from p to q if either of the following conditions is met:

The priority of the running process p is less than the priority of process q. This condition can be true when the scheduler() is called from create() or release(), because in those cases a newly ready process q could have a higher priority than p.

The state of the running process p is blocked. This is true when the scheduler() is called from request() and the resource is unavailable.
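The two context-switch conditions can be condensed into a small decision function (a toy model; the priority dictionary and the function signature are invented for the example):

```python
# Sketch of the scheduler's decision: switch to the highest-priority ready
# process q if it outranks the running process p, or if p is blocked.

def scheduler(running, ready_list, priority):
    """Return the process that should run next.

    ready_list: all ready processes (includes `running` unless it blocked).
    priority:   process -> integer priority (higher = more important).
    """
    q = max(ready_list, key=lambda p: priority[p])   # highest-priority ready process
    blocked = running not in ready_list              # e.g. after an unsatisfied request()
    if blocked or priority[q] > priority[running]:
        return q                                     # context switch to q
    return running                                   # otherwise p keeps the CPU
```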

23
Q

thread

A

A thread is an instance of executing a portion of a program within a process without incurring the overhead of creating and managing separate PCBs.

24
Q

thread control block (TCB)

A

A data structure that holds a separate copy of the dynamically changing information necessary for a thread to execute independently.

The replication of only the bare minimum of information in each TCB, while sharing the same code, global data, resources, and open files, is what makes threads much more efficient to manage than processes.

25
Q

independent process

A

If it does not share data with any other processes executing in the system.

26
Q

cooperating process

A

If it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.

27
Q

interprocess communication (IPC)

A

A mechanism that allows cooperating processes to exchange data, that is, to send data to and receive data from each other. There are two fundamental models of interprocess communication: shared memory and message passing.

28
Q

shared memory

A

In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
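Python's standard library exposes this model via multiprocessing.shared_memory. In the sketch below, one process attaches to the same named region twice, standing in for two cooperating processes (the message content is arbitrary):

```python
# Shared-memory IPC sketch: a "producer" creates a named region and writes
# into it; a "consumer" attaches by name and reads the same bytes.
from multiprocessing import shared_memory

# Producer side: create the shared region and write data into it.
region = shared_memory.SharedMemory(create=True, size=16)
region.buf[:5] = b"hello"

# Consumer side: attach to the same region by name and read the data.
view = shared_memory.SharedMemory(name=region.name)
message = bytes(view.buf[:5])

view.close()
region.close()
region.unlink()   # free the region once both sides are done
```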

29
Q

message passing

A

In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

30
Q

browser process

A

Responsible for managing the user interface as well as disk and network I/O. A new browser process is created when Chrome is started. Only one browser process is created.

31
Q

Renderer processes

A

Contain logic for rendering web pages. Thus, they contain the logic for handling HTML, JavaScript, images, and so forth.

As a general rule, a new renderer process is created for each website opened in a new tab, so several renderer processes may be active at the same time.

32
Q

plug-in process

A

Created for each type of plug-in (such as Flash or QuickTime) in use. Plug-in processes contain the code for the plug-in as well as additional code that enables the plug-in to communicate with associated renderer processes and the browser process.

33
Q

sandbox

A

A contained environment (e.g., a virtual machine).

Renderer processes run in a sandbox, which means that access to disk and network I/O is restricted, minimizing the effects of any security exploits.

34
Q

unbounded buffer

A

Places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.

35
Q

bounded buffer

A

Assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
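The bounded-buffer rules can be demonstrated with the standard library's queue.Queue, whose put() and get() already block when the buffer is full or empty (the buffer size and item count are arbitrary):

```python
# Producer-consumer on a bounded buffer. put() blocks when the buffer is
# full; get() blocks when it is empty, so the two threads pace each other.
import queue
import threading

buf = queue.Queue(maxsize=2)        # fixed-size (bounded) buffer
results = []

def producer():
    for item in range(5):
        buf.put(item)               # waits while the buffer is full

def consumer():
    for _ in range(5):
        results.append(buf.get())   # waits while the buffer is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```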

36
Q

producer

A

A process role in which the process produces information that is consumed by a consumer process.

37
Q

consumer

A

A process role in which the process consumes information produced by a producer process.

38
Q

communication link

A

If processes P and Q want to communicate, they must send messages to and receive messages from each other: a communication link must exist between them.

Here are several methods for logically implementing a link and the send()/receive() operations:

Direct or indirect communication
Synchronous or asynchronous communication
Automatic or explicit buffering

39
Q

direct communication

A

Each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:

send(P, message)—Send a message to process P.
receive(Q, message)—Receive a message from process Q.

40
Q

symmetry in addressing

A

Both the sender process and the receiver process must name the other to communicate.

41
Q

asymmetry in addressing

A

Here, only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send() and receive() primitives are defined as follows:

send(P, message)—Send a message to process P.
receive(id, message)—Receive a message from any process. The variable id is set to the name of the process with which communication has taken place.

42
Q

The disadvantage of both symmetric and asymmetric addressing

A

the limited modularity of the resulting process definitions. Changing the identifier of a process may necessitate examining all other process definitions.

All references to the old identifier must be found so that they can be modified to the new identifier.

43
Q

indirect communication

A

The messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification.

A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are defined as follows:

send(A, message)—Send a message to mailbox A.
receive(A, message)—Receive a message from mailbox A.
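A mailbox can be sketched with a stdlib queue: the sender and receiver name the mailbox, not each other (the mailbox name "A" and the helper functions are invented for the example):

```python
# Indirect communication sketch: messages go through a named mailbox,
# so the communicating processes never name one another directly.
import queue

mailboxes = {"A": queue.Queue()}    # mailbox A, shared by the communicating parties

def send(box, message):
    mailboxes[box].put(message)     # send(A, message)

def receive(box):
    return mailboxes[box].get()     # receive(A, message)
```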

44
Q

Blocking send

A

The sending process is blocked until the message is received by the receiving process or by the mailbox.

45
Q

Nonblocking send

A

The sending process sends the message and resumes operation.

46
Q

Blocking receive

A

The receiver blocks until a message is available.

47
Q

Nonblocking receive

A

The receiver retrieves either a valid message or a null.

48
Q

rendezvous

A

When both send() and receive() are blocking, we have a rendezvous between the sender and the receiver.

49
Q

Zero capacity

A

The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.

50
Q

Bounded capacity

A

The queue has a finite length of n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue (either the message is copied or a pointer to the message is kept), and the sender can continue execution without waiting.

The link’s capacity is finite, however. If the link is full, the sender must block until space is available in the queue.

51
Q

Unbounded capacity

A

The queue’s length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.

52
Q

blocking

A

In interprocess communication, a mode of communication in which the sending process is blocked until the message is received by the receiving process or by a mailbox and the receiver blocks until a message is available. In I/O, a request that does not return until the I/O completes.

53
Q

nonblocking

A

A type of I/O request that allows the initiating thread to continue while the I/O operation executes.

In interprocess communication, a communication mode in which the sending process sends the message and resumes operation and the receiver process retrieves either a valid message or a null if no message is available.

In I/O, a request that returns whatever data is currently available, even if it is less than requested.

54
Q

synchronous

A

In interprocess communication, a mode of communication in which the sending process is blocked until the message is received by the receiving process or by a mailbox and the receiver blocks until a message is available. In I/O, a request that does not return until the I/O completes.

55
Q

asynchronous

A

In I/O, a request that executes while the caller continues execution.

56
Q

advanced local procedure call

A

The message-passing facility in Windows is called the advanced local procedure call (ALPC) facility.

It is used for communication between two processes on the same machine. It is similar to the standard remote procedure call (RPC) mechanism that is widely used, but it is optimized for and specific to Windows.

Like Mach, Windows uses a port object to establish and maintain a connection between two processes. Windows uses two types of ports: connection ports and communication ports.

57
Q

When an ALPC channel is created, one of three message-passing techniques is chosen:

A

For small messages (up to 256 bytes), the port’s message queue is used as intermediate storage, and the messages are copied from one process to the other.

Larger messages must be passed through a section object, which is a region of shared memory associated with the channel.

When the amount of data is too large to fit into a section object, an API is available that allows server processes to read and write directly into the address space of a client.

58
Q

pipe

A

A pipe acts as a conduit allowing two processes to communicate. Pipes were one of the first IPC mechanisms in early UNIX systems. They typically provide one of the simpler ways for processes to communicate with one another, although they also have some limitations.

59
Q

Ordinary pipes

A

Ordinary pipes allow two processes to communicate in standard producer-consumer fashion: the producer writes to one end of the pipe (the write end) and the consumer reads from the other end (the read end).

As a result, ordinary pipes are unidirectional, allowing only one-way communication. If two-way communication is required, two pipes must be used, with each pipe sending data in a different direction.

An ordinary pipe cannot be accessed from outside the process that created it. Typically, a parent process creates a pipe and uses it to communicate with a child process that it creates via fork().
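On a POSIX system, the parent-child pattern looks like the following sketch using os.pipe() and os.fork() (POSIX-only; the message content is arbitrary):

```python
# Ordinary pipe sketch: the forked child writes to the write end and the
# parent reads from the read end, one-way, as in classic UNIX.
import os

def pipe_demo():
    """Parent reads one message written by a forked child (POSIX only)."""
    r, w = os.pipe()                 # r = read end, w = write end
    pid = os.fork()
    if pid == 0:                     # child: the producer
        os.close(r)                  # close the unused read end
        os.write(w, b"from child")
        os.close(w)
        os._exit(0)
    os.close(w)                      # parent: the consumer; close the write end
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)               # reap the child
    return data
```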

60
Q

anonymous pipes

A

Ordinary pipes on Windows systems are termed anonymous pipes, and they behave similarly to their UNIX counterparts: they are unidirectional and employ parent-child relationships between the communicating processes.

In addition, reading and writing to the pipe can be accomplished with the ordinary ReadFile() and WriteFile() functions.

The Windows API for creating pipes is the CreatePipe() function, which is passed four parameters. The parameters provide separate handles for (1) reading and (2) writing to the pipe, as well as (3) an instance of the STARTUPINFO structure, which is used to specify that the child process is to inherit the handles of the pipe. Furthermore, (4) the size of the pipe (in bytes) may be specified.

61
Q

Named pipes

A

Named pipes provide a much more powerful communication tool. Communication can be bidirectional, and no parent-child relationship is required.

Once a named pipe is established, several processes can use it for communication. In fact, in a typical scenario, a named pipe has several writers.

Additionally, named pipes continue to exist after communicating processes have finished. Both UNIX and Windows systems support named pipes, although the details of implementation differ greatly.

62
Q

Named pipes on UNIX

A

Named pipes are referred to as FIFOs in UNIX systems. Once created, they appear as typical files in the file system.

A FIFO is created with the mkfifo() system call and manipulated with the ordinary open(), read(), write(), and close() system calls. It will continue to exist until it is explicitly deleted from the file system.

Although FIFOs allow bidirectional communication, only half-duplex transmission is permitted. If data must travel in both directions, two FIFOs are typically used. Additionally, the communicating processes must reside on the same machine.

63
Q

Named pipes on Windows

A

Named pipes on Windows systems provide a richer communication mechanism than their UNIX counterparts.

Full-duplex communication is allowed, and the communicating processes may reside on either the same or different machines.

Additionally, only byte-oriented data may be transmitted across a UNIX FIFO, whereas Windows systems allow either byte- or message-oriented data. Named pipes are created with the CreateNamedPipe() function, and a client can connect to a named pipe using ConnectNamedPipe().

Communication over the named pipe can be accomplished using the ReadFile() and WriteFile() functions.

64
Q

message

A

In networking, a communication, contained in one or more packets, that includes source and destination information to allow correct delivery.

In message-passing communications, a packet of information with metadata about its sender and receiver.

65
Q

port

A

A communication address; a system may have one IP address for network connections but many ports, each for a separate communication.

In computer I/O, a connection point for devices to attach to computers. In software development, to move code from its current platform to another platform (e.g., between operating systems or hardware systems). In the Mach OS, a mailbox for communication.

66
Q

bootstrap port

A

In Mach message passing, a predefined port that allows a task to register a port it has created.

67
Q

bootstrap server

A

In Mach message passing, a system-wide service for registering ports.

68
Q

connection port

A

In Windows OS, a communications port used to maintain connection between two processes, published by a server process.

69
Q

communication port

A

In Windows OS, a port used to send messages between two processes.

70
Q

section object

A

The Windows data structure that is used to implement shared memory.

71
Q

multicore

A

Multiple processing cores within the same CPU chip or within a single system.

72
Q

concurrency

A

A concurrent system supports more than one task by allowing all the tasks to make progress.

73
Q

parallelism

A

A parallel system can perform more than one task simultaneously. Thus, it is possible to have concurrency without parallelism.

Before the advent of multiprocessor and multicore architectures, most computer systems had only a single processor, and CPU schedulers were designed to provide the illusion of parallelism by rapidly switching between processes, thereby allowing each process to make progress. Such processes were running concurrently, but not in parallel.

74
Q

Identifying tasks

A

A challenge for multicore systems.

This involves examining applications to find areas that can be divided into separate, concurrent tasks. Ideally, tasks are independent of one another and thus can run in parallel on individual cores.

75
Q

Balance

A

A challenge for multicore systems.

While identifying tasks that can run in parallel, programmers must also ensure that the tasks perform equal work of equal value.

In some instances, a certain task may not contribute as much value to the overall process as other tasks. Using a separate execution core to run that task may not be worth the cost.

76
Q

Data splitting

A

A challenge for multicore systems.

Just as applications are divided into separate tasks, the data accessed and manipulated by the tasks must be divided to run on separate cores.

77
Q

Data dependency

A

A challenge for multicore systems.

The data accessed by the tasks must be examined for dependencies between two or more tasks. When one task depends on data from another, programmers must ensure that the execution of the tasks is synchronized to accommodate the data dependency.

78
Q

Testing and debugging

A

A challenge for multicore systems.

When a program is running in parallel on multiple cores, many different execution paths are possible. Testing and debugging such concurrent programs is inherently more difficult than testing and debugging single-threaded applications.

79
Q

Data parallelism

A

Focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core.

80
Q

Task parallelism

A

Involves distributing not data but tasks (threads) across multiple computing cores. Each thread is performing a unique operation.

Different threads may be operating on the same data, or they may be operating on different data.

An example of task parallelism might involve two threads, each performing a unique statistical operation on the array of elements.

The threads again are operating in parallel on separate computing cores, but each is performing a unique operation.
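Both forms can be sketched with a thread pool: the same operation applied to different subsets of the data (data parallelism) versus different operations applied to the same data (task parallelism). The data values are arbitrary:

```python
# Data parallelism vs. task parallelism, sketched with a thread pool.
from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

with ThreadPoolExecutor() as pool:
    # Data parallelism: the SAME operation (sum) on different halves of the data.
    halves = [data[:4], data[4:]]
    partial_sums = list(pool.map(sum, halves))

    # Task parallelism: DIFFERENT operations (sum and max) on the same data.
    total = pool.submit(sum, data)
    largest = pool.submit(max, data)
    task_results = (total.result(), largest.result())
```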

81
Q

user threads

A

User threads are supported above the kernel and are managed without kernel support.

82
Q

kernel threads

A

Kernel threads are supported and managed directly by the operating system. Virtually all contemporary operating systems—including Windows, Linux, and macOS—support kernel threads.

83
Q

Green threads

A

A thread library available for Solaris systems and adopted in early versions of Java. It used the many-to-one model.

84
Q

two-level model

A

One variation on the many-to-many model still multiplexes many user-level threads to a smaller or equal number of kernel threads but also allows a user-level thread to be bound to a kernel thread.

85
Q

Many-to-many model

A

Multiplexes many user-level threads to a smaller or equal number of kernel threads.

The number of kernel threads may be specific to either a particular application or a particular machine (an application may be allocated more kernel threads on a system with eight processing cores than on a system with four cores).

86
Q

Many-to-one model

A

Maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient.

However, the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems.

Very few systems continue to use the model because of its inability to take advantage of multiple processing cores, which have now become standard on most computer systems.

87
Q

One-to-one model

A

Maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. It also allows multiple threads to run in parallel on multiprocessors.

The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread, and a large number of kernel threads may burden the performance of a system.

Linux and Windows OSs use this model.

88
Q

signal

A

Used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled.

All signals, whether synchronous or asynchronous, follow the same pattern:

A signal is generated by the occurrence of a particular event.

The signal is delivered to a process.

Once delivered, the signal must be handled.

89
Q

A signal may be handled by one of two possible handlers:

A
  1. A default signal handler.
  2. A user-defined signal handler.
90
Q

default signal handler

A

The signal handler, run by the kernel, that receives a signal unless a user-defined signal handler is provided by the process.

91
Q

user-defined signal handler

A

The signal handler created by a process to provide non-default signal handling.

The default signal handler can be overridden by a user-defined signal handler that is called to handle the signal.

Signals are handled in different ways. Some signals may be ignored, while others (for example, illegal memory access) are handled by terminating the program.
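Installing a user-defined handler can be shown with Python's signal module on a POSIX system (the handler here simply records that the signal arrived; SIGUSR1 is an arbitrary choice):

```python
# Override the default disposition of SIGUSR1 with a user-defined handler,
# then deliver the signal to this very process (POSIX only).
import os
import signal

delivered = []

def handler(signum, frame):
    delivered.append(signum)            # user-defined handling: record the event

signal.signal(signal.SIGUSR1, handler)  # install, replacing the default handler
os.kill(os.getpid(), signal.SIGUSR1)    # generate and deliver the signal
```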

92
Q

asynchronous procedure calls (APCs)

A

The APC facility enables a user thread to specify a function that is to be called when the user thread receives a notification of a particular event.

As indicated by its name, an APC is roughly equivalent to an asynchronous signal in UNIX.

However, whereas UNIX must contend with how to deal with signals in a multithreaded environment, the APC facility is more straightforward, since an APC is delivered to a particular thread rather than a process.

93
Q

Examples of synchronous signals:

A

include illegal memory access and division by 0. If a running program performs either of these actions, a signal is generated.

Synchronous signals are delivered to the same process that performed the operation that caused the signal.

94
Q

Examples of asynchronous signals:

A

When a signal is generated by an event external to a running process, that process receives the signal asynchronously.

Examples of such signals include terminating a process with specific keystrokes (such as <control><C>) and having a timer expire. Typically, an asynchronous signal is sent to another process.

95
Q

Thread cancellation

A

Involves terminating a thread before it has completed. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be canceled.

96
Q

target thread

A

A thread that is to be canceled. Cancellation of a target thread may occur in two different scenarios:

Asynchronous cancellation. One thread immediately terminates the target thread.

Deferred cancellation. The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.

97
Q

Asynchronous cancellation

A

One thread immediately terminates the target thread.

98
Q

Deferred cancellation

A

The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
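Deferred cancellation can be sketched with a flag that the target thread polls: the canceller only requests termination, and the target exits in an orderly fashion at its next check (here cancellation is requested before the thread starts, so the outcome is deterministic):

```python
# Deferred cancellation sketch: the worker polls a flag at its own
# "cancellation points" and performs cleanup before terminating.
import threading

cancel_requested = threading.Event()   # the "please terminate" flag
log = []

def worker():
    for step in range(1000):
        if cancel_requested.is_set():  # cancellation point: poll the flag
            break                      # exit in an orderly fashion
        log.append(step)
    log.append("cleanup done")         # release resources before terminating

cancel_requested.set()                 # cancellation is requested up front,
t = threading.Thread(target=worker)    # so the worker stops at its first check
t.start(); t.join()
```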

99
Q

cancellation point

A

The default cancellation type is deferred cancellation. However, cancellation occurs only when a thread reaches a cancellation point.

Most of the blocking system calls in the POSIX and standard C library are defined as cancellation points, and these are listed when invoking the command man pthreads on a Linux system.

For example, the read() system call is a cancellation point that allows cancelling a thread that is blocked while awaiting input from read().

100
Q

cleanup handler

A

A Pthreads function that is invoked when a thread is canceled. It allows any resources the thread has acquired to be released before the thread is terminated.

101
Q

thread-local storage (or TLS)

A

Threads belonging to a process share the data of the process. Indeed, this data sharing provides one of the benefits of multithreaded programming.

In some circumstances, each thread might need its own copy of certain data, and such data is called thread-local storage.
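Python exposes thread-local storage as threading.local(): each thread below gets its own private copy of local.value while still sharing the rest of the process's data (the seen dictionary):

```python
# Thread-local storage sketch: `local.value` is per-thread, while the
# `seen` dictionary is ordinary shared process data.
import threading

local = threading.local()
seen = {}

def run(name):
    local.value = name          # private to this thread
    seen[name] = local.value    # record what this thread observed

threads = [threading.Thread(target=run, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```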

102
Q

lightweight process

A

An intermediate data structure between user threads and kernel threads.

To the user-thread library, the LWP appears to be a virtual processor on which the application can schedule a user thread to run.

Each LWP is attached to a kernel thread, and it is kernel threads that the operating system schedules to run on physical processors.

If a kernel thread blocks (such as while waiting for an I/O operation to complete), the LWP blocks as well. Up the chain, the user-level thread attached to the LWP also blocks.

103
Q

scheduler activation

A

It works as follows: The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor. Furthermore, the kernel must inform an application about certain events.

104
Q

upcall

A

Used by the kernel to inform an application about certain events.

105
Q

upcall handler

A

Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual processor.

106
Q

signal

A

In UNIX and other operating systems, a means used to notify a process that an event has occurred.