2.xii - dependability and security Flashcards

1
Q

term proposed by Laprie (1995) to cover the related systems attributes of availability, reliability, safety, and security

A

dependability

2
Q

enumeration

4 reasons dependability of systems is now MORE important than their detailed functionality

A
  1. system failures affect a large number of people
  2. users often reject systems that are unreliable, unsafe, or insecure
  3. system failure costs may be enormous
  4. undependable systems may cause information loss
3
Q

If this functionality were left out of the system, only a small number of users would be affected. System failures, which affect the availability of a system, potentially affect all users of the system. Failure may mean that normal business is impossible.

A

system failures affect a large number of people

4
Q

If users find that a system is unreliable or insecure, they will refuse to use it. Furthermore, they may also refuse to buy or use other products from the same company that produced the unreliable system, because they believe that these products are also likely to be unreliable or insecure.

A

users often reject systems that are unreliable, unsafe, or insecure

5
Q

For some applications, such as a reactor control system or an aircraft navigation system, the cost of system failure is orders of magnitude greater than the cost of the control system.

A

system failure costs may be enormous

6
Q

Data is very expensive to collect and maintain; it is usually worth much more than the computer system on which it is processed. The cost of recovering lost or corrupt data is usually very high.

A

undependable systems may cause information loss

7
Q

is always a part of a broader system

A

software

8
Q

enumeration

3 considerations when designing a dependable system

A
  1. hardware failure
  2. software failure
  3. operational failure
9
Q

system hardware may fail because of mistakes in its design, because components fail as a result of manufacturing errors, or because the components have reached the end of their natural life.

A

hardware failure

10
Q

System software may fail because of mistakes in its specification, design, or implementation.

A

software failure

11
Q

human users may fail to use or operate the system correctly. As hardware and software have become more reliable, failures in operation are now, perhaps, the largest single cause of system failures.

A

operational failure

12
Q

Some classes of system are (1) where system failure may result in injury to people, damage to the environment, or extensive economic losses

A

critical systems

13
Q

a property of the system that reflects its trustworthiness

A

dependability of a computer system

14
Q

degree of confidence a user has that the system will operate as they expect

A

trustworthiness

15
Q

[true or false]

it is meaningful to express dependability numerically

A

false; we use relative terms such as ‘not dependable,’ ‘very dependable,’ and ‘ultra-dependable’ to reflect the degrees of trust that we might have in a system

16
Q

enumeration

4 principal dimensions to dependability

A
  1. availability
  2. reliability
  3. safety
  4. security
17
Q

probability that it will be up and running and able to deliver useful services to users at any given time.

A

availability

18
Q

probability, over a given period of time, that the system will correctly deliver services as expected by the user.

A

reliability

19
Q

a judgment of how likely it is that the system will cause damage to people or its environment.

A

safety

20
Q

judgment of how likely it is that the system can resist accidental or deliberate intrusions.

A

security

21
Q

[true or false]

these 4 principal dependability properties are not all applicable to all systems

A

true

22
Q

enumeration

4 other system properties as dependability properties

A
  1. repairability
  2. maintainability
  3. survivability
  4. error tolerance
23
Q

(1) in software is enhanced when the organization using the system has access to the source code and has the skills to make changes to it. Open source software makes this easier but the reuse of components can make it more difficult.

A

repairability

24
Q

the software can be adapted economically to cope with new requirements, with a low probability that making changes will introduce new errors into the system.

A

maintainability

25
Q

is the ability of a system to continue to deliver service whilst under attack and, potentially, whilst part of the system is disabled.

A

survivability

26
Q

enumeration

3 strategies used to enhance survivability

A
  1. resistance to attack
  2. attack recognition
  3. recovery from the damage caused by an attack
27
Q

This property can be considered as part of usability and reflects the extent to which the system has been designed so that user input errors are avoided and tolerated. When user errors occur, the system should, as far as possible, detect these errors and either fix them automatically or request the user to reinput their data.

A

error tolerance
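
The error-tolerance idea above can be sketched in a few lines: detect an invalid user input and ask again rather than failing. The function and its inputs are illustrative, not from any real system.

```python
# Illustrative sketch of error tolerance for user input (hypothetical code).
# Instead of failing on bad input, the system rejects it and considers the
# next value the user supplies.
def read_hour(inputs):
    """Return the first valid hour (0-23) from a sequence of raw inputs."""
    for raw in inputs:
        try:
            hour = int(raw)
        except ValueError:
            continue  # non-numeric input: reject it and ask again
        if 0 <= hour <= 23:
            return hour  # valid input accepted
        # out-of-range input: reject it and ask again
    return None  # no valid input was ever supplied
```

For example, `read_hour(["25", "abc", "14"])` rejects the first two values and returns 14.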

28
Q

enumeration

ensure these 4 to develop a DEPENDABLE software

A
  1. avoid the introduction of accidental errors into the system during software specification and development.
  2. design verification and validation processes that are effective in discovering residual errors that affect the dependability of the system.
  3. design protection mechanisms that guard against external attacks that can compromise the availability or security of the system.
  4. configure the deployed system and its supporting software correctly for its operating environment.
29
Q

[true or false]

assume that your software is not perfect and no software failures may occur

A

false; assume that your software is not perfect and THAT software failures may occur

30
Q

dependable systems have to include redundant code to help them monitor themselves, detect erroneous states, and recover from faults before failures occur.

A

need for fault tolerance

31
Q

are high for systems that must be ultra-dependable such as safety-critical control systems.

A

validation costs

32
Q

[true or false]

As testing is very expensive, this dramatically decreases the cost of high-dependability systems.

A

false; As testing is very expensive, this dramatically INCREASES the cost of high-dependability systems.

33
Q

probability of failure-free operation over a specified time, in a given environment, for a specific purpose.

A

reliability

34
Q

probability that a system, at a point in time, will be operational and able to deliver the requested services.

A

availability
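
In practice this probability is often estimated from mean time between failures (MTBF) and mean time to repair (MTTR). The steady-state formula below is a standard reliability-engineering estimate rather than something from this card set, and the figures are invented.

```python
# Steady-state availability estimate: the fraction of time the system is up.
# Availability = MTBF / (MTBF + MTTR). Example figures are made up.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system failing on average every 1000 h and taking 2 h to repair:
# availability(1000, 2) is about 0.998, i.e. roughly 17.5 h of downtime
# per year.
```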

35
Q

One of the practical problems in developing reliable systems

A

our intuitive notions of reliability and availability are sometimes broader than these limited definitions

36
Q

[true or false]

The standard definitions of availability and reliability do not take into account
the severity of failure or the consequences of unavailability

A

true; people often accept minor system failures but are very concerned about serious failures that have high consequential costs

37
Q

Human behavior that results in the introduction of faults into a system. For example, in the wilderness weather system, a programmer might decide that the way to compute the time for the next transmission is to add 1 hour to the current time. This works except when the transmission time is between 23.00 and midnight (midnight is 00.00 in the 24-hour clock).

A

human error or mistake

38
Q

A characteristic of a software system that can lead to a system error. The fault is the inclusion of the code to add 1 hour to the time of the last transmission, without a check if the time is greater than or equal to 23.00.

A

system fault

39
Q

An erroneous system state that can lead to system behavior that is unexpected by system users. The value of transmission time is set incorrectly (to 24.XX rather than 00.XX) when the faulty code is executed.

A

system error

40
Q

An event that occurs at some point in time when the system does not deliver a service as expected by its users. No weather data is transmitted because the time is invalid.

A

system failure
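
The fault/error/failure chain in cards 37-40 can be sketched in code. This is a hypothetical reconstruction of the transmission-time bug, not the actual wilderness weather system source.

```python
# Hypothetical reconstruction of the transmission-time fault (cards 37-40).

def next_transmission_faulty(hour):
    # Fault: adds 1 hour with no wrap-around, so 23.00 yields the invalid
    # hour 24 (an erroneous state) and no weather data is transmitted.
    return hour + 1

def next_transmission_fixed(hour):
    # Fix: modulo-24 arithmetic wraps 23.00 around to 00.00.
    return (hour + 1) % 24
```

`next_transmission_faulty(23)` produces the invalid value 24, while `next_transmission_fixed(23)` returns 0. For any hour before 23.00 the two functions agree, which is why the fault does not always lead to an error.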

41
Q

[true or false]

System faults do always result in system errors and system errors do not necessarily result in system failures

A

false; System faults DO NOT always result in system errors and system errors do not necessarily result in system failures

42
Q

enumeration

3 reasons why system faults =/= system errors and system errors =/= system failures

A
  1. not all code in a program is executed
  2. errors are transient
  3. system may include fault detection and protection mechanisms
43
Q

The code that includes a fault (e.g., the failure to initialize a variable) may never be executed because of the way that the software is used.

A

not all code in a program is executed

44
Q

A state variable may have an incorrect value caused by the execution of faulty code. However, before this is accessed and causes a system failure, some other system input may be processed that resets the state to a valid value.

A

errors are transient

45
Q

ensure that the erroneous behavior is discovered and corrected before the system services are affected

A

system may include fault detection and protection mechanisms

46
Q

enumeration

3 complementary approaches to improve reliability of a system

A
  1. fault avoidance
  2. fault detection and removal
  3. fault tolerance
47
Q

Development techniques are used that either minimize the possibility of human errors and/or that trap mistakes before they result in the introduction of system faults. Examples of such techniques include avoiding error-prone programming language constructs such as pointers and the use of static analysis to detect program anomalies.

A

fault avoidance

48
Q

The use of verification and validation techniques that increase the chances that faults will be detected and removed before the system is used. Systematic testing and debugging is an example of a fault-detection technique.

A

fault detection and removal

49
Q

These are techniques that ensure that faults in a system do not result in system errors or that system errors do not result in system failures. The incorporation of self-checking facilities in a system and the use of redundant system modules are examples of fault tolerance techniques.

A

fault tolerance

50
Q

are systems where it is essential that system operation is always safe

A

safety-critical systems

51
Q

simpler to implement and analyze
than software control

A

hardware control of safety-critical systems

52
Q

enumeration

2 classes of safety-critical software

A
  1. primary safety-critical software
  2. secondary safety-critical software
53
Q

This is software that is embedded as a controller in a system. Malfunctioning of such software can cause a hardware malfunction, which results in human injury or environmental damage. The insulin pump software, introduced in Chapter 1, is an example of a primary safety-critical system. System failure may lead to user injury.

A

primary safety-critical software

54
Q

This is software that can indirectly result in an injury. An example of such software is a computer-aided engineering design system whose malfunctioning might result in a design fault in the object being designed. This fault may cause injury to people if the designed system malfunctions. Another example of a secondary safety-critical system is the mental health care management system, MHC-PMS. Failure of this system, whereby an unstable patient may not be treated properly, could lead to that patient injuring themselves or others.

A

secondary safety-critical software

55
Q

enumeration

4 reasons software systems that are reliable =/= safe

A
  1. never 100% sure that a system is fault-free and fault-tolerant
  2. specification may be incomplete and not describe required behavior of the system
  3. hardware malfunctions may cause system to behave in unpredictable way
  4. system operators may generate inputs that are not individually incorrect but can lead to a system malfunction
56
Q

An unplanned event or sequence of events which results in human death or injury, damage to property, or to the environment. An overdose of insulin is an example of an accident.

A

accident (or mishap)

57
Q

A condition with the potential for causing or contributing to an accident. A failure of the sensor that measures blood glucose is an example of a hazard.

A

hazard

58
Q

A measure of the loss resulting from a mishap. Damage can range from many people being killed as a result of an accident to minor injury or property damage. Damage resulting from an overdose of insulin could be serious injury or the death of the user of the insulin pump.

A

damage

59
Q

An assessment of the worst possible damage that could result from a particular hazard. Hazard severity can range from catastrophic, where many people are killed, to minor, where only minor damage results. When an individual death is a possibility, a reasonable assessment of hazard severity is ‘very high.’

A

hazard severity

60
Q

The probability of the events occurring which create a hazard. Probability values tend to be arbitrary but range from ‘probable’ (say 1/100 chance of a hazard occurring) to ‘implausible’ (no conceivable situations are likely in which the hazard could occur). The probability of a sensor failure in the insulin pump that results in an overdose is probably low.

A

hazard probability

61
Q

This is a measure of the probability that the system will cause an accident. The risk is assessed by considering the hazard probability, the hazard severity, and the probability that the hazard will lead to an accident. The risk of an insulin overdose is probably medium to low.

A

risk

62
Q

enumeration

3 ways to assure safety through minimal to zero accident consequences

A
  1. hazard avoidance
  2. hazard detection and removal
  3. damage limitation
63
Q

The system is designed so that hazards are avoided. For example, a cutting system that requires an operator to use two hands to press separate buttons simultaneously avoids the hazard of the operator’s hands being in the blade pathway.

A

hazard avoidance

64
Q

The system is designed so that hazards are detected and removed before they result in an accident. For example, a chemical plant system may detect excessive pressure and open a relief valve to reduce these pressures before an explosion occurs.

A

hazard detection and removal

65
Q

The system may include protection features that minimize the damage that may result from an accident. For example, an aircraft engine normally includes automatic fire extinguishers. If a fire occurs, it can often be controlled before it poses a threat to the aircraft.

A

damage limitation

66
Q

An analysis of serious accidents by (1) suggests that they were almost all due to a combination of failures in different parts of a system

A

(Perrow, 1984)

67
Q

a system attribute that reflects the ability of the system to protect itself from external attacks, which may be accidental or deliberate

A

security

68
Q

Something of value which has to be protected. The asset may be the software system itself or data used by that system.

A

asset

69
Q

Possible loss or harm to a computing system. This can be loss or damage to data, or can be a loss of time and effort if recovery is necessary after a security breach.

A

exposure

70
Q

A weakness in a computer-based system that may be exploited to cause loss or harm.

A

vulnerability

71
Q

An exploitation of a system’s vulnerability. Generally, this is from outside the system and is a deliberate attempt to cause some damage.

A

attack

72
Q

Circumstances that have potential to cause loss or harm. You can think of these as a system vulnerability that is subjected to an attack.

A

threats

73
Q

A protective measure that reduces a system’s vulnerability. Encryption is an example of a control that reduces a vulnerability of a weak access control system.

A

control

74
Q

example: give the term

The records of each patient that is receiving or has received treatment.

A

asset

75
Q

example: give the term

Potential financial loss from future patients who do not seek treatment because they do not trust the clinic to maintain their data. Financial loss from legal action by the sports star. Loss of reputation.

A

exposure

76
Q

example: give the term

A weak password system which makes it easy for users to set guessable passwords. User ids that are the same as names.

A

vulnerability

77
Q

example: give the term

An impersonation of an authorized user.

A

attack

78
Q

example: give the term

An unauthorized user will gain access to the system by guessing the credentials (login name and password) of an authorized user.

A

threat

79
Q

example: give the term

A password checking system that disallows user passwords that are proper names or words that are normally included in a dictionary.

A

control
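
The password-checking control above can be sketched as follows. The word list here is a tiny stand-in; a real control would check candidates against a full dictionary and a list of proper names.

```python
# Sketch of the password-checking control: disallow passwords that are
# proper names or ordinary dictionary words. COMMON_WORDS is a made-up
# stand-in for a real dictionary file.
COMMON_WORDS = {"password", "dragon", "letmein", "alice"}

def password_allowed(candidate, word_list=COMMON_WORDS):
    # Reject dictionary words regardless of capitalization.
    return candidate.lower() not in word_list
```

`password_allowed("Dragon")` is rejected (returns False), while a non-dictionary string such as `"x9#Lq2!v"` is allowed.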

80
Q

enumeration

3 types of security threats

A
  1. Threats to the confidentiality of the system and its data
  2. Threats to the integrity of the system and its data
  3. Threats to the availability of the system and its data
81
Q

These can disclose information to people or programs that are not authorized to have access to that information.

A

Threats to the confidentiality of the system and its data

82
Q

These threats can damage or corrupt the software or its data.

A

Threats to the integrity of the system and its data

83
Q

These threats can restrict access to the software or its data for authorized users.

A

Threats to the availability of the system and its data

84
Q

enumeration

3 controls to put in place to enhance system security

A
  1. Vulnerability avoidance
  2. Attack detection and neutralization
  3. Exposure limitation and recovery
85
Q

Controls that are intended to ensure that attacks are unsuccessful. The strategy here is to design the system so that security problems are avoided. For example, sensitive military systems are not connected to public networks so that external access is impossible. You should also think of encryption as a control based on avoidance. Any unauthorized access to encrypted data means that it cannot be read by the attacker. In practice, it is very expensive and time consuming to crack strong encryption.

A

Vulnerability avoidance

86
Q

Controls that are intended to detect and repel attacks. These controls involve including functionality in a system that monitors its operation and checks for unusual patterns of activity. If these are detected, then action may be taken, such as shutting down parts of the system, restricting access to certain users, etc.

A

Attack detection and neutralization

87
Q

Controls that support recovery from problems. These can range from automated backup strategies and information ‘mirroring’ to insurance policies that cover the costs associated with a successful attack on the system.

A

Exposure limitation and recovery

88
Q

can lead to large economic losses, serious information loss, physical damage, or threats to human life.

A

Failure of critical computer systems

89
Q

a system property that reflects the user’s degree of trust in the system. The most important dimensions of dependability are availability, reliability, safety, and security.

A

dependability of a computer system

90
Q

related to the probability of an error occurring in operational use. A program may contain known faults but may still be experienced as reliable by its users. They may never use features of the system that are affected by the faults.

A

perceived reliability

91
Q

system attribute that reflects the system’s ability to operate, normally or abnormally, without injury to people or damage to the environment.

A

safety of a system

92
Q

[true or false]

If a system is unreliable, it is easier to ensure system safety or security, as they may be compromised by system failures.

A

false; If a system is unreliable, it is DIFFICULT to ensure system safety or security, as they may be compromised by system failures.