Final Review Flashcards

1
Q

Entry Criteria for Performance Testing

A
  • Quantitative and measurable performance requirements.
  • Reasonably stable system.
  • Test environment representative of customer site.
  • Tools:
    a. Load Generator
    b. Resource Monitor
2
Q

Load May Reflect

A
  • Different Volumes of Activity
  • Different Mixes of Activity

3
Q

Why should load be varied for relevant use cases and response time tracked?

A

It assists in verifying performance requirements.

4
Q

What is identified when resource usage is tracked?

A

Potential bottlenecks or sources of performance problems.

5
Q

(True or False): Performance testing involves varying the load on the system and comparing the results against the performance requirements.

A

True

6
Q

True or False? Analysis of resource usage helps identify potential sources of performance issues.

A

True

7
Q

What type of testing? Verify the behavior of the system meets its requirements when its resources are saturated and pushed beyond their limits.

A

Stress Testing

8
Q

What does Stress Testing attempt to find?

A

It attempts to find the stress points and ensure the system performs as specified.

9
Q

What are the 4 Stress Testing Steps?

A
  1. Identify Stress Points.
  2. Develop a strategy to stress the system at points identified in step 1.
  3. Verify that intended stress is actually generated.
  4. Observe Behavior.
10
Q

True or False? Stress testing should be scheduled during the last couple of weeks of the project.

A

False. Stress Testing should begin earlier.

11
Q

What does Volume Testing achieve?

A

Verification that the behavior of the system meets its requirements when the system is subjected to a large volume of activity over an extended period of time.

12
Q

What type of testing would reveal these types of errors:

a. Memory Leaks
b. Counter Overflow
c. Resource Depletion.

A

Volume Testing will reveal these types of errors.

13
Q

True or False? Volume testing looks to verify that a system meets its requirements when it is subjected to a large volume of activity over a short amount of time.

A

False. Volume Testing is done over an extended period of time.

14
Q

True or False? Configuration Testing should be done for every potential configuration of a system.

A

False. Testing every potential configuration is usually impractical; representative combinations are selected instead.

15
Q

What are the steps needed for configuration Testing?

A
  1. Identify the parameters that define a configuration. (These parameters are ones that have an impact on the system’s ability to meet its functional and performance requirements)
  2. Partition. (Grouping of similar parameters, this helps in reducing number of configurations)
  3. Identify configuration combinations to test.
16
Q

What are some of the criteria used when creating configuration combinations?

A

a. Using boundaries (maximum and minimum)
b. Risk based
c. Design of experiments (pairwise combinations)
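The scale of the problem pairwise combination addresses can be sketched in Python (the parameter names and values below are hypothetical, purely for illustration):

```python
from itertools import combinations, product

# Hypothetical configuration parameters (illustrative only).
params = {
    "os": ["Windows", "Linux", "Mac"],
    "db": ["MySQL", "Postgres"],
    "browser": ["Chrome", "Firefox"],
}

# Exhaustive testing: one test per full combination.
full = list(product(*params.values()))
print(len(full))  # 12 configurations

# Pairwise testing only requires that every pair of values, across
# every pair of parameters, appear together in some configuration.
pairs = set()
for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
    for v1, v2 in product(v1s, v2s):
        pairs.add(((p1, v1), (p2, v2)))
print(len(pairs))  # 16 value pairs to cover
```

A small covering array can hit all 16 pairs in roughly half a dozen configurations; for realistic parameter counts the saving over the full product is far larger.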

17
Q

True or False? Configuration testing looks to verify that the functional and performance requirements of a system are met for different configurations of the system.

A

True.

18
Q

Define Configuration Testing?

A

Configuration testing looks to verify that the functional and performance requirements of a system are met for different configurations of the system.

19
Q

True or False? Modifying existing software is a low-risk activity.

A

False. The modification of software is a high-risk activity.

20
Q

Why are modifications needed?

A
  • Error Fixes.
  • The addition of new functions.

21
Q

What are some of the reasons Modifications introduce errors?

A

a. Code Ripple Effects
b. Unintended feature interactions.
c. Changes in performance, synchronization, resource sharing, etc.

22
Q

What is the purpose of Regression testing?

A

Regression testing is used to ensure that previously developed and tested functions continue to work as specified after software modifications have been made.

23
Q

When should regression testing be done?

A

At multiple levels:

a. Unit Level.
b. Integration Level.
c. System Level

24
Q

What is the overall process for incremental development and testing with regression Testing?

A

a. Implement first set of functions and release a build.
b. Test the build, create regression tests and report problems.
c. Incorporate the fixes into a new build. Implement next set of functions into a new build.
d. Test Fixes, Run regression tests for old functions, test new functions, and finally update regression test set.

25
Q

What is Full Regression Testing?

A

Rerun all existing tests in response to a code modification.
Note: This is normally impractical.

26
Q

What is Selective Regression Testing?

A

Rerun a selected subset of tests based on the modifications made.
Finally, execute a standard confidence test regardless of the modification.

27
Q

Selective Regression Testing Using Code Deltas: what does it require?

A

Requires a coverage tool to map test cases to code at the desired granularity level.

Requires a configuration management tool to identify code change deltas.

The strategy suggests rerunning tests that traversed changed or deleted code.

28
Q

What is Ripple Effect Analysis?

A

This requires developers to identify the impact of changes on other requirements or features.

29
Q

Selective Regression Testing Using a Confidence Test Suite. How is this accomplished?

A
  • Selects a subset of tests to execute to verify previous functionality.
  • Include Tests Addressing:
    a. High Frequency use cases.
    b. Critical Functionality.
    c. Functional breadth.
30
Q

What is the ‘Revalidation Issue’?

A

Regression tests must be revalidated to ensure they are consistent with the software modification.

This is accomplished by ensuring test inputs and expected outputs are re-examined for correctness.

31
Q

What is a system’s overall reliability and availability dependent upon?

A

It is dependent on the ability of the system to detect and recover from a variety of failures.

Examples:

  • User
  • Hardware
  • Software
  • Other Systems
32
Q

(True or False) It is essential to have a list of the errors to recover from specified in the requirements.

A

True.

33
Q

What is the usual approach for Error and Recovery Testing?

A

The usual approach is error injection.

34
Q

Why is Serviceability Testing Important?

A

It is important for system availability.

35
Q

What is the purpose of Serviceability testing?

A

The objective is to verify that serviceability requirements are being met.

36
Q

What does Serviceability Include?

A
  • Problem reporting.
  • Isolation
  • Correction
  • Verification
  • Fix Release
37
Q

What does Usability Testing provide?

A

It verifies that the usability requirements of the system are being met.

38
Q

What test will find stress points in a system and ensure the system performs as specified?

A

Stress Testing

39
Q

In what terms are the usability requirements of a system stated?

A
  • Learnability
  • Memorability
  • Errors
  • Efficiency
  • Subjective Satisfaction
40
Q

The type and amount of training required to bring users to a desired level of performance

A

Learnability

41
Q

The addressing of the ability to retain skills in using a product once it is learned

A

Memorability

42
Q

The measure of the number of incorrect actions a user makes in trying to accomplish a task

A

Errors

43
Q

The measure of the speed with which tasks can be performed

A

Efficiency

44
Q

The user’s overall feeling about the product

A

Subjective Satisfaction

45
Q

Explain the concern with reliability when it comes to usability testing.

A

Are we able to get the same results if the test is repeated?

46
Q

Explain the concern with validity when it comes to usability testing.

A

Is the test measuring something of relevance?

47
Q

Usability Testing: Formative Evaluation

A
  • Learning which aspects of the interface are good and bad.

Answers the question of what can be improved.

48
Q

Usability Testing: Summative Evaluation

A
  • Assessing the overall quality of the interface.

These are like measurement tests.

49
Q

What are the concerns of a Test Plan

A
  • Who are the Users?
  • What task will they perform?
  • What user aids will be available?
  • What data is to be collected?
  • What criteria will be used to determine success?
50
Q

When should test procedures be tried out? Explain what is accomplished.

A

Test procedures must be tried out in a pilot study.

The pilot test will evaluate instructions, success criteria, the time to perform tasks, and evaluation criteria.

51
Q

What kind of test users should be used?

A

Users must be representative. Evaluation should be done of both novice and expert users.

Note: We should be prepared to train users to achieve expert level.

52
Q

What type of Usability Comparison is preferred?

A

“Within Subject Testing” is preferable.

53
Q

What are the stages of a Test?

A
  • Preparation: Ensure Environment is set-up
  • Introduction: Welcome, Purpose and Overview.
  • Running the Test.
  • Debriefing
54
Q

What percentage of an application is composed of the user interface?

A

Around 50%

55
Q

Usability is the degree to which users are able to:

A

Perform the tasks the product is intended to support in the intended environment.

Be satisfied by the procedures they must follow and the resultant output.

Be protected from the consequences of their actions.

56
Q

Define Reliability:

A

The probability that a system or a capability functions without failure for a specified time or number of natural units in a specified environment

57
Q

Define Availability:

A

The Probability at any given time that a system or capability of a system functions satisfactorily in a specified environment.

58
Q

What is the Formula for Availability?

A

(MTTF / (MTTF + MTTR) ) x 100%

MTTF - Mean Time To Failure
MTTR - Mean Time To Repair
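As a worked example of the formula above (the MTTF and MTTR figures are made up for illustration):

```python
# Availability = (MTTF / (MTTF + MTTR)) x 100%
mttf = 950.0  # mean time to failure, hours (hypothetical)
mttr = 50.0   # mean time to repair, hours (hypothetical)

availability = mttf / (mttf + mttr) * 100
print(f"{availability:.1f}%")  # 95.0%
```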

59
Q

What does High Reliability and Availability depend on?

A

Fault Prevention and Fault Tolerance

60
Q

What is an Operational Profile?

A

An Operational Profile describes how users utilize a product.

61
Q

What does an operational profile consist of?

A

A set of major functions performed by the system and their occurrence probabilities.

62
Q

What is needed for Reliability Prediction?

A

An Operational Profile is needed for Reliability Prediction.

63
Q

Three Steps needed for Operational Profile Creation:

A
  1. Identify the major functions performed by system.
  2. Identify the occurrence rates.
  3. Calculate the occurrence probability.
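The three steps above can be sketched in Python (the functions and occurrence rates are hypothetical):

```python
# Step 1: major functions; Step 2: occurrence rates (uses per hour).
rates = {"search": 60, "browse": 30, "checkout": 10}  # hypothetical

# Step 3: occurrence probability = rate / total rate.
total = sum(rates.values())
profile = {fn: rate / total for fn, rate in rates.items()}
print(profile)  # search 0.6, browse 0.3, checkout 0.1
```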
64
Q

True or False: Tests are developed based on operational profiles.

A

True.

65
Q

What is the Goal of Development Testing?

A

The goal is to remove faults that have caused failures.

66
Q

What is the Goal of Certification Testing?

A

The goal is to determine whether a software component or system should be accepted or rejected.

67
Q

True or False. Operational Profiles assist in the development of priorities and assists in performance analysis

A

True.

68
Q

Which method is not needed for obtaining operational profile data?

a. Marketing
b. Competing Systems
c. Existing Systems
d. Tester Opinions

A

Tester Opinions.

Operational Profiles are created based upon user data, and tester opinions would not always be representative of users of the system.

69
Q

What does a Reliability Growth Model give insight into?

A

It lets users see how reliability changes over time.

70
Q

What question does Software Modeling help to answer?

A

When should we stop testing?

71
Q

Describe and explain Failure Intensity.

A

The number of failures per natural or time unit.

72
Q

What is needed for an effective model prediction?

A

To be effective, the chosen reliability model should be used in conjunction with an operational profile.

73
Q

What is statistical Testing?

A

It is testing software for reliability rather than fault detection.

This reliability should be specified and the software tested and amended until that specific level is reached.

74
Q

What are some of the Problems that arise with Reliability Testing?

A

Operational Profile Uncertainty: It is very difficult to test at times if the operational profile being used is truly representative of the real use of the system.

High Costs of Test Data Generation.

Statistical Uncertainty: It may be impossible to generate enough failures to draw statistically valid conclusions

75
Q

Growth Model Selection:

A
  • Many Different reliability growth models have been proposed.
  • No universally applicable growth model
  • Reliability should be measured and observed data should be fitted to several models.
  • Best-fit model should be used for reliability prediction.
76
Q

True or False: Reliability Growth models utilize historic data to predict system reliability.

A

False.

Data for system reliability doesn’t come from historic data, but from system testing.

77
Q

True or False: Software Correctness and Security are the same.

A

False.

78
Q

What is the ultimate goal of security testing?

A

The goal in security testing is to ensure private data is protected from unauthorized users.

79
Q

Security Fundamentals and Testing Areas.

A

Confidentiality:

  • Application
  • Data

Integrity

  • Data Modification.
  • Functions Performed

Availability
- Denial of Service

80
Q

Why is it important to do security testing?

A

Software may have unintended or unknown functionality that may produce side-effects contributing to security problems.

81
Q

GUI Security Risks

A

Verify Access Control

  • Entry to system
  • Access to functions and data

Look for all possible access methods to data

  • Cut and Paste
  • Screen capture

Evaluate malicious input
- Denial of Service

82
Q

File System Security Risks

A

Evaluate how data is stored and retrieved

Focus on encryption and data protection.

83
Q

OS Security Risks

A

Evaluate decrypted data storage in memory

Stress test with low memory
- System under memory stress may leave data unprotected.

84
Q

Security Testing Strategies

A
  • Deny Application access to libraries it needs.
  • Try to overflow input buffers by inputting long strings.
  • Try special characters as inputs
  • Try default or common user names and passwords
  • Attempt to fake the source of data.
  • Force system to use default values
  • Test all routes to perform a task
  • Produce each error message and ensure that it does not compromise security
85
Q

True or False? System availability is affected by the ability of the system to detect and recover from failures.

A

True

86
Q

True or False? Use cases can help with developing quantitative and measurable usability tests

A

True

87
Q

True or False? Reliability models require system testing to be performed with an operational profile.

A

True

88
Q

True or False? The goal of certification testing is to remove faults that have caused failures.

A

False

89
Q

True or False? Test plans should be written for all testing levels.

A

True.

Unit, Integration, System, Beta, and Acceptance.

90
Q

When should System Test Planning Begin?

A

System test planning must begin early and be used during all phases of software development.

91
Q

What should a test plan reflect?

A

It should reflect an in-depth understanding of the objective of the system test as well as project constraints.

92
Q

What does the System Test Plan Address:

A
  • System test objectives
  • Dependencies and assumptions
  • Adopted test strategy
  • Specification of the test environment
  • Specification of system test entry and exit criteria
  • Schedule
  • Risk Management
93
Q

True or False. The system test plan should give a general understanding of system test activity.

A

False.

The system test plan must clearly define the objectives.

94
Q

How should dependencies be accounted for?

A

When creating a system test plan all dependencies and assumptions need to be identified.

95
Q

What is Testing Strategy

A

Testing strategy defines how testing objectives will be met within project constraints.

96
Q

What is a major criteria that testing strategy is based on?

A

Risk.

97
Q

What does Testing Strategy determine?

A
  • Techniques to be used for test data generation.
  • Test Environment
  • Entry and Exit Criteria
  • Schedule
98
Q

What does Test Environment Include?

A
  • Platforms to test on. (Windows, Linux, Mac, etc.)
  • Simulators
  • Testing tools
99
Q

How is an environment selected?

A

It is selected based on the objective of testing and the testing strategy.

Example:

  • Performance testing objective may require load generation tools.
  • Configuration testing objective may require additional resources and simulation tools
100
Q

System Test Entry Criteria

A

It is established based on test strategy to maximize test effectiveness.

Possible Entry Criteria:

  • Code under configuration management
  • Completion of integration test
  • No outstanding high priority problems
  • Successful completion of system test readiness assessment
101
Q

What issues may arise if system testing starts too early?

A
  • Inability to run all tests.
  • Excessive communication with developers on problem fixes.
  • High degree of retest
102
Q

System Test Readiness Assessment: what does it provide?

A

It identifies the functions and code stability needed to effectively begin system test.

It provides concrete entry criteria for system test.

It provides a way for development to prioritize their activities as the start of system test grows near.

Note: This is developed early in the project in conjunction with development.

103
Q

What activities are needed for the creation of System Test Schedule?

A
  • Identify all of the testing tasks to be performed.
  • Identify dependencies among the testing tasks.
  • Estimate the effort and resources needed to perform each task
  • Assign tasks to individuals or groups.
  • Map testing tasks to a time line.
104
Q

Why is Test Plan Risk Management important?

A

It helps to identify the risks that correspond to scenarios that could impact testing schedule and effectiveness.

105
Q

True or False? Testing risks can be identified from previous projects

A

True.

Testing risks can be identified via checklists or previous project “lessons learned”

106
Q

True or False? Testing Risks do not have to be prioritized.

A

False.

Testing risks must be prioritized and mitigated.

Prioritization is based on likelihood of risk occurring and consequences.

107
Q

What does Risk Mitigation accomplish?

A

Risk mitigation involves reducing the likelihood of the risk occurring and/or developing contingency plans to minimize impact of the risk should it occur.

108
Q

True or False? Test Planning is only needed for system testing.

A

False.

109
Q

True or False? System test planning involves both an understanding of the test objectives and the constraints

A

True.

110
Q

What kind of chart documents dependencies in a schedule?

A

PERT Chart documents dependencies.

PERT - Program Evaluation and Review Technique

111
Q

What is the Critical Path in a PERT Chart?

A

It is the path with no slack time.

112
Q

What are some of the consequences of overestimating system testing time?

A

Inefficient Testing and delayed product release.

113
Q

What are some of the consequences of underestimating system testing time?

A

Lots of overtime, high stress, and probable ineffective testing.

Explanation: Because of the underestimate, much more work must be accomplished than planned, so stress increases, and completing the testing in a short period leads to overtime. Sometimes testing is pushed to the back burner, leading to ineffective testing.

114
Q

What must be identified in order to create an effective timeline?

A
  • Constraints
  • Task Dependencies
  • Availability of personnel
  • Risks
115
Q

What kind of chart can be used to document a schedule?

A

A Gantt chart can be used to document a schedule.

Gantt charts identify the duration of tasks along with their starting and ending dates.

Gantt charts also identify parallel tasks.

116
Q

What must be used in order to provide the proper scheduling buffer?

A

Risk management must be used to guide the team in the amount of contingency time which must be allocated to the schedule.

117
Q

True or False? A PERT chart helps document dependencies in a test schedule.

A

True

118
Q

What is needed in the development of an estimate?

A

The variables need to be identified. Then an estimate based upon them can be produced.

119
Q

Approaches for developing estimates based on variables include:

A
  • Using historical data.
  • Calculation via a cost estimation model
  • Generating a test estimate based on a percentage of development estimate
120
Q

Major Causes of Inaccurate Estimates

A
  • Misunderstanding of requirements
  • Overlooked tasks
  • Insufficient analysis when developing estimates due to time pressure.
  • Lack of guidelines for estimating
  • Lack of historical data.
  • Pressure to reduce estimates.
121
Q

Steps for Generic Estimation Process

A
  1. Determine Estimation Responsibilities
  2. Review and clarify testing objectives, deliverables, milestones, and constraints.
  3. Identify testing tasks.
  4. Select appropriate size measure for testing work.
  5. Select size estimation method.
  6. Estimate and document size.
  7. Estimate and document effort.
122
Q

What is Top-Down Estimation?

A

It is the development of an estimate based on past similar projects.

123
Q

True or False? Top-Down Estimation works well with new types of projects

A

False.

124
Q

What is Bottom-Up Estimation?

A

It is the breaking of testing effort into parts.

Each part is estimated separately and the parts are summed to create the estimate.

125
Q

What is the 80/20 Rule?

A

80% of the effects come from 20% of the causes.

126
Q

What is an example of the size unit for effort?

A
  • Staff hours
  • Staff months

127
Q

True or False? Effort estimation can take part before, concurrently, or after size estimation.

A

False.

Effort is based upon the size of the project, and can only be determined after the size of the project is estimated.

128
Q

True or False? A work breakdown structure (WBS) helps in bottom up estimation.

A

True.

129
Q

True or False? It is important to test high risk areas early.

A

True.

130
Q

True or False? High-risk areas only need to be tested partially.

A

False.

High risk areas need to be tested thoroughly.

131
Q

True or False? It is good to test high risk areas early, but they do not need to be tested thoroughly.

A

False.

Testing Areas that are high risk should be tested early and thoroughly.

132
Q

True or False? Testing should be prioritized based on the risk exposure.

A

True.

133
Q

What is the best criterion for stopping testing?

A

The best criterion for concluding testing is that the test objectives have been met.

134
Q

Define Defect Density

A

The number of defects per thousand lines of code.
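A quick illustration of the calculation (the defect count and code size are hypothetical):

```python
defects = 45            # hypothetical defects found
lines_of_code = 30_000  # hypothetical code size

density = defects / (lines_of_code / 1000)  # defects per KLOC
print(density)  # 1.5
```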

135
Q

How is defect density used as a Test Exit Criterion?

A

Historical defect density for similar projects can be used to determine if a project is ready for release.

136
Q

What is Defect Pooling?

A

It is the reporting of defects from two groups, which can be tracked separately.

This is normally used with operational profiles.

137
Q

What is the Formula to calculate Unique Defects?

A

(Defects(A) + Defects(B)) - Defects(A∩B)

Defects found in Group A, plus defects found in Group B, minus the common defects found by both groups.

138
Q

What is the Formula to calculate Estimated Total Defects?

A

(Defects(A) x Defects(B)) / Defects(A∩B)

Defects found in Group A, multiplied by defects found in Group B, divided by the common defects found by both groups.

139
Q

What is the Formula for Estimated Defects Remaining?

A

Estimated Total Defects - Unique Defects
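The three defect-pooling formulas above fit together as follows (all counts are hypothetical):

```python
defects_a = 40    # defects found by Group A (hypothetical)
defects_b = 35    # defects found by Group B (hypothetical)
defects_ab = 20   # defects found by both groups (hypothetical)

unique = defects_a + defects_b - defects_ab            # 55
estimated_total = defects_a * defects_b / defects_ab   # 70.0
remaining = estimated_total - unique                   # 15.0
print(unique, estimated_total, remaining)
```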

140
Q

What is Defect Seeding?

A

It is the approach of “seeding” known defects into the system. Estimated total defects are computed as the seeded defects planted divided by the seeded defects found, multiplied by the normal (non-seeded) defects found.

141
Q

What is the formula for Estimated Total Defects?

A

(Seeded Defects Planted / Seeded Defects Found) x (Normal Defects Found)
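A worked example of the seeding estimate (all counts are hypothetical):

```python
seeded_planted = 50   # seeded defects planted (hypothetical)
seeded_found = 40     # seeded defects found by testing
normal_found = 120    # normal (non-seeded) defects found

estimated_total = (seeded_planted / seeded_found) * normal_found
print(estimated_total)  # 150.0
```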

142
Q

What are some of the metrics used for Trend Analysis?

A
  • Time to Failure
  • Cumulative number of failures.
  • Number of failures per unit of time (Failure Intensity)
143
Q

What are the Three Types of Trends seen in Trend Analysis?

A
  • Increasing Reliability
  • Decreasing Reliability
  • Stable Reliability
144
Q

True or False? Increasing Reliability is normally good news

A

True.

145
Q

Give an example of what may cause a sudden increase in reliability.

A
  • Changing Test Effort.
  • Test Burnout.
  • Unrecorded Failures
146
Q

True or False? Using historical data to predict defect density may not always lead to accurate predictions.

A

True.

147
Q

True or False? Defect seeding is typically used with operational profiles.

A

False.

Defect Pooling, not Defect Seeding, is typically used with operational profiles.

148
Q

True or False? Good test cases ideally map back to requirements and have their own identifiers.

A

True

149
Q

True or False? Severity reflects the customer impact of the error.

A

True

150
Q

What does severity reflect?

A

Severity reflects the customer impact of the error

151
Q

Define Priority

A

Priority reflects project considerations.

152
Q

What is assessed during a system test?

A
  • Product Quality
  • Testing Progress

153
Q

Metrics used to assess test progress

A
  • Percentage of tests developed
  • Percentage of tests executed
  • Percentage of requirements tested
154
Q

What does BCWS stand for?

A

Budgeted Cost of Work Scheduled

155
Q

What does BCWP stand for?

A

Budgeted Cost of Work Performed

156
Q

What does ACWP stand for?

A

Actual Cost of Work Performed.

157
Q

Describe the state of this project:
BCWS = 210
BCWP = 220

A

This project is ahead of schedule.

158
Q

Describe the state of this project:
BCWS = 220
BCWP = 220

A

This project is on schedule.

159
Q

Describe the state of this project:
BCWS = 190
BCWP = 180

A

This project is behind schedule.

160
Q

Describe the state of this project:
BCWS = 180
BCWP = 180
ACWP = 190

A

This project is on schedule and over budget.

161
Q

Describe the state of this project:
BCWS = 170
BCWP = 180
ACWP = 150

A

This project is ahead of schedule and under budget.
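The schedule and budget judgments in the cards above can be sketched as a small helper (a hedged illustration, not a standard API):

```python
def project_status(bcws, bcwp, acwp=None):
    """Classify schedule (BCWP vs BCWS) and budget (ACWP vs BCWP)."""
    if bcwp > bcws:
        schedule = "ahead of schedule"
    elif bcwp < bcws:
        schedule = "behind schedule"
    else:
        schedule = "on schedule"
    if acwp is None:
        return schedule
    if acwp > bcwp:
        budget = "over budget"
    elif acwp < bcwp:
        budget = "under budget"
    else:
        budget = "on budget"
    return f"{schedule} and {budget}"

print(project_status(210, 220))       # ahead of schedule
print(project_status(190, 180))       # behind schedule
print(project_status(170, 180, 150))  # ahead of schedule and under budget
```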

162
Q

True or False? Earned values are used primarily to identify schedule variance.

A

False.

Earned values are used not only for scheduling but also for budgeting.

163
Q

Define Perceived Process.

A

What you think you do.

164
Q

Define Official Process

A

What you are supposed to do.

165
Q

Define Actual Process

A

What you do.

166
Q

What is the GQM Paradigm

A

GQM stands for Goal-Question-Metric

  • Define the goals of the measurement process.
  • Derive the questions that must be answered to meet the goals
  • Develop metrics to answer the questions.
167
Q

What are the Process Improvement phases

A
  • Characterize the current process.
  • Analyze current Process
  • Characterize target process
  • Process redesign
  • Implement
168
Q

True or False? When doing process improvement in an organization, the focus should be on examining the “official process”.

A

False.

The official process is what you are supposed to do.
Process improvement begins with examining the “actual process.”