Glossary Flashcards

(165 cards)

1
defect, bug, fault
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
2
error, mistake
A human action that produces an incorrect result.
3
failure
An event in which a component or system does not perform a required function within specified limits.
4
quality
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
5
risk
A factor that could result in future negative consequences.
6
debugging
The process of finding, analyzing and removing the causes of failures in software.
7
requirement
A provision that contains criteria to be fulfilled.
8
review
A type of static testing during which a work product or process is evaluated by one or more individuals to detect issues and to provide improvements.
9
test case
A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
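A minimal sketch of how those parts map onto code, using pytest; the BankAccount class is a hypothetical test object invented for illustration:

```python
import pytest

class BankAccount:
    """Hypothetical test object."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

@pytest.fixture
def account():
    # Precondition: an account with a known starting balance exists.
    return BankAccount(balance=100)

def test_deposit_increases_balance(account):
    # Input/action: deposit a valid amount.
    account.deposit(50)
    # Expected result: the balance reflects the deposit.
    assert account.balance == 150
    # Postcondition: the account is still in a usable state.
    account.deposit(1)
```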
10
testing
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
11
test objective
A reason or purpose for designing and executing a test.
12
exhaustive testing, complete testing
A test approach in which the test suite comprises all combinations of input values and preconditions.
13
confirmation testing, re-testing
Dynamic testing conducted after fixing defects with the objective to confirm that failures caused by those defects do not occur anymore.
14
exit criteria, completion criteria, test completion criteria, definition of done
The set of conditions for officially completing a defined task.
15
incident, deviation, software test incident, test incident
An event occurring that requires investigation.
16
regression
A degradation in the quality of a component or system due to a change.
17
regression testing
Testing of a previously tested component or system following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made.
18
test basis
The body of knowledge used as the basis for test analysis and design.
19
test condition, test requirement, test situation
An aspect of the test basis that is relevant in order to achieve specific test objectives.
20
coverage, test coverage
The degree to which specified coverage items have been determined or have been exercised by a test suite, expressed as a percentage.
21
test data
Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases.
22
test execution
The process of running a test on the component or system under test, producing actual result(s).
23
test log, test record, test run log
A chronological record of relevant details about the execution of tests.
24
test plan
Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.
25
test procedure
A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.
26
test policy, organizational test policy
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
27
test suite, test case suite, test set
A set of test cases or test procedures to be executed in a specific test cycle.
28
test summary report, test report
A test report that provides an evaluation of the corresponding test items against exit criteria.
29
testware
Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.
30
error guessing
A test technique in which tests are derived on the basis of the tester's knowledge of past failures, or general knowledge of failure modes.
31
independence of testing
Separation of responsibilities, which encourages the accomplishment of objective testing.
32
commercial off-the-shelf (COTS), off-the-shelf software
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
33
iterative development model
A development lifecycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
34
incremental development model
A development lifecycle model in which the project scope is generally determined early in the project lifecycle, but time and cost estimates are routinely modified as the project team's understanding of the product increases. The product is developed through a series of repeated cycles, each delivering an increment which successively adds to the functionality of the product.
35
validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
36
verification
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
37
V-model
A sequential development lifecycle model describing a one-for-one relationship between major phases of software development from business requirements specification to delivery, and corresponding test levels from acceptance testing to component testing.
38
alpha testing
Simulated or actual operational testing conducted in the developer's test environment, by roles outside the development organization.
39
beta testing, field testing
Simulated or actual operational testing conducted at an external site, by roles outside the development organization.
40
component
A minimal part of a system that can be tested in isolation.
41
component testing, module testing, unit testing
The testing of individual hardware or software components.
42
driver, test driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
43
functional requirement
A requirement that specifies a function that a component or system must be able to perform.
44
non-functional requirement
A requirement that describes how the component or system will do what it is intended to do.
45
robustness, error-tolerance, fault-tolerance
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
46
robustness testing
Testing to determine the robustness of the software product.
47
stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
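As a sketch, a hand-rolled stub might look like this; PaymentGatewayStub and OrderService are hypothetical names invented for illustration:

```python
class PaymentGatewayStub:
    """Skeletal stand-in for a real, external payment gateway."""
    def charge(self, amount):
        # Return a canned response instead of making a network call.
        return {"status": "approved", "amount": amount}

class OrderService:
    """Component under test; it calls (depends on) the gateway."""
    def __init__(self, gateway):
        self.gateway = gateway  # injected, so a test can pass the stub

    def place_order(self, amount):
        return self.gateway.charge(amount)["status"] == "approved"

def test_place_order_with_stubbed_gateway():
    service = OrderService(gateway=PaymentGatewayStub())
    assert service.place_order(25) is True
```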
48
system testing
Testing an integrated system to verify that it meets specified requirements.
49
test environment, test bed, test rig
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
50
test level, test stage
A specific instantiation of a test process.
51
test-driven development (TDD)
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
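A minimal sketch of the test-first step, assuming a hypothetical slugify function as the unit being developed:

```python
# Red: this test is written (and fails) before slugify exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Green: the simplest implementation that makes the test pass;
# refactoring then proceeds with the test as a safety net.
def slugify(text):
    return "-".join(text.lower().split())
```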
52
user acceptance testing, acceptance testing
Acceptance testing conducted in a real or simulated operational environment by intended users focusing on their needs, requirements and business processes.
53
acceptance criteria
The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
54
acceptance testing
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
55
black-box testing, specification-based testing
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
56
black-box test technique, black-box technique, specification-based technique, specification-based test technique
A procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
57
code coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g., statement coverage, decision coverage or condition coverage.
58
functional testing
Testing conducted to evaluate the compliance of a component or system with functional requirements.
59
non-functional testing
Testing conducted to evaluate the compliance of a component or system with non-functional requirements.
60
interoperability
The degree to which two or more components or systems can exchange information and use the information that has been exchanged.
61
interoperability testing, compatibility testing
Testing to determine the interoperability of a software product.
62
load testing
A type of performance testing conducted to evaluate the behavior of a component or system under varying loads, usually between anticipated conditions of low, typical, and peak usage.
63
maintainability
The degree to which a component or system can be modified by the intended maintainers.
64
maintainability testing
Testing to determine the maintainability of a software product.
65
performance efficiency, time behavior, performance
The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.
66
performance testing
Testing to determine the performance of a software product.
67
portability
The ease with which the software product can be transferred from one hardware or software environment to another.
68
portability testing, configuration testing
Testing to determine the portability of a software product.
69
reliability
The degree to which a component or system performs specified functions under specified conditions for a specified period of time.
70
reliability testing
Testing to determine the reliability of a software product.
71
security
The degree to which a component or system protects information and data so that persons or other components or systems have the degree of access appropriate to their types and levels of authorization.
72
security testing
Testing to determine the security of the software product.
73
stress testing
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
74
white-box testing, clear-box testing, code-based testing, glass-box testing, logic-coverage testing, logic-driven testing, structural testing, structure-based testing
Testing based on an analysis of the internal structure of the component or system.
75
usability
The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.
76
usability testing
Testing to evaluate the degree to which the system can be used by specified users with effectiveness, efficiency and satisfaction in a specified context of use.
77
impact analysis
The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.
78
maintenance
The process of modifying a component or system after delivery to correct defects, improve quality attributes, or adapt to a changed environment.
79
maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
80
dynamic testing
Testing that involves the execution of the software of a component or system.
81
static testing
Testing a work product without code being executed.
82
entry criteria
The set of conditions for officially starting a defined task.
83
formal review
A form of review that follows a defined process with a formally documented output.
84
informal review
A type of review without a formal (documented) procedure.
85
inspection
A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process.
86
metric
A measurement scale and the method used for measurement.
87
moderator, inspection leader
The leader and main person responsible for an inspection or other review process.
88
peer review
A form of review of work products performed by others qualified to do the same work.
89
reviewer, checker, inspector
A participant in a review, who identifies issues in the work product.
90
scribe, recorder
A person who records information during the review meetings.
91
technical review
A formal review type by a team of technically-qualified personnel that examines the suitability of a work product for its intended use and identifies discrepancies from specifications and standards.
92
walkthrough, structured walkthrough
A type of review in which an author leads members of the review through a work product and the members ask questions and make comments about possible issues.
93
compiler
A computer program that translates programs expressed in a high-order language into their machine language equivalents.
94
complexity
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
95
control flow
The sequence in which operations are performed during the execution of a test item.
96
data flow
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of creation, usage, or destruction.
97
static analysis
The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.
98
test case specification
Documentation of a set of one or more test cases.
99
test design
The activity of deriving and specifying test cases from test conditions.
100
test execution schedule
A schedule for the execution of test suites within a test cycle.
101
test procedure specification, test procedure, test scenario
Documentation specifying one or more test procedures.
102
test script
A sequence of instructions for the execution of a test.
103
traceability
The degree to which a relationship can be established between two or more work products.
104
experience-based testing
Testing based on the tester's experience, knowledge and intuition.
105
experience-based test technique, experience-based technique
A procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.
106
test design
The activity of deriving and specifying test cases from test conditions.
107
test design specification
Documentation specifying the features to be tested and their corresponding test conditions.
108
test technique, test case design technique, test specification technique, test design technique
A procedure used to derive and/or select test cases.
109
boundary value
A minimum or maximum value of an ordered equivalence partition.
110
boundary value analysis
A black-box test technique in which test cases are designed based on boundary values.
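For example, if a hypothetical rule accepts ages 18 to 65 inclusive, boundary value analysis targets the values at and just outside each edge of that partition; a sketch in pytest:

```python
import pytest

def is_eligible(age):
    # Invented rule: ages 18-65 inclusive are accepted.
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_eligibility_boundaries(age, expected):
    assert is_eligible(age) == expected
```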
111
decision table, cause-effect decision table
A table used to show sets of conditions and the actions resulting from them.
112
decision table testing
A black-box test technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
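A sketch of the idea, using an invented discount rule with two conditions; each parametrize row corresponds to one column of the decision table:

```python
import pytest

def discount(is_member, total):
    # Invented rule combining two conditions into one action.
    if is_member and total > 100:
        return 0.15
    if is_member:
        return 0.05
    if total > 100:
        return 0.10
    return 0.0

# All four condition combinations, one test case per table column.
@pytest.mark.parametrize("is_member,total,expected", [
    (True,  150, 0.15),
    (True,   50, 0.05),
    (False, 150, 0.10),
    (False,  50, 0.0),
])
def test_discount_decision_table(is_member, total, expected):
    assert discount(is_member, total) == expected
```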
113
equivalence partitioning, partition testing
A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.
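Continuing the invented age rule from the boundary value sketch above, equivalence partitioning tests one representative value per partition instead of probing the edges:

```python
import pytest

def is_eligible(age):
    # Same invented rule: the age domain splits into three partitions,
    # below 18, 18-65, and above 65.
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (10, False),  # representative of the "too young" partition
    (40, True),   # representative of the valid partition
    (80, False),  # representative of the "too old" partition
])
def test_eligibility_partitions(age, expected):
    assert is_eligible(age) == expected
```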
114
equivalence partition, equivalence class
A portion of the value domain of a data element related to the test object for which all values are expected to be treated the same based on the specification.
115
state transition
A transition between two states of a component or system.
116
state transition testing, finite state testing
A black-box test technique using a state transition diagram or state table to derive test cases to evaluate whether the test item successfully executes valid transitions and blocks invalid transitions.
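A sketch using an invented door model: a transition table drives the tests, which check that a valid transition is executed and an invalid one is blocked:

```python
import pytest

# Invented state model: (current state, event) -> next state.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def next_state(state, event):
    # Anything not in the table is an invalid transition.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in {state!r}")

def test_valid_transition_is_executed():
    assert next_state("closed", "lock") == "locked"

def test_invalid_transition_is_blocked():
    with pytest.raises(ValueError):
        next_state("locked", "open")  # a locked door cannot be opened
```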
117
use case
A sequence of transactions in a dialogue between an actor and a component or system with a tangible result, where an actor can be a user or anything that can exchange information with the system.
118
use case testing, scenario testing, user scenario testing
A black-box test technique in which test cases are designed to execute scenarios of use cases.
119
decision
A type of statement in which a choice between two or more possible outcomes controls which set of actions will result.
120
decision coverage
The coverage of decision outcomes.
121
decision outcome
The result of a decision that determines the next statement to be executed.
122
statement coverage
The percentage of executable statements that have been exercised by a test suite.
123
statement, source statement
An entity in a programming language, which is typically the smallest indivisible unit of execution.
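To see how statement coverage and decision coverage differ, consider this invented example: a single test executes every statement, yet covers only one of the two decision outcomes:

```python
def grant_bonus(score):
    bonus = 0
    if score > 90:    # a decision with two outcomes
        bonus = 100
    return bonus

def test_high_score():
    # Executes all four statements: 100% statement coverage,
    # but only the True outcome: 50% decision coverage.
    assert grant_bonus(95) == 100

def test_low_score():
    # Adding this test covers the False outcome as well,
    # bringing decision coverage to 100%.
    assert grant_bonus(50) == 0
```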
124
exploratory testing
An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.
125
tester
A skilled professional who is involved in the testing of a component or system.
126
test leader, lead tester
On large projects, the person who reports to the test manager and is responsible for project management of a particular test level or a particular set of testing activities.
127
test manager
The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
128
test approach
The implementation of the test strategy for a specific project.
129
test strategy, organizational test strategy
Documentation that expresses the generic requirements for testing one or more projects run within an organization, providing detail on how testing is to be performed, and aligned with the test policy.
130
defect density, fault density
The number of defects per unit size of a work product.
131
failure rate
The ratio of the number of failures of a given category to a given unit of measure.
132
test control
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
133
test monitoring
A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status, and reporting status to stakeholders.
134
configuration
The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
135
configuration management
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
136
product risk
A risk impacting the quality of a product.
137
project risk
A risk that impacts project success.
138
risk-based testing
Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.
139
incident management
The process of recognizing and recording incidents, classifying them, investigating them, taking action to resolve them, and disposing of them when resolved.
140
incident report, deviation report, software test incident report, test incident report
Documentation of the occurrence, nature, and status of an incident.
141
configuration management tool
A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.
142
coverage tool, coverage measurement tool
A tool that provides objective measures of what structural elements (e.g., statements, branches) have been exercised by a test suite.
143
debugging tool, debugger
A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
144
dynamic analysis tool
A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, to check pointer arithmetic, to monitor the allocation, use and de-allocation of memory, and to flag memory leaks.
145
incident management tool
A tool that facilitates the recording and status tracking of incidents.
146
modeling tool
A tool that supports the creation, amendment and verification of models of the software or system.
147
monitoring tool
A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyzes the behavior of the component or system.
148
performance testing tool
A test tool that generates load for a designated test item and that measures and records its performance during test execution.
149
requirements management tool
A tool that supports the recording of requirements, requirements attributes (e.g., priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.
150
review tool
A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.
151
security tool
A tool that supports operational security.
152
static analyzer, analyzer, static analysis tool
A tool that carries out static analysis.
153
stress testing tool
A tool that supports stress testing.
154
probe effect
The effect on the component or system by the measurement instrument when the component or system is being measured, e.g., by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.
155
test comparison
The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
156
test comparator, comparator
A test tool to perform automated test comparison of actual results with expected results.
157
test data preparation tool, test generator
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
158
test design tool
A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g., requirements management tool, from specified test conditions held in the tool itself, or from code.
159
test harness
A test environment comprised of stubs and drivers needed to execute a test.
160
test execution tool
A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.
161
test management
The planning, scheduling, estimating, monitoring, reporting, control and completion of test activities.
162
test management tool
A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.
163
unit test framework
A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.
164
data-driven testing
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
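A minimal sketch, with an inline CSV string standing in for the spreadsheet of inputs and expected results:

```python
import csv
import io

# The data table; in practice this would live in a separate file.
TABLE = """input,expected
hello,HELLO
world,WORLD
"""

def test_upper_from_table():
    # One control script executes every row of the table.
    for row in csv.DictReader(io.StringIO(TABLE)):
        assert row["input"].upper() == row["expected"]
```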
165
keyword-driven testing, action word-driven testing
A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
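A toy sketch of the technique: the steps are pure data, and a small interpreter maps each keyword to supporting code; the calculator keywords are invented for illustration:

```python
# Test steps as data: (keyword, argument) pairs.
STEPS = [
    ("enter", 2),
    ("add", 3),
    ("check_result", 5),
]

def run(steps):
    # The control script: interprets each keyword in turn.
    total = 0
    for keyword, arg in steps:
        if keyword == "enter":
            total = arg
        elif keyword == "add":
            total += arg
        elif keyword == "check_result":
            assert total == arg
        else:
            raise ValueError(f"unknown keyword: {keyword}")

def test_keyword_script():
    run(STEPS)
```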