Glossary Flashcards

(249 cards)

1
acceptance criteria
The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
2
acceptance test-driven development (ATDD)
A collaborative approach to development in which the team and customers use the customer's own domain language to understand their requirements, which forms the basis for testing a component or system.
3
acceptance testing
A test level that focuses on determining whether to accept the system.
4
accessibility
The degree to which a component or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
5
accessibility testing
Testing to determine the ease by which users with disabilities can use a component or system.
6
accuracy
The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.
7
actual result
The behavior produced/observed when a component or system is tested.
8
ad hoc review
A review technique performed informally without a structured process.
9
Agile Manifesto
A statement on the values that underpin Agile software development. The values are: individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, responding to change over following a plan.
10
Agile software development
A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
11
Agile testing
Testing practice for a project using Agile software development methodologies, incorporating techniques and methods, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.
12
agile testing quadrants
A classification model of test types/levels in four quadrants, relating them to two dimensions of test goals: supporting the team vs. critiquing the product, and technology-facing vs. business-facing.
13
alpha testing
A type of acceptance testing performed in the developer's test environment by roles outside the development organization.
14
anomaly
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
15
audit
An independent examination of a work product or process performed by a third party to assess whether it complies with specifications, standards, contractual agreements, or other criteria.
16
availability
The degree to which a component or system is operational and accessible when required for use.
17
behavior-driven development (BDD)
A collaborative approach to development in which the team focuses on delivering the expected behavior of a component or system for the customer, which forms the basis for testing.
18
beta testing
A type of acceptance testing performed at a site external to the developer's test environment by roles outside the development organization.
19
black-box test technique
A test technique based on an analysis of the specification of a component or system.
20
boundary value
A minimum or maximum value of an ordered equivalence partition.
21
boundary value analysis
A black-box test technique in which test cases are designed based on boundary values.
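A minimal sketch of two-point boundary value analysis: for an ordered partition, the test values are the partition's edges plus their nearest invalid neighbors. The "valid age 18–65" rule and both function names are illustrative assumptions, not part of the glossary.

```python
# Two-point boundary value analysis for a hypothetical "valid age"
# partition (18..65 inclusive). Range and names are illustrative.

def bva_two_point(low, high):
    """Return the partition edges and their nearest invalid neighbors."""
    return sorted({low - 1, low, high, high + 1})

def is_valid_age(age):
    # Example system under test: accepts ages 18..65 inclusive.
    return 18 <= age <= 65

test_inputs = bva_two_point(18, 65)   # [17, 18, 65, 66]
results = {age: is_valid_age(age) for age in test_inputs}
# 18 and 65 sit on the partition edges; 17 and 66 fall just outside.
```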
22
build verification test (BVT)
An automated test that validates the integrity of each new build and verifies its key/core functionality, stability, and testability.
23
change-related testing
A type of testing initiated by modification to a component or system.
24
checklist-based review
A review technique guided by a list of questions or required attributes.
25
checklist-based testing
An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.
26
coding standard
A standard that describes the characteristics of a design or a design description of data or program components.
27
commercial off-the-shelf (COTS)
A type of product developed in an identical format for a large number of customers in the general market.
28
compatibility
The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.
29
complexity
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
30
compliance
Adherence of a work product to standards, conventions or regulations in laws and similar prescriptions.
31
component
A part of a system that can be tested in isolation.
32
component integration testing
Testing in which the test items are interfaces and interactions between integrated components.
33
component testing
A test level that focuses on individual hardware or software components.
34
concurrency
The simultaneous execution of multiple independent threads by a component or system.
35
configuration item
An aggregation of work products that is designated for configuration management and treated as a single entity in the configuration management process.
36
configuration management
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify that it complies with specified requirements.
37
confirmation testing
A type of change-related testing performed after fixing a defect to confirm that a failure caused by that defect does not reoccur.
38
continuous integration
An automated software development procedure that merges, integrates and tests all changes as soon as they are committed.
39
contractual acceptance testing
A type of acceptance testing performed to verify whether a system satisfies its contractual requirements.
40
control flow
The sequence in which operations are performed by a business process, component or system.
41
cost of quality
The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs and external failure costs.
42
coverage
The degree to which specified coverage items have been determined or have been exercised by a test suite expressed as a percentage.
43
coverage criteria
The criteria to define the coverage items required to reach a test objective.
44
coverage item
An attribute or combination of attributes derived from one or more test conditions by using a test technique.
45
dashboard
A representation of dynamic measurements of operational performance for some organization or activity, using metrics represented via metaphors such as visual dials, counters, and other devices resembling those on the dashboard of an automobile, so that the effects of events or activities can be easily understood and related to operational goals.
46
data-driven testing
A scripting technique that uses data files to contain the test data and expected results needed to execute the test scripts.
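A minimal sketch of the data-driven scripting technique: one test script driven by rows of test data and expected results. The discount function, codes, and rates are illustrative assumptions; in practice the rows would live in an external data file.

```python
# Data-driven testing sketch: one test script, many data rows.
# The discount function and its rules are illustrative assumptions.

def apply_discount(total, code):
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# These rows would normally be read from a CSV or spreadsheet file.
test_data = [
    # (total, code,    expected)
    (100.0, "SAVE10",  90.0),
    (100.0, "SAVE25",  75.0),
    (100.0, "BOGUS",  100.0),   # unknown code: no discount applied
]

for total, code, expected in test_data:
    actual = apply_discount(total, code)
    assert actual == expected, f"{code}: expected {expected}, got {actual}"
```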
47
debugging
The process of finding, analyzing and removing the causes of failures in a component or system.
48
decision
A type of statement in which a choice between two or more possible outcomes controls which set of actions will result.
49
decision coverage
The coverage of decision outcomes.
50
decision table testing
A black-box test technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table.
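A minimal decision table sketch: every combination of conditions appears as one rule (column), and each rule becomes one test case. The loan pre-check rules and names are illustrative assumptions.

```python
# Decision table testing sketch for a hypothetical loan pre-check.

def loan_decision(employed, credit_ok):
    if employed and credit_ok:
        return "approve"
    if employed and not credit_ok:
        return "refer"
    return "reject"

# Full decision table: every condition combination appears once, with
# the resulting action; each row is derived into one test case.
decision_table = [
    # (employed, credit_ok, expected_action)
    (True,  True,  "approve"),
    (True,  False, "refer"),
    (False, True,  "reject"),
    (False, False, "reject"),
]

for employed, credit_ok, expected in decision_table:
    assert loan_decision(employed, credit_ok) == expected
```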
51
decision testing
A white-box test technique in which test cases are designed to execute decision outcomes.
52
defect
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
53
defect density
The number of defects per unit size of a work product.
54
defect management
The process of recognizing, recording, classifying, investigating, resolving and disposing of defects.
55
defect report
Documentation of the occurrence, nature, and status of a defect.
56
defect taxonomy
A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
57
defect-based test design technique
A procedure to derive and/or select test cases targeted at one or more defect types, with tests being developed from what is known about the specific defect type.
58
driver
A temporary component or tool that replaces another component and controls or calls a test item in isolation.
59
dynamic analysis
The process of evaluating a component or system based on its behavior during execution.
60
dynamic testing
Testing that involves the execution of the test item.
61
effectiveness
The extent to which correct and complete goals are achieved.
62
efficiency
The degree to which resources are expended in relation to results achieved.
63
entry criteria
The set of conditions for officially starting a defined task.
64
epic
A large user story that cannot be delivered as defined within a single iteration or is large enough that it can be split into smaller user stories.
65
equivalence partition
A subset of the value domain of a variable within a component or system in which all values are expected to be treated the same based on the specification.
66
equivalence partitioning
A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.
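A minimal sketch of equivalence partitioning: divide the input domain into partitions whose values the specification treats the same, then test one representative of each. The shipping-fee rules, thresholds, and names are illustrative assumptions.

```python
# Equivalence partitioning sketch; all rules and values are assumed.

def shipping_fee(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 4.99
    return 9.99

# Three partitions of the weight domain, one representative value each:
partitions = {
    "invalid (w <= 0)":   -1,
    "light (0 < w <= 5)":  3,
    "heavy (w > 5)":      12,
}

assert shipping_fee(partitions["light (0 < w <= 5)"]) == 4.99
assert shipping_fee(partitions["heavy (w > 5)"]) == 9.99
try:
    shipping_fee(partitions["invalid (w <= 0)"])
    assert False, "expected ValueError for the invalid partition"
except ValueError:
    pass
```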
67
error
A human action that produces an incorrect result.
68
error guessing
A test technique in which tests are derived on the basis of the tester's knowledge of past failures, or general knowledge of failure modes.
69
exhaustive testing
A test approach in which the test suite comprises all combinations of input values and preconditions.
70
exit criteria
The set of conditions for officially completing a defined task.
71
expected result
The observable predicted behavior of a test item under specified conditions based on its test basis.
72
experience-based test technique
A test technique based only on the tester's experience, knowledge and intuition.
73
experience-based testing
Testing based on the tester's experience, knowledge and intuition.
74
exploratory testing
An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.
75
Extreme Programming (XP)
A software engineering methodology used within Agile software development whereby core practices are programming in pairs, doing extensive code review, unit testing of all code, and simplicity and clarity in code.
76
failed
The status of a test result in which the actual result does not match the expected result.
77
failure
An event in which a component or system does not perform a required function within specified limits.
78
failure rate
The ratio of the number of failures of a given category to a given unit of measure.
79
finding
A result of an evaluation that identifies some important issue, problem, or opportunity.
80
formal review
A type of review that follows a defined process with a formally documented output.
81
functional testing
Testing performed to evaluate if a component or system satisfies functional requirements.
82
functional suitability
The degree to which a component or system provides functions that meet stated and implied needs when used under specified conditions.
83
heuristic
A generally recognized rule of thumb that helps to achieve a goal.
84
high-level test case
A test case with abstract preconditions, input data, expected results, postconditions, and actions (where applicable).
85
impact analysis
The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.
86
incremental development model
A type of software development lifecycle model in which the component or system is developed through a series of increments.
87
independence of testing
Separation of responsibilities, which encourages the accomplishment of objective testing.
88
informal review
A type of review that does not follow a defined process and has no formally documented output.
89
inspection
A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process.
90
integration testing
A test level that focuses on interactions between components or systems.
91
integrity
The degree to which a component or system allows only authorized access and modification to a component, a system or data.
92
interoperability
The degree to which two or more components or systems can exchange information and use the information that has been exchanged.
93
interoperability testing
Testing to determine the interoperability of a software product.
94
iterative development model
A type of software development lifecycle model in which the component or system is developed through a series of repeated cycles.
95
keyword-driven testing
A scripting technique in which test scripts contain high-level keywords and supporting files that contain low-level scripts that implement those keywords.
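A minimal sketch of the keyword-driven scripting technique: a high-level script of keywords plus arguments, dispatched to low-level implementation functions. The keywords, functions, and shopping-cart domain are illustrative assumptions.

```python
# Keyword-driven testing sketch: high-level keywords in the script map
# to low-level implementation functions. All names are illustrative.

state = {"logged_in": False, "cart": []}

def open_session(user):
    state["logged_in"] = True

def add_to_cart(item):
    state["cart"].append(item)

keywords = {"OpenSession": open_session, "AddToCart": add_to_cart}

# The high-level test script: keywords and arguments only, readable by
# non-programmers and typically stored in a table or spreadsheet.
script = [
    ("OpenSession", "alice"),
    ("AddToCart", "book"),
    ("AddToCart", "pen"),
]

for keyword, arg in script:
    keywords[keyword](arg)

assert state["logged_in"] and state["cart"] == ["book", "pen"]
```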
96
load testing
A type of performance testing conducted to evaluate the behavior of a component or system under varying loads, usually between anticipated conditions of low, typical, and peak usage.
97
low-level test case
A test case with concrete values for preconditions, input data, expected results, postconditions, and a detailed description of actions (where applicable).
98
maintainability
The degree to which a component or system can be modified by the intended maintainers.
99
maintenance
The process of modifying a component or system after delivery to correct defects, improve quality characteristics, or adapt to a changed environment.
100
maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
101
master test plan
A test plan that is used to coordinate multiple test levels or test types.
102
maturity
(1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The degree to which a component or system meets needs for reliability under normal operation.
103
measure
The number or category assigned to an attribute of an entity by making a measurement.
104
measurement
The process of assigning a number or category to an entity to describe an attribute of that entity.
105
memory leak
A memory access failure due to a defect in a program's dynamic store allocation logic that causes it to fail to release memory after it has finished using it.
106
metric
A measurement scale and the method used for measurement.
107
model-based testing (MBT)
Testing based on or involving models.
108
moderator
(1) The person responsible for running review meetings. (2) The person who conducts a usability test session.
109
modularity
The degree to which a system is composed of discrete components such that a change to one component has minimal impact on other components.
110
non-functional testing
Testing performed to evaluate that a component or system complies with non-functional requirements.
111
operational acceptance testing
A type of acceptance testing performed to determine if operations and/or systems administration staff can accept a system.
112
passed
The status of a test result in which the actual result matches the expected result.
113
path
A sequence of consecutive edges in a directed graph.
114
peer review
A review performed by others with the same abilities to create the work product.
115
performance efficiency
The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.
116
performance indicator
A metric that supports the judgment of process performance.
117
performance testing
Testing to determine the performance efficiency of a component or system.
118
performance testing tool
A test tool that generates load for a designated test item and that measures and records its performance during test execution.
119
perspective-based reading
A review technique in which a work product is evaluated from the perspective of different stakeholders with the purpose to derive other work products.
120
planning poker
A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.
121
portability
The degree to which a component or system can be transferred from one hardware, software or other operational or usage environment to another.
122
postcondition
The expected state of a test item and its environment at the end of test case execution.
123
precondition
The required state of a test item and its environment prior to test case execution.
124
priority
The level of (business) importance assigned to an item, e.g., defect.
125
probe effect
An unintended change in behavior of a component or system caused by measuring it.
126
process model
A framework in which processes of the same nature are classified into an overall model.
127
product risk
A risk impacting the quality of a product.
128
project risk
A risk that impacts project success.
129
quality
The degree to which a component or system satisfies the stated and implied needs of its various stakeholders.
130
quality assurance (QA)
Activities focused on providing confidence that quality requirements will be fulfilled.
131
quality characteristic
A category of quality attributes that bears on work product quality.
132
quality control (QC)
A set of activities designed to evaluate the quality of a component or system.
133
quality management
The process of establishing and directing a quality policy, quality objectives, quality planning, quality control, quality assurance, and quality improvement for an organization.
134
quality risk
A product risk related to a quality characteristic.
135
Rational Unified Process (RUP)
A proprietary adaptable iterative software development process framework consisting of four project lifecycle phases: inception, elaboration, construction and transition.
136
regression testing
A type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software.
137
regulatory acceptance testing
A type of acceptance testing performed to verify whether a system conforms to relevant laws, policies and regulations.
138
reliability
The degree to which a component or system performs specified functions under specified conditions for a specified period of time.
139
reliability growth model
A model that shows the growth in reliability over time of a component or system as a result of the defect removal.
140
requirement
A provision that contains criteria to be fulfilled.
141
retrospective meeting
A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
142
reusability
The degree to which a work product can be used in more than one system, or in building other work products.
143
review
A type of static testing in which a work product or process is evaluated by one or more individuals to detect defects or to provide improvements.
144
reviewer
A participant in a review who identifies issues in the work product.
145
risk
A factor that could result in future negative consequences.
146
risk analysis
The overall process of risk identification and risk assessment.
147
risk level
The qualitative or quantitative measure of a risk defined by impact and likelihood.
148
risk management
The process for handling risks.
149
risk mitigation
The process through which decisions are reached and protective measures are implemented for reducing or maintaining risks to specified levels.
150
risk-based testing
Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.
151
robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
152
role-based review
A review technique in which a work product is evaluated from the perspective of different stakeholders.
153
root cause
A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.
154
root cause analysis
An analysis technique aimed at identifying the root causes of defects. By directing corrective measures at root causes, it is hoped that the likelihood of defect recurrence will be minimized.
155
scenario-based review
A review technique in which a work product is evaluated to determine its ability to address specific scenarios.
156
scribe
A person who records information at a review meeting.
157
scrum
An iterative incremental framework for managing projects commonly used with Agile software development.
158
security
The degree to which a component or system protects information and data so that persons or other components or systems have the degree of access appropriate to their types and levels of authorization.
159
security testing
Testing to determine the security of the software product.
160
sequential development model
A type of software development lifecycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.
161
service virtualization
A technique to enable virtual delivery of services which are deployed, accessed and managed remotely.
162
session-based testing
An approach in which test activities are planned as test sessions.
163
severity
The degree of impact that a defect has on the development or operation of a component or system.
164
simulator
A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
165
software development lifecycle (SDLC)
The activities performed at each stage in software development, and how they relate to one another logically and chronologically.
166
software lifecycle
The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.
167
standard
Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).
168
state transition testing
A black-box test technique in which test cases are designed to exercise elements of a state transition model.
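A minimal sketch of state transition testing: model the valid transitions explicitly, then derive one test per transition (0-switch coverage) plus a test for an invalid event. The document-workflow states, events, and stay-put behavior are illustrative assumptions.

```python
# State transition testing sketch: a tiny document-workflow model with
# an explicit transition table. States and events are illustrative.

transitions = {
    ("draft",     "submit"):  "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"):  "draft",
}

def next_state(state, event):
    # Invalid event for the current state: stay put (a design assumption).
    return transitions.get((state, event), state)

# One test case per valid transition (0-switch coverage):
for (state, event), expected in transitions.items():
    assert next_state(state, event) == expected

# An invalid transition deserves a test case too:
assert next_state("draft", "approve") == "draft"
```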
169
statement
An entity in a programming language, which is typically the smallest indivisible unit of execution.
170
statement coverage
The coverage of executable statements.
171
statement testing
A white-box test technique in which test cases are designed to execute statements.
172
static analysis
The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.
173
static testing
Testing a work product without the work product code being executed.
174
structural coverage
Coverage measures based on the internal structure of a component or system.
175
stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
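A minimal stub sketch: the component under test depends on a called component (here, an exchange-rate service), and a skeletal replacement with a canned answer lets it be tested in isolation. All names and the rate value are illustrative assumptions.

```python
# Stub sketch: a canned-response replacement for a called component,
# so the test needs no network or real service. Names are illustrative.

def convert(amount, rate_service):
    # Component under test: depends on an external rate service.
    return round(amount * rate_service.get_rate("EUR", "USD"), 2)

class RateServiceStub:
    """Skeletal replacement for the real rate service."""
    def get_rate(self, src, dst):
        return 1.10   # canned response, no network call

assert convert(100.0, RateServiceStub()) == 110.0
```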
176
system integration testing
A test level that focuses on interactions between systems.
177
system testing
A test level that focuses on verifying that a system as a whole meets specified requirements.
178
system under test (SUT)
A type of test object that is a system.
179
technical review
A formal review by technical experts who examine the quality of a work product and identify discrepancies from specifications and standards.
180
test
A set of one or more test cases.
181
test analysis
The activity that identifies test conditions by analyzing the test basis.
182
test approach
The implementation of the test strategy for a specific project.
183
test automation
The use of software to perform or support test activities.
184
test basis
The body of knowledge used as the basis for test analysis and design.
185
test case
A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
186
test charter
Documentation of the goal or objective for a test session.
187
test completion
The activity that makes testware available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.
188
test completion report
A type of test report produced at completion milestones that provides an evaluation of the corresponding test items against exit criteria.
189
test condition
A testable aspect of a component or system identified as a basis for testing.
190
test control
The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.
191
test cycle
An instance of the test process against a single identifiable version of the test object.
192
test data
Data needed for test execution.
193
test data preparation tool
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
194
test design
The activity that derives and specifies test cases from test conditions.
195
test environment
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
196
test estimation
An approximation related to various aspects of testing.
197
test execution
The activity that runs a test on a component or system producing actual results.
198
test execution schedule
A schedule for the execution of test suites within a test cycle.
199
test execution tool
A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.
200
test harness
A collection of stubs and drivers needed to execute a test suite.
201
test implementation
The activity that prepares the testware needed for test execution based on test analysis and design.
202
test infrastructure
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
203
test item
A part of a test object used in the test process.
204
test leader
On large projects, the person who reports to the test manager and is responsible for project management of a particular test level or a particular set of testing activities.
205
test level
A specific instantiation of a test process.
206
test management
The planning, scheduling, estimating, monitoring, reporting, control and completion of test activities.
207
test management tool
A tool that supports test management.
208
test manager
The person responsible for project management of testing activities, resources, and evaluation of a test object.
209
test monitoring
The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders.
210
test object
The work product to be tested.
211
test objective
The reason or purpose of testing.
212
test oracle
A source to determine an expected result to compare with the actual result of the system under test.
213
test plan
Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.
214
test planning
The activity of establishing or updating a test plan.
215
test policy
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
216
test procedure
A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution.
217
test process
The set of interrelated activities comprising test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.
218
test process improvement
A program of activities undertaken to improve the performance and maturity of the organization's test processes.
219
test progress report
A type of test report produced at regular intervals about the progress of test activities against a baseline, risks, and alternatives requiring a decision.
220
test report
Documentation summarizing test activities and results.
221
test reporting
Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.
222
test result
The consequence/outcome of the execution of a test.
223
test schedule
A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.
224
test script
A sequence of instructions for the execution of a test.
225
test session
An uninterrupted period of time spent in executing tests.
226
test strategy
Documentation aligned with the test policy that describes the generic requirements for testing and details how to perform testing within an organization.
227
test suite
A set of test scripts or test procedures to be executed in a specific test run.
228
test technique
A procedure used to define test conditions, design test cases, and specify test data.
229
test tool
Software or hardware that supports one or more test activities.
230
test type
A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.
231
test-first approach
An approach to software development in which the test cases are designed and implemented before the associated component or system is developed.
232
testability
The degree to which test conditions can be established for a component or system, and tests can be performed to determine whether those test conditions have been met.
233
tester
A person who performs testing.
234
testing
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of a component or system and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
235
testware
Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.
236
traceability
The degree to which a relationship can be established between two or more work products.
237
usability
The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.
238
usability testing
Testing to evaluate the degree to which the system can be used by specified users with effectiveness, efficiency and satisfaction in a specified context of use.
239
use case testing
A black-box test technique in which test cases are designed to exercise use case behaviors.
240
user acceptance testing (UAT)
A type of acceptance testing performed to determine if intended users accept the system.
241
user interface
All components of a system that provide information and controls for the user to accomplish specific tasks with the system.
242
user story
A user or business requirement expressed in one sentence of everyday or business language, capturing the functionality a user needs and the reason behind it, along with any non-functional criteria and acceptance criteria.
243
V-model
A sequential development lifecycle model describing a one-for-one relationship between major phases of software development from business requirements specification to delivery, and corresponding test levels from acceptance testing to component testing.
244
validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
245
verification
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
246
walkthrough
A type of review in which an author leads members of the review through a work product and the members ask questions and make comments about possible issues.
247
white-box test technique
A test technique based only on the internal structure of a component or system.
248
white-box testing
Testing based on an analysis of the internal structure of the component or system.
249
Wideband Delphi
An expert-based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.