Ch5 Flashcards Preview


Flashcards in Ch5 Deck (41)
1

Benefits of Test Independence?

Sees other and different defects than the author
Is unbiased
Can verify assumptions made during specification and implementation
Brings experience, skills and a focus on quality

2

Drawbacks of Test Independence?

Isolation from development team
May be seen as bottleneck or blamed for delays in release
May not be familiar with business, project or systems
Developers may lose a sense of responsibility for quality

3

Tasks of the Test Leader?

Write or review test policy and strategy.

Contribute the testing perspective to other project activities.

Plan tests (approach, estimates and resources).

Assess testing objectives and risks.

Schedule test activities and initiate specification, preparation, implementation and execution of tests.

Monitor the test results and check exit criteria.

Adapt planning based on test results and progress, taking action necessary to compensate for problems.

Set up adequate configuration management of testware.

Introduce suitable metrics for measuring test progress and evaluating the quality of testing.

Consider automation and select tools to support testing.

Supervise the implementation of the test environment.
Write test summary reports for stakeholders.

4

Tasks of the Tester?

Review and contribute to test plans

Review user requirements, specifications and models for testability

Create test specifications

Set up the test environment with appropriate technical support

Prepare and acquire test data

Execute and log tests

Evaluate results and record incidents

Use test tools as necessary and automate tests

Measure performance (if applicable)

Review tests developed by others

5

Are you mad? Recite the IEEE Std 829-1998 test plan outline.
(Hint: Test plan identifier)

1. Test plan identifier
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. P/F criteria
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks
11. Environmental needs
12. Responsibilities
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.


6

State the order in Levels of Planning!!

Test Policy --> Test Strategy --> Master Test Plan --> Component --> Integration --> System --> Acceptance

7

What is the definition of Entry Criteria?

Entry criteria are the conditions under which we start testing, i.e. when tests are ready for execution.

8

What are a few Entry Criteria?

Test environment available and ready
Test tool configured in the test environment
Testable code available
Test data available, including configuration data, logins, etc.
Test summary report available from previous testing, including quality measures
Third-party software delivered and software licences bought
Other project dependencies in place

9

What are Exit Criteria?

Exit criteria define when testing ends, typically once testing has achieved a specific goal.

10

What are a few Exit Criteria?

Measures of testing thoroughness, i.e. coverage measures
Estimates of defect density or reliability
Cost
Residual risks such as number of defects outstanding or requirements not tested
Schedules such as those based on time to market
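
Exit criteria like these can be checked mechanically at the end of a test cycle. A minimal sketch (the field names and thresholds are illustrative assumptions, not taken from the syllabus):

```python
# Hypothetical exit-criteria check; names and thresholds are
# illustrative, not mandated by ISTQB or IEEE 829.
def exit_criteria_met(coverage, open_defects,
                      min_coverage=0.80, max_open_defects=0):
    """Return True when the test cycle may end."""
    return coverage >= min_coverage and open_defects <= max_open_defects

# Example: 85% coverage reached, but two defects still open.
print(exit_criteria_met(coverage=0.85, open_defects=2))  # False
```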

11

What should you remember about Exit Criteria?

Exit criteria vary with test level.
Coverage of code for component testing
Coverage of requirements or risk for system testing
Non-functional measures such as usability in acceptance testing

12

Define Test Approach.

The implementation of the test strategy, based on objectives and risks.

13

What can the Test Approach be used for?

A starting point for test planning.

Selecting design techniques and test types.

Defining Entry/Exit Criteria

14

What are the types of Test Approach?

Analytical
e.g. Risk-based
Model-based
e.g. Using statistics such as expected usage profiles
Methodical
e.g. Based on failures (error guessing), experience, checklist
Process- or standard-compliant
e.g. Industry standards or agile methods
Dynamic/heuristic
e.g. Reactive, exploratory testing
Consultative
e.g. Based on advice from experts in technology or business
Regression-averse
e.g. Reuse and automation

15

Test Estimation is?

A calculated approximation of the cost or effort required to complete a task.

16

What are the approaches for Test Estimation?

Two Approaches:

The Metrics-based approach based on
Metrics of former or similar projects, or typical values.

The Expert-based approach, based on assessments by the owner of the tasks, or domain experts.
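
The metrics-based approach can be as simple as scaling the throughput of a former project. A sketch, assuming we know effort and test-case counts from a similar past project (the figures are made-up example data):

```python
def metrics_based_estimate(past_effort_days, past_test_cases,
                           planned_test_cases):
    """Estimate effort by scaling a former project's days-per-case rate."""
    days_per_case = past_effort_days / past_test_cases
    return planned_test_cases * days_per_case

# Past project: 200 test cases took 50 person-days; we plan 320 cases.
print(metrics_based_estimate(50, 200, 320))  # 80.0 person-days
```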

17

What factors should you consider whilst Estimating?

Product -
Quality of the specification
Size of the product
Complexity
Requirements for reliability, security and documentation

Development process -
Stability of the organisation, tools used, test process, skills of the people involved, time pressure

Software quality -
Expected number of defects and the amount of rework required

18

Why do we perform
Test Progress Monitoring?

Provide feedback and visibility about testing
Assess progress against planned schedule and budget
Measure exit criteria such as coverage
Assess effectiveness of test approach with respect to objectives
Collect data for future project estimation

19

True or false: metrics can be collected manually or automatically?

True, Test tools (test management, execution tools, defect trackers) can record key data.

20

State a few useful Metrics.

Percentage of work done in test case and environment preparation
Test case execution (e.g. number of test cases run/not run and test cases passed/failed)
Defect information (e.g. defect density, defects fixed, failure rate, retest results)
Coverage of requirements, risks or code
Dates of test milestones
Testing costs, including cost-benefit analysis of fixing defects
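
Several of these metrics are simple ratios. A sketch computing a pass rate and a defect density (the counts and KLOC figure are made-up example data):

```python
def pass_rate(passed, executed):
    """Fraction of executed test cases that passed."""
    return passed / executed

def defect_density(defects, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / kloc

print(pass_rate(180, 200))       # 0.9
print(defect_density(30, 12.5))  # 2.4 defects per KLOC
```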

21

What should you consider when choosing Metrics?

Estimates (Time, Cost, etc.)
Exit criteria (e.g. coverage, risk and defect data)
Suspension criteria (e.g. quality, timescales)

22

What actions might we take in Test Control?

Re-prioritise tests when an identified risk occurs (e.g. software delivered late).
Change the test schedule due to availability of a test environment.
Set an entry criterion requiring fixes to have been retested by a developer before accepting them into a build.

23

What are the objectives of Test Reporting?

To summarise information about test activities during test phase:

What testing occurred?
Statistics on tests run/passed/failed, incidents raised/fixed

Were exit criteria met?

To analyse data and metrics to support recommendations and decisions about future actions:
Assessment of defects remaining
Economic benefit of continued testing
Outstanding risks
Level of confidence in tested software
Effectiveness of objectives, approach and tests

24

Impress me, outline the IEEE Std 829-1998 Test Summary Report:

Summary:

Software versions and hardware environment
Refer to test plan, logs and incident reports

Variances:

Changes from test plan, designs or procedures

Comprehensiveness assessment:

Features not tested, with reasons

Summary of results:

Description of incidents, list of fixes and outstanding incidents

Evaluation:

Estimate of the software quality, reliability and failure risk

Summary of activities:

Effort and elapsed time categorised
Dates exit criteria were met

Approvals:

Provide a list and signature block for each approving authority



25

What is Configuration Management?

The aim is to establish and maintain the integrity of the products of the system throughout the project and product life cycle.

26

Configuration Management can support testing by ensuring that?

All items of testware are
Uniquely identifiable
Version controlled
Tracked for changes
Related to each other
Related to development items

27

What's Risk?

“A factor that could result in future negative consequences; usually expressed as impact and likelihood”
ISTQB® Glossary
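
Since the glossary expresses risk as impact and likelihood, a common scoring scheme multiplies the two. A sketch using an assumed 1-5 scale for each factor (the scale itself is not mandated by ISTQB):

```python
def risk_level(likelihood, impact):
    """Risk score on an assumed 1-5 x 1-5 scale; higher means riskier."""
    return likelihood * impact

# Example: quite likely (4) with severe impact (5) -> 20 of a possible 25.
print(risk_level(4, 5))  # 20
```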

28

What determines the level of Risk?

Financial, Legal, Safety, Image, Rework or Embarrassment.

29

The main definition of Project Risk is?

Risks that threaten the project's ability to deliver its objectives (think people over software).

30

The main definition of Product Risk is?

Product risks are issues with the software or system itself.

31

What can affect Project Risk?

Technical issues
Problems in defining the right requirements
The extent that requirements can be met given existing constraints
Low quality of the design, code, test data and tests
Test environment not ready on time
Late data conversion or migration planning
Organisational factors
Skill, training and staff shortages
Personnel issues
Political issues (e.g. communication problems)
Unrealistic expectations of testing
Supplier issues
Failure of a third party
Contractual issues

32

What can affect Product Risk?

Failure-prone software delivered
Poor software characteristics
e.g. Reliability, usability, performance
Poor data integrity and quality
e.g. Data migration or conversion problems, violation of data standards
Software does not perform its intended functions
Potential for software to cause harm to an individual or company

33

What's Risk-based Testing?

Testing in which the identification and analysis of product risks drive the prioritisation and focus of test effort.

34

What is Incident Management?

Incident management is the process of recognising, investigating, taking action and disposing of incidents.

35

What's an Incident?

Discrepancies between actual and expected results are logged as incidents
They must be investigated and may turn out to be defects

36

What could be a cause of an Incident?

Software defect
Requirement or specification defect
Environmental problem
e.g. Hardware, operating system, network
Test procedure or script fault
e.g. Incorrect, ambiguous or missing step
Incorrect test data
Incorrect expected results on test procedure
Tester error
Not following the procedure

37

What are the objectives of an Incident Report?

Provide feedback to enable developers and other parties to identify, isolate and correct defects
Enable test leaders to track:
The quality of the system
The progress of the testing
Provide ideas for test process improvement
Identify defect clusters
Create a history of incidents and resolutions
Supply metrics for assessing exit criteria

38

Can you outline the IEEE Std 829-1998 Test Incident Report?

Report Identifier
Unique reference for each incident

Summary
Of the circumstances in which the incident occurred, referring to software and revision level, test case and test log

Description
Of the incident, referring to inputs, expected results, actual results, anomalies, date and time, procedure step, environment, attempts to repeat, testers and observers

Impact
Of the incident on test plans, test case and procedure specifications, if known

39

What is the Test Policy?

A high-level document describing the principles, approach and major objectives of the organisation regarding testing.

40

What is the Test Strategy?

Documentation that expresses the generic requirements for testing one or more projects run within an organisation, providing detail on how testing is to be performed, and is aligned with the test policy.

41

What's the definition of Test Control?

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported.