Ch5 Flashcards
Benefits of Test Independence?
See other and different defects
Unbiased
Verify assumptions made during specification and implementation
Bring experience, skills and quality
Drawbacks of Test Independence?
Isolation from development team
May be seen as bottleneck or blamed for delays in release
May not be familiar with business, project or systems
Developers may lose a sense of responsibility for quality
Tasks of the Test Leader?
Write or review test policy and strategy.
Contribute the testing perspective to other project activities.
Plan tests (approach, estimates and resources).
Assess testing objectives and risks.
Schedule test activities and initiate specification, preparation, implementation and execution of tests.
Monitor the test results and check exit criteria.
Adapt planning based on test results and progress, taking action necessary to compensate for problems.
Set up adequate configuration management of testware.
Introduce suitable metrics for measuring test progress and evaluating the quality of testing.
Consider automation and select tools to support testing.
Supervise the implementation of the test environment.
Write test summary reports for stakeholders.
Tasks of the Tester?
Review and contribute to test plans
Review user requirements, specifications and models for testability
Create test specifications
Set up the test environment with appropriate technical support
Prepare and acquire test data
Execute and log tests
Evaluate results and record incidents
Use test tools as necessary and automate tests
Measure performance (if applicable)
Review tests developed by others
Are you mad? Recite the IEEE Std 829-1998 test plan outline.
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals
State the order of the Levels of Planning!!
Test Policy –> Test Strategy –> Master Test Plan –> Level Test Plans (Component –> Integration –> System –> Acceptance)
What is the definition of Entry Criteria?
Entry criteria are the conditions that must be met before testing can start, i.e. when tests are ready for execution.
What are a few Entry Criteria?
Test environment available and ready
Test tool configured in the test environment
Testable code available
Test data available, including configuration data, logins, etc.
Test summary report available from previous testing, including quality measures
Third-party software delivered and software licences bought
Other project dependencies in place
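For illustration only, such criteria can be treated as a simple checklist gate before execution starts. A minimal Python sketch (criterion names and statuses below are invented examples, not from any standard):

    def ready_to_start(criteria):
        """Return True only if every entry criterion is met; report blockers."""
        unmet = [name for name, met in criteria.items() if not met]
        for name in unmet:
            print(f"Entry criterion not met: {name}")
        return not unmet

    # Example statuses -- in a real project these would come from the
    # environment, the build system and the test management tool.
    print(ready_to_start({
        "Test environment available and ready": True,
        "Testable code available": True,
        "Test data available": False,  # blocks execution from starting
    }))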
What are Exit Criteria?
Exit criteria are used to define when testing should end, typically once testing has achieved a specific goal.
What are a few Exit Criteria?
Measures of testing thoroughness, i.e. coverage measures
Estimates of defect density or reliability
Cost
Residual risks such as number of defects outstanding or requirements not tested
Schedules such as those based on time to market
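As a rough sketch, an exit decision can be expressed as thresholds over such measures. The 90% coverage and zero-open-defects thresholds below are invented; real values are project-specific:

    def exit_criteria_met(coverage, open_defects,
                          min_coverage=0.90, max_open_defects=0):
        """Illustrative exit check combining coverage and residual defects."""
        return coverage >= min_coverage and open_defects <= max_open_defects

    # 92% coverage reached, but 3 defects still open -> keep testing
    print(exit_criteria_met(coverage=0.92, open_defects=3))  # False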
What should you remember about Exit Criteria?
Exit criteria vary with test level:
Coverage of code for component testing
Coverage of requirements or risk for system testing
Non-functional measures such as usability in acceptance testing
Define Test Approach.
The implementation of the test strategy, based on objectives and risks.
What can the Test Approach be used for?
A starting point for test planning.
Selecting design techniques and test types.
Defining Entry/Exit Criteria
What are the types of Test Approach?
Analytical, e.g. risk-based
Model-based, e.g. using statistics such as expected usage profiles
Methodical, e.g. based on failures (error guessing), experience, checklists
Process- or standard-compliant, e.g. industry standards or agile methods
Dynamic/heuristic, e.g. reactive, exploratory testing
Consultative, based on advice from experts in technology or business
Regression-averse, e.g. reuse and automation
Test Estimation is?
A calculated approximation of the cost or effort required to complete a task.
What are the approaches for Test Estimation?
Two Approaches:
The Metrics-based approach, based on metrics of former or similar projects, or typical values.
The Expert-based approach, based on assessments by the owner of the tasks, or domain experts.
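A small sketch of the metrics-based idea, assuming effort scales with test case count (all figures invented):

    # Derive average effort per test case from past projects, then
    # apply it to the new project's planned test case count.
    past_projects = [
        (120, 300),  # (test cases, effort in person-hours)
        (200, 520),
        (80, 190),
    ]
    total_cases = sum(cases for cases, _ in past_projects)
    total_hours = sum(hours for _, hours in past_projects)
    hours_per_case = total_hours / total_cases

    planned_cases = 150
    print(f"Estimate: {hours_per_case * planned_cases:.0f} person-hours")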
What factors should you consider whilst Estimating?
Product -
Quality of the specification, size of the product, complexity, and requirements for reliability, security and documentation
Development process -
Stability of the organisation, tools used, test process, skills of the people involved, time pressure
Software quality -
Expected number of defects and the amount of rework required
Why do we perform Test Progress Monitoring?
Provide feedback and visibility about testing
Assess progress against planned schedule and budget
Measure exit criteria such as coverage
Assess effectiveness of test approach with respect to objectives
Collect data for future project estimation
True or false: metrics can be collected manually or automatically?
True, Test tools (test management, execution tools, defect trackers) can record key data.
State a few useful Metrics.
Percentage of work done in test case and environment preparation
Test case execution (e.g. number of test cases run/not run and test cases passed/failed)
Defect information (e.g. defect density, defects fixed, failure rate, retest results)
Coverage of requirements, risks or code
Dates of test milestones
Testing costs, including cost-benefit analysis of fixing defects
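Several of these fall out of simple counting over execution records, as in this sketch (record format and outcomes are made up):

    from collections import Counter

    # Each record: (test case id, outcome)
    results = [("TC-01", "passed"), ("TC-02", "failed"),
               ("TC-03", "passed"), ("TC-04", "not run")]

    counts = Counter(outcome for _, outcome in results)
    executed = counts["passed"] + counts["failed"]
    print(f"Executed: {executed}/{len(results)}, "
          f"pass rate: {counts['passed'] / executed:.0%}")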
What should you consider when choosing Metrics?
Estimates (time, cost, etc.)
Exit criteria (e.g. coverage, risk and defect data)
Suspension criteria (e.g. quality, timescales)
What actions might we take in Test Control?
Re-prioritise tests when an identified risk occurs (e.g. software delivered late).
Change the test schedule due to availability of a test environment.
Set an entry criterion requiring fixes to have been retested by a developer before accepting them into a build.
What are the objectives of Test Reporting?
To summarise information about test activities during test phase:
What testing occurred?
Statistics on tests run/passed/failed, incidents raised/fixed
Were the exit criteria met?
To analyse data and metrics to support recommendations and decisions about future actions:
Assessment of defects remaining
Economic benefit of continued testing
Outstanding risks
Level of confidence in tested software
Effectiveness of objectives, approach and tests
Impress me, outline the IEEE Std 829-1998 Test Summary Report:
Summary:
Software versions and hardware environment
Refer to test plan, logs and incident reports
Variances:
Changes from test plan, designs or procedures
Comprehensiveness assessment:
Features not tested, with reasons
Summary of results:
Description of incidents, list of fixes and outstanding incidents
Evaluation:
Estimate of the software quality, reliability and failure risk
Summary of activities:
Effort and elapsed time categorised
Dates exit criteria were met
Approvals:
Provide a list and signature block for each approving authority
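If reports are produced by a tool, the same sections can be captured in a simple structure. A sketch only; the class, field names and sample values are invented:

    from dataclasses import dataclass, field

    @dataclass
    class TestSummaryReport:
        """Mirrors the IEEE 829-1998 test summary report sections."""
        summary: str
        variances: str
        comprehensiveness: str
        results: str
        evaluation: str
        activities: str
        approvals: list = field(default_factory=list)

        def render(self):
            return "\n".join(f"{name}: {value}"
                             for name, value in vars(self).items())

    report = TestSummaryReport(
        summary="System test of release 1.2; see test plan, logs and incident reports",
        variances="One extra exploratory session beyond the plan",
        comprehensiveness="Feature F9 untested (environment unavailable)",
        results="48 passed, 3 failed; fixes D-101 and D-102 retested",
        evaluation="Residual risk low; D-103 open, severity minor",
        activities="12 person-days over 3 weeks; exit criteria met 4 June",
        approvals=["Test manager", "Project manager"],
    )
    print(report.render())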