II - Testing Throughout the Software Development Lifecycle Flashcards

1
Q

Functional Testing

A

Evaluates “what” the system should do. Work products include business requirements, functional specifications, epics, user stories, and use cases.
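
For illustration, a minimal functional check in pytest; the function and the 19% VAT rule are hypothetical, not part of the syllabus:

```python
# A minimal sketch of a functional test; the business rule is hypothetical.
def total_with_tax(net: float, rate: float = 0.19) -> float:
    """Hypothetical requirement: gross price = net price plus 19% VAT."""
    return round(net * (1 + rate), 2)

def test_total_includes_tax():
    # Checks "what" the system does: the result the specification demands.
    assert total_with_tax(100.00) == 119.00
```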

2
Q

Non-Functional Testing

A

Evaluates “how well” the system behaves.
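
For contrast, a minimal non-functional check in pytest; the operation and the 200 ms threshold are hypothetical assumptions:

```python
# A minimal sketch of a non-functional (performance) check.
import time

def search(query: str) -> list:
    return [query]  # stand-in for the real operation under test

def test_search_responds_within_200ms():
    start = time.perf_counter()
    search("books")
    elapsed = time.perf_counter() - start
    # Asserts "how well" (response time), not "what" (the result).
    assert elapsed < 0.2
```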

3
Q

Change Related Testing

A

Performed when functionality has been changed, added, or fixed, to confirm the change behaves as intended. Ex. Confirmation Testing and Regression Testing.

4
Q

Confirmation Testing

A

A type of Change Related Testing.

Confirms whether the original defect has been successfully fixed.
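
A minimal sketch of a confirmation test in pytest; the function and the defect are hypothetical:

```python
# Hypothetical defect: discounts over 100% produced negative prices.
# The fix clamps the percentage; this test confirms the fix.
def apply_discount(price: float, percent: float) -> float:
    percent = max(0.0, min(percent, 100.0))  # the fix under confirmation
    return price * (1 - percent / 100)

def test_defect_discount_over_100_percent_fixed():
    # Re-runs the exact scenario from the original defect report.
    assert apply_discount(50.0, 150.0) == 0.0
```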

5
Q

Regression Testing

A

A type of Change Related Testing.

Re-testing an already tested program after it (or its environment) has been modified, to make sure that:
- the behavior of other parts of the code is not unintentionally affected.
- no new defects have been introduced.
- no previously masked defects have been exposed.
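
A hedged sketch of a regression suite in pytest, continuing the hypothetical apply_discount example from the Confirmation Testing card:

```python
# After the clamping fix, pre-existing tests for unchanged behavior
# are re-run to check for unintended side effects.
import pytest

def apply_discount(price: float, percent: float) -> float:
    percent = max(0.0, min(percent, 100.0))  # the recent modification
    return price * (1 - percent / 100)

@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0.0, 100.0),   # no discount: behavior must be unchanged
    (100.0, 25.0, 75.0),   # ordinary discounts still computed correctly
    (0.0, 50.0, 0.0),      # free items stay free
])
def test_discount_behavior_unchanged(price, percent, expected):
    assert apply_discount(price, percent) == expected
```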

6
Q

Characteristics of Good Testing

A
  • For every development activity, there is a corresponding test activity.
  • Each test level has test objectives specific to that level.
  • Test activities (starting with test analysis and design) for a given test level begin during the corresponding development activity.
  • Testers participate in discussions to define and refine requirements and design.
  • Testers are involved in reviewing work products (e.g., requirements, design, user stories).

7
Q

Sequential Development Models

A

A type of Software Development Lifecycle Model: a linear, step-by-step process.

Typically used for long-term projects.

Ex. Waterfall or V-model.

(Terms like verification and validation usually refer to the V-model.)

8
Q

Waterfall Model

A

An example of a Sequential Development Model.

  • Testing is a “one time only” activity.
  • “Final testing” takes place at the test closure of the software project.
  • The model does not include iterating back for error correction or continuing the project afterwards.

9
Q

V Model

A

An example of a Sequential Development Model.

One side of the V covers development activities; the other side covers test activities.

The V-model integrates the test process throughout the development process, implementing the principle of early testing: each development step is paired with a corresponding verification and validation step.

10
Q

Iterative and Incremental Models

A

A type of Software Development Lifecycle Model in which groups of features are specified, designed, built, and tested together in a series of cycles.

The software’s features grow incrementally.

Typically used for shorter-term projects, since work is delivered in short (e.g., weekly) increments.

Ex. Rational Unified Process (RUP), Scrum/Agile, Kanban, and Spiral (Prototyping).

11
Q

Rational Unified Process (RUP)

A

An example of an Iterative and Incremental Model.

An older model that is not used much anymore.

Business value is delivered incrementally in time-boxed, cross-discipline iterations.

12
Q

Scrum/Agile

A

An example of an Iterative and Incremental Model.

User stories are pulled from the Backlog, then developed and tested over a short period (e.g., days or weeks) to produce a potentially shippable product increment.

13
Q

Kanban

A

An example of an Iterative and Incremental Model.

Work is visualized on a board with three columns: To Do, Doing, Done.

Can be implemented with or without fixed-length iterations, and can deliver either a single enhancement or feature upon completion, or group features together for a single release.

14
Q

Spiral (Prototyping)

A

An example of an Iterative and Incremental Model.

Involves creating experimental increments, some of which may be heavily reworked or even abandoned in subsequent development work.

The team works with the client to prototype the product. Testing begins once the client signs off on the “final” prototype.

15
Q

Software Development Lifecycle Model Selection

A

Software development lifecycle models must be selected and adapted to the context of project and product characteristics.

High-risk / long-term projects should use Sequential models.

Low-risk / shorter-term projects should use Iterative models.

Models can also be combined, depending on the project.

16
Q

Test Levels

A

Test levels are groups of test activities that are organized and managed together.

Each test level uses at least one instance of the test process.

Includes Component Testing, Integration Testing, System Testing, and Acceptance Testing.

17
Q

Component Testing

A

Focuses on a single component, such as a button or a field. (A test sketch follows the lists below.)

Component testing is often done in isolation from the rest of the system and is usually performed by the developer who wrote the code.

Defects are typically fixed as soon as they are found, often with no formal defect management.

Test Basis:
- Detailed Design
- Code
- Data Model
- Component Specifications

Test Objects:
- Components, units, or modules
- Code and data structures
- Classes
- Database models
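
A minimal sketch of a component test in pytest, as referenced above; the component and its repository dependency are hypothetical, and the dependency is replaced with a stub so the unit is tested in isolation:

```python
from unittest.mock import Mock

class GreetingService:
    """Component under test: formats a greeting for a user."""
    def __init__(self, user_repo):
        self.user_repo = user_repo

    def greet(self, user_id: int) -> str:
        return f"Hello, {self.user_repo.get_name(user_id)}!"

def test_greet_formats_name():
    repo = Mock()                       # stub isolates the component
    repo.get_name.return_value = "Ada"
    assert GreetingService(repo).greet(1) == "Hello, Ada!"
    repo.get_name.assert_called_once_with(1)
```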

18
Q

Integration Testing

A

Involves multiple components. Focuses on interactions between components or systems.

Includes:
- Component Integration Testing, which is usually done by the developer.
- System Integration Testing, which is usually done by the tester; typically involves testing multiple software systems (e.g., the Amazon site plus a payment system).

Test Basis:
- Software and system design
- Sequence diagrams
- Interface and communication protocol specifications
- Use cases
- Architecture
- Workflows
- External interface definitions

Test Objects:
- Subsystems
- Databases
- Infrastructure
- Interfaces
- APIs
- Microservices

19
Q

System Testing

A

Testing everything together (end-to-end testing). Focuses on the behavior and capabilities of a whole system or product (e.g., from logging in to logging out). A sketch follows the lists below.

Typically performed by independent testers.

Test Basis:
- System and software requirement specifications
- Risk analysis reports
- Use cases
- Epics and user stories
- Models of system behavior
- State diagrams
- System and user manuals

Test Objects:
- Applications
- Hardware/software systems
- Operating systems
- System under test (SUT)
- System configuration and configuration data
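
The end-to-end sketch referenced above; the App class is a hypothetical in-memory stand-in for a deployed system under test:

```python
class App:
    """Hypothetical SUT stand-in with a minimal user session."""
    def __init__(self):
        self.user = None

    def login(self, name: str, password: str) -> bool:
        self.user = name if password == "secret" else None
        return self.user is not None

    def place_order(self, item: str) -> str:
        assert self.user is not None, "must be logged in"
        return f"order:{item}"

    def logout(self) -> None:
        self.user = None

def test_login_order_logout_flow():
    # Exercises one whole user journey, from logging in to logging out.
    app = App()
    assert app.login("ada", "secret")
    assert app.place_order("book") == "order:book"
    app.logout()
    assert app.user is None
```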

20
Q

Acceptance Testing

A

Often carried out as User Acceptance Testing (UAT).
Verifies that the system is fit for use from a user’s perspective.

Is often the responsibility of the customers, business users, product owners, or operators of a system.

Test Basis:
- Business processes
- User or business requirements
- Regulations, legal contracts and standards
- Use cases and/or user stories
- System requirements
- System or user documentation
- Installation procedures
- Risk analysis reports

Test Objects:
- System under test
- System configuration and configuration data
- Business processes for a fully integrated system
- Recovery systems and hot sites (for business continuity and disaster recovery testing)
- Operational and maintenance processes
- Forms and reports
- Existing and converted production data

21
Q

Component Integration Testing

A

Focuses on the interactions and interfaces between integrated components. This is generally automated.

Examples of Defects:
- Incorrect data, missing data, or incorrect data encoding
- Incorrect sequencing or timing of interface calls
- Interface mismatch
- Failures in communication between components
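
A hedged pytest sketch; both components below are hypothetical. Each would be unit-tested separately; this test exercises the interface between them, where defects such as wrong field names or encodings surface:

```python
import json

def parse_order(raw: str) -> dict:
    """Component A: decodes an incoming order message."""
    return json.loads(raw)

def format_receipt(order: dict) -> str:
    """Component B: renders a receipt from a decoded order."""
    return f"Order {order['id']}: {order['total']:.2f} EUR"

def test_parser_output_is_valid_formatter_input():
    # Real output of one component is fed into the other; an interface
    # mismatch between them would fail here, not in their unit tests.
    raw = '{"id": 7, "total": 19.5}'
    assert format_receipt(parse_order(raw)) == "Order 7: 19.50 EUR"
```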

22
Q

System Integration Testing

A

Performed after system testing, or in parallel with ongoing system test activities.
Focuses on the interactions and interfaces between systems, packages, and microservices.

Examples of Defects:
- Inconsistent message structures between systems
- Incorrect data, missing data, or incorrect data encoding
- Interface mismatch
- Failures in communication between systems
- Failure to comply with mandatory security regulations
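
A hedged pytest sketch; the payment contract and field names are hypothetical. A real system integration test would call the other system’s sandbox API; a canned response keeps the sketch self-contained:

```python
# Agreed message contract between the two systems (hypothetical).
REQUIRED_FIELDS = {"transaction_id", "amount", "currency", "status"}

def fetch_payment_status() -> dict:
    # Stand-in for an HTTP call to the external payment system.
    return {
        "transaction_id": "abc-123",
        "amount": 19.50,
        "currency": "EUR",
        "status": "authorized",
    }

def test_payment_message_structure_is_consistent():
    response = fetch_payment_status()
    missing = REQUIRED_FIELDS - response.keys()
    assert not missing, f"inconsistent message structure, missing: {missing}"
```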

23
Q

Common Forms of Acceptance Testing

A
  • User acceptance testing
  • Operational acceptance testing
  • Contractual acceptance testing
  • Regulatory acceptance testing
  • Alpha and beta testing
24
Q

User Acceptance Testing (UAT)

A
  • Builds confidence that the users can use the system to meet their needs and fulfill requirements.
  • Users can perform business processes with minimum difficulty, cost, and risk.
25
Q

Operational Acceptance Testing (OAT)

A

Builds confidence that the operators or system administrators can keep the system working properly for the users in the operational environment.

Ex. when the administrative/IT team completes a system update, database upgrade, etc. Usually happens behind the scenes. Typical activities include (a sketch of the first follows the list):

  • testing of backup and restore
  • installing, uninstalling and upgrading
  • disaster recovery
  • data load and migration tasks
  • checks for security vulnerabilities
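
A hedged sketch of a backup-and-restore check using only the Python standard library; the paths and file contents are hypothetical:

```python
import shutil
import tempfile
from pathlib import Path

def test_backup_and_restore_round_trip():
    with tempfile.TemporaryDirectory() as tmp:
        data = Path(tmp, "data")
        data.mkdir()
        (data / "orders.csv").write_text("id,total\n1,19.50\n")

        # "Backup": archive the data directory.
        backup = shutil.make_archive(str(Path(tmp, "backup")), "gztar", data)

        # "Restore": unpack into a fresh location and compare.
        restored = Path(tmp, "restored")
        shutil.unpack_archive(backup, restored)
        assert (restored / "orders.csv").read_text() == "id,total\n1,19.50\n"
```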
26
Q

Regulatory Acceptance Testing

A

Testing performed against any regulations that must be adhered to, such as government, legal, or safety regulations.

Ex. banks and health authorities must comply with legal regulations.

27
Q

Alpha Testing

A

Potential or existing customers, users, operators, or independent testers perform tests at the developing organization’s site.

A separate team within the company completes the testing.

28
Q

Beta Testing

A

Potential or existing customers, users, or operators perform tests of the software system in its intended operational environment, at the customer’s site.

Ex. an individual testing a game at their own house, or Google’s initial public releases.

29
Q

Maintenance Testing

A

Once deployed to production environments, software and systems need to be maintained.

Maintenance testing is performed whenever changes are made to a system in production.

Maintenance is also needed to preserve or improve non-functional quality characteristics over the system’s lifetime, especially:
- performance efficiency
- compatibility
- reliability
- security

30
Q

Triggers for Maintenance Testing

A
  1. Modification, such as:
    - planned enhancements (e.g., release-based)
    - corrective and emergency changes
    - changes of the operational environment (such as planned operating system or database upgrades)
    - upgrades of COTS software
    - patches for defects and vulnerabilities
  2. Migration (from one platform to another)
    → operational testing of the new environment
    → testing the changed software
  3. Retirement (application reaches the end of its life)
    → testing of data migration
    → testing restore/retrieve procedures after archiving (if long retention periods are required)

Regression testing may be needed
→ to ensure that any functionality that remains in service still works.

31
Q

Impact Analysis

A

When a developer reports that a major change has been made, an impact analysis determines how much maintenance testing will be needed.

Impact analysis evaluates the changes to:
- identify the intended consequences of a change
- identify its possible side effects
- identify the areas in the system that will be affected by the change
- determine the extent to which regression testing is to be carried out
- help decide whether the change should be made

32
Q

What can make Impact Analysis Difficult?

A
  • specifications are out of date or missing
  • test cases are not documented or are out of date
  • bi-directional traceability between tests and the test basis has not been maintained
  • tool support is weak or non-existent
  • the people involved do not have domain and/or system knowledge
  • insufficient attention has been paid to the software’s maintainability during development
33
Q

Test Object

A

The component or system to be tested.

34
Q

Test Basis

A

The body of knowledge used as the basis for test analysis and design.

35
Q

White-box testing

A

Testing based on an analysis of the internal structure of the component or system.
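
A minimal sketch of structure-based tests in pytest; the function is hypothetical. The tests are derived from the internal branches, one per branch, rather than from a specification alone:

```python
def classify(age: int) -> str:
    if age < 18:          # branch 1
        return "minor"
    return "adult"        # branch 2

def test_branch_minor():
    assert classify(17) == "minor"

def test_branch_adult():
    # Boundary value chosen specifically to exercise the second branch.
    assert classify(18) == "adult"
```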