Lecture 6 Flashcards

1
Q

What is validation and what is verification?

A
  • Verification: Are we developing the product right? (Often performed as an internal procedure.)
  • Validation: Are we developing the right product? (Performed between stakeholders and the customer.)
2
Q

What is program testing?

A

Testing aims to reveal errors in the program

It’s aimed at breaking the system.
Testing can only reveal the presence of errors, not their absence.

Testing is part of a general verification and validation process.

3
Q

How can we deal with faults?

A
  • Fault avoidance (before the system is released)
  • Fault detection (while the system is running)
  • Fault tolerance (recover from failures once the system is released)
4
Q

What is fault, erroneous state and failure?

A

Fault: The mechanical or algorithmic cause of an erroneous state (“bug”).

Erroneous state: The system is in a state such that further processing by the system can lead to a failure.

Failure: Any deviation of the observed behaviour from the specified behaviour.

5
Q

Briefly, what are black-box and white-box testing?

A

Black-box testing focuses on testing software functionality without knowledge of internal code structure, while white-box testing involves testing internal code structures and logic.

6
Q

Examples of faults (interface specification):

A

Mismatch between what the client needs and what the server offers
Mismatch between requirements and implementation

7
Q

What is the focus and goal of black-box testing?

A

Focus: Input/output behaviour. If, for any given input, we can predict the output, then the unit passes the test.
Goal: Reduce the number of test cases using equivalence partitioning: group similar inputs and test one representative from each group. Example: test one negative number if all negative numbers are expected to behave the same.
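A minimal JUnit 5 sketch of equivalence partitioning (not from the lecture; the Grader class and its grade method are made up for illustration), testing one representative input per equivalence class:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical unit under test: classifies an exam score.
class Grader {
    // "fail" for scores below 50, "pass" for 50-100, exception otherwise.
    static String grade(int score) {
        if (score < 0 || score > 100) throw new IllegalArgumentException("score out of range");
        return score < 50 ? "fail" : "pass";
    }
}

class GraderEquivalencePartitionTest {
    // One representative per equivalence class instead of testing every possible value.
    @Test void representativeOfFailingClass() { assertEquals("fail", Grader.grade(30)); }
    @Test void representativeOfPassingClass() { assertEquals("pass", Grader.grade(75)); }
    @Test void representativeOfInvalidClass() {
        assertThrows(IllegalArgumentException.class, () -> Grader.grade(-5));
    }
}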

8
Q

Example of algorithmic faults:

A

Missing initialisation
Incorrect branching condition
Missing test for null

9
Q

What is the focus and goal of white-box testing?

A

Focus: Internal code logic. Test cases are designed to check whether the software works correctly across the various code paths and conditions.

Goal: Ensure all paths and conditions in the code are tested.

10
Q

Examples of mechanical faults (hard to find):

A
  • Operating temperature outside of equipment specification
  • Power failure
11
Q

What is COVERAGE?

A

The degree to which the sum of all test cases covers the program,
e.g., 75% path coverage if 3 of 4 possible paths are exercised.

12
Q

Example of erroneous state:

A
  • Wrong user input
  • Null reference errors
  • Concurrency errors
  • Exceptions
13
Q

Name the three types of coverage:

A

Types of coverage
* Statement Coverage (C0)
* Branch Coverage (C1)
* Path Coverage (C2)

14
Q

How do we deal with them (fault, erroneous state, failure)?

A
  • Patching
  • Declaring the bug as a feature
  • Testing
  • Modular redundancy
15
Q

Explain C1

A

C1 is BRANCH COVERAGE:

Full coverage
* At every decision, all branches are executed, e.g., for an if statement both the then and the else part
* Considers "empty" alternatives, e.g., omitted else branches

Benefits
* Reveals unreachable branches
* A good functional test usually achieves high branch coverage

Metric:
* C1 = Covered (Primitive) Branches / All (Primitive) Branches


16
Q

What can testing only show?

A

“Testing can only show the presence of bugs,
not their absence” (Dijkstra)

17
Q

Explain C2

A

C2 is PATH COVERAGE:

Full coverage
* All sequences whose elements relate to directly connected nodes in the control flow graph (potentially infinitely many)
* Considers (repeated) loops

Benefits
* Theoretically the optimal testing strategy, but suffers from combinatorial explosion

Metric:
* C2 = Covered Execution Paths / All Execution Paths (theoretically)


18
Q

Static analysis vs. dynamic analysis?

A

Static analysis (testing without code execution):
- Hand execution
- Walk-through
- Code inspection
- Automated tools checking for syntactic and semantic errors and for departures from coding standards

Dynamic analysis (testing with code execution):
- Black-box testing (test the input/output behaviour)
- White-box testing (test the internal logic of the subsystem or class)

19
Q

Name the four testing styles:

A

Unit Testing, System Testing, Integration Testing, Acceptance Testing

20
Q

What are the elements of a control flow graph, and which coverage level corresponds to each?

A

Statement (C_0)
- One language construct
- “Line of code”

Branch (C_1)
- Caused by conditions (if, switch, while etc.)
- E.g., A at first choice but not B

Path (C_2)
- One potential execution
- E.g., sequence creating A, C
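A minimal illustration (not from the lecture; the clamp method is made up) relating these graph elements to the three coverage levels:

// A tiny, hypothetical example relating concrete test inputs to the coverage levels.
public class CoverageDemo {

    static int clamp(int x) {
        int result = x;
        if (x < 0)   { result = 0; }    // branch A
        if (x > 100) { result = 100; }  // branch B
        return result;
    }

    // Statement coverage (C_0): clamp(-1) and clamp(200) together execute every statement.
    // Branch coverage (C_1):    the same two inputs also take each branch both ways
    //                           (A true/false and B true/false), so C_1 is reached too.
    // Path coverage (C_2):      there are 4 paths over (A, B); clamp(50) adds the (false, false)
    //                           path, while (true, true) is infeasible, so at most 3 of 4 paths
    //                           (75% path coverage) can ever be exercised.
}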

21
Q

Explain unit testing

A
  • Individual components (class or
    subsystem) are tested
  • Carried out by developers
  • Goal: confirm that the component or subsystem is correctly coded and carries out the intended functionality
22
Q

What is statement coverage?

A

Full coverage: every statement is executed at least once.
Benefits: easy to achieve; reveals unreachable code.
Metric: C_0 = Covered Statements / All Statements

23
Q

Explain System Testing

A
  • The entire system is tested
  • Carried out by developers
  • Goal: determine if the system meets the requirements
    (functional and nonfunctional)
24
Q

Describe the characteristics of C_0 (name, execution, effort):

A

Statement coverage test
Execute each statement at least once
Relatively low effort

25
Q

Explain Integration Testing

A
  • Groups of subsystems (collection
    of subsystems) and eventually the
    entire system are tested
  • Carried out by developers
  • Goal: test the interfaces among
    the subsystems.
26
Q

Describe the characteristics of C_2 (name, execution, effort):

A

Path coverage test:

Complete test (C_2a)
- All possible paths are executed
- Generally not possible with loops

Boundary-interior test (C_2b)
- Like C_2a, but loops are executed according to special rules
- High effort

Structured test (C_2c)
- Like C_2a, but loops are executed exactly n times
- High effort

27
Q

Explain Acceptance Testing

A
  • Evaluates the system delivered by
    developers
  • Carried out by the client. May
    involve executing typical
    transactions on site on a trial basis
  • Goal: demonstrate that the
    system meets the requirements
    and is ready to use.
28
Q

Describe the characteristics of C_1 (name, execution, effort):

A

Branch coverage test
Each edge is visited at least once
Realistic min. requirement, moderate effort

29
Q

Mention the three INTEGRATION TEST
STRATEGIES

A
  • Big bang
  • Bottom-up
  • Top-down
30
Q

Explain shortly the levels of tests using car as an example

A

Unit Testing: Checks small parts like a car’s engine. Done by creators to ensure these parts work as intended.
Integration Testing: Tests how these parts fit together (like wheels to the engine). Done by creators to verify the connections.
System Testing: Tests the whole car. Done by creators to see if the whole thing meets the plan.
Acceptance Testing: Done by the person buying the car to make sure it fits their needs and works properly in real use.

31
Q

Mention the four different doubles

A
  • Dummy Object: Only used to fill parameter spaces; not used in actual testing logic.
  • Fake Object: A simpler, working implementation for testing, like an in-memory database.
  • Stub: Returns fixed responses to specific calls made during testing.
  • Mock Object: Simulates behavior of real objects, checking interactions and sequences in tests.
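A hand-rolled sketch contrasting the four doubles above; the MailService interface and the implementing classes are made up for illustration (frameworks such as Mockito can generate the mock automatically):

import java.util.ArrayList;
import java.util.List;

// Hypothetical dependency of a unit under test.
interface MailService {
    void send(String to, String body);
    int quota(String user);
}

// Dummy: only fills a parameter slot; should never actually be used.
class DummyMailService implements MailService {
    public void send(String to, String body) { throw new AssertionError("should not be called"); }
    public int quota(String user)            { throw new AssertionError("should not be called"); }
}

// Fake: a real but simplified implementation (in-memory instead of SMTP).
class FakeMailService implements MailService {
    final List<String> outbox = new ArrayList<>();
    public void send(String to, String body) { outbox.add(to + ": " + body); }
    public int quota(String user)            { return Integer.MAX_VALUE; }
}

// Stub: returns fixed, canned responses for the calls made during the test.
class StubMailService implements MailService {
    public void send(String to, String body) { /* ignore */ }
    public int quota(String user)            { return 3; }
}

// Mock: additionally records interactions so the test can verify them afterwards.
class MockMailService implements MailService {
    int sendCalls = 0;
    public void send(String to, String body) { sendCalls++; }
    public int quota(String user)            { return 3; }
}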
32
Q

What are test drivers?

A

Simulates the part of the system that calls the component under test.

Helps in setting up the conditions necessary for testing a specific component by mimicking the behavior of other components it depends on.

Calls the System Under Test (SUT) or TestedUnit
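A minimal sketch, assuming JUnit 5, in which the test class plays the driver role for a hypothetical CurrencyConverter SUT:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// The test class acts as the driver: it simulates the callers of the SUT.
class CurrencyConverterDriverTest {

    // Hypothetical system under test.
    static class CurrencyConverter {
        double toEur(double usd) { return usd * 0.9; }  // fixed rate, assumed for the sketch
    }

    @Test
    void driverCallsTheSutWithPreparedInput() {
        CurrencyConverter sut = new CurrencyConverter();  // set up the component under test
        double eur = sut.toEur(10.0);                     // make the call a real caller would make
        assertEquals(9.0, eur, 1e-9);                     // check the observable result
    }
}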

33
Q

What is a test double?

A

A test double in software testing is similar to a stunt double in movies. It’s used to replace a real component in the system when that component is difficult or impractical to use during testing

34
Q

What are test stubs?

A

Simulates a component that is called by the tested component.

A component that the System Under Test (SUT) or TestedUnit depends on.

A partial implementation that returns fake values.
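A minimal sketch, assuming JUnit 5, of a stub standing in for a hypothetical CurrencyDatabase dependency so the SUT can be tested without the real database:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorStubTest {

    // Dependency of the SUT (would normally hit a real database).
    interface CurrencyDatabase {
        double rate(String from, String to);
    }

    // Hypothetical system under test.
    static class PriceCalculator {
        private final CurrencyDatabase db;
        PriceCalculator(CurrencyDatabase db) { this.db = db; }
        double priceInEur(double usd) { return usd * db.rate("USD", "EUR"); }
    }

    @Test
    void usesStubbedExchangeRate() {
        CurrencyDatabase stub = (from, to) -> 0.5;        // stub: fixed, fake value
        PriceCalculator sut = new PriceCalculator(stub);
        assertEquals(5.0, sut.priceInEur(10.0), 1e-9);
    }
}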

35
Q

Explain the big bang approach:

A
36
Q

Explain layered integration briefly:

A

Tests the interactions between different layers of an application before testing the entire system as a whole.

Layer I: Likely the user interface

Layer II: May include business logic with components like “Entity Model,” “Calculator,” and “Currency Converter”.

Layer III: The bottom layer, typically the data layer, with components “BinaryFile Storage,” “XMLFile Storage,” and “Currency DataBase”.
The approach suggests that integration testing should be done starting from the bottom layer up. First, ensure that each component within a layer (Layer III) works as expected. Next, test the interactions between components in Layer III and Layer II. Once Layer II components are working with Layer III, proceed to test the integration between Layer II and the top Layer I.

This method helps isolate issues within and between layers, making it easier to identify and fix problems than testing everything at once (as in the Big Bang approach).

37
Q

Explain bottom up in integration testing. Draw it?

A
38
Q

What is bottom up?

A

An integration strategy that starts with the lowest layer of the call hierarchy, which is tested individually.

The subsystems in the layer above are then tested, calling the previously tested subsystems.

This is repeated until all subsystems are included.

Test from the bottom up.

39
Q

Explain top down and draw it:

A
40
Q

What integration strategy is this?
(picture)

A

Bottom up testing

41
Q

Explain how acceptance testing fits into agile processes:

A

The point of the acceptance testing process in Agile methods is to ensure that the software meets the user or customer’s expectations and requirements.

42
Q

What is top down?

A

An integration strategy that starts with the top layer.

Combine all the subsystems that are called by the already tested subsystems and test the resulting collection of subsystems.

Do this until all subsystems are incorporated into the tests.

Test from the top down.

43
Q

What is regression testing?

A

Regression testing is testing the system to check that changes have not ‘broken’ previously working code

44
Q

Pros and cons of bottom up

A

Pros:
- No stubs needed
- Useful for integration testing of object-oriented systems, real-time systems, and systems with strict performance requirements

Cons:
- Drivers are needed
- Tests an important subsystem (the user interface) last

45
Q

Explain Continuous Integration (integration strategies):

A

A software development technique in which members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day.

46
Q

What are the two types of system testing?

A

Functional testing:
- Goal: test the functionality of the system
- Test cases are designed from the requirements analysis and centre on the requirements and key functions (use cases)
- The system is treated as a black box

Performance testing:
- Goal: try to violate the non-functional requirements
- Tests how the system behaves when overloaded
- Tries unusual orders of execution
- Checks the system's response to large volumes of data

47
Q

Mention some types of performance testing

A
  • Stress Testing
  • Volume testing
  • Configuration testing
  • Compatibility test
  • Timing testing
  • Security testing
  • Environmental test
  • Quality testing
  • Recovery testing
  • Human factors testing
48
Q

Pros and cons of top down

A

Pros:
- Tests can be defined in terms of the functionality of the system (functional requirements)
- No drivers needed

Cons:
- Stubs are needed
- Stubs are difficult to write: they must allow all possible conditions to be tested
- A large number of stubs may be required if the lowest layer contains many methods
- Some interfaces are not tested separately

49
Q

Say the test-driven development cycle

A

Test-driven development cycle
* Add a new test to the test model
* Run the automated tests
* => the new test will fail
* Write code to deal with the failure
* Run the automated tests
* => see them succeed
* Refactor code
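One iteration of the cycle as a minimal sketch, assuming JUnit 5; the Fibonacci class is made up for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1+2: add a new test and run it; it fails because Fibonacci does not do anything useful yet.
class FibonacciTest {
    @Test
    void fibonacciOfSmallNumbers() {
        assertEquals(0, Fibonacci.of(0));
        assertEquals(1, Fibonacci.of(1));
        assertEquals(5, Fibonacci.of(5));
    }
}

// Step 3+4: write just enough code to make the test pass, rerun the tests, see them succeed.
class Fibonacci {
    static int of(int n) {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            int next = a + b;
            a = b;
            b = next;
        }
        return a;
    }
    // Step 5: refactor (rename, extract methods, ...) while keeping the test green.
}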

50
Q

What are some types of user testing (type of acceptance testing)?

A

Alpha testing: Users of the software work with the developers to test the software
Beta testing: Release of the software is made available to the users (raise problems they find)
Acceptance testing: Customers test and decide whether it’s ready or not

51
Q

What is the proper way to set up a unit test in Java?

A

Use @BeforeEach to create fresh instances of the class for each test, ensuring tests are independent and repeatable.
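A minimal sketch, assuming JUnit 5; the test class name and the use of a plain list as the object under test are made up for illustration:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.util.ArrayList;
import java.util.List;

class ShoppingCartTest {

    private List<String> cart;  // object under test

    @BeforeEach
    void setUp() {
        // A fresh instance before every test keeps the tests independent and repeatable.
        cart = new ArrayList<>();
    }

    @Test
    void startsEmpty() {
        assertTrue(cart.isEmpty());
    }

    @Test
    void addingAnItemChangesTheSize() {
        cart.add("book");
        assertEquals(1, cart.size());  // not influenced by any other test
    }
}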

52
Q

When should you write your tests?

A

Normally after the source code is written.
But in XP (or when using TDD), before the source code is written

53
Q

What is the typical structure of a unit test in Java?

A

A unit test in Java typically consists of the @Test annotation followed by a test method that invokes the method to be tested and asserts expected outcomes.
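A minimal sketch of that structure, assuming JUnit 5; the shout method is made up for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class StringUtilTest {

    // Hypothetical method under test, inlined to keep the sketch self-contained.
    static String shout(String s) { return s.toUpperCase() + "!"; }

    @Test                                // 1. mark the method as a test case
    void shoutUppercasesAndAddsBang() {
        String result = shout("hello");  // 2. invoke the method to be tested
        assertEquals("HELLO!", result);  // 3. assert the expected outcome
    }
}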

54
Q

How do you write a test method for a Java class?

A

Instantiate the class, call the method to be tested with specific inputs, and use assertEquals to verify the output matches the expected result.
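A minimal sketch, assuming JUnit 5; the Calculator class is made up for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {

    // Hypothetical class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    void addReturnsTheSumOfItsArguments() {
        Calculator calculator = new Calculator();  // 1. instantiate the class
        int result = calculator.add(2, 3);         // 2. call the method with specific inputs
        assertEquals(5, result);                   // 3. verify the output matches the expectation
    }
}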

55
Q

What are some JUnit annotations

A
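Commonly used JUnit 5 (Jupiter) annotations, shown in a minimal sketch (the method names are made up for illustration):

import org.junit.jupiter.api.*;

class AnnotationOverviewTest {

    @BeforeAll  static void beforeAll() { /* runs once before all tests (must be static) */ }
    @BeforeEach void beforeEach()       { /* runs before every single test               */ }

    @Test
    @DisplayName("a readable name shown in test reports")
    void anOrdinaryTestCase() { }

    @Disabled("temporarily skipped")
    @Test
    void aSkippedTestCase() { }

    @AfterEach  void afterEach()        { /* runs after every single test                */ }
    @AfterAll   static void afterAll()  { /* runs once after all tests (must be static)  */ }
}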
56
Q

What is a JUnit assert statement?

A

An assert statement formulates an assumption about a test case’s expected behaviour.
It has to hold for the test case to pass.

57
Q

What are some JUnit assert statements?

A
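Commonly used JUnit 5 assertion methods, shown in a minimal sketch:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class AssertionOverviewTest {

    @Test
    void commonAssertions() {
        assertEquals(4, 2 + 2);                        // expected vs. actual value
        assertTrue("abc".startsWith("a"));             // condition must be true
        assertFalse("abc".isEmpty());                  // condition must be false
        assertNull(System.getProperty("no.such.key")); // reference must be null
        assertNotNull("abc");                          // reference must not be null
        assertArrayEquals(new int[]{1, 2}, new int[]{1, 2});
        assertThrows(NumberFormatException.class, () -> Integer.parseInt("not a number"));
    }
}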
58
Q

How do you test for expected exceptions in Java unit tests?

A

Use Assertions.assertThrows() to verify that the correct exception is thrown for invalid inputs.
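A minimal sketch, assuming JUnit 5; the withdraw method and its error message are made up for illustration:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class WithdrawalTest {

    // Hypothetical method under test that rejects invalid input.
    static void withdraw(int balance, int amount) {
        if (amount > balance) throw new IllegalArgumentException("amount exceeds balance");
    }

    @Test
    void rejectsWithdrawalsAboveTheBalance() {
        IllegalArgumentException ex = assertThrows(
                IllegalArgumentException.class,
                () -> withdraw(100, 500));             // invalid input must raise the exception
        assertEquals("amount exceeds balance", ex.getMessage());
    }
}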

59
Q

How do you test the expected behavior of a method?

A

Write test cases that reflect the method’s specification, such as input-output pairs, and use assertions to ensure the method behaves as intended.

60
Q

What are some integration strategies risks?

A
  • The higher the complexity of the software system, the more difficult the integration of its components becomes
  • The later integration occurs in a project, the bigger the risk that unexpected faults occur
  • Bottom-up and top-down integration strategies do not cope well with late integration
  • Continuous integration addresses these risks by building as early and as frequently as possible

Additional advantages:
- There is always an executable version of the system
- Team members have a good overview of the project status