csci 387 quiz 4 Flashcards

1
program testing is intended to show that
a program does what it is intended to do and to discover program defects before it is put into use
2
when you test software, you execute a program using
artificial data
3
program testing goals
to demonstrate to the developer and the customer that the software meets its requirements (validation); to discover situations in which the behavior of the software is incorrect, undesirable, or does not conform to its specification (defect testing)
4
for custom software, there must be at least ? test(s) for every requirement in the requirements document
one
5
for generic software products, there must be tests for ?
all of the system features, plus combinations of these features, that will be incorporated in the product release
6
defect testing is concerned with
rooting out undesirable system behavior such as crashes, unwanted interactions with other systems, incorrect computations, and data corruption
7
verification
are we building the product right (the software should conform to its specification)
8
validation
are we building the right product (the software should do what the user really requires)
9
aim of verification and validation is to
establish confidence that the system is ‘fit for purpose’
10
verification and validation confidence depends on
system’s purpose, user expectations, and marketing environment
11
static verification
software inspections concerned with analysis of the static system representation to discover problems (may be supplemented by tool-based document and code analysis)
12
dynamic verification
software testing concerned with exercising and observing product behavior (the system is executed with test data and its operational behavior is observed)
13
software inspections involve
people examining the source representation to discover anomalies/defects
14
inspections do not require ?, so they may be used ? implementation
execution of a system; before
15
during testing, ? can mask/hide other errors
errors
16
incomplete versions of a system can be inspected (with/without) additional costs
without
17
an inspection can also consider broader quality attributes of a program, such as ?
compliance with standards, portability, maintainability
18
inspections and testing are ? verification techniques
complementary and not opposing
19
both ? and ? should be used during the V&V process
inspections and testing
20
inspections can check conformance with ? but not with ?
a specification; a customer’s real requirements
21
inspections cannot check ? characteristics such as ?
non-functional; performance, usability, etc.
22
stages of testing
development, release, user
23
development testing
the system is tested during development to discover bugs and defects
24
release testing
a separate testing team tests a complete version of the system before it is released to users
25
user testing
users or potential users of a system test the system in their own environment
26
development testing includes
unit testing, component testing, and system testing
27
unit testing
individual program units or object classes are tested
28
unit testing should focus on testing
the functionality of objects or methods
29
component testing
several individual units are integrated to create composite components
30
component testing should focus on
testing component interfaces
31
system testing
some or all of the components in a system are integrated and the system is tested as a whole
32
system testing should focus on
testing component interactions
33
unit testing is a ? testing process
defect
34
units in unit testing may be
individual functions or methods within an object, object classes with several attributes and methods, composite components with defined interfaces used to access their functionality
35
object class testing: complete test coverage of a class involves
testing all operations associated with an object; setting and interrogating all object attributes; exercising the object in all possible states
36
? makes it more difficult to design object class tests as the information to be tested is not localized
inheritance
37
using a state model, identify ? to be tested and the ? to cause these transitions
sequences of state transitions; event sequences
38
whenever possible, unit testing should be ? so that tests are run and checked without manual intervention
automated
39
in automated unit testing, you make use of a ? to write and run your program tests
test automation framework (such as JUnit)
40
unit testing frameworks provide generic test classes that you ? to create specific test cases. they can then run all of the tests that you have implemented and report on ?
extend; the success of the tests
41
automated test components
setup, call, assertion parts
42
setup part
where you initialize the system with the test case, namely the inputs and expected outputs
43
call part
where you call the object or method to be tested
44
assertion part
where you compare the result of the call with the expected results (true = successful)
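The setup/call/assertion structure can be sketched with Python's unittest, a JUnit-style framework whose generic test class you extend (the `Stack` class here is hypothetical, used only for illustration):

```python
import unittest

class Stack:
    """Hypothetical class under test (not from the cards)."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    """Extends the framework's generic test class, as JUnit tests do."""
    def setUp(self):
        # setup part: initialize the system with the test case inputs
        self.stack = Stack()
        self.stack.push(42)

    def test_pop_returns_last_pushed(self):
        # call part: call the object or method to be tested
        result = self.stack.pop()
        # assertion part: compare the result of the call with the expected result
        self.assertEqual(result, 42)
```

Running the file with `python -m unittest` would discover and run the test and report on its success without manual intervention.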
45
two types of unit test cases
1. reflect normal operation of a program/show that the component works as expected 2. use abnormal inputs to check that these are properly processed and do not crash the component
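The two types of unit test cases can be illustrated with a plain function (the `average` component is a hypothetical example):

```python
def average(numbers):
    """Hypothetical component under test: mean of a non-empty sequence."""
    if not numbers:
        raise ValueError("average of empty sequence is undefined")
    return sum(numbers) / len(numbers)

# type 1: reflect normal operation -- show the component works as expected
assert average([2, 4, 6]) == 4.0

# type 2: abnormal input -- check it is processed cleanly and does not
# crash the component or return a corrupted result
try:
    average([])
except ValueError:
    pass  # the abnormal input is reported, not silently mishandled
```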
46
testing strategies
partition, guideline-based
47
partition testing is where you
identify groups of inputs that have common characteristics and should be processed in the same way
48
guideline-based testing is where you
use testing guidelines to choose test cases (guidelines reflect previous experience of the kinds of errors that programmers often make when developing components)
49
equivalence partition
a class of inputs for which the program behaves in an equivalent way for each class member
50
boundary value analysis/range checking
test boundaries between equivalence partitions; test values at both valid and invalid boundaries
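A minimal sketch of both strategies together, using a hypothetical validity rule (lengths of 4 to 10 are valid) to define the partitions and their boundaries:

```python
def valid_password_length(pw):
    """Hypothetical check: passwords of 4..10 characters are valid."""
    return 4 <= len(pw) <= 10

# three equivalence partitions: too short, valid, too long
assert valid_password_length("abc") is False        # partition: len < 4
assert valid_password_length("secret") is True      # partition: 4 <= len <= 10
assert valid_password_length("a" * 12) is False     # partition: len > 10

# boundary value analysis: test values on each side of each boundary
assert valid_password_length("a" * 3) is False      # just below valid boundary
assert valid_password_length("a" * 4) is True       # lowest valid value
assert valid_password_length("a" * 10) is True      # highest valid value
assert valid_password_length("a" * 11) is False     # just above valid boundary
```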
51
test software with sequences which have only a ? value
single
52
use sequences of (different/same) sizes in different tests
different
53
derive tests so that the ? elements of the sequence are accessed
first, middle, last
54
test with sequences of length ?
zero
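The four sequence guidelines above, applied to a hypothetical max-finding component:

```python
def largest(seq):
    """Hypothetical component: largest element of a sequence."""
    if len(seq) == 0:
        raise ValueError("empty sequence has no largest element")
    result = seq[0]
    for x in seq[1:]:
        if x > result:
            result = x
    return result

# sequences with only a single value
assert largest([7]) == 7
# sequences of different sizes in different tests
assert largest([3, 9]) == 9
assert largest([3, 9, 2, 8, 5]) == 9
# tests in which the first, middle, and last elements are each accessed
assert largest([9, 1, 2]) == 9   # answer in first position
assert largest([1, 9, 2]) == 9   # answer in middle position
assert largest([1, 2, 9]) == 9   # answer in last position
# sequence of length zero
try:
    largest([])
except ValueError:
    pass
```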
55
choose inputs that force the system to
generate all error messages
56
design inputs that cause input buffers to
overflow
57
force computation results to be
too large or too small
58
software components are often composite components that are made up of
several interacting objects
59
you access the functionality of the interacting objects through the defined
component interface
60
testing composite components should focus on showing
that the component interface behaves according to its specification (you can assume unit tests have been completed)
61
interface types
parameter, shared memory, procedural interfaces, message passing interfaces
62
parameter interfaces
data passed from one method or procedure to another
63
shared memory interfaces
block of memory is shared between procedures or functions
64
procedural interfaces
sub-system encapsulates a set of procedures to be called by other sub-systems
65
message passing interfaces
sub-systems request services from other sub-systems
66
interface misuse
a calling component calls another component and makes an error in its use of its interface (e.g., parameters in the wrong order)
67
interface misunderstanding
a calling component embeds assumptions about the behavior of the called component that are incorrect (e.g., calling a binary search with an unsorted array)
68
timing errors with interface
the called and calling component operate at different speeds and out-of-date information is accessed
69
for the interface, design tests so that parameters to a called procedure are at the ? ends of their ranges
extreme
70
for the interface, always test pointer parameters with ? pointers
null
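These two interface guidelines can be sketched in Python, where `None` is the analog of a null pointer (the `find_user` interface is hypothetical):

```python
def find_user(store, user_id):
    """Hypothetical procedural interface: look up a user by id in a store."""
    if store is None:
        raise ValueError("store must not be None")
    return store.get(user_id)

users = {1: "ada", 2 ** 31 - 1: "grace"}

# parameters at the extreme ends of their ranges
assert find_user(users, 1) == "ada"
assert find_user(users, 2 ** 31 - 1) == "grace"

# always test "pointer" parameters with null (None in Python)
try:
    find_user(None, 1)
except ValueError:
    pass  # the misuse is detected at the interface rather than deep inside
```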
71
system testing during development involves
integrating components to create a version of the system and then testing the integrated system
72
the focus in system testing is
testing the interactions between components
73
system testing tests the ? behavior of a system
emergent
74
during system testing, ? and ? may be integrated with the newly developed components
reusable components that have been separately developed and off-the-shelf systems
75
the use-cases developed to identify system interactions can be used as
a basis for system testing
76
testing policies
define the required system test coverage
77
examples of testing policies
all system functions that are accessed through menus should be tested; combinations of functions that are accessed through the same menu should be tested; etc.
78
test coverage
measures the degree to which the specification or code of a software program has been exercised by tests
79
code coverage
measures the degree to which the source code of a program has been tested
80
code coverage criteria include
equivalence, boundary, control-flow, and state-based testing
81
statement coverage
every statement is executed at least once by some test case
82
edge coverage
every edge (branch) of the control flow is traversed at least once by some test case
83
condition coverage
every condition takes true and false outcomes at least once in some test case
84
path coverage
every distinct path through the program is traversed at least once by some test case
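For a small function the difference between these criteria is visible in how many test cases each needs (the `classify` function is a hypothetical example):

```python
def classify(x, y):
    """Hypothetical function with one compound condition."""
    if x > 0 and y > 0:
        return "both positive"
    return "not both positive"

# statement (and branch) coverage: these two cases execute every statement
# and traverse both branches of the decision
assert classify(1, 1) == "both positive"
assert classify(-1, 1) == "not both positive"

# condition coverage additionally needs each atomic condition to take both
# true and false outcomes across the test set (note Python short-circuits,
# so y > 0 is not evaluated when x > 0 is false)
assert classify(5, -1) == "not both positive"   # x > 0 true, y > 0 false
```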
85
control flow testing uses the ? of a program to ?
control structure; develop the test cases for the program
86
the control flow graph of the program represents
the control structure of a program
87
the control flow graph G=(N, E) of a program consists of
a set of nodes N and a set of edges E
88
each node on the control flow graph represents ?; there are ? types of nodes; there is a unique ? and a unique ?
a set of program statements; five; entry, exit
89
in the control flow graph, there is an edge from node a to node b if
the control may flow from the last statement in a to the first statement in b
90
decision node
contains a conditional statement that creates two or more control branches (if or switch)
91
merge node
represents a program point where multiple control branches merge
92
statement node
contains a sequence of statements; control must enter from the first statement and exit from the last
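The definition G = (N, E) can be made concrete for a tiny if-else; the node labels here are illustrative, showing all five node types (entry, exit, decision, merge, statement):

```python
# control flow graph G = (N, E) for:
#   if x > 0:      <- decision node
#       y = x      <- statement node (true branch)
#   else:
#       y = -x     <- statement node (false branch)
#   print(y)       <- merge node, then the unique exit
N = {"entry", "decision", "then", "else", "merge", "exit"}
E = {
    ("entry", "decision"),
    ("decision", "then"),   # branch taken when x > 0
    ("decision", "else"),   # branch taken otherwise
    ("then", "merge"),
    ("else", "merge"),
    ("merge", "exit"),
}

# an edge a -> b exists when control may flow from the last statement
# of a to the first statement of b
assert all(a in N and b in N for a, b in E)
```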
93
state-based testing
defines a set of abstract states that a software unit can take and tests the unit's behavior by comparing its actual states to the expected states
94
the state of an object is defined as
a constraint on the values of the object's attributes
95
ensure state coverage conditions by
covering all identified states at least once, covering all valid transitions at least once, and triggering all invalid transitions at least once
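A minimal sketch of state-based testing; the two-state turnstile is a standard illustrative example, not from the cards:

```python
class Turnstile:
    """Hypothetical unit with two abstract states: 'locked' and 'unlocked'."""
    def __init__(self):
        self.state = "locked"
    def coin(self):
        if self.state != "locked":
            raise RuntimeError("invalid transition: coin while unlocked")
        self.state = "unlocked"
    def push(self):
        if self.state != "unlocked":
            raise RuntimeError("invalid transition: push while locked")
        self.state = "locked"

t = Turnstile()
# cover all identified states and all valid transitions at least once,
# comparing actual states to expected states
assert t.state == "locked"
t.coin()
assert t.state == "unlocked"
t.push()
assert t.state == "locked"

# trigger an invalid transition at least once
try:
    t.push()
except RuntimeError:
    pass  # the unit rejects the transition instead of entering a bad state
```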
96
test-driven development is an approach to program development in which you
interleave testing and code development; tests are written before code and 'passing' the tests is the critical driver of development
97
TDD
start by identifying the increment of functionality that is required; write a test for this and implement; run this test; once all tests run successfully, move on
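The TDD cycle in miniature: the test for the new increment is written before the code, then just enough code is written to pass it (the `slugify` increment is hypothetical):

```python
# step 1: identify the increment of functionality that is required --
# e.g. slugify should lowercase a title and replace spaces with hyphens

# step 2: write the test for this increment before the implementation
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# step 3: implement just enough code for the test to pass
def slugify(title):
    return title.lower().replace(" ", "-")

# step 4: run this test along with all earlier tests;
# once all tests run successfully, move on to the next increment
test_slugify()
```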
98
benefits of test-driven development
code coverage, regression testing, simplified debugging, system documentation
99
regression testing
testing the system to check that changes have not 'broken' previously working code; all tests are rerun every time a change is made to the program
100
release testing
the process of testing a particular release of a system that is intended for use outside of the development team
101
the primary goal of the release testing process is to
convince the supplier of the system that it is good enough for use
102
release testing is usually a ? testing process where tests are only derived from the system specification
black-box
103
requirements-based testing involves
examining each requirement and developing a test or tests for it
104
performance testing
usually involves planning a series of tests where the load is steadily increased until the system performance becomes unacceptable
105
stress testing is a form of performance testing where the system is
deliberately overloaded to test its failure behavior
106
user or customer testing is a stage in the testing process in which
users or customers provide input and advice on system testing
107
user testing is essential because
the user's working environment may have a major effect on the reliability, performance, usability, and robustness of a system that cannot be replicated in a testing environment
108
types of user testing
alpha, beta, acceptance
109
alpha testing
users of the software work with the development team to test the software at the developer's site
110
beta testing
a release of the software is made available to users to allow them to experiment and to raise problems that they discover with the system developers
111
acceptance testing
customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment; primarily for custom systems
112
in agile methods, the user/customer is part of ? and is responsible for ?
the development team; making decisions on the acceptability of the system
113
in agile methods, there (is a / is no) separate testing process
is no