Flashcards (114 cards)

1
Q

verifying the T basis early in the SDLC will …

A

prevent defects

2
Q

T control in the fundamental T process - when?

A

always

3
Q

design and prioritization of high-level TCs - when?

A

T analysis and design

4
Q

a developer makes a __, which causes a __ when the code is dynamically tested

A

mistake, failure (the mistake introduces a defect in the code; executing that defect produces the failure)

5
Q

exhaustive T is __

A

impossible

6
Q

Func T can be conducted at ___ levels

A

all

7
Q

Non-func T can be conducted at ___ levels

A

all

8
Q

triggers for Maintenance T

A

a component in production is modified, migrated, or retired

9
Q

V-model. Design docs (DD) are available. What do Testers do?

A

create func/non-func TCs + review DD

10
Q

Formal review. Which role documents issues?

A

scribe

11
Q

static analysis best finds

A

dead code

12
Q

best T technique to determine/improve code maintainability

A

static

13
Q

which document specifies the inputs/outputs for a test?

A

TC specs

14
Q

what is a test condition?

A

what a TC targets for testing; i.e., a TC exercises a test condition

15
Q

reason for using experience-based techniques?

A

they can find defects missed by more formal techniques

16
Q

error guessing is used in …

A

experience-based T

17
Q

how to calculate decision (D) coverage?

A

num of D outcomes executed / total num of D outcomes in module

18
Q

how to calculate statement (S) coverage?

A

num of S executed / total num of S in module
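A minimal worked sketch of the two coverage formulas above; the function and the tests are hypothetical, not from the deck:

```
# Hypothetical module under test: three statements, one decision.
def discount(total):
    if total > 100:           # decision with 2 outcomes: True / False
        total = total * 0.9   # statement reached only on the True outcome
    return total

# Test 1 (total=200) executes all 3 statements -> statement coverage = 3/3 = 100%,
# but covers only the True outcome             -> decision coverage  = 1/2 = 50%.
assert discount(200) == 180

# Test 2 (total=50) adds the False outcome     -> decision coverage  = 2/2 = 100%.
assert discount(50) == 50
```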

19
Q

equivalence partitioning requires

A

one TC for each valid partition, plus one for the too-low and one for the too-high invalid partition
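A minimal sketch of equivalence partitioning, assuming a hypothetical "valid input is 1-100" rule (the rule is an assumption, not from the deck):

```
# Hypothetical spec: valid input is an integer from 1 to 100.
# Partitions: invalid too-low (< 1), valid (1-100), invalid too-high (> 100).
def is_valid(n):
    return 1 <= n <= 100

# Equivalence partitioning: one representative test case per partition.
assert is_valid(-5) is False   # too-low partition
assert is_valid(50) is True    # valid partition
assert is_valid(200) is False  # too-high partition
```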

20
Q

boundary value analysis (BVA) requires

A

for each partition boundary: the last value of the previous partition and the first value of the next, e.g. 0;1, 49;50, 59;60, 69;70, 79;80
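The pairs above look like a grading scale; here is a sketch assuming bands 1-49 / 50-59 / 60-69 / 70-79 / 80-100 (the bands and labels are assumptions, not stated in the deck):

```
# Assumed grade bands: 1-49 F, 50-59 D, 60-69 C, 70-79 B, 80-100 A; anything else invalid.
def grade(score):
    if score < 1 or score > 100:
        return "invalid"
    if score >= 80: return "A"
    if score >= 70: return "B"
    if score >= 60: return "C"
    if score >= 50: return "D"
    return "F"

# BVA: for each boundary, test the last value of one partition and the
# first value of the next -- exactly the 0;1 ... 79;80 pairs in the answer.
for below, above in [(0, 1), (49, 50), (59, 60), (69, 70), (79, 80)]:
    assert grade(below) != grade(above)   # a defect at the boundary would surface here
```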

21
Q

T design specs contain …

A

T conditions (what to test) + T approach

22
Q

TC specs contain …

A

test cases

23
Q

T procedures contain …

A

test steps

24
Q

full statement coverage

A

every executable statement in the code executed at least once

25
what a T leader does …
writes a T strategy
26
T planning should be … (when?)
not at a single point in time; it's a continuous activity
27
Risk-based T is an … approach
analytical (risk analysis)
28
T summary report contains a 'variances' section, which describes …
the difference between what was planned for T and what was actually tested
29
a system always becomes more reliable after debugging: T/F?
F
30
what fund. T principle helps to find as many bugs as possible?
defect clustering
31
3 activities of T implem and execution:
1. develop and prioritize TCs, create T data, write T procedures; 2. group TCs into test suites (TS); 3. verify the T env
32
V-model includes the verif of …
design
33
acceptance T is required for …
confidence
34
M Testing requires …
both re-testing (confirmation T) and regression T
35
M Testing is difficult to scope =>
requires careful risk and impact analysis
36
S. and D. Testing are complementary because …
they share the aim of identifying defects but differ in the types of defects found
37
reviews are a cost-effective …
early static test
38
use case testing is good for …
acceptance T: covers the main business processes and finds defects in component integration
39
which fundamental T activity do the test data prep tools support?
T analysis and design
40
if there's a disagreement with dev …
remind them of the common goal: creating quality systems
41
within the SDLC, the role of testing is …
provide decision-making info
42
sometimes T is required for legal reasons because …
contracts may specify T reqs
43
root cause analysis helps …
to better identify and correct the root causes of defects
44
pesticide paradox is …
running the same T over and over -> reduces the chance of finding new defects
45
well-managed test level should have …
a T objective
46
black-box T is based on …
req. docs
47
experience-based T is used …
in conjunction with more formal techniques
48
TC tests T cond by …
following T procedures
49
bva =
2 per valid range + 1 below it (negative) + 1 above it (exceeding)
50
risk level is determined by …
likelihood and impact
51
defect density is used for …
determining which areas of the sw have the highest number of defects -> re-evaluating risk/priority
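The deck doesn't give the formula; the usual definition is defects found divided by the size of the work product (often per KLOC). A tiny sketch with made-up figures:

```
# defect density = defects found / size (size here in KLOC); figures are hypothetical.
modules = {"billing": (42, 6.0), "reports": (5, 8.0)}   # name: (defects, KLOC)
for name, (defects, kloc) in modules.items():
    print(f"{name}: {defects / kloc:.1f} defects/KLOC")
# billing (7.0) is far denser than reports (0.6) -> re-evaluate its risk/priority first.
```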
52
T exec tool purpose is …
execute T objects using automated T scripts
53
pilot project objectives are …
learn, evaluate the fit in the organization, decide standard usage, assess benefits
54
T contributes to the quality of delivered software by …
identifying root causes of defects from past projects and using the lessons learned to improve processes, which helps reduce the defect count
55
T planning assigns resources and …
sets the level of detail for T procedures
56
acceptance T test basis is …
risk analysis report, system reqs, business use cases
57
objective for T is …
finding defects
58
objectives for acceptance T are …
confidence + assess readiness for deployment and use
59
debugging process:
T identifies the defect, Dev locates and fixes it, T confirms the fix
60
verify the T env is ready - done during this fundamental T process:
Planning and Control
61
choice of SDLC model depends on …
product and project characteristics
62
which T metric provides the best indication of T progress?
Test failure rate of tests executed
63
Integration T test level. Test basis:
software and system design
64
Integration T test level. Test objects:
interfaces
65
independent T is important because …
independent T can verify assumptions made during specification and implementation of the system
66
functional and structural T can be used together at __ T levels
ALL
67
M Testing is triggered by …
changes to delivered sw; uses impact analysis to minimize regression T
68
Formal review. One of roles -
moderator
69
review process success factors are …
1. predefined objectives; 2. right people involved; 3. emphasis on learning and process improvement
70
experience-based T: TC are derived from …
knowledge of the testers
71
what most affects the testing effort -
product reqs for reliability and security
72
T planning - when
continuously in all life cycle processes and activities
73
execution tools examples:
test harness, test comparators
74
pilot project main reason:
assess cost-effectiveness
75
T planning - major tasks:
determine: scope, risks, objectives
76
evaluating reqs testability is part of which T phase?
T analysis/design
77
acceptance TC are based on …
output of requirement analysis/req.specs
78
validation =
helps to check that we have built the right product
79
impact analysis helps to decide …
how much testing should be done
80
functional system testing is …
end-to-end func of the system as a whole
81
technical review AKA
peer review
82
formal review kick-off =
explain objectives
83
low level design -> what level of T?
integration
84
business reqs -> what level of T?
acceptance
85
high level design -> what level of T?
system
86
review success factors:
1. defects found are welcomed and expressed objectively; 2. mgmt support; 3. emphasis on learning and process improvement
87
static analysis tools can find defects:
vars never used, security vuln, prog.std violations, uncalled func
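A hypothetical snippet containing several of those defect types; a static analysis tool flags them without ever running the code:

```
import os                 # unused import (commonly reported as a standards violation)

def apply_discount(price):
    rate = 0.15           # variable assigned but never used
    return price
    print("discounted")   # unreachable (dead) code after the return

def audit_log(msg):       # function never called anywhere in the module
    pass
```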
88
T cond derive from …
specs
89
regression T - when:
after the sw or its environment has changed
90
T leader tasks:
1. interact w T tool vendor; 2. write T sum report; 3. decide what should be automated and how
91
typical exit criteria …
Thoroughness measures, reliability measures, cost, schedule, tester availability and residual risks.
92
when to stop T ?
when T completion crit have been met
93
formal review phases:
plan, kick-off, prep, review meeting, rework, follow-up
94
T objectives during dev
provoke as many failures as possible
95
T objectives during delivery
confirm that system works as expected and assess the quality for stakeholders
96
QA?
prevents defects
97
static T is …
removing ambiguities and errors (without executing code)
98
dynamic T is …
executing the program with test data
99
7 T principles
* T shows the presence of bugs
* exhaustive T is impossible
* early T
* defect clustering
* pesticide paradox
* T is context-dependent
* absence-of-errors fallacy (no errors doesn't mean a good product)
100
if risk is low and acceptable ->
stop T and ship
101
T should provide enough info for whom?
stakeholders
102
risk analysis answers:
* what to test first
* what to test most
* how thoroughly to test
* what not to test
* how much time to allocate for T
103
fundamental T process steps:
1a. Plan = define T objectives and T activities
1b. Control = compare actual progress against the plan and report status
2. Analysis/Design = tangible T conditions and T cases, test bed
3. Implement/Execute = write T procedures, group TCs into TSs, prioritize, check the T env, run, log, report bugs
4. Evaluate exit criteria and report a summary to stakeholders (what was planned vs achieved)
5. T closure
104
T design tech list
bb, wb, exp-based
105
bb types
decision table, state transition, use case (actors/activities/system), BVA, equivalence partitioning
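A minimal decision-table sketch (the loan rule and function are hypothetical, not from the deck): each rule (column) of the table becomes one test case.

```
# Conditions: (has_income, good_credit) -> expected action: approve?
decision_table = {
    (True,  True):  True,    # rule 1
    (True,  False): False,   # rule 2
    (False, True):  False,   # rule 3
    (False, False): False,   # rule 4
}

def approve(has_income, good_credit):          # hypothetical system under test
    return has_income and good_credit

for conditions, expected in decision_table.items():   # one TC per rule
    assert approve(*conditions) == expected
```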
106
T design process parts:
identify T cond / T cases / T data
107
typical test design strategy
1. func (bb); 2. non-func; 3. wb - check statement/decision coverage and create new TCs if necessary; 4. exp-based T
108
test types
func; non-func; structural (= wb); related to changes (regression, re-test, maintenance)
109
validation =
doing the right thing (e.g., sw created to spec but the code isn't maintainable)
110
verif =
doing things in the right way (e.g., good code that doesn't match the specs)
111
V&V for Testers
verif = detect faults; valid = comply
112
V&V for Analysts
verif = reqs are not ambiguous and are complete; valid = validate with the customer that it's what he asked for
113
signs of good T for any model
each T level has clear T objectives; for every dev activity there is a corresponding T activity -> testers review drafts as soon as they're ready
114
T exec tools types:
T comparators; coverage measurement; security T tools; test harness / unit test framework
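A minimal sketch of the last two items combined, using Python's unittest as the harness; the add() function is a hypothetical test object. The framework runs the tests, and its assertions play the role of a comparator (expected vs actual).

```
import unittest

def add(a, b):                  # hypothetical test object
    return a + b

class AddTests(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)    # comparator: expected vs actual

    def test_negative(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()             # the harness executes the whole suite
```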