Gg Flashcards

(195 cards)

1
Q

difference between testing and debugging

A

Testing can trigger failures (dynamic testing) or directly find defects (static testing); debugging then deals with the defects

2
Q

debugging involves (3)

A

Reproduction of a failure • Diagnosis (finding the root cause) • Fixing the cause

3
Q

Subsequent confirmation testing checks…

A

whether the fixes resolved the problem

4
Q

Subsequent regression testing

A

checks whether the fixes are causing failures in other parts of the system

5
Q

When static testing identifies a defect, debugging….

A

debugging is concerned with removing the defect; there is no need to reproduce a failure

6
Q

when dynamic testing triggers a failure, debugging has to

A

find the cause, analyze it, and eliminate it

7
Q

Testing is a form of quality control (QC). QC is…

A

product oriented, supporting the achievement of appropriate levels of quality

8
Q

QA is

A

a process-oriented, preventive approach focused on the improvement of processes

9
Q

how QC uses test results…

A

used to fix defects

10
Q

how QA uses test results…

A

to provide feedback on how well the development and test processes are working

11
Q

errors and defects are causes of …

A

failures

12
Q

A root cause is a…

A

reason for the occurrence of a problem

13
Q

Root causes are identified through..

A

root cause analysis

14
Q

when root cause analysis is performed?(2)

A

when a failure occurs or when a defect is identified

15
Q

testing principles(7)

A

1. Testing shows the presence of defects, not their absence 2. Exhaustive testing is impossible 3. Early testing saves time and money 4. Defects cluster together 5. Tests wear out 6. Testing is context dependent 7. Absence-of-defects misconception

16
Q

to overcome the wear out of tests…

A

existing tests and test data may need to be modified, and new tests may need to be written

17
Q

testing activities(7)

A

1. planning 2. monitoring and control 3. analysis 4. design 5. implementation 6. execution 7. completion

18
Q

Testware is created as

A

output work products from the test activities

19
Q

to implement effective test monitoring and control

A

establish and maintain traceability

20
Q

test management role takes overall responsibility for

A

the test process

21
Q

test management role is mainly focused on the activities of

A

test planning, test monitoring and control and test completion

22
Q

testing role takes overall responsibility for the

A

engineering (technical) aspect of testing.

23
Q

testing role is mainly focused on the activities of

A

test analysis, test design, test implementation and test execution

24
Q

whole-team approach

A

any team member with the necessary knowledge and skills can perform any task, and everyone is responsible for quality.

25
In sequential development testers
participate in requirement reviews, test analysis, and test design.
26
in iterative and incremental development models testing
happens in each iteration and includes both static and dynamic testing
27
manual testing tends to be done using
experience-based test techniques that don't require prior test analysis and design.
28
good practices in any SDLC (4)
1. For every software development activity there is a corresponding test activity 2. Different test levels have specific and different test objectives 3. Test analysis and design for a given test level begin during the corresponding development activity 4. Testers are involved in reviewing work products as soon as drafts are available
29
TDD
directs coding through test cases; tests are written first
30
ATDD
derives tests from acceptance criteria; tests are written before the corresponding part of the application is developed
31
BDD
test cases are written first, in natural language understandable by stakeholders
32
DevOps promotes
CI / CD
33
Although DevOps comes with a high level of automated testing, manual testing...
will still be needed.
34
shift left?
testing performed early in the SDLC
35
Retrospectives?
meetings held at the end of a project or an iteration
36
Results after retrospective...
become part of the test completion report
37
test levels-
groups of test activities that are organized and managed together.
38
Each test level is
instance of the test process
39
sequential SDLC models, the test levels
defined such that the exit criteria of one level are part of the entry criteria for the next level.
40
Test types are
groups of test activities related to specific quality characteristics
41
test levels (5)
component, component integration, system, system integration, acceptance
42
component Testing level
testing components in isolation
43
component integration testing level
testing interfaces and interactions between components
44
System Testing level
overall behavior and capabilities of an entire system
45
system integration Testing level
testing the interfaces of the system under test and other systems and external services
46
acceptance Testing level
demonstrating readiness for deployment
47
system testing level includes (2)
1. functional testing of end-to-end tasks 2. non-functional testing of quality characteristics
48
acceptance testing level forms(5)
1. user acceptance testing (UAT) 2. operational acceptance testing 3. regulatory acceptance testing 4. alpha testing 5. beta testing
49
Test levels are distinguished by attributes (5)
test object, test objectives, test basis, defects, approach and responsibilities
50
test types (4)
functional, non-functional, black-box, white-box
51
functional testing type
evaluates the functions that a component or system should perform: 'what' the test object should do
52
non-functional testing type
evaluates non-functional software quality characteristics: 'how well the system behaves'
53
black box Testing type
specification-based
54
white box Testing Type
structure-based
55
confirmation testing
confirms that the original defect has been successfully fixed
56
how to do confirmation test (2)
1. executing all test cases that previously failed 2. adding new tests to cover any changes
57
after maintenance regression testing makes sure that...
no adverse consequences have been caused by a change, including a fix that has already been confirmation tested
58
before regression test
perform impact analysis
59
maintenance involves what releases?
planned releases and unplanned releases (hotfixes)
60
before maintenance change is made need to do
impact analysis
61
testing the changes to a system in prod includes(2)
1. evaluating the success of the implementation of the change 2. checking for regressions in parts of the system that did not change
62
scope of maintenance depends on (3)
1. degree of risk 2. size of the existing system 3. size of the change
63
triggers for maintenance (3)
1. modifications 2. upgrades/migrations 3. retirement of the system
64
after retirement/archiving of the system, testing of …. is needed
restore and retrieval procedures
65
work products are evaluated through…
manual examination (reviews)
66
static t. objectives
improving quality, assessing characteristics like readability and completeness
67
static t. can be applied to v… and v…
verification and validation
68
review techniques and the Definition of Ready are used
to ensure user stories are complete and understandable and include testable acceptance criteria.
69
Static analysis can identify problems prior to
dynamic test.
70
during static analysis we use
no test cases are required, and tools are typically used.
71
Almost any work product can be examined using
static t
72
testing work products with static t. requires that
work products need a structure against which they can be checked
73
Work products that are not appropriate for static testing
those that are difficult for human beings to interpret and those that should not be analyzed by tools (e.g., for legal reasons)
74
value of static t.
detects defects early, including defects which cannot be detected by dynamic testing
75
difference between static and dynamic(5)
1. Both detect defects, but the defect types found by each differ 2. Static testing finds defects directly, while dynamic testing causes failures from which the defects are then derived 3. Static testing can find defects that lie on paths through the code that are rarely executed 4. Static testing can be applied to non-executable work products 5. Static testing can measure quality characteristics that do not depend on executing code
76
defects that are easier to find through static testing(7)
requirement defects, design defects, coding defects, deviations from standards, incorrect interface specifications, security vulnerabilities, gaps in test coverage
77
ISO/IEC 20246 standard defines a
review process
78
activities in the review process(5)
1. planning 2. review initiation 3. individual review 4. communication and analysis 5. fixing and reporting
79
roles in reviews(6)
manager, author, moderator, scribe, reviewer, review leader
80
manager's role in reviews
decides what is to be reviewed and provides resources
81
author's role in reviews
creates and fixes the work product under review
82
moderator's role in reviews
ensures the effective running of review meetings
83
scribe's role in reviews
collects anomalies from reviewers
84
reviewer's role in reviews
performs reviews.
85
review leader's role in reviews
takes overall responsibility for the review: decides who will review, and where and when the review will happen
86
review types(4)
informal review, walkthrough, technical review, inspection
87
informal review type
main objective is detecting anomalies; does not follow a defined process
88
walkthrough review type
led by the author; objectives include evaluating quality, building confidence in the work product, and educating reviewers
89
Technical Review type
led by a moderator; objectives are to gain consensus and make decisions regarding a technical problem
90
Inspection review type
main objective is to find the maximum number of anomalies.
91
black box test techniques(4)
Equivalence Partitioning • Boundary Value Analysis • Decision Table Testing • State Transition Testing
92
Equivalence Partitioning (EP)
divides data into partitions that are expected to be processed in the same way by the test object
93
Equivalence partitions can be identified for any data element related to...(7)
the test object, including inputs, outputs, configuration items, internal values, time-related values, interface parameters
94
types of EP partitions(2)
valid values, invalid values
95
valid values partition
values that should be processed by the test object, i.e., those for which the specification defines how they are processed
96
invalid values partition
values that should be ignored or rejected by the test object, or for which the specification defines no processing
97
Each Choice coverage ?
simplest coverage criterion in the case of multiple sets of partitions
98
Each Choice coverage requires test cases to?
to exercise each partition from each set of partitions at least once.
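Each Choice coverage can be sketched in Python; the two partition sets here (user type and payment method) are invented for illustration:

```python
from itertools import zip_longest

# Two sets of partitions (hypothetical): user type and payment method.
user_types = ["guest", "member", "admin"]
payments = ["card", "cash"]

# Each Choice coverage: every partition from every set appears in at least
# one test case. Pairing the sets up needs only max(len) test cases here,
# not len * len as full combination coverage would.
test_cases = list(zip_longest(user_types, payments, fillvalue=payments[0]))
# -> [('guest', 'card'), ('member', 'cash'), ('admin', 'card')]

assert len(test_cases) == 3
assert {t[0] for t in test_cases} == set(user_types)
assert {t[1] for t in test_cases} == set(payments)
```

Three test cases cover all five partitions, instead of the six that all combinations would require.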
99
Boundary Value Analysis
exercising the boundaries of equivalence partitions
100
BVA can only be used for..
ordered partitions
101
boundary values?
the minimum and maximum values of a partition
102
BVA, if two elements belong to the same partition..
all elements between them must also belong to that partition
103
2-value BVA
two coverage items: this boundary value and its closest neighbor
104
3-value BVA
three coverage items: this boundary value and both its neighbors
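The 2-value and 3-value coverage items can be sketched in Python for a hypothetical partition of valid ages from 18 to 65:

```python
# Hypothetical ordered partition: valid ages 18..65 (boundaries invented).
lo, hi = 18, 65

# 2-value BVA: each boundary value plus its closest neighbor outside.
two_value = {lo - 1, lo, hi, hi + 1}                     # {17, 18, 65, 66}

# 3-value BVA: each boundary value plus both of its neighbors.
three_value = {lo - 1, lo, lo + 1, hi - 1, hi, hi + 1}   # {17, 18, 19, 64, 65, 66}

assert two_value <= three_value  # 3-value BVA subsumes 2-value BVA
```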
105
Decision tables
for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes
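A minimal decision table can be sketched as a dictionary in Python; the conditions and outcomes (a loan-approval rule set) are invented for illustration:

```python
# Hypothetical decision table: each entry (rule) maps a combination of
# condition values to an outcome.
rules = {
    # (has_income, good_credit): outcome
    (True,  True):  "approve",
    (True,  False): "manual review",
    (False, True):  "manual review",
    (False, False): "reject",
}

def decide(has_income: bool, good_credit: bool) -> str:
    return rules[(has_income, good_credit)]

# Decision table testing exercises every rule once: 2^2 = 4 test cases.
assert decide(True, True) == "approve"
assert decide(False, False) == "reject"
```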
106
state transition diagram
behavior of a system by showing its possible states and valid state transitions.
107
in state transition diagram, transition is initiated by...
event
108
in state transition diagram, transitions are assumed to be...(2)
1. instantaneous 2. may sometimes result in the software taking action
109
in State Transition Testing , state table?
model equivalent to a state transition diagram
110
coverage criterias for state transition testing (3)
all states coverage, valid transitions coverage, all transitions coverage
111
all states coverage
the coverage items are the states
112
valid transitions coverage
coverage items are single valid transitions
113
all transitions coverage
coverage items are all the transitions shown in a state table.
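A state transition model and valid transitions coverage can be sketched in Python; the document states and events are invented for illustration:

```python
# Hypothetical state model of a document: (state, event) -> next state.
transitions = {
    ("draft",     "submit"):  "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"):  "draft",
}

def next_state(state: str, event: str) -> str:
    # Invalid (state, event) pairs are rejected; a full state table would
    # list them explicitly as null transitions.
    if (state, event) not in transitions:
        raise ValueError(f"invalid transition: {event} in state {state}")
    return transitions[(state, event)]

# Valid transitions coverage: exercise every valid transition at least once.
s = "draft"
for event in ["submit", "reject", "submit", "approve"]:
    s = next_state(s, event)
assert s == "published"
```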
114
statement testing coverage items
executable statements
115
statement t. aim to design test cases that..
that exercise statements in the code until an acceptable level of coverage is achieved
116
statement t. - when 100% statement coverage is achieved, it ensures..
that all executable statements have been exercised at least once
117
statement t. - each statement with a defect will be executed, which may cause...
may cause a failure demonstrating the presence of the defect.
118
statement t. - when it may not detect defect?
if the defect is data-dependent (triggered only by specific input values)
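A data-dependent defect that 100% statement coverage misses can be sketched in Python; the function is invented for illustration:

```python
def average(values):
    return sum(values) / len(values)   # defect: fails for an empty list

# This single test executes every statement (100% statement coverage)...
assert average([2, 4]) == 3.0
# ...yet average([]) still raises ZeroDivisionError: the defect shows up
# only for specific input data, not for a particular uncovered statement.
```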
119
A branch is
a transfer of control between two nodes in the control flow graph
120
control flow graph shows
the possible sequences in which source code statements are executed
121
Each transfer of control can be either... (2)
conditional unconditional
122
Conditional branches typically correspond to a.. (2)
a true/false outcome, or a decision to exit or continue a loop
123
Branch t.- may not detect defects that require
the execution of a specific path in a code.
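A path-specific defect that 100% branch coverage misses can be sketched in Python; the function and values are invented for illustration:

```python
def f(a: bool, b: bool) -> int:
    x = 0
    if a:              # branch 1
        x = 10
    if b:              # branch 2
        x = 100 // x   # defect: crashes only on the path a=False, b=True
    return x

# These two tests exercise both outcomes of both branches (100% branch
# coverage)...
assert f(True, True) == 10
assert f(False, False) == 0
# ...but the specific path (a=False, b=True) was never executed, and it
# still raises ZeroDivisionError.
```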
124
strength of white box t.
entire software implementation is taken into account during testing
125
weakness of white box t.
if the software does not implement one or more requirements, white box testing may not detect defects
126
defects that white box testing cannot detect because the code is missing are called
defects of omission
127
White-box coverage measures provide an.... (2)
1. an objective measurement of coverage 2. information to allow additional tests to be generated
128
experience based test techniques 3
Error guessing • Exploratory testing • Checklist-based testing
129
In general, errors, defects and failures may be related to: .... (5)
input, output, logic, computation, data
130
Fault attacks are a methodical approach to...
to the implementation of error guessing
131
how to do Fault attacks 2 steps
1. the tester creates or acquires a list of possible errors 2. tests are designed to identify the defects associated with the errors
132
exploratory testing
tests are simultaneously designed, executed, and evaluated while the tester learns about the test object
133
exploratory t. conducted using
session-based testing
134
In a session-based approach for explo testing 3
1. it takes place within a defined timebox 2. the tester uses a test charter 3. at the end, there is a discussion between the tester and stakeholders
135
Exploratory testing is useful when 2
1. there are few specifications 2. there is time pressure
136
checklist-based testing
a tester designs, implements, and executes tests from a checklist.
137
Checklists should not contain items that...3
1. items that can be checked automatically 2. items better suited as entry/exit criteria 3. items that are too general
138
Collaboration-based approaches
focus on defect avoidance by collaboration and communication.
139
User stories critical aspects 3
card, conversation, confirmation
140
Collaborative authorship of the user story can use techniques such as....2
brainstorming, mind mapping
141
Acceptance criteria for a user story are
conditions that an implementation must meet to be accepted by stakeholders.
142
ways to write acceptance criteria for a user story 2
scenario-oriented, rule-oriented
143
ATDD steps 2
1. a specification workshop where the acceptance criteria are written 2. test cases are created from the acceptance criteria
144
ATDD test cases must cover...
all the characteristics of the user story and should not go beyond the story
145
In iterative SDLCs, kinds of planning 2
release planning, iteration planning
146
If entry criteria are not met...
the activity will likely prove more difficult, time-consuming, costly, and risky
147
Typical entry criteria include: 3
availability of resources, availability of testware, initial quality of the test object
148
Typical exit criteria include 2
measures of thoroughness, completion criteria
149
In Agile software development, exit criteria are often called
Definition of Done
150
Definition of Ready.
Entry criteria that a user story must fulfill to start the development and/or testing
151
Test effort estimation
predicting the amount of test-related work needed
152
estimation techniques 4
estimation based on ratios, extrapolation, Wideband Delphi, three-point estimation
153
Estimation based on ratios
figures are collected from previous projects within the organization, which makes it possible to derive “standard” ratios for similar projects
154
Extrapolation
measurements are made as early as possible in the current project to gather the data. This method is very suitable in iterative SDLCs
155
Wideband Delphi
experts make experience-based estimations. Each expert, in isolation, estimates the effort.
156
Three-point estimation
three estimations are made by the experts: the most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b)
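The three estimations are commonly combined with the PERT-style weighted average E = (a + 4m + b) / 6; the formula and the example figures below are an illustration, not part of the card:

```python
def three_point(a: float, m: float, b: float) -> tuple[float, float]:
    """a = most optimistic, m = most likely, b = most pessimistic."""
    estimate = (a + 4 * m + b) / 6   # expected effort (weighted average)
    std_dev = (b - a) / 6            # rough measure of uncertainty
    return estimate, std_dev

e, sd = three_point(a=6, m=9, b=18)
assert e == 10.0   # (6 + 36 + 18) / 6
assert sd == 2.0   # (18 - 6) / 6
```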
157
test case prioritization strategies 3
risk-based prioritization, coverage-based prioritization, requirements-based prioritization
158
Risk-based prioritization
order of test execution is based on the results of risk analysis; test cases covering the most important risks are executed first
159
Coverage-based prioritization
order of test execution is based on coverage; test cases achieving the highest coverage are executed first
160
Requirements-based prioritization
Test cases related to the most important requirements are executed first.
161
Test Pyramid
a model showing that different tests may have different granularity
162
test pyramid layers
The higher the layer, the lower the test granularity and test isolation, and the slower the test execution. Tests in the bottom layer are small, isolated, fast, and check a small piece of functionality
163
The testing quadrants
group the test levels with the appropriate test types, activities, test techniques and work products in the Agile software development.
164
Testing Quadrants model supports test management in...
visualizing groups to ensure that all appropriate test types and test levels are included in the SDLC
165
In Testing Quadrants model, tests can be 2
business facing, technology facing
166
Quadrant Q1( technology facing, support the team)
contains component and component integration tests. These tests should be automated and included in the CI process.
167
Quadrant Q2 (business facing, support the team)
contains functional tests. These tests check the acceptance criteria and can be manual or automated.
168
Quadrant Q3 (business facing, critique the product).
contains exploratory testing. These tests are user-oriented and often manual.
169
Quadrant Q4 (technology facing, critique the product).
contains smoke tests and non-functional tests. These tests are often automated.
170
risk management activities 2
risk analysis, risk control
171
risk-based testing
test activities are selected and managed based on risk analysis and risk control
172
A risk can be characterized by factors: 2
risk likelihood, risk impact
173
types of risks 2
project risks, product risks
174
when a project risk occurs, it has an impact on
have an impact on the project schedule, budget or scope
175
Product risk analysis consists of... 2
risk identification, risk assessment
176
Risk assessment approaches 3
quantitative, qualitative, or a mix of the two
177
quantitative risk assesment approach
risk level = risk likelihood multiplied by risk impact
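The quantitative approach, combined with risk-based prioritization, can be sketched in Python; the risks and the 1-to-5 scales are invented for illustration:

```python
# Hypothetical product risks scored on invented 1-5 scales.
risks = {
    "data loss":  {"likelihood": 2, "impact": 5},
    "slow login": {"likelihood": 4, "impact": 2},
    "ui glitch":  {"likelihood": 3, "impact": 1},
}

# Quantitative risk assessment: risk level = likelihood * impact.
for r in risks.values():
    r["level"] = r["likelihood"] * r["impact"]

# Risk-based prioritization: the highest risk level is tested first.
order = sorted(risks, key=lambda name: risks[name]["level"], reverse=True)
assert order == ["data loss", "slow login", "ui glitch"]   # levels 10, 8, 3
```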
178
qualitative risk assesment approach
the risk level can be determined using a risk matrix.
179
Product risk control consists of...2
risk mitigation risk monitoring
180
Actions that can be taken to mitigate the product risks by testing are 6
1. selecting testers with the appropriate level of experience and skills 2. applying an appropriate level of independence of testing 3. performing reviews and static analysis 4. applying appropriate test techniques and coverage levels 5. applying appropriate test types addressing the affected quality characteristics 6. performing dynamic testing, including regression testing
181
Test monitoring information is used to...
to assess test progress and test exit criteria/test tasks satisfied
182
Test control uses the information from test monitoring to provide...
guidance and the necessary corrective actions
183
Test completion collects data from completed test activities to...
to consolidate experience and testware
184
Test completion activities occur...
at project milestones
185
Test monitoring gathers a variety of metrics to...
to support the test control and test completion.
186
Test reporting-
summarizes and communicates test information during and after testing
187
Test completion reports-
summarize a specific stage of testing
188
During test monitoring and control, the test team generates..
generates test progress reports for stakeholders to keep them informed
189
difference reporting on test progress and completion
progress reporting is frequent and informal; completion reporting follows a set template and occurs only once
190
configuration management
for identifying, controlling, and tracking work products such as test plans...
191
For a complex configuration item (e.g., a test environment), configuration management...
records the items it consists of, their relationships, and versions
192
If the configuration item is approved for testing....
it becomes a baseline and can only be changed through a formal change
193
Configuration management keeps a record of...
a record of changed configuration items when a new baseline is created.
194
To properly support testing, Configuration management ensures the following: 2
1. all configuration items are uniquely identified, version controlled... so that traceability can be maintained 2. all identified configuration items are referenced in test documentation
195
defect management process includes a...2
1. a workflow for handling individual anomalies from their discovery to their closure 2. rules for their classification