Final Flashcards

(113 cards)

1
Q

Error

A

Mistake that introduces a fault (e.g., making a typo or having a conceptual misunderstanding)

2
Q

Fault

A

Instance of incorrect code that can lead to a failure (a “bug” in the code)

A fault is something “wrong” with the code that leads to the software behaving in unexpected ways (aka failures).

3
Q

Failure

A

Deviation from the expected behavior

4
Q

Why do failures occur?

A

Failures occur because there exists a fault (bug) in the code.

5
Q

What introduces a fault into a program?

A

A fault (bug) is introduced to the program when a programmer makes an error (mistake).

6
Q

When you make a typo it’s a _____, but the typo in the code itself is a ____.

A

Making a typo is an error, but the typo in the code itself is a fault.

7
Q

How are failures discovered?

A

Failures are not discovered by looking at the code, but by observing that the “output” for a given “input” is not what we expected.

8
Q

Functional testing

A

Used to verify the software meets the requirement specifications when it comes to functionality: does it do what it is expected to do?

9
Q

Types of functional testing

A

Unit testing
Integration testing
Regression testing
Acceptance testing

10
Q

Non-functional testing

A

Used to verify software performs at the required levels (performance, usability, reliability, and robustness)

11
Q

Types of non-functional testing

A

Performance testing
Scalability testing
Usability testing
Acceptance testing

12
Q

Pros and cons of manual testing

A

Pros: intuitive, no upfront cost
Cons: time-consuming, human mistakes could miss software failures, not easily repeatable.

13
Q

Pros and Cons of Automated Testing

A

Pros: Easy to repeat, fewer mistakes, very efficient / Cons: high upfront cost/time, not suited for everything (like UI testing), test maintenance

14
Q

5 categories of software development process

A

Requirements, design, implementation, verification, maintenance

15
Q

SDP: requirements

A

determine what the software must do

16
Q

SDP: design

A

planning how to bring requirements to life

17
Q

SDP: implementation

A

coding

18
Q

SDP: verification

A

ensure the implementation meets the requirements

19
Q

SDP: maintenance

A

bug fixes, add features, fix non-functional requirements

20
Q

What is software testing?

A

Trying to generate a failure state in the software, with the ultimate goal of being unable to do so.

21
Q

Testing Framework

A

Used for automated testing; provides the following functionality: 1) test fixture, 2) test case, 3) test suite, 4) test runner

22
Q

TF: test fixture

A

a way to set up elements required for a test and then roll back the setup when test is complete

23
Q

TF: test case

A

a way to test a particular unit of the software with a specific input for a given response

24
Q

TF: Test suite

A

a collection of test cases

25
TF: Test runner
a way to execute the tests and report the results
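
A minimal sketch in Python's unittest (the framework used in this class) showing how the four pieces fit together; the TestListOps class and its data are made up for illustration. setUp/tearDown act as the fixture, each test_* method is a test case, the loader collects them into a suite, and the runner executes them and reports results.

import unittest

class TestListOps(unittest.TestCase):
    # Test fixture: set up what the tests need, then roll it back afterwards.
    def setUp(self):
        self.values = [1, 2, 3]

    def tearDown(self):
        self.values = None

    # Test cases: each exercises one unit with a specific input and expected response.
    def test_sum(self):
        self.assertEqual(sum(self.values), 6)

    def test_max(self):
        self.assertEqual(max(self.values), 3)

if __name__ == "__main__":
    # Test suite: the collected test cases; test runner: executes them and reports results.
    suite = unittest.TestLoader().loadTestsFromTestCase(TestListOps)
    unittest.TextTestRunner(verbosity=2).run(suite)
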
26
Testing Framework for this class
unittest in python
27
When is black box testing useful?
When you want to validate the functionality of the software based on specifications without needing to examine the internal code structure
28
Advantages of black box testing
Focuses on the input domain of the software; no need for the actual code (non-devs can write tests; tests can be written before the code, as in TDD); can catch logic errors that other types of testing can't; can be used at all levels of testing (unit, integration, etc.)
29
Disadvantages of black box testing
It isn't possible to test every input, so tests may miss logic branches/program paths; there is no way to know why a failure occurs, just that it indicates a fault; poorly written specifications can lead to inaccurate tests
30
Why use partition testing?
Identifies sub-domains that allow for more intelligent testing with fewer test cases to cover the entire input domain
31
Six steps to partition (equivalence testing)
identify independently testable features, identify categories, partition categories into choices, identify constraints among choices, produce/evaluate test case specifications, generate test cases from test case specifications
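
A small sketch of where those steps can end up, assuming a hypothetical grade() function whose 0-100 input domain partitions into A/B/C/F and invalid sub-domains; each test uses one representative value per equivalence class.

import unittest

def grade(score):
    # Hypothetical unit under test: maps a 0-100 score to a letter grade.
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

class TestGradePartitions(unittest.TestCase):
    # One representative input per equivalence class (sub-domain) of the input domain.
    def test_a_partition(self):
        self.assertEqual(grade(95), "A")

    def test_b_partition(self):
        self.assertEqual(grade(85), "B")

    def test_c_partition(self):
        self.assertEqual(grade(75), "C")

    def test_f_partition(self):
        self.assertEqual(grade(50), "F")

    def test_invalid_partition(self):
        with self.assertRaises(ValueError):
            grade(-5)

if __name__ == "__main__":
    unittest.main()
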
32
Black box testing techniques
Random testing, boundary testing, partition testing
33
Random testing
test varying random inputs across the input domain; a form of black box testing where the inputs are generated randomly across the input domain
34
Boundary testing
test values around the boundaries of the input domain
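
A sketch of boundary tests for a hypothetical is_teen() function, probing the values on and just outside each edge of its 13-19 range.

import unittest

def is_teen(age):
    # Hypothetical unit under test: True for ages 13 through 19 inclusive.
    return 13 <= age <= 19

class TestIsTeenBoundaries(unittest.TestCase):
    # Boundary testing: values on and immediately around each edge of the input domain.
    def test_just_below_lower_bound(self):
        self.assertFalse(is_teen(12))

    def test_lower_bound(self):
        self.assertTrue(is_teen(13))

    def test_upper_bound(self):
        self.assertTrue(is_teen(19))

    def test_just_above_upper_bound(self):
        self.assertFalse(is_teen(20))

if __name__ == "__main__":
    unittest.main()
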
35
Partition testing
Identify subdomains that can allow for more intelligent testing with fewer cases to cover entire input domain (inputs within each subdomain share an equivalence)
36
Advantages of random testing
quick to write, can cover large portions of the input domain with very little code, tests have no bias, can potentially generate an input nobody considered
37
Disadvantages of random testing
many inputs can fall under the same "test case" so they are redundant, may not test tricky parts of the input domain (e.g., edge cases), can easily miss errors (without any targeting), some runs may pass while others may fail due to the random nature of the testing
38
Advantages of white box testing
based on the code, so test quality can be measured objectively, can be used to compare test suites by measuring their quality, can directly test the coded behavior
39
Disadvantages of white box testing
Can't discover errors from missing paths (unimplemented specifications), large software systems make it difficult to test every part of code, tests must be written by developers
40
Subsumption hierarchy
100% branch and condition coverage guarantees 100% branch coverage and 100% condition coverage; 100% branch coverage guarantees 100% statement coverage
41
When should white box testing be used?
Measuring code quality (code coverage - how much is tested?), identifying uncovered code (ensure all branches and conditions are tested), testing internal logic, comparing test suites
42
How does white box testing complement black box testing?
White box testing doesn't care about specifications; it just aims to execute as much of the program as possible
43
Types of coverage
Code coverage, statement coverage, branch (decision coverage), condition (predicate) coverage, branch and condition (decision/condition) coverage, path coverage
44
Code coverage
the extent to which a given test suite executes the source code of the software
45
Statement coverage
way of measuring the quality of a testing suite based on the number of statements the tests execute in the program
46
Branch (decision) coverage
way of measuring the quality of a testing suite based on the number of branches that are covered; ensures each conditional is tested evaluating to both T and F
47
Condition (predicate) coverage
way of measuring the quality of a testing suite based on the number of conditions (predicates) that are covered; concerned with each condition within the conditionals; requires we evaluate each condition/predicate of each conditional as both T and F; truth tables help!
48
Branch and Condition (Decision/Condition) coverage
way of measuring the quality of a testing suite based on the number of branches and conditions (predicates) that are covered; attempts to have 100% branch and 100% condition coverage; can result in a large number of tests
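
A sketch contrasting these criteria on a single decision with two conditions; can_ride() is a made-up example, and the comments spell out which inputs each criterion needs.

def can_ride(height_cm, with_adult):
    # Hypothetical unit under test: one decision, two conditions.
    if height_cm >= 120 or with_adult:
        return True
    return False

# Statement coverage: (130, False) and (100, False) execute every line.
# Branch coverage: the same two inputs take both the True and the False branch.
# Condition coverage: each condition must evaluate to both T and F:
#   (130, False) -> height T, adult F; (100, True) -> height F, adult T.
#   Note both of those take the True branch, so condition coverage alone does
#   not guarantee branch coverage here.
# Branch and condition coverage: combine them, e.g. (130, False), (100, True), (100, False).
assert can_ride(130, False) is True
assert can_ride(100, True) is True
assert can_ride(100, False) is False
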
49
Path coverage
tests strive to evaluate every path through the code (path - unique series of branches)
50
Oracle
part of a random testing system that monitors for error states in the software and saves the random inputs that generated those states for later inspection; can be as simple as displaying the input to the screen, or can be a formal piece of software that generates formal bug reports
51
How can we make our random testers "smarter"?
Move from unguided random testing to guided random testing, using a heuristic to inform input choices
52
Unguided random testing
inputs are generated relatively evenly through the input domain
53
Guided random testing
inputs are generated following a heuristic that informs "smarter" input choices
54
Heuristic
cognitive tool to help make decisions, used to make random generation smarter; guides tests to cluster around boundary values and values suggested by error-guessing knowledge
55
Advantages of random testing for large input domains
automates the test-writing process, quick to write, can cover large portions of the input domain with very little code, tests have no bias, can generate an input no one considered
56
Disadvantages of random testing for large input domains
Many random inputs could fall under the same "test case" (redundant), might not test tricky parts of the code (edge cases), could easily miss glaring errors, some runs might pass while others fail, and the logic in the random tests themselves needs to be tested
57
Input domain
the pool of all possible inputs that a unit/program can take
58
When is it good to use random testing?
Broad input domain coverage, finding unexpected bugs, supplementary testing, overnight testing, system testing
59
Parts of random testing system
Random case generator, software under test, oracle
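
A toy sketch of those three parts; absolute_value() is a hypothetical software-under-test seeded with a fault so the oracle has something to catch.

import random

def absolute_value(x):
    # Hypothetical software under test, seeded with a fault for negative inputs.
    return x if x > 0 else x   # fault: should return -x when x is negative

def oracle(x, result):
    # Oracle: flags an error state by checking a property the output must satisfy.
    return result >= 0 and result in (x, -x)

def random_tester(runs=1000):
    failing_inputs = []
    for _ in range(runs):
        x = random.randint(-1000, 1000)   # random test case generator
        result = absolute_value(x)        # exercise the software under test
        if not oracle(x, result):         # oracle detects the unexpected output
            failing_inputs.append(x)      # save the input for later inspection
    return failing_inputs

if __name__ == "__main__":
    print("inputs that exposed a failure:", random_tester()[:5])
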
60
What software design approach is most directly associated with TDD?
Agile process
61
Agile process
employs an iterative approach with a focus on getting a minimum viable product out the door as soon as possible
62
What does it mean to write the "bare minimum" of code?
writing just enough code to make a failing test pass; in the context of TDD, this means implementing the simplest possible solution that fulfills the requirements of the test, without adding any additional functionality or complexity
63
How do you know you're done with TDD?
We are done when we have added enough tests to cover all the requirements specified and all tests pass without triggering any new failures; ensures that the code meets the specifications and requirements without any unnecessary additions.
64
Test Driven Development
the approach where one only writes new code if there exists at least one failing unit test; not primarily a verification process, it’s a way of approaching implementation
65
Steps of TDD
1) Write a test. 2) Run all currently written tests: if they all pass, go to step 1; if any fail, go to step 3. 3) Write the bare minimum of code to make the failing test pass. 4) Run all the currently written tests: if they pass, go to step 1; if any fail, go to step 3. 5) Occasionally evaluate whether the code can be refactored to reduce duplication or eliminate parts that are no longer used. 6) Eventually stop development after adding "enough" tests without triggering a new failure.
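
One pass through that loop, assuming a made-up greet() requirement: the test is written first (and would fail while greet() does not exist), then just enough code is added to make it pass.

import unittest

# Step 1: write a test for the next requirement.
class TestGreeting(unittest.TestCase):
    def test_greets_by_name(self):
        self.assertEqual(greet("Ada"), "Hello, Ada!")

# Steps 2-3: running the tests fails (greet is undefined), so write the bare
# minimum of code to make the failing test pass.
def greet(name):
    return f"Hello, {name}!"

# Step 4: rerun all tests; once everything passes, loop back to step 1.
if __name__ == "__main__":
    unittest.main()
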
66
Why use TDD?
Forces you to think about requirements during implementation; gives a good indicator of "done" by writing tests to the specifications and then writing just enough code to pass them; reduces duplicate code, since new code should only be written if a new test fails.
67
What software development approach is most commonly associated with continuous integration?
Agile, specifically extreme programming
68
Extreme programming
A form of agile development that stresses frequent releases that can be shown to customers to gather feedback and inform the next phase of development; changes to the codebase had to be checked in multiple times per day
69
What role does code review play in continuous integration?
Ensures changes to code base are sound and don't introduce new issues; when a dev makes changes, they create a pull request which triggers an automated build and test suite; if tests pass, the changes are then reviewed by other devs to ensure that they meet the project's standards and do not introduce new problems; helps to catch errors, enforce coding standards, and share knowledge among team members.
70
Continuous integration
A set of guiding principles on how to manage a team working on a shared codebase.
71
Continuous integration Principles
1) use a VCS to maintain central codebase, 2) building the software should be automated and easily triggered, 3) Once built, software should be able to test itself against a provided test suite. 4) Everyone needs to commit work to the shared codebase at least once a day, 5) Every commit to main should be built and tested, 6) Mandatory code review when requesting changes be merged into the shared codebase
72
Why is continuous integration helpful in a team environment?
Early detection of errors, Maintaining code quality (mandatory code reviews and automated tests), Facilitating collaboration (regular communication, code reviews), Streamlining deployment (automated build process)
73
Fagan Inspection
Formal code review; very structured code review process with 6 steps and defined roles; time-consuming and resource-intensive, most companies have moved to less formal code review processes
74
Steps of Fagan Inspection
Planning (gather participants and resources needed for the inspection process), Overview (meeting to discuss important aspects of project), Preparation (review material before meeting), Inspection meeting (code is inspected), Rework (defects fixed), Follow-up (follow up with dev to ensure reworks done correctly)
75
Fagan Inspection Participant Roles
Moderator (leader, schedules meetings, contacts participants, follow-ups), Author (code dev), Reader (reads code during inspection), Recorder (makes notes during inspection), Inspector
76
Lightweight code review methods
Pair-programming, Over-the-shoulder, Change-based, Meetings
77
Code Review Pair-programming
All coding is done in teams of two; each takes turns writing code while the other sits next to them and provides feedback; very collaborative process; the other dev acts as a real-time code reviewer, catching errors and giving advice (con: the two devs are usually at a similar level, so not much additional knowledge is shared)
78
Code Review Over-the-shoulder
a dev will complete some task and then ask the reviewer to come over to their desk to provide feedback as the developer describes the code
79
Code Review Change Based
developers submit changes to the codebase that are then inspected later by the reviewer; once the review request is made, the developer is able to continue to work on another part of the task while they wait for feedback. When review is complete, the initial dev will be notified of any feedback and required changes (con - might be slow to get feedback)
80
Code Review Meetings
round table discussion of code being reviewed, everyone acting as inspector, but prep beforehand is limited
81
setUp()
method in unittest framework; creates an artificial testing environment before each test method in a test case class
82
tearDown()
method in unittest framework; cleans up/destroys testing environment after each test method in a test case class
83
setUpClass()
class method in unittest framework; sets up the testing environment once before all tests run
84
tearDownClass()
class method in unittest framework; cleans up/destroys testing environment once after all tests run
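
A sketch showing all four hooks in one test case class (the temp-file setup is just an example): the class-level hooks run once around the whole class, the instance-level hooks run around every test method.

import os
import shutil
import tempfile
import unittest

class TestFixtureHooks(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before any test in this class: create a shared temp directory.
        cls.workdir = tempfile.mkdtemp()

    @classmethod
    def tearDownClass(cls):
        # Runs once after every test in this class has finished.
        shutil.rmtree(cls.workdir)

    def setUp(self):
        # Runs before each individual test method.
        self.path = os.path.join(self.workdir, "scratch.txt")
        with open(self.path, "w") as f:
            f.write("hello")

    def tearDown(self):
        # Runs after each individual test method, rolling back the per-test setup.
        os.remove(self.path)

    def test_file_exists(self):
        self.assertTrue(os.path.exists(self.path))

    def test_file_contents(self):
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

if __name__ == "__main__":
    unittest.main()
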
85
Basic idea of mocking?
Hijack calls to dependencies and simulate responses and behaviors
86
Stub:
contains predefined data that is returned when called, but does not imitate behavior (like a parrot mimicking speech). The focus is on verifying data handling; a stub defines a canned response to mimic a dependency. Unaware of what it is passed; usually mimics function calls, not full objects
87
Mock:
simulates the behavior of a service, and its actions can be verified ("like Siri mimics human understanding"). More complex simulations that allow for testing object behavior, often simulating entire objects or interfaces; mocks are self-aware and can tell you how many times they have been called and with what
88
What does unittest's patch do?
used for replacing objects in a test with mock objects. Allows you to mock dependencies or external resources used by the code under test; can be used as a decorator or a context manager to temporarily replace the specified object with a mock during the execution of the test
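
A sketch of patch() used as a decorator, with a made-up get_price() dependency; the target string "__main__.get_price" assumes the file is run directly as a script.

import unittest
from unittest.mock import patch

def get_price(item):
    # Hypothetical dependency: imagine this calls a slow external pricing service.
    raise RuntimeError("real service should not be reached from tests")

def total_cost(item, quantity):
    # Code under test: uses the dependency we want to hijack.
    return get_price(item) * quantity

class TestTotalCost(unittest.TestCase):
    # patch() swaps get_price for a mock only for the duration of this test.
    @patch("__main__.get_price", return_value=2.50)
    def test_total_cost(self, mock_get_price):
        self.assertEqual(total_cost("apple", 4), 10.0)
        # Because it is a mock (not just a stub), we can verify how it was called.
        mock_get_price.assert_called_once_with("apple")

if __name__ == "__main__":
    unittest.main()
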
89
When to use mocks
Best suited when you want to simulate the behavior of dependencies or external systems without actually invoking them during testing. When you need to isolate the code under test from its dependencies to focus on testing specific behaviors or interactions. Useful when testing complex interactions or when the dependencies are slow, unreliable, or difficult to set up in a testing environment
90
Modified Condition/Decision Coverage (MC/DC):
purpose is to test only the important condition combinations to limit the number of test cases required; only include test cases that show each condition independently affects the outcome of the decision (done by writing out the truth table for all conditions, then marking pairs of rows that are identical except for one condition and have different outcomes; those are the cases to test)
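
A worked sketch for a made-up three-condition decision: four test cases (instead of all eight truth-table rows) suffice because each condition is paired with a row that differs only in that condition and flips the outcome.

def should_alert(critical, admin_on_call, backup_on_call):
    # Hypothetical decision: critical and (admin_on_call or backup_on_call)
    return critical and (admin_on_call or backup_on_call)

# MC/DC test set (4 cases for 3 conditions). Each condition independently flips
# the outcome while the other two are held fixed:
#   critical:       (T, T, F) -> True  vs  (F, T, F) -> False
#   admin_on_call:  (T, T, F) -> True  vs  (T, F, F) -> False
#   backup_on_call: (T, F, T) -> True  vs  (T, F, F) -> False
assert should_alert(True, True, False) is True
assert should_alert(False, True, False) is False
assert should_alert(True, False, False) is False
assert should_alert(True, False, True) is True
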
91
When to use MC/DC?
When 100% branch coverage is not good enough, e.g., safety-critical industries with high "costs" of failure and complex systems where full B&C coverage is impractical
92
Which type of testing requires the input of the client/user?
Acceptance testing
93
Acceptance testing:
requires the input of the client/user and any other stakeholders
94
Mutation-Based Fuzzer:
starts by selecting a valid input, then mutates it in some way and throws it at the software under test. Does not have to be completely random: the tester can configure the fuzzer to only modify valid inputs in specific ways or to only make a certain number of mutations. Very helpful when the software performs input validation before accepting input.
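
A toy mutation-based fuzzer, with a made-up parse_record() standing in for the software under test: a valid seed input is randomly mutated, and any unhandled exception is recorded along with the input that triggered it.

import random

def parse_record(text):
    # Hypothetical software under test: expects "name,age" with a numeric age.
    name, age = text.split(",")
    return {"name": name, "age": int(age)}

def mutate(seed):
    # Mutation-based fuzzing: start from a valid input and change it in small random ways.
    chars = list(seed)
    for _ in range(random.randint(1, 3)):
        pos = random.randrange(len(chars))
        chars[pos] = chr(random.randint(32, 126))   # swap one character
    return "".join(chars)

def fuzz(seed="alice,42", runs=500):
    crashes = []
    for _ in range(runs):
        candidate = mutate(seed)
        try:
            parse_record(candidate)                 # throw the mutant at the SUT
        except Exception as exc:                    # any unhandled exception is a finding
            crashes.append((candidate, repr(exc)))
    return crashes

if __name__ == "__main__":
    for candidate, error in fuzz()[:5]:
        print(f"input {candidate!r} caused {error}")
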
95
Generation-Based Fuzzer:
uses some "knowledge" of input domain to create random inputs (Similar to how "rules" generate random inputs)
96
Load testing
software is tested for performance under expected operating conditions (ex. for a word processor: how well can it handle being switched back and forth between applications?)
97
Stress testing
software is tested for performance under extreme operating conditions (ex. Black Friday)
98
Load vs Stress
Load testing uses normal operating conditions and is done during the software development process; stress testing uses extreme conditions and is reserved for a more complete piece of software
99
How MC/DC relates to Statement Coverage
Statement coverage ensures that each line of code is executed at least once. MC/DC subsumes statement coverage because to achieve MC/DC, every possible decision in the code must be evaluated for both T and F outcomes, inherently executing each statement.
100
How MC/DC relates to Branch Coverage
Branch coverage ensures that every possible branch (T/F) of each decision point is executed. MC/DC also subsumes branch coverage. In the process of ensuring that each condition within a decision independently affects the outcome, all branches will necessarily be evaluated.
101
How MC/DC relates to Condition Coverage
Condition coverage ensures that each condition in a decision is evaluated to both T and F. MC/DC goes beyond condition coverage by not only ensuring each condition is tested for T and F but also ensuring that each condition's effect on the decision's outcome is independently tested.
102
Integration Testing
next step up from unit testing, takes a broader look at how units and modules interact. (ex. Testing clicking send in an email client). Uses both black box and white box testing. Must be done by devs familiar with the software
103
System testing:
next step up from integration testing, attempts to verify the entire program is working together. (test all pieces of app together). Form of black box testing. Can be conducted by non-devs. Teams will use quality assurance testers to try and "break" the software.
104
Acceptance testing:
devs of the software present a version to the customer/client/end user for the stakeholders to "sign off" on if the software meets their expectations.
105
Fuzzing vs random testing
Fuzzing is used for system testing; random testing is used for unit testing
106
Types of performance testing
Load and Stress
107
When is load testing conducted in the SDP?
At any point during the software development process
108
When is stress testing conducted in the SDP?
Usually at the end
109
Parts of a fuzzer
test case generator, software under test, oracle to detect crashes/exceptions, ways to save the test case and machine state at the time of the crash
110
What are we fuzzing for?
To find nearly any type of software bug: triggering crashes to test stability, finding memory leaks, and finding security exploits (penetration testing)
111
Common approaches for stress testing
Spike testing and Soak/Endurance testing
112
Spike testing:
run a spike of data/users/etc. through software
113
Soak/Endurance testing:
slowly add more data/users/etc. to software until it crashes to find its breaking point.