Automated Testing - CL1 Flashcards

1
Q

Unit testing

A
In computer programming, unit testing is a software testing method by which individual units of source code (sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures) are tested to determine whether they are fit for use.
Unit tests are typically automated tests written and run by software developers to ensure that a section of an application (known as the "unit") meets its design and behaves as intended. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method. By writing tests first for the smallest testable units, then the compound behaviors between those, one can build up comprehensive tests for complex applications.
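
As a minimal sketch (assuming JUnit 5 and a hypothetical Calculator class with an add method), a unit test for a single method might look like this:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CalculatorTest {

        // The "unit" here is a single method on a single class.
        @Test
        void addReturnsSumOfTwoIntegers() {
            Calculator calculator = new Calculator();  // hypothetical class under test
            assertEquals(5, calculator.add(2, 3));     // known-good result coded into the test
        }
    }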
To isolate issues that may arise, each test case should be tested independently. Substitutes such as method stubs, mock objects, fakes, and test harnesses can be used to assist testing a module in isolation.
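
For illustration, a test can isolate a unit from a real collaborator with a hand-rolled stub; the PriceService interface and Checkout class below are assumptions made up for this sketch (JUnit 5):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CheckoutTest {

        // Hypothetical collaborator that would normally call a remote service.
        interface PriceService {
            double priceOf(String sku);
        }

        // Hypothetical unit under test, written so the collaborator can be substituted.
        static class Checkout {
            private final PriceService prices;
            Checkout(PriceService prices) { this.prices = prices; }
            double total(String sku, int quantity) { return prices.priceOf(sku) * quantity; }
        }

        @Test
        void totalMultipliesPriceByQuantity() {
            PriceService stub = sku -> 2.50;        // stub: canned answer, no network call
            Checkout checkout = new Checkout(stub);
            assertEquals(7.50, checkout.total("APPLE", 3), 0.001);
        }
    }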
During development, a software developer may code criteria, or results that are known to be good, into the test to verify the unit's correctness. During test case execution, frameworks log tests that fail any criterion and report them in a summary.
Writing and maintaining unit tests can be made faster by using parameterized unit tests (PUTs). These allow the execution of one test multiple times with different input sets, thus reducing test code duplication. Unlike traditional unit tests, which are usually closed methods and test invariant conditions, PUTs take any set of parameters. PUTs are supported by TestNG, JUnit, and its .NET counterpart, xUnit.net. Suitable parameters for the unit tests may be supplied manually or, in some cases, generated automatically by the test framework. In recent years, support has also been added for writing more powerful unit tests that leverage the concept of theories: test cases that execute the same steps but use test data generated at runtime, unlike regular parameterized tests, whose input sets are pre-defined.
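
For example, with JUnit 5's junit-jupiter-params module a single test body can be run against several pre-defined input sets; the Calculator class is again a hypothetical stand-in:

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CalculatorParameterizedTest {

        // One test method, executed once per row of input data.
        @ParameterizedTest
        @CsvSource({
            "2, 3, 5",
            "0, 0, 0",
            "-1, 1, 0"
        })
        void addReturnsExpectedSum(int a, int b, int expected) {
            assertEquals(expected, new Calculator().add(a, b));  // hypothetical class under test
        }
    }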

Link:
https://en.wikipedia.org/wiki/Unit_testing

2
Q

F.I.R.S.T. principles:

  • fast
  • isolated
  • repeatable
  • self-validating
  • timely
A
  1. Fast
    Tests must be fast. If you hesitate to run the tests after a simple one-liner change, your tests are far too slow. Make the tests so fast you don’t have to consider them.
    A test that takes a second or more is not a fast test, but an impossibly slow test.
    A test that takes a half-second or quarter-second is not a fast test. It is a painfully slow test.
    If the test itself is fast but its setup and teardown together span an eighth of a second, a quarter second, or even more, then you don’t have a fast test. You have a ludicrously slow test.
    Fast means fast.
    A software project will eventually have tens of thousands of unit tests, and team members need to run them all every minute or so without guilt. You do the math.
  2. Isolated
    Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name, together with the text of the assertion, should state exactly what is wrong and where. If a test does not isolate failures, it is best to replace it with smaller, more specific tests.
    A good unit test has a laser-tight focus on a single effect or decision in the system under test. And that system under test tends to be a single part of a single method on a single class (hence "unit").
    Tests must not have any order-of-run dependency. They should pass or fail the same way in a suite or when run individually. Each suite should be re-runnable (every minute or so) even if tests are renamed or reordered randomly. Good tests interfere with no other tests in any way. They impose their initial state without aid from other tests, and they clean up after themselves.
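
    A minimal sketch of imposing fresh state per test with JUnit 5 lifecycle hooks; the UserRegistry class is a hypothetical stand-in:

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class UserRegistryTest {

        private UserRegistry registry;       // hypothetical class under test

        @BeforeEach
        void setUp() {
            registry = new UserRegistry();   // each test imposes its own initial state
        }

        @AfterEach
        void tearDown() {
            registry.clear();                // each test cleans up after itself
        }

        @Test
        void startsEmpty() {
            assertEquals(0, registry.count());
        }

        @Test
        void registeringAUserIncreasesTheCount() {
            registry.register("alice");
            assertEquals(1, registry.count());  // passes regardless of run order
        }
    }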
  3. Repeatable
    Tests must be able to be run repeatedly without intervention. They must not depend upon any assumed initial state, and they must not leave behind any residue that would prevent them from being re-run. This is particularly important when one considers resources outside of the program’s memory, like databases, files, and shared memory segments.
    Repeatable tests do not depend on external services or resources that might not always be available. They run whether or not the network is up, and whether or not they are in the development server’s network environment. Unit tests do not test external systems.
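
    As a sketch of removing a dependency on an external resource (here the system clock), a fixed java.time.Clock can be injected so the test behaves the same on every run; the Invoice class is a hypothetical stand-in:

    import java.time.Clock;
    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneOffset;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class InvoiceTest {

        // A fixed clock instead of the real one: no assumed initial state, no residue.
        private final Clock fixedClock =
                Clock.fixed(Instant.parse("2024-01-15T00:00:00Z"), ZoneOffset.UTC);

        @Test
        void invoiceIsOverdueThirtyDaysAfterIssue() {
            Invoice invoice = new Invoice(LocalDate.of(2023, 12, 1));  // hypothetical class under test
            assertTrue(invoice.isOverdue(LocalDate.now(fixedClock)));  // same answer on every run
        }
    }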
  4. Self-validating
    Tests are pass-fail. No person or external agency should have to examine the results to determine whether they are valid and reasonable. Authors avoid over-specification so that peripheral changes do not affect the ability of assertions to determine whether tests pass or fail.
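
    A minimal sketch of the difference: the test asserts the one fact it cares about rather than printing output for a human to inspect; the ReportGenerator class is a hypothetical stand-in:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class ReportGeneratorTest {

        @Test
        void reportContainsGrandTotal() {
            String report = new ReportGenerator().render();  // hypothetical class under test

            // Self-validating: the framework reports pass or fail without human judgment.
            // Not over-specified: one focused assertion, not a character-by-character
            // comparison of the whole formatted report.
            assertTrue(report.contains("Grand total"));
        }
    }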
  5. Timely
    Tests are written at the right time: immediately before the code that makes them pass. It may seem reasonable to take a more existential stance, that it does not matter when tests are written as long as they are written, but this is wrong. Writing the test first makes a difference.
    Testing after the fact requires developers to have the fortitude to refactor working code until they have a battery of tests that fulfill these FIRST principles. Most will take the expensive shortcut of writing fewer, fatter tests. Such large tests are not fast, have poor fault isolation, require great effort to make repeatable, and tend to require external validation. Testing then provides less value at higher cost. Eventually developers feel guilty about how much time they’re spending “polishing” code that is “finished” and can be easily convinced to abandon the effort.

Link:
http://agileinaflash.blogspot.com/2009/02/first.html
