Manual Testing Flashcards

1
Q

What is the difference between Verification and Validation?

A

Verification:
Ensures the product is being built correctly according to requirements and design specifications.
Focuses on static testing (reviews, walkthroughs, inspections).
Example: Reviewing requirement documents, design documents, or code.
Validation:
Ensures the product meets the user’s needs and works as intended in a real-world scenario.
Focuses on dynamic testing (actual testing of the application).
Example: Executing test cases, performing functional testing, and checking if the software behaves as expected.

Key Difference: Verification is about “building the product right,” while validation is about “building the right product.”

2
Q

What are the different levels of testing?

A

  1. Unit Testing:
    Focuses on testing individual components or modules of the application.
    Performed by developers (see the sketch after this list).
  2. Integration Testing:
    Verifies the interaction between integrated modules or components.
    Example: Testing APIs or communication between modules.
  3. System Testing:
    Tests the entire application as a whole to ensure it meets the specified requirements.
    Performed by testers.
  4. Acceptance Testing:
    Ensures the application meets business requirements and is ready for deployment.
    Performed by end-users or clients.
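
As a concrete illustration of the unit level, here is a minimal pytest sketch. The `calculate_discount` function and its 10%-over-100 rule are hypothetical stand-ins for any single module under test.

```python
# test_discount.py -- minimal unit-test sketch (pytest).
# `calculate_discount` and its discount rule are invented for illustration.

def calculate_discount(total):
    """Apply a 10% discount to orders over 100 (hypothetical rule)."""
    return total * 0.9 if total > 100 else total

def test_discount_applied_above_threshold():
    assert calculate_discount(200) == 180

def test_no_discount_at_or_below_threshold():
    assert calculate_discount(100) == 100
```
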
3
Q

What is the difference between Smoke Testing and Sanity Testing?

A

● Smoke Testing:
○ Performed to ensure that the basic functionalities of the application are working after a new build.
○ Broad but shallow testing.
○ Example: Verifying that the application launches and main features are accessible.

● Sanity Testing:
○ Performed to ensure that specific functionalities or bug fixes are working as expected.
○ Narrow but deep testing.
○ Example: Retesting a login feature after a bug fix to ensure it works correctly.

Key Difference: Smoke testing is a high-level check of the overall system, while sanity testing focuses on specific areas of functionality.
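
One way a team can encode this split in an automated suite is with custom pytest markers, so the broad build check and the narrow post-fix check run separately. The marker names and the stub functions below are assumptions, not part of any standard:

```python
# test_build_checks.py -- sketch: smoke vs. sanity selected by marker.
# Run `pytest -m smoke` for the broad build check, `pytest -m sanity`
# for the narrow post-fix check. Register the custom markers in
# pytest.ini to avoid warnings; the stubs stand in for real app calls.
import pytest

def app_is_running():                  # stand-in for a real health check
    return True

def login(user, password):             # stand-in for the fixed login feature
    return password == "Correct@123"

@pytest.mark.smoke
def test_application_starts():
    # Broad but shallow: does the build come up at all?
    assert app_is_running()

@pytest.mark.sanity
def test_login_after_bug_fix():
    # Narrow but deep: exercise only the area that changed.
    assert login("user@example.com", "Correct@123")
```
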

4
Q

How do you prioritize test cases when time is limited?

A
  1. Identify Critical Areas:
    Focus on functionalities that are critical to the application, such as login, payments, or data integrity.
  2. Risk-Based Testing:
    Prioritize tests in areas prone to defects or those that have undergone recent changes.
  3. Business Impact:
    Execute test cases that cover features with the highest business value or end-user impact.
  4. Frequent Use Scenarios:
    Test features or workflows that are used frequently by users.
  5. Regression Testing:
    Ensure that previously working features are not broken by recent changes.
5
Q

What is the difference between Priority and Severity in defect management?

A

● Severity:
○ Refers to the impact of the defect on the functionality or system.
○ Assigned by testers.
○ Levels: Critical, Major, Minor, Trivial.
○ Example: A critical defect causes a system crash, while a minor defect might be a UI misalignment.

● Priority:
○ Refers to the urgency of fixing the defect.
○ Assigned by developers or project managers.
○ Levels: High, Medium, Low.
○ Example: A defect on the home page might have high priority even if it’s minor.

Key Difference: Severity is about the technical impact, while priority is about the business impact and urgency.

6
Q

What is the purpose of a Traceability Matrix?

A

A Traceability Matrix is a document that maps test cases to requirements to ensure 100% test coverage.

Purpose:

  1. Verify that all requirements are covered by test cases.
  2. Identify gaps in testing (see the sketch after the example table).
  3. Trace defects back to specific requirements.

Example:

| Requirement ID | Requirement Description | Test Case ID | Test Case Description | Status |
| --- | --- | --- | --- | --- |
| R1 | User Login Functionality | TC1 | Test login with valid credentials | Pass |
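
Purpose #2 (finding gaps) is easy to mechanize once the matrix exists. A minimal sketch, assuming the matrix is held as plain Python dictionaries with invented IDs:

```python
# traceability_check.py -- flag requirements that no test case covers.
# The requirement and test-case IDs below are illustrative only.

requirements = {"R1": "User Login", "R2": "Password Reset", "R3": "Logout"}
coverage = {"R1": ["TC1"], "R2": []}   # R3 is missing from the matrix entirely

gaps = [rid for rid in requirements if not coverage.get(rid)]
print("Requirements with no test coverage:", gaps)   # ['R2', 'R3']
```
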

7
Q

Difference between Sanity Testing, Regression Testing, Retesting?

A

✅ Sanity Testing
Goal: Quickly check if a specific part of the application is working after minor changes.
When: After receiving a new build with small code changes or bug fixes.
Scope: Very narrow (focuses on one or a few functionalities).
Time: Fast and shallow.
Example: Developer fixes a login issue. You perform sanity testing just to verify that login works and nothing else is broken around it.
✅ Regression Testing
Goal: Ensure that new changes haven’t broken existing functionalities.
When: After bug fixes, enhancements, or code merges.
Scope: Wide (full or partial system).
Time: Takes longer than sanity; often automated.
Example: After adding a new feature like “Forgot Password”, you do regression testing to make sure login, signup, profile, and other related flows still work.
✅ Retesting
Goal: Verify if a specific bug or issue is fixed.
When: After a bug has been marked as fixed by developers.
Scope: Very specific (only the fixed defect).
Example: Bug ticket says “User can’t change password” — once it’s marked fixed, you retest just that scenario.

8
Q

What is the difference between Verification and Validation?

A

✅ Verification – “Are we building the product right?”
Goal: Make sure the software meets design specs, requirements, and architecture.
Focus: Process-oriented – checking documents, design, code structure, and requirements before execution.
Performed by: Developers, QA, business analysts.
Methods: Reviews, walkthroughs, inspections, static testing.
Example: Reviewing a requirement document or checking if the login page design matches the wireframe.
✅ Validation – “Are we building the right product?”
Goal: Make sure the actual software works as expected for the user.
Focus: Product-oriented – checking the final product through actual execution.
Performed by: QA/testers (usually after development).
Methods: Functional testing, system testing, UAT (User Acceptance Testing), dynamic testing.
Example: Testing if the login feature works correctly with real user credentials.

📌Verification is reviewing the blueprint of a house before building it.
📌Validation is walking through the house after it’s built to make sure it feels like home.

9
Q

What is exploratory testing, and when would you use it?

A

● Exploratory Testing: A testing approach where testers actively explore the application without predefined test cases.

● Purpose:
1. Discover hidden defects.
2. Validate usability and design issues.
3. Test areas not covered by scripted tests.

● When to Use:
1. When requirements are incomplete or unclear.
2. During early stages of testing to identify major flaws.
3. For testing complex workflows or edge cases.

10
Q

How do you ensure a defect report is effective?

A
  1. Clear Title:
    Use a descriptive title summarizing the defect.
  2. Detailed Steps to Reproduce:
    Provide precise steps, test data, and environment details.
  3. Expected vs. Actual Result:
    Clearly state what was expected and what actually happened.
  4. Attachments:
    Include screenshots, logs, or videos to provide evidence.
  5. Severity and Priority:
    Assign appropriate levels based on impact and urgency.
  6. Environment Information:
    Specify browser, device, or OS details where the defect was observed.
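
For illustration, a report that follows this checklist might look like the sketch below; every detail (feature, build, environment) is invented:

```text
Title:    Checkout crashes when an expired coupon code is applied
Steps:    1. Log in as a standard user on staging
          2. Add any item to the cart and open Checkout
          3. Enter expired coupon "SAVE10" and click Apply
Expected: An "expired coupon" validation message is shown
Actual:   Page returns a 500 error and the cart is emptied
Severity: Major        Priority: High
Env:      Chrome 124 / Windows 11, staging build 2.3.1
Attached: screenshot.png, console.log
```
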
11
Q

How would you test a login page?

A

Testing a login page involves a combination of functional, security, and UI testing. Key test cases include the following (a code sketch follows the list):

  1. Functional Testing:
    ○ Valid credentials: Verify login with correct username and password.
    ○ Invalid credentials: Check error messages for wrong username/password.
    ○ Blank fields: Ensure validation messages appear when fields are left empty.
  2. Boundary Value Analysis:
    ○ Test username and password with minimum and maximum character limits.
  3. Negative Testing:
    ○ Test with SQL injections, special characters, or script tags.
    ○ Check for error messages when submitting without input.
  4. Security Testing:
    ○ Verify password encryption and secure data transmission (HTTPS).
    ○ Ensure the application prevents brute-force attacks by locking accounts after multiple failed attempts.
  5. Usability Testing:
    ○ Validate field alignment, placeholder text, and ease of navigation.
  6. Cross-Browser Testing:
    ○ Verify the login functionality on different browsers and devices.
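
A minimal pytest sketch of the functional and negative cases above. `submit_login` is a hypothetical helper standing in for driving the real form (e.g., through Selenium or Playwright); the credentials and messages are invented:

```python
# test_login_page.py -- parametrized functional/negative login checks.
import pytest

def submit_login(username, password):
    """Stand-in for the real login form; returns the message shown."""
    if not username or not password:
        return "Field is required"
    if username == "valid_user" and password == "Valid@123":
        return "Welcome"
    return "Invalid username or password"

@pytest.mark.parametrize("user,pwd,expected", [
    ("valid_user", "Valid@123", "Welcome"),                   # valid credentials
    ("valid_user", "wrong", "Invalid username or password"),  # invalid password
    ("", "", "Field is required"),                            # blank fields
    ("' OR '1'='1", "x", "Invalid username or password"),     # SQL-injection probe
])
def test_login(user, pwd, expected):
    assert submit_login(user, pwd) == expected
```
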
12
Q

How do you ensure complete test coverage?

A
  1. Understand Requirements:
    ○ Thoroughly analyze requirements and create a traceability matrix to map them to test cases.
  2. Categorize Test Scenarios:
    ○ Cover all functional, non-functional, edge cases, and integration points.
  3. Include Positive and Negative Tests:
    ○ Test normal workflows as well as scenarios with invalid or unexpected inputs.
  4. Use Equivalence Partitioning and Boundary Value Analysis:
    ○ Divide test inputs into valid and invalid partitions and test values at the partition boundaries (see the sketch after this list).
  5. Perform Exploratory Testing:
    ○ Execute unscripted tests to identify gaps in predefined test cases.
  6. Review by Peers:
    ○ Conduct peer reviews of test cases to ensure no requirements are missed.
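
To make technique #4 concrete, here is a sketch assuming a hypothetical rule that passwords must be 8-16 characters: the partitions are too short (<8), valid (8-16), and too long (>16), and the boundary values are 7, 8, 16, and 17.

```python
# test_password_length.py -- equivalence partitions + boundary values
# for an assumed 8-16 character password rule.
import pytest

def is_valid_length(password):         # stand-in for the real validator
    return 8 <= len(password) <= 16

@pytest.mark.parametrize("length,expected", [
    (7, False),    # just below the lower boundary
    (8, True),     # lower boundary
    (16, True),    # upper boundary
    (17, False),   # just above the upper boundary
])
def test_password_length_boundaries(length, expected):
    assert is_valid_length("a" * length) is expected
```
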
13
Q

What is regression testing, and how do you decide what to include?

A

○ Regression Testing: Ensures that recent changes or fixes do not break existing functionality.
○ Inclusions:
1. Test cases for modules impacted by code changes.
2. Core features of the application.
3. High-priority defects fixed in recent builds.
4. Tests for integrations with external systems.
○ Approach:
1. Use a risk-based strategy to focus on critical workflows.
2. Maintain a regression suite and update it as features evolve.
3. Automate repetitive regression tests to save time.
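
A sketch of approach #3: tag regression checks with a custom marker so the suite can be run on every build with `pytest -m regression`. The marker name and the stub workflows are assumptions:

```python
# test_regression_suite.py -- marker-selected regression checks.
# Register the `regression` marker in pytest.ini to avoid warnings.
import pytest

def core_login():                      # stand-ins for real core workflows
    return True

def core_checkout():
    return True

@pytest.mark.regression
def test_login_unaffected_by_latest_build():
    assert core_login()

@pytest.mark.regression
def test_checkout_unaffected_by_latest_build():
    assert core_checkout()
```
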

14
Q

How do you handle incomplete or ambiguous requirements?

A
  1. Clarify with Stakeholders:
    Collaborate with business analysts, product owners, or clients to get clarity.
  2. Refer to Similar Features:
    Use knowledge from similar modules or past projects as a reference.
  3. Document Assumptions:
    Clearly outline assumptions about expected behavior and share them for approval.
  4. Create a Risk-Based Plan:
    Focus on critical functionalities and perform exploratory testing for the unclear areas.
  5. Communicate Effectively:
    Keep all stakeholders informed about the challenges and testing strategy.
15
Q

What is the difference between Retesting and Regression Testing?

A
  1. Retesting:
    ○ Verifies that a specific defect is fixed.
    ○ Focuses on failed test cases.
    ○ Performed in the same environment with the same inputs.
  2. Regression Testing:
    ○ Ensures recent changes do not impact existing functionality.
    ○ Focuses on both fixed and unaffected areas.
    ○ Covers a broader scope of test cases.

Key Difference: Retesting confirms defect fixes, while regression testing checks for unintended side effects.
16
Q

What is usability testing, and what aspects do you check?

A

○ Usability Testing: Evaluates how user-friendly and intuitive an application is.
○ Aspects to Check:
1. Navigation: Ensure menus, links, and buttons are easy to find and use.
2. Content Clarity: Verify labels, error messages, and instructions are clear.
3. Consistency: Check uniformity in font styles, colors, and layout.
4. Accessibility: Ensure the application is usable for differently-abled users (e.g., screen readers, keyboard navigation).
5. Performance: Validate response times for user actions.

17
Q

How would you test a payment gateway?

A
  1. Functional Testing:
    Verify successful transactions using valid card details.
    Test failed transactions with invalid, expired, or insufficient funds cards.
  2. Security Testing:
    Ensure sensitive data (card number, CVV) is encrypted.
    Validate secure communication via HTTPS.
  3. Boundary Value Testing:
    Test amount limits (minimum and maximum transaction values); see the sketch after this list.
  4. Integration Testing:
    Verify integration with third-party payment processors like PayPal or Stripe.
  5. Negative Testing:
    Test transaction interruptions (e.g., network failure, session timeout).
  6. Performance Testing:
    Simulate high transaction loads to ensure stability under peak usage.
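
A sketch of the boundary-value item above, assuming hypothetical transaction limits of 1.00 (minimum) and 10,000.00 (maximum); `charge` stands in for a real gateway client in test mode:

```python
# test_payment_limits.py -- boundary values around assumed amount limits.
import pytest

MIN_AMOUNT, MAX_AMOUNT = 1.00, 10_000.00

def charge(amount):                    # stand-in: accept only in-range amounts
    return MIN_AMOUNT <= amount <= MAX_AMOUNT

@pytest.mark.parametrize("amount,expected", [
    (0.99, False),        # just below the minimum
    (1.00, True),         # minimum boundary
    (10_000.00, True),    # maximum boundary
    (10_000.01, False),   # just above the maximum
])
def test_transaction_amount_limits(amount, expected):
    assert charge(amount) is expected
```
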
18
Q

What is exploratory testing, and how do you approach it?

A

○ Exploratory Testing: An unscripted approach to uncover hidden defects by exploring the application.
Approach:
1. Define a charter or focus area (e.g., testing login functionality or user profile).
2. Start with basic functionality, then explore edge cases.
3. Take notes on any unusual or unexpected behavior.
4. Prioritize areas with high complexity or recent changes.
5. Use tools (e.g., session recorders) to document findings.

19
Q

What is a Test Plan?

A

A Test Plan is a formal document that outlines the strategy, objectives, schedule, scope, resources, and approach for testing a software product.

Think of it as a blueprint or master guide for how testing will be conducted on a project or feature.

🧠 Purpose of a Test Plan
- Ensure everyone is on the same page about how testing will happen.
- Provide a clear roadmap for QA team and stakeholders.
- Identify what will be tested, how it will be tested, who will test it, and what tools or environments are needed.

📝 Typical Contents of a Test Plan

| Section | Description |
| --- | --- |
| Test Plan ID | A unique identifier for the test plan |
| Objective | What are we testing and why? |
| Scope | What’s in scope and out of scope for this round of testing |
| Test Items | Features, components, or user stories being tested |
| Test Types | Functional, Regression, Smoke, API, UI, etc. |
| Test Approach | Manual, Automation, tools used (e.g., Cypress, Postman, JIRA, etc.) |
| Resources | QA Engineers, Developers, Test Environments, Tools |
| Schedule | Timeline for when testing will start, end, and key deadlines |
| Entry/Exit Criteria | When to start and stop testing |
| Risk & Mitigation | Known risks (tight deadlines, unclear requirements) and how to reduce them |
| Deliverables | What will be produced (Test Cases, Bug Reports, Final Test Report, etc.) |

📦 Example Test Plan Summary

Imagine you’re testing a login feature in Sprint 1:

> Test Plan ID: TP001
Objective: Verify that login functionality works with valid and invalid credentials.
Scope: Login page only. Registration, reset password are out of scope.
Test Items:
- PB001: Login with email/password
- PB002: Error message on invalid credentials
Test Approach: Manual testing, TestRail used for test case tracking.
Resources: 1 QA (you), 1 Dev, 1 Staging environment
Schedule: Testing starts April 21, ends April 23
Entry Criteria: Code is deployed to staging
Exit Criteria: All critical test cases passed, no major bugs open
Deliverables: Test Cases, Bug Reports, Final Test Summary Report

🔄 Test Plan vs Test Case

| Test Plan | Test Case |
| --- | --- |
| High-level document | Detailed step-by-step scenario |
| Covers overall strategy & scope | Covers one specific test scenario |
| Created once per project/feature | Created for each user story or AC |

✅ Summary

  • A Test Plan helps organize and communicate how testing will be done.
  • It ensures that all stakeholders understand the testing strategy.
  • It’s especially useful in larger teams or complex projects.
