Software Development & Testing Flashcards

1
Q

The Six Lean Principles

A
  1. Value
  2. Value Stream
  3. Flow
  4. Pull
  5. Continuous Improvement
  6. Respect for People
2
Q

Lean Principle of Value in Software Engineering

A
  1. Identify and prioritize features and functionalities that provide the most value to users and stakeholders.
  2. Focus on delivering valuable software increments in each development iteration.
3
Q

Lean Principle of Value Stream in Software Engineering

A
  1. Understand the end-to-end software development process, from requirements gathering to deployment and maintenance.
  2. Identify and eliminate non-value-adding activities and bottlenecks that hinder the delivery of value.
4
Q

Lean Principle of Flow in Software Engineering

A
  1. Optimize the flow of work by maintaining a continuous delivery pipeline.
  2. Reduce batch sizes and cycle times to achieve a steady flow of high-quality features.
5
Q

Lean Principle of Pull in Software Design

A
  1. Adopt a pull-based approach to work, where new work is pulled into the development process based on the team’s capacity and customer demand.
  2. Avoid overloading team members with excessive work items.
6
Q

Lean Principle of Continuous Improvement in Software Design

A
  1. Foster a culture of continuous improvement by encouraging feedback and retrospectives.
  2. Regularly review processes, tools, and practices to identify areas for optimization.
7
Q

Lean Concept of Respect for People in Software Design

A
  1. Recognize the importance of collaboration and communication within the development team and with stakeholders.
  2. Empower team members to make decisions and contribute to the success of the project.
8
Q

The Six Kanban Principles

A

The core principles of Kanban include:
1. Visualize the Workflow
2. Limit Work in Progress (WIP)
3. Manage Flow
4. Make Process Policies Explicit
5. Feedback and Improvement
6. Collaborative Approach

9
Q

Kanban

A

Kanban is a specific implementation of Lean principles, initially developed by Toyota for inventory management. In the context of software development and project management, Kanban is a visual management method that helps teams manage their work and optimize workflow.
Kanban boards often use visual cues like cards to represent work items, with each card progressing through the different stages of the workflow. This visual representation makes it easy for teams to understand the status of work and identify potential areas for improvement.

10
Q

Lean Philosophy

A

Lean is a philosophy and a set of principles aimed at maximizing value while minimizing waste in a process. It was first developed by Toyota in the 1950s and revolutionized the manufacturing industry. Lean principles have since been applied to various domains, including software development.

11
Q

What is the Software Development Life Cycle?

A

A series of general steps in software development (the exact steps will not always be the same from one methodology to the next).

12
Q

12 Steps of the Software Development Life Cycle

A
  1. Requirements Gathering and Analysis
  2. System Design
  3. Detailed Design
  4. Implementation (Coding)
  5. Testing
  6. Deployment
  7. Maintenance and Support
  8. Documentation
  9. Project Management
  10. Quality Assurance
  11. Version Control and Configuration Management
  12. Deployment and Release Management
13
Q

Requirements Gathering and Analysis in Software Engineering

A

The very first step in the SDLC.
1. Understand the needs and requirements of stakeholders and users.
2. Analyze and document the functional and non-functional requirements of the software.
Example: Conduct interviews and surveys with stakeholders to understand their needs and preferences for a new e-commerce website. Document the required features, such as user registration, product catalog, shopping cart, and payment options.

14
Q

System Design for Software Development

A
  1. Create a high-level system design that outlines the architecture and components of the software.
  2. Break down the system into smaller modules and define their interactions.
    Example: Design the architecture of a mobile application. Plan to use a three-tier architecture with a front-end for the user interface, a middle-tier for business logic, and a back-end for data storage and retrieval.
15
Q

Creating detailed designs from system designs in Software Engineering

A
  1. Design each module in detail, specifying algorithms, data structures, and interfaces.
  2. Create detailed design documents or diagrams to guide the development.
    Example: For the mobile application, design the login module in detail. Specify the algorithms for password hashing and user authentication, as well as the data structures for storing user credentials.
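A detailed design like the one above can be sketched directly in code. The following is a minimal, illustrative Python sketch of the password-hashing portion, using the standard library's PBKDF2; the function names and iteration count are assumptions, not a prescribed implementation.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash with PBKDF2-HMAC-SHA256 (illustrative parameters)."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-derive the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, expected)
```

The (salt, digest) pair would be the user-credential data structure named in the design document.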
16
Q

“Implementation” in Software Design

A
  1. Write the actual code for the software based on the detailed design.
  2. Use programming languages and tools to develop the functionality.
    Example: Write the code for the login module using a programming language like Java or Python. Develop the necessary functions and classes for handling user authentication and data storage.
17
Q

Testing in Software Design

A
  1. Conduct various types of testing, such as unit testing, integration testing, system testing, and user acceptance testing (UAT)
  2. Identify and fix defects to ensure the software meets the requirements.
    Example: Perform unit testing on the login module to verify that individual functions work correctly. Conduct integration testing to ensure that the module interacts seamlessly with other components.
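A unit test for the login module might look like the sketch below. The `authenticate` function is hypothetical (a real module would hash passwords); it exists only so the tests have something to exercise.

```python
import unittest

def authenticate(username, password, user_store):
    """Hypothetical login-module function under test."""
    return user_store.get(username) == password

class TestLoginModule(unittest.TestCase):  # unit tests for one module in isolation
    def setUp(self):
        self.store = {"alice": "s3cret"}

    def test_valid_credentials(self):
        self.assertTrue(authenticate("alice", "s3cret", self.store))

    def test_wrong_password(self):
        self.assertFalse(authenticate("alice", "nope", self.store))

    def test_unknown_user(self):
        self.assertFalse(authenticate("bob", "s3cret", self.store))

# run with: python -m unittest <module>
```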
18
Q

Software Deployment

A
  1. Package the software and prepare it for installation on the target environment.
  2. Deploy the software to production or a testing environment for final validation.
    Example: Package the mobile application and make it available for download on app stores like Google Play or the Apple App Store.
19
Q

Maintenance and Support for Software

A
  1. Provide ongoing support and maintenance for the software.
  2. Address bug fixes, enhancements, and updates as needed.
    Example: After deployment, provide ongoing support for the mobile application, address bug reports, and release updates with new features and improvements.
20
Q

Software Documentation

A

Create comprehensive documentation throughout the development process, including design documents, user manuals, and technical guides.
Example: Create user manuals that explain how to use the e-commerce website or the mobile application. Prepare technical documentation detailing the system architecture and API specifications.

21
Q

Project Management for Software

A
  1. Plan and monitor the project, including resource allocation, timeframes, and risk management.
  2. Collaborate with stakeholders to manage expectations and communicate progress.
    Example: Use project management tools like Jira or Trello to track progress, allocate tasks to team members, and monitor deadlines.
22
Q

Quality Assurance in Software

A

Implement quality assurance practices to ensure that the software meets the required standards and quality criteria.

23
Q

Version Control and Configuration Management in Software

A
  1. Use version control systems to track changes and manage different versions of the software.
  2. Apply configuration management practices to control changes and maintain consistency.
    Example: Use Git as the version control system to manage code changes for the software. Ensure that every code change is committed and tracked with appropriate comments.
24
Q

Deployment and Release Management for Software

A
  1. Plan and manage the release of software updates and new features to users.
  2. Ensure smooth deployment and minimize downtime during releases.
    Example: Plan a controlled deployment of a new version of a web application during off-peak hours to minimize the impact on users. Have a rollback plan in case of any unexpected issues during the release.
25
Object-Oriented Design (OOD)
OOD is one of the most widely used methodologies. It focuses on designing software using objects, which encapsulate data and behavior. OOD emphasizes principles like inheritance, encapsulation, and polymorphism to create modular and reusable software components. Example: Designing a banking software system using OOD principles, where classes like "Account," "Transaction," and "Customer" are designed to encapsulate relevant data and behavior.
26
Model-Driven Architecture (MDA)
MDA is an approach that emphasizes modeling software at a higher level of abstraction. It uses models to represent system requirements, design, and implementation. Model transformations are applied to automatically generate code from the models. Example: Using UML models to represent the structure and behavior of a web application, and then automatically generating code from these models using model transformation tools.
27
Domain-Driven Design (DDD)
DDD focuses on modeling software based on the domain or business context it serves. It involves close collaboration between domain experts and developers to create a rich and expressive domain model. Example: Employing DDD to build an e-commerce platform, where the domain model represents concepts like "Product," "ShoppingCart," and "Order" in a way that closely aligns with the business domain.
28
Service-Oriented Design (SOD)
SOD is an architectural approach that designs software as a collection of services. Services are loosely coupled, autonomous, and communicate through well-defined interfaces. This approach supports interoperability and scalability. Example: Designing a distributed system using a microservices architecture, where each microservice represents a specific service with its own well-defined API and responsibilities.
29
Component-Based Design (CBD)
CBD involves designing software by assembling pre-built, reusable components. Components encapsulate specific functionality and can be composed to create larger systems. Example: Developing a content management system (CMS) using pre-built components for user authentication, content editing, and user interface elements that can be assembled to create the complete CMS.
30
Data-Driven Design
Data-driven design focuses on designing software by understanding the data requirements and modeling the data structures and relationships first Example: Designing a data analytics platform, where the structure and flow of data are at the center of the design to ensure efficient data processing and analysis.
31
Event-Driven Design (EDD)
EDD involves designing software based on the handling of events. Components respond to events and messages asynchronously, enabling systems to be more reactive and loosely coupled. Example: Implementing a real-time notification system that relies on event-driven architecture, where events like "NewMessage" or "PaymentProcessed" trigger relevant actions and notifications.
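The "NewMessage" example above can be sketched with a tiny in-process event bus. This is an illustrative Python sketch (handlers run synchronously here for brevity; a real system would dispatch asynchronously), and all names are invented.

```python
class EventBus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_name, handler):
        # components register interest in an event without knowing the publisher
        self._handlers.setdefault(event_name, []).append(handler)

    def publish(self, event_name, payload):
        # the publisher knows nothing about the subscribers -- loose coupling
        for handler in self._handlers.get(event_name, []):
            handler(payload)

notifications = []
bus = EventBus()
bus.subscribe("NewMessage", lambda msg: notifications.append(f"notify: {msg}"))
bus.publish("NewMessage", "hello")
```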
32
Structured Design
Structured design uses a systematic approach to divide the software into smaller, manageable modules. It employs techniques like data flow diagrams and structure charts to organize the system. Example: Creating a software solution using structured programming techniques, with a clear hierarchy of functions and a top-down approach to problem-solving.
33
Rapid Application Development (RAD)
RAD is a methodology that prioritizes rapid prototyping and iterative development. It involves close collaboration between developers and stakeholders and focuses on quickly delivering a functional product. Example: Prototyping and iterating on the design of a mobile app to quickly incorporate user feedback and deliver a minimum viable product (MVP) in a short time frame.
34
Agile Design
Agile design methodologies, like Scrum and Extreme Programming (XP), emphasize flexibility and adaptability. They promote incremental and iterative development with a focus on customer collaboration and feedback.
35
User-Centered Design (UCD)
UCD focuses on designing software with a strong emphasis on understanding user needs, preferences, and behaviors. It involves user research, usability testing, and user feedback throughout the design process Example: Building a user-friendly interface for a video conferencing application, where designers conduct usability tests and user interviews to inform the design decisions.
36
Aspect-Oriented Design (AOD)
AOD is an extension of object-oriented design that focuses on separating cross-cutting concerns, such as logging, security, and error handling, from the core business logic. It allows developers to address concerns that cut across multiple modules or components in a more modular and maintainable way. Example: Applying aspect-oriented design to a web application to separate cross-cutting concerns, such as logging, security, and error handling, from the core business logic.
37
Data-Flow Design
Data-Flow Design focuses on designing software by modeling the flow of data between different components or modules. It emphasizes how data moves through the system and how it is processed at various stages.
38
Responsibility-Driven Design (RDD)
RDD is a design methodology that focuses on identifying responsibilities for each module or component in a software system. It emphasizes designing modules based on their responsibilities and interactions with other modules.
39
Architectural Design Patterns in software engineering
Architectural design patterns, such as MVC (Model-View-Controller), MVVM (Model-View-ViewModel), and Hexagonal Architecture, provide reusable solutions to common architectural challenges. They guide the overall structure and organization of software systems.
40
Key Benefits for the Model-View-Controller Architecture
1. Separation of Concerns: Each component has a specific role, making the codebase easier to maintain and understand. 2. Reusability: The Model and View can be reused in different parts of the application or even in different applications altogether. 3. Flexibility: Changes to one component can be made without affecting the others, facilitating code modifications and updates. 4. Testability: Isolated components allow for easier unit testing of the application's logic.
41
"Flow of interaction" in the MVC architecture
1. A user interacts with the View by providing input, such as clicking a button or entering data into a form. 2. The View forwards the user input to the Controller. 3. The Controller processes the input and, if necessary, updates the Model. 4. The Model notifies the View of any changes in the data. 5. The View updates its display based on the new data from the Model.
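The five-step flow above can be sketched in a few lines of Python. The class and method names are illustrative, not from any framework; the Model notifies attached Views, and the Controller translates user input into Model updates.

```python
class Model:
    def __init__(self):
        self._observers = []
        self.count = 0

    def attach(self, view):
        self._observers.append(view)

    def increment(self):
        self.count += 1
        for view in self._observers:  # step 4: Model notifies the View
            view.render(self.count)

class View:
    def __init__(self):
        self.last_rendered = None

    def render(self, count):          # step 5: View updates its display
        self.last_rendered = f"Count: {count}"

class Controller:
    def __init__(self, model):
        self.model = model

    def on_button_click(self):        # steps 1-2: user input reaches the Controller
        self.model.increment()        # step 3: Controller updates the Model

model, view = Model(), View()
model.attach(view)
Controller(model).on_button_click()
```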
42
"Model" in Architectual design patterns
The Model represents the application's data and business logic. It encapsulates the data and provides methods to access, modify, and manipulate that data. The Model is independent of the user interface and user input. It notifies the View of any changes to the data so that the View can update itself accordingly.
43
"View" in MVC
The View is responsible for presenting the data to the user and displaying the user interface. It represents the visual representation of the Model's data. Views observe the Model for changes and update themselves whenever the data changes. Views do not contain any business logic; they only display information provided by the Model.
44
"Controller" in Architectual design patterns
The Controller acts as an intermediary between the Model and the View. It handles user input and updates the Model or View accordingly. When a user interacts with the user interface, the Controller processes the input, modifies the Model if needed, and updates the View to reflect any changes in the data. The Controller facilitates communication between the Model and the View without the two components being directly aware of each other.
45
What is Model-View-Controller (MVC) in Software engineering?
A software architectural pattern commonly used in software engineering to design user interfaces and organize the interaction between components in a software application. It separates the concerns of data management, user interface, and user input into distinct components, allowing for more maintainable and flexible designs. The MVC pattern is widely used in web development frameworks, desktop applications, and other software systems.
46
Model-View-ViewModel (MVVM) architectural pattern
An evolution of the Model-View-Controller (MVC) pattern. MVVM is commonly used in software engineering, especially in the context of developing user interfaces for modern applications. It was first introduced by Microsoft as part of its development framework, Windows Presentation Foundation (WPF), and is now widely used in other frameworks like Xamarin and Angular. MVVM separates the concerns of data, user interface, and user interaction into distinct components. However, MVVM introduces a more explicit and systematic approach to handling data binding and user interactions, making it particularly suitable for applications with graphical user interfaces (GUIs) and data-driven views.
47
"View" in MVC v. MVVM
The View in MVVM corresponds to the user interface. It is responsible for rendering the visual representation of the data provided by the ViewModel and handling user interactions. Unlike the traditional View in MVC, the View in MVVM has no direct dependency on the Model.
48
ViewModel in MVVM
The ViewModel is a new component introduced in MVVM, and it serves as an intermediary between the Model and the View. The ViewModel exposes data and commands to the View, allowing the View to bind and interact with the data without being aware of the Model's details. The ViewModel is responsible for preparing and shaping the data from the Model into a form suitable for the View to present.
49
The flow of interaction in the MVVM pattern
1. The ViewModel retrieves data from the Model or other data sources and formats it for presentation. 2. The View binds to the data provided by the ViewModel and displays it on the user interface. 3. The user interacts with the View (e.g., clicks a button, enters data). 4. The View communicates the user's interactions to the ViewModel. 5. The ViewModel processes the user input, updates the Model if necessary, and prepares any changes to be reflected in the View. 6. The View updates its display based on the changes made by the ViewModel.
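The six-step flow above can be sketched in Python. Real MVVM frameworks provide declarative data binding; here the binding is simulated with a simple callback, and all names are illustrative.

```python
class Model:
    def __init__(self):
        self.celsius = 0.0

class ViewModel:
    def __init__(self, model):
        self.model = model
        self._listeners = []

    def bind(self, listener):          # the View subscribes to data changes
        self._listeners.append(listener)

    @property
    def display_text(self):            # step 1: shape Model data for presentation
        return f"{self.model.celsius:.1f} C"

    def set_celsius(self, value):      # steps 4-5: user input updates the Model
        self.model.celsius = float(value)
        for listener in self._listeners:
            listener(self.display_text)

class View:
    def __init__(self, viewmodel):
        self.text = ""
        viewmodel.bind(self.update)    # step 2: the View binds to the ViewModel

    def update(self, text):            # step 6: binding pushes changes into the View
        self.text = text

vm = ViewModel(Model())
view = View(vm)
vm.set_celsius(21.5)
```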
50
The key benefits of using the MVVM pattern
1. Separation of Concerns: MVVM cleanly separates the responsibilities of data presentation, user interaction, and data management. 2. Testability: The ViewModel can be easily tested in isolation, independent of the View and the Model, allowing for comprehensive unit testing. 3. Data Binding: The MVVM pattern leverages data binding techniques to establish a dynamic connection between the ViewModel and the View, ensuring that UI elements automatically update when underlying data changes. 4. Code Reusability: The ViewModel can be reused in multiple Views, promoting code reusability.
51
Hexagonal Architecture
Software design pattern that emphasizes the separation of concerns and the independence of the application core from external systems, frameworks, and interfaces. The architecture gets its name from its characteristic hexagonal shape, with the core application logic at the center and various ports and adapters around it.
52
Key principles of Hexagonal Architecture
1. Core Application Logic: The core application logic represents the business rules and domain-specific functionality. It is at the heart of the architecture and is completely decoupled from external dependencies. 2. Ports: Ports define interfaces through which the core application communicates with the external world. These interfaces act as entry and exit points for data and interactions. 3. Adapters: Adapters are the implementations of the ports. They provide the necessary conversions and mappings to connect the core application with external systems, such as databases, user interfaces, or third-party services. 4. Dependency Inversion Principle: Hexagonal Architecture adheres to the Dependency Inversion Principle, where higher-level modules (the core application) do not depend on lower-level modules (adapters and frameworks). Instead, both depend on abstractions (ports).
53
The flow of communication in Hexagonal Architecture
1. External systems or actors interact with the application through the defined ports. 2. The core application logic processes the incoming data and business rules without being aware of the specific external sources. 3. When necessary, the core application interacts with external systems (e.g., persisting data to a database or sending notifications) through the defined ports. 4. The adapters, which implement the ports, handle the communication between the core application and external systems. These adapters handle the specifics of integration and communication protocols.
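The port/adapter relationship above can be sketched as follows. The core service depends only on an abstract port; the in-memory adapter stands in for a database adapter, and all names are illustrative.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):             # port: an exit point for persistence
    @abstractmethod
    def save(self, order): ...

class OrderService:                     # core application logic
    def __init__(self, repository):
        self.repository = repository    # depends only on the abstraction

    def place_order(self, item, qty):
        order = {"item": item, "qty": qty}
        self.repository.save(order)     # unaware of the concrete storage
        return order

class InMemoryOrderRepository(OrderRepository):  # adapter (could be SQL, REST, ...)
    def __init__(self):
        self.orders = []

    def save(self, order):
        self.orders.append(order)

repo = InMemoryOrderRepository()
service = OrderService(repo)
service.place_order("book", 2)
```

Swapping the in-memory adapter for a database-backed one would not require any change to `OrderService`.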
54
Benefits of Hexagonal Architecture
1. Separation of Concerns: The clear separation between the core application and external systems makes the codebase easier to maintain and test. 2. Testability: The core application can be extensively unit-tested in isolation since it is independent of the external dependencies. 3. Flexibility and Adaptability: The architecture allows for easy swapping or modification of adapters to integrate with different external systems without affecting the core application. 4. Clean Code: Hexagonal Architecture encourages a clean and structured design, promoting maintainability and readability. 5. Domain Focus: The core application can focus solely on domain logic without being cluttered with technical concerns.
55
Design by Contract (DbC)
DbC is a methodology that emphasizes specifying explicit contracts between components. Contracts define the expectations and responsibilities of components, providing a clear and enforceable set of rules for interaction.
56
User Story Mapping
User Story Mapping is a visual technique that helps in understanding and organizing the features and functionalities of a software system from a user's perspective. It allows teams to prioritize and plan development based on user needs.
57
Data-Centric Design
Data-Centric Design focuses on designing software systems around data and databases. It ensures that data is the central focus and all functionalities are designed to efficiently handle and process data.
58
Test-Driven Design (TDD)
TDD is a methodology where developers write automated tests before writing the actual code. The design of the software evolves through test creation, which leads to better testability and maintainability.
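A minimal sketch of the TDD cycle: the test class below is conceptually written first (and fails), then `slugify` is written with just enough logic to make it pass. The function is invented for illustration.

```python
import unittest

def slugify(title):
    # minimal implementation written only to satisfy the tests below
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):   # written before slugify existed
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hi  "), "hi")
```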
59
Iterative and Incremental Design
This approach involves designing and developing the software in small increments, with each iteration adding new features or enhancements based on user feedback.
60
Formal Methods/Formal Design
Formal methods involve using mathematical techniques for the specification, validation, and verification of software. They help ensure high levels of correctness and reliability.
61
Adaptive Software Development (ASD)
ASD is a flexible methodology that adapts to changing requirements and priorities. It promotes collaboration and continuous learning to deliver the most valuable features.
62
Service-Oriented Architecture (SOA)
SOA is an architectural design approach that focuses on building software as a collection of loosely coupled services that communicate through standardized interfaces. It promotes reusability and interoperability.
63
Event-Driven Architecture (EDA)
EDA is an architectural approach that emphasizes the production, detection, and consumption of events in a software system. It enables loosely coupled and reactive systems.
64
Feature-Driven Development (FDD)
FDD is an iterative and incremental design methodology that organizes software development around feature-driven activities. It emphasizes modeling, design inspection, and regular builds.
65
Cognitive Walkthrough
Cognitive Walkthrough is a design methodology that involves simulating user interactions with the software to identify potential usability issues and improvements.
66
Six Sigma for Software Design
Six Sigma is a data-driven approach that aims to improve the quality of software design by reducing defects and variations in the development process.
67
SOLID Principles
SOLID is an acronym for five principles (Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion) that guide object-oriented design to promote maintainable and flexible software.
68
Concurrent Design
Concurrent Design emphasizes designing software with concurrent or parallel processing in mind. It is used for applications that require multiple threads or processes to run simultaneously.
69
Aspect-Oriented Modeling (AOM)
AOM is a modeling methodology that focuses on capturing cross-cutting concerns or aspects in software systems, allowing for separate management of concerns such as security, logging, and error handling.
70
Contextual Design
Contextual Design is a user-centered design methodology that emphasizes understanding the context in which users will interact with the software. It involves observation and analysis of users in their work environment to inform the design process.
71
Rational Unified Process (RUP)
RUP is an iterative software development process that follows the Unified Modeling Language (UML) and emphasizes iterative development, use cases, and architecture-centric design.
72
Dynamic Systems Development Method (DSDM)
DSDM is an agile methodology that focuses on delivering software in a fixed time frame and budget while prioritizing the most critical features.
73
Data Modeling
Data Modeling is a methodology that involves creating a conceptual, logical, and physical representation of data in a software system. It helps ensure data integrity and consistency.
74
User-Centered Analysis (UCA)
UCA is an analysis methodology that focuses on understanding user needs and goals to inform the design and development of software.
75
Domain-Specific Modeling (DSM)
DSM is an approach that involves creating models and languages specifically tailored to a particular domain, making it easier to express domain concepts in software.
76
Hierarchical Input Process Output (HIPO)
HIPO is a top-down design methodology that uses a hierarchical structure to represent the modules or components of a software system.
77
Use Case-Driven Development
Use Case-Driven Development focuses on designing software functionality based on specific use cases or user scenarios.
78
Software Architecture Patterns
Software Architecture Patterns, such as Client-Server, Peer-to-Peer, and Microservices, provide high-level structures and guidelines for organizing software systems.
79
Information Hiding
Information Hiding is a design principle that emphasizes encapsulating implementation details within modules to reduce complexity and improve maintainability.
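A small sketch of information hiding: callers use `deposit` and `balance`, so the internal ledger representation could later change (say, to a database row) without breaking them. Names are illustrative.

```python
class Account:
    def __init__(self):
        self._ledger = []              # hidden implementation detail

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._ledger.append(amount)

    @property
    def balance(self):                 # the only way callers read the state
        return sum(self._ledger)
```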
80
Single Responsibility Principle (SRP)
A class or module should have only one reason to change. It should be responsible for only one specific functionality or behavior.
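A brief sketch of SRP: persistence is split out of the report class so each class has exactly one reason to change. Class names are illustrative.

```python
class Report:
    """Responsible only for the report's content."""
    def __init__(self, title, body):
        self.title, self.body = title, body

    def as_text(self):
        return f"{self.title}\n{self.body}"

class ReportWriter:
    """Responsible only for persistence; a storage change never touches Report."""
    def write(self, report, stream):
        stream.write(report.as_text())
```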
81
Open/Closed Principle (OCP)
Software entities (classes, modules, functions) should be open for extension but closed for modification. New functionality should be added through extension, not by changing existing code.
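A brief sketch of OCP: new discount rules are added as new classes (extension) while the `checkout` function is never edited (closed for modification). Names are illustrative.

```python
class Discount:
    def apply(self, total):
        return total

class TenPercentOff(Discount):        # new behavior via extension
    def apply(self, total):
        return total * 0.9

class FiveOffOverFifty(Discount):     # another extension, no existing code changed
    def apply(self, total):
        return total - 5 if total > 50 else total

def checkout(total, discounts):
    for discount in discounts:        # closed for modification
        total = discount.apply(total)
    return total
```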
82
Liskov Substitution Principle (LSP)
Subtypes should be substitutable for their base types. Objects of derived classes should be able to replace objects of the base class without affecting the correctness of the program.
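A brief sketch of LSP using the classic bird example: rather than a Penguin subtype whose `fly()` raises (breaking substitution), the hierarchy is modeled around what subtypes can actually do, so code written against `Bird` works for every subtype. Names are illustrative.

```python
class Bird:
    def move(self):
        return "walks"

class FlyingBird(Bird):
    def move(self):
        return "flies"

class Sparrow(FlyingBird):
    pass

class Penguin(Bird):   # not a FlyingBird, so no broken flying contract
    pass

def describe(bird):
    # correct for every Bird subtype -- substitution holds
    return f"This bird {bird.move()}"
```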
83
Interface Segregation Principle (ISP)
Clients should not be forced to depend on interfaces they do not use. Keep interfaces focused and specific to the needs of their clients.
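A brief sketch of ISP: one "fat" machine interface is split into `Printer` and `Scanner`, so a simple printer is not forced to implement scanning it does not support. Names are illustrative.

```python
from abc import ABC, abstractmethod

class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc): ...

class Scanner(ABC):
    @abstractmethod
    def scan(self): ...

class SimplePrinter(Printer):              # implements only what it needs
    def print_doc(self, doc):
        return f"printing {doc}"

class MultiFunctionDevice(Printer, Scanner):
    def print_doc(self, doc):
        return f"printing {doc}"

    def scan(self):
        return "scanned page"
```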
84
Dependency Inversion Principle (DIP)
High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details, but details should depend on abstractions.
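A brief sketch of DIP: the high-level `AlertService` and the low-level senders both depend on the `Notifier` abstraction, so concrete senders can be swapped freely. Names are illustrative.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):                  # the abstraction both sides depend on
    @abstractmethod
    def send(self, message): ...

class EmailNotifier(Notifier):        # low-level detail
    def send(self, message):
        return f"email: {message}"

class SmsNotifier(Notifier):          # another detail, freely swappable
    def send(self, message):
        return f"sms: {message}"

class AlertService:                   # high-level module
    def __init__(self, notifier):
        self.notifier = notifier      # no dependency on a concrete sender

    def alert(self, message):
        return self.notifier.send(message)
```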
85
Overengineering in Software Design
Unintentionally adding unnecessary complexity to your design. Overengineering can make the software difficult to maintain, understand, and modify in the future.
86
11 Benefits of Modular Code
1. Ease of Maintenance 2. Code Reusability 3. Scalability 4. Parallel Development 5. Testing and Debugging 6. Encapsulation and Information Hiding 7. Flexibility 8. Reduced Complexity 9. Code Organization 10. Improved Collaboration 11. Domain Understanding (by dividing the software into modules that reflect the domain or functional areas, developers can better align the software design with the real-world problem it solves)
87
Modular Code for Improving Software Maintenance
Want to design your software using a modular approach, where each module represents a specific functionality or feature. Encapsulate implementation details within modules, exposing only essential interfaces to other parts of the system.
88
"Separation of Concerns" for Software Maintenance
Want to ensure that different concerns (e.g., user interface, business logic, data access) are separated into distinct components. This separation makes it easier to understand and modify specific parts of the codebase without affecting others.
89
Consistent Coding Standards for Software Maintenance
Want to enforce consistent coding standards and best practices across the development team. Consistency in code style and structure improves readability and makes maintenance tasks more predictable.
90
Documentation for Software Maintenance
Want to provide clear and concise comments and documentation within the code. Explain complex algorithms, non-obvious decisions, and the purpose of functions or classes. Well-documented code helps future developers understand the intent behind the design.
91
Version Control for Software Maintenance
Want to use a version control system (e.g., Git) to track changes to the codebase and collaborate effectively with the team. Version control allows you to revert to previous versions and understand the evolution of the software.
92
Automated Testing for Software Maintenance
Want to implement automated unit tests, integration tests, and regression tests. Tests ensure that changes to the codebase do not introduce unintended bugs and verify that the software functions correctly after modifications.
93
Error Handling for Software Maintenance
Implement robust error handling and logging mechanisms to identify and diagnose issues effectively. Proper error handling helps maintainers understand the system's state and identify the root cause of problems.
94
Refactoring for Software Maintenance
Want to regularly refactor the code to improve its structure, readability, and maintainability. Refactoring helps eliminate technical debt and ensures the codebase remains clean and organized.
95
Minimizing Dependencies for Software Maintenance
Keep dependencies between modules and components as minimal as possible. Reducing dependencies makes it easier to make changes to individual modules without affecting the entire system.
96
Continuous Integration and Continuous Deployment (CI/CD) for Software Maintenance
Want to implement CI/CD pipelines to automate the build, testing, and deployment processes. CI/CD helps ensure that changes are quickly validated and delivered to production.
97
Code Reviews for Software Maintenance
Want to conduct regular code reviews to catch potential issues, share knowledge, and maintain code quality standards.
98
Factory Method Pattern
The Factory Method Pattern allows for the creation of objects without specifying the exact class of the object that will be created. This pattern helps decouple the client code from the concrete implementation, making it easier to introduce new classes without modifying existing code.
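A minimal Python sketch of the idea — this uses a simple factory function (a common lightweight variant of the pattern), and all class names here are illustrative:

```python
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in data.items())

def make_exporter(kind: str) -> Exporter:
    # The factory picks the concrete class; callers only see the Exporter interface,
    # so adding a new exporter never changes client code.
    factories = {"json": JsonExporter, "csv": CsvExporter}
    return factories[kind]()
```

A client calls `make_exporter("json").export(...)` without ever naming `JsonExporter`.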
99
Dependency Injection Pattern
Dependency Injection (DI) is a technique used to inject dependencies into a class rather than having the class create them. By injecting dependencies, the code becomes more modular, easier to test, and promotes loose coupling.
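A minimal sketch in Python (names are illustrative): the `Notifier` receives its sender instead of constructing one, so a test can inject a fake without touching `Notifier` itself.

```python
class EmailSender:
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

class FakeSender:
    # Test double with the same interface as EmailSender.
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> str:
        self.sent.append((to, body))
        return "faked"

class Notifier:
    def __init__(self, sender):
        # The dependency is injected, not created here.
        self.sender = sender
    def notify(self, user: str) -> str:
        return self.sender.send(user, "hello")
```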
100
Strategy Pattern
The Strategy Pattern defines a family of algorithms and allows them to be interchangeable. It helps to isolate algorithmic logic, making it easier to add or modify algorithms without changing the context using them.
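A minimal Python sketch (the pricing functions are illustrative): the interchangeable algorithms are plain callables, and the context (`Checkout`) is unchanged when a new strategy is added.

```python
from typing import Callable

def no_discount(total: float) -> float:
    return total

def holiday_discount(total: float) -> float:
    return round(total * 0.9, 2)

class Checkout:
    # The context holds a strategy and delegates to it.
    def __init__(self, pricing: Callable[[float], float]):
        self.pricing = pricing
    def charge(self, total: float) -> float:
        return self.pricing(total)
```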
101
Observer Pattern
The Observer Pattern establishes a one-to-many dependency between objects, so that when one object (the subject) changes state, all its dependents (observers) are notified and updated automatically. This pattern is helpful for decoupling components and ensuring consistency between related objects.
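A minimal Python sketch with illustrative names: the subject keeps a list of observers and notifies each one on every state change.

```python
class Subject:
    def __init__(self):
        self._observers = []
        self._state = None
    def attach(self, observer) -> None:
        self._observers.append(observer)
    def set_state(self, state) -> None:
        self._state = state
        # One-to-many: every attached observer is updated automatically.
        for obs in self._observers:
            obs.update(state)

class HistoryObserver:
    def __init__(self):
        self.seen = []
    def update(self, state) -> None:
        self.seen.append(state)
```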
102
Decorator Pattern
The Decorator Pattern allows behavior to be added to individual objects without affecting the behavior of other objects from the same class. It promotes the principle of open-closed design, enabling easy extension of functionality without modifying existing code.
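A minimal Python sketch (note this is the GoF structural pattern, distinct from Python's `@decorator` syntax; the coffee-shop names are illustrative). Each wrapper exposes the same interface as the object it wraps, so wrappers can be stacked:

```python
class Coffee:
    def cost(self) -> float:
        return 2.0
    def label(self) -> str:
        return "coffee"

class Milk:
    # Wraps any object with the same interface and extends its behavior.
    def __init__(self, inner):
        self.inner = inner
    def cost(self) -> float:
        return self.inner.cost() + 0.5
    def label(self) -> str:
        return self.inner.label() + "+milk"
```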
103
Adapter Pattern
The Adapter Pattern allows incompatible interfaces to work together. It acts as a bridge between two interfaces, making it easier to integrate new components or systems without changing the existing codebase.
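A minimal Python sketch with illustrative names: the legacy class stays untouched while the adapter presents the interface the client expects.

```python
class LegacyTemperatureSensor:
    # Existing component with an incompatible interface (reports Fahrenheit).
    def read_fahrenheit(self) -> float:
        return 212.0

class CelsiusAdapter:
    # Bridges the legacy interface to the one clients want, without editing it.
    def __init__(self, sensor: LegacyTemperatureSensor):
        self.sensor = sensor
    def read_celsius(self) -> float:
        return (self.sensor.read_fahrenheit() - 32.0) * 5.0 / 9.0
```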
104
Facade Pattern
The Facade Pattern provides a unified interface to a set of interfaces in a subsystem, simplifying the client's interaction with the system. It helps hide complex system structures and provides a clear entry point for clients.
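A minimal Python sketch (the order-processing subsystem is invented for illustration): the client calls one facade method instead of coordinating three subsystems itself.

```python
class Inventory:
    def reserve(self, sku: str) -> bool:
        return True

class Payment:
    def charge(self, amount: float) -> bool:
        return amount > 0

class Shipping:
    def schedule(self, sku: str) -> str:
        return f"shipment:{sku}"

class OrderFacade:
    # Single entry point hiding the subsystem coordination.
    def __init__(self):
        self.inventory, self.payment, self.shipping = Inventory(), Payment(), Shipping()
    def place_order(self, sku: str, amount: float) -> str:
        if self.inventory.reserve(sku) and self.payment.charge(amount):
            return self.shipping.schedule(sku)
        return "rejected"
```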
105
Template Method Pattern
The Template Method Pattern defines the skeleton of an algorithm but allows subclasses to override specific steps. This pattern promotes code reuse and consistency across multiple implementations.
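A minimal Python sketch with illustrative names: the base class fixes the algorithm's skeleton in `render`, and subclasses override only individual steps.

```python
from abc import ABC, abstractmethod

class Report(ABC):
    def render(self) -> str:
        # Template method: the skeleton is fixed; the steps are overridable.
        return f"{self.header()}|{self.body()}|footer"
    def header(self) -> str:
        return "header"
    @abstractmethod
    def body(self) -> str: ...

class SalesReport(Report):
    def body(self) -> str:
        return "sales"
```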
106
Command Pattern
The Command Pattern encapsulates a request as an object, allowing clients to parameterize objects with queues, undo operations, and log requests. This pattern makes it easier to support undo/redo functionality and to decouple senders and receivers of commands.
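A minimal Python sketch (names are illustrative): each request is an object with `execute`/`undo`, and the invoker keeps a history to support undo without knowing what the commands do.

```python
class AddItem:
    # A request reified as an object.
    def __init__(self, items: list, value):
        self.items, self.value = items, value
    def execute(self) -> None:
        self.items.append(self.value)
    def undo(self) -> None:
        self.items.remove(self.value)

class Invoker:
    def __init__(self):
        self.history = []
    def run(self, command) -> None:
        command.execute()
        self.history.append(command)
    def undo_last(self) -> None:
        self.history.pop().undo()
```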
107
Composite Pattern
The Composite Pattern treats individual objects and compositions of objects uniformly. It allows you to compose objects into tree-like structures to represent part-whole hierarchies, making it easier to work with complex object structures.
108
Proxy Pattern
The Proxy Pattern provides a surrogate or placeholder object that controls access to another object. It is useful for adding an additional layer of control or caching without altering the underlying object's implementation.
109
State Pattern
The State Pattern allows an object to change its behavior when its internal state changes. It helps manage complex conditional logic by representing each state as a separate class, making it easier to add or modify states without affecting other states.
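A minimal Python sketch (a traffic light, chosen for illustration): each state is its own class, and the context changes behavior by swapping the state object rather than branching on a status flag.

```python
class Green:
    name = "green"
    def next(self):
        return Yellow()

class Yellow:
    name = "yellow"
    def next(self):
        return Red()

class Red:
    name = "red"
    def next(self):
        return Green()

class TrafficLight:
    def __init__(self):
        self.state = Green()
    def advance(self) -> str:
        # Swap the state object; no if/elif chain needed.
        self.state = self.state.next()
        return self.state.name
```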
110
Null Object Pattern
The Null Object Pattern provides an object that represents "null" or "no result" scenarios. It ensures that code can handle null values safely and reduces the need for explicit null checks.
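A minimal Python sketch with illustrative names: `NullLogger` has the same interface as the real logger but does nothing, so client code needs no `if logger is not None` checks.

```python
class RealLogger:
    def __init__(self):
        self.lines = []
    def log(self, msg: str) -> None:
        self.lines.append(msg)

class NullLogger:
    # Safe stand-in: same interface, no behavior.
    def log(self, msg: str) -> None:
        pass

def process(value: int, logger) -> int:
    logger.log(f"processing {value}")  # works for either logger, no null check
    return value * 2
```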
111
Command-Query Responsibility Segregation (CQRS)
CQRS separates the read and write operations in a system, using different models for querying data (read) and updating data (write). It helps optimize performance and maintainability by focusing on specific concerns for each type of operation.
112
Event Sourcing
Event Sourcing is a pattern where the state of an application is determined by a sequence of events rather than the current state. This pattern facilitates auditing, versioning, and easy restoration of past states.
113
Mediator Pattern
The Mediator Pattern centralizes communication between objects, reducing direct dependencies between them. It helps to manage complex communication patterns and promotes loose coupling.
114
Chain of Responsibility Pattern
The Chain of Responsibility Pattern allows multiple objects to handle a request without the sender needing to know which object will process it. This pattern is helpful for decoupling sender and receiver and providing flexibility in handling requests.
115
Flyweight Pattern
The Flyweight Pattern is used to minimize memory usage by sharing common data between multiple objects. It is particularly useful when dealing with large numbers of similar objects.
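A minimal Python sketch (a text-rendering glyph cache, a classic illustration): distinct characters share one instance each, so a million occurrences of "a" cost one object.

```python
class Glyph:
    _cache: dict = {}

    def __new__(cls, char: str):
        # Intern one shared instance per distinct character.
        if char not in cls._cache:
            obj = super().__new__(cls)
            obj.char = char
            cls._cache[char] = obj
        return cls._cache[char]
```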
116
Interpreter Pattern
The Interpreter Pattern defines a grammar for interpreting sentences in a language and provides an interpreter for the language. It is helpful for defining a domain-specific language and implementing parsers.
117
Command Dispatcher Pattern
The Command Dispatcher Pattern centralizes command handling and allows for easy extension of command processing. It helps maintain the separation of concerns and facilitates the addition of new commands.
118
Snapshot Pattern
The Snapshot Pattern captures the current state of an object and allows it to be restored to that state later. It is useful for implementing undo/redo functionality or for restoring objects to specific states.
119
Composite View Pattern
The Composite View Pattern allows hierarchical composition of views, making it easier to work with complex user interfaces. Each view can have child views, forming a tree-like structure.
120
Double Dispatch Pattern
The Double Dispatch Pattern resolves method calls at runtime based on both the receiver and argument types. It is helpful for implementing flexible and extensible object interactions.
121
Specification Pattern
The Specification Pattern encapsulates business rules and conditions as separate objects, making it easier to modify or combine them to create complex queries or validations.
122
Mixin Pattern
The Mixin Pattern allows the dynamic addition of new behavior to objects at runtime. It promotes code reuse and flexibility by enabling classes to inherit from multiple sources.
123
Immutable Pattern
The Immutable Pattern ensures that objects cannot be modified after creation, reducing the risk of unintended side effects and promoting thread safety.
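A minimal Python sketch using a frozen dataclass: "modification" returns a new object and the original is never touched.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

    def moved(self, dx: int, dy: int) -> "Point":
        # Returns a new Point; the original remains unchanged.
        return replace(self, x=self.x + dx, y=self.y + dy)
```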
124
Object Pool Pattern
The Object Pool Pattern manages a pool of reusable objects to avoid the overhead of object creation and destruction. It improves performance and reduces memory usage.
125
Resource Acquisition Is Initialization (RAII)
RAII is an idiom rather than a design pattern, but it is essential for managing resources (e.g., memory, files) in C++ and other languages. It ensures that resources are properly initialized and released automatically.
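RAII proper is a C++ idiom (destructors release resources when objects go out of scope); Python's closest analog is the context-manager protocol. A minimal sketch of that analog, with an illustrative class:

```python
class ManagedResource:
    # Acquisition in __enter__, guaranteed release in __exit__,
    # even when an exception is raised inside the with-block.
    def __init__(self):
        self.acquired = False
        self.released = False
    def __enter__(self):
        self.acquired = True
        return self
    def __exit__(self, exc_type, exc, tb):
        self.released = True
        return False  # do not swallow exceptions
```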
126
Scalability
Designing software that can handle increased loads and data volume without sacrificing performance and responsiveness.
127
Flexibility and Extensibility
Creating software that can easily accommodate future changes and additions to features or functionality.
128
Modularity
Breaking down complex systems into smaller, manageable modules to promote code organization and maintainability.
129
Maintainability
Designing software that is easy to understand, modify, and enhance over its lifecycle.
130
Performance Optimization
Balancing performance considerations and resource usage while ensuring the software meets its performance requirements.
131
Concurrency and Multithreading
Handling multiple threads and concurrent processes without introducing race conditions and synchronization issues.
132
Security
Addressing potential vulnerabilities and ensuring that the software is protected against security threats and attacks.
133
Interoperability
Ensuring that the software can interact and integrate seamlessly with other systems and technologies.
134
User Experience (UX)
Creating intuitive and user-friendly interfaces to enhance user satisfaction and usability.
135
Error Handling and Fault Tolerance
Devising robust error-handling mechanisms to detect, report, and recover from errors gracefully
136
Data Management
Designing efficient data storage, retrieval, and manipulation mechanisms to handle large datasets effectively.
137
Integration of Third-Party Services
Incorporating external APIs, libraries, and services into the system, and handling their versioning, rate limits, outages, and interface changes without destabilizing your own software.
138
"Cross-Platform Compatibility"
Ensuring that the software functions correctly on different platforms and devices.
139
Adhering to Standards and Regulations
Complying with industry standards, legal regulations, and best practices related to the software's domain.
140
Code Reusability
Maximizing code reuse to minimize duplication and improve maintainability.
141
Version Control and Collaboration
Effectively managing code changes and facilitating collaboration among team members.
142
Abstraction
Encapsulate implementation details and create abstract interfaces to represent the functionality of modules. Abstract classes, interfaces, and inheritance allow different implementations to share common behavior, enhancing code reuse.
143
Design Patterns
Design patterns provide proven solutions to recurring design problems and often encourage code reuse.
144
Dependency Injection
Use dependency injection to inject dependencies into classes rather than creating them within the class itself. This promotes loose coupling and allows for swapping out implementations without modifying the dependent class.
145
"Separation of Concerns" for Reusability
Separate business logic from presentation and data access. This separation ensures that business logic can be reused without being tied to specific user interfaces or data sources.
146
Libraries and Frameworks for reusability
Leverage existing libraries, frameworks, and open-source components that are designed for reusability. Many popular libraries offer reusable functionalities that can save development time and effort.
147
Generic Programming for Reusability
Use generics and templates to create code that works with various data types. This approach allows algorithms and data structures to be reused with different data types, increasing code versatility.
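A minimal Python sketch using `typing` generics: one `Stack` implementation is reused for any element type.

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    # One implementation, reusable for ints, strings, or any other type.
    def __init__(self):
        self._items: list[T] = []
    def push(self, item: T) -> None:
        self._items.append(item)
    def pop(self) -> T:
        return self._items.pop()
```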
148
Single Responsibility Principle (SRP) for code reusability
Want to design classes with a single responsibility, making them more focused and reusable. Classes with clear responsibilities are easier to understand and more likely to be reused in different contexts.
149
APIs for Reusability
Design clear and intuitive APIs for reusable components. Well-designed APIs make it easier for other developers to understand and utilize the code effectively.
150
Singleton Pattern
The Singleton Pattern ensures that a class has only one instance and provides a global point of access to it. This pattern can be useful when certain resources or objects need to be shared across the application.
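A minimal Python sketch (one common way among several; `Config` is an illustrative name): `__new__` hands back the same instance on every call.

```python
class Config:
    _instance = None

    def __new__(cls):
        # Create the instance once; every later call returns the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance
```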
151
Mixin Inheritance Pattern
The Mixin Inheritance Pattern allows a class to inherit from one or more mixins to acquire the behaviors and properties of those mixins. This pattern enables code to combine functionalities from multiple sources.
152
Horizontal Scaling
Design the system to scale horizontally by adding more servers or instances to distribute the workload. This approach allows you to handle increased traffic by adding more resources.
153
Load Balancing
Implement load balancing to evenly distribute incoming requests across multiple servers or instances. Load balancers help prevent bottlenecks and ensure resources are efficiently utilized.
154
Stateless Architecture
Minimize server-side state as much as possible. Stateless architecture allows requests to be handled independently, making it easier to add or remove servers without affecting the application's overall state.
155
Caching
Use caching mechanisms to store frequently accessed data and reduce the need for repeated computations. Caching improves response times and reduces the load on backend services.
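A minimal Python sketch using the standard library's `functools.lru_cache` (the lookup function stands in for a slow computation or backend call; the call counter is just for demonstration):

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def expensive_lookup(key: str) -> str:
    # Pretend this hits a database; the cache makes repeats free.
    calls["count"] += 1
    return key.upper()
```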
156
Asynchronous Processing
Offload time-consuming tasks to asynchronous queues or background jobs. This approach frees up resources to handle incoming requests more efficiently.
157
Database Optimization
Optimize database queries, use indexing, and denormalize data where appropriate. Proper database design and optimization are crucial for handling increased data loads.
158
Microservices Architecture
Divide the application into smaller, independent services that can be deployed and scaled separately. Microservices allow you to scale specific parts of the application as needed.
159
Content Delivery Networks (CDNs) for scalability
Utilize CDNs to cache and distribute static assets, reducing server load and improving content delivery speed for users globally.
160
Auto-Scaling
Implement auto-scaling mechanisms to automatically add or remove resources based on demand. Cloud platforms like AWS and Azure offer auto-scaling features for easy scalability.
161
Performance Monitoring and Profiling
Continuously monitor the application's performance and identify potential bottlenecks or resource-intensive operations. Profiling helps optimize critical sections of code.
162
Distributed Caching
Use distributed caching systems to share cached data across multiple nodes, enabling better utilization of memory resources.
163
Decoupling Components
Want to decouple components to promote independent scaling. Services should communicate through well-defined APIs, allowing you to scale individual components as needed.
164
"Design for Failure"
Plan for potential failures and implement redundancy and failover mechanisms to ensure high availability and fault tolerance.
165
Cloud Computing for Scalability
Utilize cloud computing platforms that offer scalable infrastructure, allowing you to adjust resources as demand fluctuates.
166
Hotspots
Specific areas or components of a software system that experience a disproportionately high volume of activity or usage compared to other parts of the system. Hotspots can take the form of specific functions, classes, database tables, or any other resource that is frequently accessed or heavily utilized, and they can significantly hurt scalability.
167
Flexibility
Refers to the ability to adapt to changing requirements, integrate new features, and maintain the system over time.
168
Configuration and Externalization
Externalize configuration settings and parameters from the code. This approach allows changes to be made without recompiling the application, making it more flexible and adaptable.
169
Feature Toggles
Use feature toggles or feature flags to enable or disable specific features at runtime. This technique allows for easy experimentation and enables you to roll back or activate features without redeploying the entire application.
170
Configuration Management for Flexibility
Adopt a robust configuration management process to manage variations between different deployments and environments, making it easier to adapt the software for different use cases.
171
Robust
Refers to the ability of a software system to remain stable, reliable, and performant under various conditions, even when facing unexpected or erroneous inputs or events.
172
Input Sanitization
Robust systems validate and sanitize user inputs and external data to prevent security vulnerabilities and unexpected behavior caused by invalid or malicious inputs.
173
Graceful Degradation
In situations where certain features or components fail or become unavailable, a robust system should degrade gracefully, maintaining core functionality and informing users about the degraded state.
174
Compatibility
Robust software is compatible with various platforms, browsers, and operating systems, providing consistent functionality across different environments.
175
Security
Robust software prioritizes security measures to protect against potential threats and attacks, safeguarding sensitive data and ensuring system integrity.
176
Monitoring and Alerting
Robust software incorporates monitoring and alerting mechanisms to promptly detect anomalies and performance degradations, enabling timely responses and proactive maintenance.
177
Progressive Disclosure for UX design
Present complex information in a layered or progressive manner to prevent overwhelming users with too much information at once. Reveal additional details as needed.
178
User Empowerment for UX design
Allow users to have control over their actions and the product's behavior. Avoid forcing users into unwanted interactions or decisions.
179
Accessibility for UX Design
Design the product to be accessible to users with disabilities. Ensure that all users can interact with and understand the content, regardless of their abilities.
180
Unit Testing
Unit testing involves testing individual components or units of code in isolation to ensure they function as expected. Developers typically write unit tests to verify that each unit works correctly and to detect bugs early in the development process.
181
Integration Testing
Integration testing focuses on testing how different components or modules of the software interact with each other. It aims to identify issues that arise when integrating units into a larger system.
182
Functional Testing
Functional testing verifies that the software's features and functions work as intended and align with the specified requirements. Test cases are designed to validate the software's functionality against the business and user requirements.
183
User Interface (UI) Testing
UI testing checks the user interface's usability, responsiveness, and appearance. It ensures that the interface elements are correctly displayed and that users can interact with them effectively.
184
Regression Testing
Regression testing is performed after making changes or enhancements to the software to ensure that existing functionality remains unaffected. It helps detect unintended side effects that might arise due to code modifications.
185
Performance Testing
Performance testing evaluates the software's responsiveness, scalability, and resource usage under different conditions. It assesses the system's speed, stability, and capacity to handle anticipated workloads.
186
Security Testing
Security testing assesses the software's vulnerability to security threats, such as unauthorized access, data breaches, and other potential risks. It ensures that sensitive data and resources are adequately protected.
187
Usability Testing
Usability testing evaluates the software's user-friendliness and measures how easy it is for users to navigate and interact with the application.
188
Load Testing
Load testing determines how the software performs under expected and peak loads. It helps identify performance bottlenecks and ensures the system can handle concurrent user activities.
189
Stress Testing
Stress testing pushes the software beyond its limits to assess its behavior under extreme conditions. It helps identify the software's breaking point and potential failure modes.
190
Acceptance Testing
Acceptance testing involves validating that the software meets the end-users' requirements and is ready for deployment. It typically includes user acceptance testing (UAT) performed by actual end-users.
191
Exploratory Testing
Exploratory testing is an informal and unscripted approach where testers explore the software freely to discover defects and assess the overall user experience.
192
Data Migration
Updating the software may require migrating existing data to a new schema or format. Data migration can be challenging, especially when dealing with large datasets or complex data structures.
193
Rollback Plan
Having a well-defined rollback plan is essential in case the update encounters unexpected issues. Knowing how to revert to the previous version swiftly and safely is crucial.
194
Interoperability
Refers to the ability of different software systems, applications, or components to communicate, exchange data, and work together seamlessly. It is a critical aspect of software development, especially in today's interconnected and distributed computing environments
195
Communication Protocols for interoperability
Interoperability requires agreement on communication protocols that define how different systems exchange data and interact. Common examples include HTTP, REST, SOAP, and MQTT.
196
Data Formats for interoperability
Systems must agree on data formats to ensure that information can be accurately interpreted and processed by both the sender and receiver. Common data formats include JSON, XML, and CSV.
197
Sources of software standards
ISO standards, W3C recommendations, and OASIS standards
198
Middleware
Middleware technologies act as intermediaries between disparate systems, translating and facilitating communication between them to achieve interoperability.
199
Documenting your exceptions
Document every exception that any code you write can raise, so that anyone using your code can plan for it in their own exception handling.
200
Exception Handler
A section of program code that is executed when a particular exception occurs
201
Exception
An unusual event, detectable by software or hardware, that requires special processing. An exception interrupts the normal flow of execution rather than simply producing incorrect output.
202
Six options for handling exceptions
1. Assume errors will not occur (not an option) 2. Print a descriptive error message 3. Return an unusual value to indicate an error has occurred 4. Alter a status variable's value 5. Use assertions to block further execution 6. Use exception handlers
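Option 6 (an exception handler) can be sketched in Python; the recovery policy here, returning a signed infinity, is purely illustrative — real code would choose a policy appropriate to the application:

```python
def safe_divide(a: float, b: float) -> float:
    try:
        return a / b
    except ZeroDivisionError:
        # Exception handler: this block runs only when that
        # particular exception occurs.
        if a > 0:
            return float("inf")
        if a < 0:
            return float("-inf")
        return 0.0
```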
203
Four types of errors in program output
1. A user may accidentally or deliberately (hackers) enter incorrect inputs 2. The hardware may not have the resources needed to execute the program (disk drives and random access memory have size limits) 3. Hardware devices may fail or become inaccessible 4. Software components may contain defects (bugs)
204
Robustness
The ability of a system to recover following an error
205
Robustness v. Fault-Tolerance
Robustness Characteristics: A robust system can handle deviations from ideal or expected conditions without catastrophic failures or unacceptable degradation in performance. It is resilient against perturbations, uncertainties, or variations in input, environment, or operating conditions. Robustness helps a system to gracefully degrade its performance, recover, and continue functioning under less-than-ideal circumstances. Fault Tolerance Characteristics: A fault-tolerant system can detect, isolate, and recover from faults to maintain essential functionality and prevent system-wide failures. It involves designing redundancy, error detection mechanisms, and fault recovery strategies to ensure uninterrupted operation in the face of faults. Fault tolerance aims to minimize downtime and data loss, ensuring the system remains available and operational even during failures.
206
Features of Good Code
1. It works: delivers required functionality, compatibility, reliability, and security 2. It can be modified without excessive time or effort 3. It is reusable 4. It is complete ON TIME and WITHIN BUDGET: this is what makes the difference between successful and unsuccessful software companies
207
Exponential cost growth of debugging software
The longer you leave a bug in your software, the more expensive it will be to correct it
208
Cost breakdown in software engineering
Development: 25% Maintenance: 75% Main takeaway: maintenance is 3 times more expensive than development, so you need to make sure your code is as maintainable as possible
209
Expected Error rate when writing code
You can expect about one error for every ten lines of code. Example: If you have 200 lines of code, you're looking at roughly 20 errors that you will need to look out for in debugging
210
Six phases of Software Development
1. Requirements derivation phase 2. Requirements Specification phase 3. Design phase 4. Implementation phase 5. Testing and Verification phase 6. Postdelivery Maintenance phase
211
Requirements DERIVATION Phase
First phase of the software development process. The requirements typically come from the customer as a prototype and/or a high-level description of the product.
212
Requirements SPECIFICATION Phase
Comes immediately after the requirements derivation phase. Software engineers develop a detailed description of functional requirements and non-functional requirements (constraints).
213
Functional Software Requirements
Describe the specific functions or features that the software system must provide to meet the needs of its users. - Functional requirements are typically expressed as specific actions or operations that the software should perform. - They are often described in use cases, user stories, or process flow diagrams. - Functional requirements are directly related to the system's functionality and how it interacts with users, other systems, or hardware.
214
Non-functional Software requirements
Define the quality attributes or constraints that the system must satisfy. They focus on how the system should perform its functions rather than what functions it should perform. - Non-functional requirements are concerned with aspects like performance, reliability, usability, security, maintainability, and scalability. - They define the overall behavior and attributes of the system, impacting its effectiveness, efficiency, and user satisfaction. - Non-functional requirements are often harder to measure and quantify compared to functional requirements.
215
Design Phase
Architectural design (high level design) and detailed design (low-level design) - Often done using UML
216
Implementation Phase
Translation of the design into program code
217
Testing and Verification Phase
Detecting and fixing errors and demonstrating the correctness of the program
218
Postdelivery Maintenance Phase
Correct defects reported by users, and modify or enhance functionality. This phase alone is three times more expensive than all the other phases combined.
219
Software Process
A standard sequence of steps for the development or maintenance of software
220
Waterfall Process
- Development activities are conducted in the order previously presented - Each activity produces a document or product that is the input for the next activity
221
Agile Process
A family of software development processes that emphasize: - Individuals and interactions > processes and tools - Working software > comprehensive documentation - Customer collaboration > contract negotiation - Responding to change > following a plan
222
SCRUM
1. Continue breaking down the problem into smaller problems until you have a list of individual tasks (a set of related tasks is called a story) 2. Group tasks into two-week time segments called "sprints"
223
Program Specification Process
1. Start with a problem statement 2. Ask questions (inputs/outputs) 3. Describe the interaction between users and the software (use cases) *The specification should be detailed enough that a programmer not familiar with the project can follow it to produce the product
224
Three elements of Program Design
1. Abstraction 2. Information Hiding 3. Step-Wise Refinement
225
Abstraction
Model of a complex system including only key details
226
Information Hiding
Hiding data and function details to limit access to implementation details
227
Step-Wise Refinement
Iterative, incremental approach to problem solving. Comes in two varieties: 1. Top-Down 2. Bottom-Up
228
Top-Down Program Design
Basically, you are deferring the details as long as you possibly can. You start with the most abstract elements and behaviors and decompose them into more and more detail as you go.
229
Bottom-Up Program Design
Basically, you are starting with the details and working your way up to the big-picture design (usually so that you can enjoy the advantages of modular programming). EXAMPLE: If you know that your software project will involve a handful of algorithms, and that many of those algorithms will share individual steps but in a different order, then you can write the code for those individual steps first and then combine them into the larger algorithms.
230
Classical Design
-Focuses on actions (functions or operations) to be performed -Functional decomposition
231
Functional Decomposition
Functional decomposition is a method used to break down a complex system into smaller, more manageable functional components. It focuses on identifying the major functions or tasks that the system needs to perform and then decomposing these functions into smaller sub-functions or sub-tasks. This decomposition helps in understanding the functionality of the system and the relationships between different functions. Key Points: Begins with identifying the major functions or tasks of the system. Decomposes functions into smaller sub-functions or sub-tasks. Focuses on understanding the functions and their relationships.
232
Top-Down Design v. Functional Decomposition
Key Differences: Focus and Starting Point: Top-down design starts with a high-level understanding and gradually decomposes the system into smaller components, focusing on architecture and major features. Functional decomposition starts by identifying major functions or tasks and breaks them down into smaller functional components, focusing on understanding the functions and their relationships. Level of Detail: Top-down design emphasizes defining the overall architecture and major features before delving into detailed components. Functional decomposition focuses on breaking down functions into smaller sub-functions, providing detailed views of the system's functionality. Purpose: Top-down design is more concerned with the design and architectural aspects of the system. Functional decomposition is focused on understanding the functionality and tasks that the system needs to perform.
233
Metric-Based Testing
Measurable factors used to evaluate the thoroughness of testing. You need to be able to judge how much, and how complete, your testing actually is. You also need to be able to report this to your boss or customer to give confidence in your work.
234
Extreme programming
A software development methodology known for its iterative and incremental practices, promoting flexibility, collaboration, and adaptability.
235
Continuous Integration (CI)
The practice of automatically integrating code changes into a shared repository multiple times a day. Process: 1. Developers regularly merge their code changes into a shared version control repository (e.g., Git). 2. Automated build and test processes are triggered whenever new code changes are integrated. 3. This ensures early detection of integration issues and helps maintain a consistent and stable codebase.
236
Continuous Deployment
Continuous Deployment is the practice of automatically deploying code changes to production or staging environments after successful integration and testing. Process: 1. After successful integration and testing (CI), the code is automatically deployed to production or staging environments. 2. Automated deployment pipelines ensure that the software is packaged, deployed, and configured consistently across different environments. 3. This enables faster and more reliable releases to end-users.
237
Four Benefits of CI/CD
1. Accelerated Development: CI/CD automates the build, test, and deployment processes, allowing for faster development cycles and quicker feedback on code changes. 2. Enhanced Quality: Automated testing at each integration step helps catch and address issues early in the development process, improving software quality. 3. Consistency and Reliability: Automated deployment ensures a consistent and reliable process, reducing human error and ensuring identical deployments across different environments. 4. Rapid Feedback Loop: Developers receive immediate feedback on the quality and functionality of their code, enabling them to iterate and make improvements quickly.
238
CI/CD Pipeline
Represents the automated workflow from integrating code changes to deploying the software. It typically includes stages such as code build, unit testing, integration testing, packaging, and deployment.
239
Code coverage
Refers to the measure of the extent to which the source code of a software system has been tested. It quantifies the proportion of the code that has been executed during testing, providing insights into the thoroughness and effectiveness of the testing process. Four types of code coverage: 1. Statement Coverage 2. Branch Coverage 3. Function Coverage 4. Path Coverage
240
Statement Coverage
Measures the percentage of individual statements in the code that have been executed at least once during testing.
241
Branch coverage
Measures the percentage of decision branches (e.g., if-else, switch) that have been traversed during testing.
242
Function Coverage
Measures the percentage of functions or methods that have been called during testing.
243
Path coverage
Measures the percentage of unique paths through the code that have been executed.
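A hedged illustration of how these coverage measures differ on a tiny function (the function and test inputs are assumptions chosen for the example):

```python
def clamp(x, low, high):
    # Two decision points, so four branch outcomes in total.
    if x < low:
        return low
    if x > high:
        return high
    return x

# A single test such as clamp(-5, 0, 10) executes only the first return:
# it yields partial statement coverage and misses the x > high branch.
# Full branch coverage needs inputs below, above, and inside the range:
assert clamp(-5, 0, 10) == 0    # x < low branch
assert clamp(15, 0, 10) == 10   # x > high branch
assert clamp(5, 0, 10) == 5     # fall-through branch
```

Function coverage here only requires calling `clamp` once; path coverage would require exercising every distinct route through the two `if` statements.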
244
Three Benefits of Code Coverage
1. Identifies areas of the code that have not been tested, enabling the creation of additional test cases to increase coverage in those areas. 2. Assists in assessing the thoroughness of the testing process and determining the readiness of the software for release. 3. Guides developers and testers to improve the test suite by targeting untested or under-tested parts of the code.
245
15 ways to improve code reliability.
1. Read and obey the standards 2. Use consistent code formatting and style 3. Embrace modular programming (break down the code into smaller, manageable modules or functions) 4. Use version control systems (like Git) 5. Do regular reviews: have your team try to 'break' your code 6. Prioritize error handling (implement robust error handling mechanisms to gracefully handle exceptions, errors, and edge cases) 7. Embrace the "Don't Repeat Yourself" (DRY) principle (reuse common functionality through functions, modules, or libraries to maintain consistency and reduce the risk of errors) 8. Perform code refactoring (refactoring helps optimize performance, reduce technical debt, and eliminate potential sources of errors) 9. Manage your dependencies carefully 10. Perform code analysis (utilize static code analysis tools to identify potential bugs, code smells, and deviations from coding standards) 11. Document your code 12. Optimize CAREFULLY (premature optimization can introduce bugs and make the code less reliable) 13. Perform regression testing: after making changes or updates to the codebase, re-run all relevant tests to ensure that the modifications haven't inadvertently introduced new bugs or affected existing functionality 14. Plan for scalability and load testing (design your code to handle increased loads gracefully; conduct load tests to simulate heavy usage and identify potential bottlenecks or failure points) 15. Stay up to date with the latest best practices, tools, and technologies
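The DRY principle (item 7) can be sketched in a few lines of Python; the names below are illustrative assumptions, not from any real codebase:

```python
# Hedged sketch of the DRY principle (item 7 in the list above).
# Before: both functions below duplicated `raw.strip().title()` inline,
# so a fix to the cleanup rule had to be made in two places.
# After: the shared logic lives in one helper, so a fix lands everywhere.

def normalize_name(raw):
    # Single shared implementation of the name-cleanup rule.
    return raw.strip().title()

def greet_user(raw):
    return f"Hello, {normalize_name(raw)}!"

def address_label(raw):
    return f"To: {normalize_name(raw)}"

assert greet_user("  ada lovelace ") == "Hello, Ada Lovelace!"
assert address_label("ada lovelace") == "To: Ada Lovelace"
```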
246
Data Dictionary
The Data Dictionary defines the structure, format, and meaning of data used within the system, ensuring consistency and understanding of data elements across the project.
247
Human-System Interface Design Document
The Human-System Interface Design Document provides guidelines and specifications for designing the user interface, ensuring it is intuitive, efficient, and aligns with user needs.
248
Operational Maintenance Plan
The Operational Maintenance Plan details how routine maintenance, updates, and patches will be applied to the system during its operational phase to ensure optimal performance and security.
249
System Configuration Index (SCI)
The System Configuration Index is a detailed index that correlates configuration items to specific versions or baselines, aiding in tracking changes and updates throughout the system development life cycle.
250
Quality Assurance Plan
The Quality Assurance Plan outlines the processes, procedures, and standards that will be employed to ensure the quality and correctness of the system throughout its development and maintenance.
251
Lessons Learned Report
The Lessons Learned Report summarizes the experiences and insights gained from the software development process. It provides valuable feedback for future projects and helps improve processes.
252
Configuration Management Database
The Configuration Management Database is a repository that tracks the configuration items, their versions, and relationships within the system. It supports configuration management by providing a centralized source of information.
253
Software Lifecycle Plan
The Software Life Cycle Plan outlines the phases, activities, and tasks involved in the system's entire life cycle, from concept to retirement. It helps manage the progression of the project from initiation to completion.
254
RAM Analysis Report
The RAM Analysis Report assesses the system's reliability, availability, and maintainability characteristics. It helps identify potential reliability issues and guides decisions to improve system performance and minimize downtime.
255
System Security Plan
The System Security Plan details the security measures and protocols implemented to protect the system from unauthorized access, data breaches, and cyber threats.
256
User Acceptance Test (UAT) Plan
The User Acceptance Test Plan outlines the procedures for testing the system's functionality from the user's perspective. It involves end-users validating that the system meets their needs and requirements.
257
Configuration Management Plan
The Configuration Management Plan defines how changes to the system's components, documents, and artifacts will be managed throughout the development process. It ensures that changes are controlled, documented, and properly communicated.
258
V&V Plan
The Verification and Validation (V&V) Plan outlines the strategy and approach for testing and validating the system. It defines the testing procedures, methodologies, and acceptance criteria to ensure that the system meets its requirements and functions as intended.
259
Scope
"scope" refers to the defined boundaries, features, functionalities, and deliverables of a software development project. It outlines the extent and depth of what will be included or excluded from the project. The scope essentially defines what the project will achieve and what it will not. Includes Six Parameters: 1) Features and Functionalities 2) Requirements Inclusions and Exclusions 3) Constraints and Assumptions 4) Data Inclusions and Exclusions 5) Interfaces 6) Quality Attributes
260
Role of Quality Attributes in software engineering
Describes the non-functional requirements related to performance, usability, reliability, security, and other quality aspects that need to be addressed in the project.
261
Role of Data Interfaces in Software engineering
Specifies the external systems, devices, or applications that the software will interact with or connect to.
262
Role of Data Inclusions and Exclusions in Software engineering
Defines the data that will be used or manipulated by the software, including data types, sources, and data-related functionalities.
263
Role of Constraints and Assumptions in Software engineering
Identifies the limitations or restrictions that will affect the project, such as time, budget, technology constraints, and any assumptions made during project planning.
264
Role of Requirements Inclusions and Exclusions in Software Engineering
Clearly outlines what is part of the project's requirements and what is not. This helps manage stakeholder expectations and prevent scope creep.
265
Role of "Features and Functionalities" in software engineering
Describes the specific capabilities and functions that the software will possess, typically based on the requirements gathered from stakeholders.
266
Defensive Code
Refers to a programming approach and practice that focuses on cautious error handling. Helps ensure that a program behaves robustly and gracefully even in the face of unexpected situations
267
Nine key elements of Defensive Code
1) Error Handling 2) Input Validation: Validating and sanitizing input data from users or external sources to prevent potential security vulnerabilities, buffer overflows, or other harmful effects that could result from malicious or incorrect input. 3) Boundary Checks: Verifying and validating boundaries and constraints for data, arrays, and other data structures to prevent issues like buffer overflows or out-of-bounds access, improving program robustness. 4) Resource Management: Properly managing resources such as memory, file handles, database connections, or network sockets by releasing them when they are no longer needed to prevent memory leaks or resource exhaustion. 5) Fail Safely: Ensuring that when an unexpected error occurs, the system fails in a safe and predictable manner, avoiding data corruption, crashes, or adverse effects on other parts of the system. 6) Testing and Validation: Rigorous testing of the code to identify potential weaknesses, vulnerabilities, or edge cases that might cause errors. Automated testing and manual testing are essential components of defensive coding. 7) Code Modularity: Encouraging modularity and breaking down code into manageable, well-encapsulated units. This improves maintainability and allows for easier identification and isolation of errors. 8) Documentation and Comments: Providing comprehensive and clear documentation, along with meaningful comments in the code, to aid in understanding the purpose and behavior of the code. This helps other developers and future maintainers handle the code effectively. 9) Robust Algorithms: Selecting algorithms and data structures that are robust and efficient, considering worst-case scenarios and handling them gracefully.
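A few of the elements above (input validation, error handling, and failing safely) can be combined in a short hedged Python sketch; the function names and rules are assumptions made for illustration:

```python
# Illustrative defensive-code sketch; all names and rules are assumptions.

def read_positive_int(text):
    # Input validation: reject malformed or out-of-range input early,
    # with a specific error rather than letting bad data propagate.
    try:
        value = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"not an integer: {text!r}")
    if value <= 0:
        raise ValueError(f"must be positive: {value}")
    return value

def safe_average(values):
    # Fail safely: an empty list returns None instead of crashing
    # with ZeroDivisionError, a predictable outcome the caller can check.
    if not values:
        return None
    return sum(values) / len(values)

assert read_positive_int("7") == 7
assert safe_average([]) is None
assert safe_average([2, 4]) == 3.0
```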
268
12 Elements of effective error handling
1) Detecting when/how the errors occur 2) Use meaningful error messages 3) Categorize errors based on their severity, origin, or impact and tailor error handling based on the error category 4) Use error codes, error objects, or enums to communicate error states 5) Use exception handling mechanisms (try-catch) to gracefully handle errors and exceptional conditions 6) Implement logging mechanisms to record errors (with contextual information such as timestamp, user, request details, and stack traces) 7) Implement graceful degradation strategies to ensure that the system remains functional even when errors occur 8) Implement retry mechanisms for transient errors, allowing the system to automatically retry the operation after a delay or a specified number of attempts 9) Write unit tests to cover error scenarios, ensuring that error paths are tested to validate correct error handling and recovery mechanisms 10) Continuously monitor the software in production to detect errors and performance issues in real time 11) Establish a feedback loop between users and developers to gather insights on encountered errors and prioritize improvements based on user feedback 12) Define and adhere to error handling standards and guidelines within the development team or organization to ensure consistent and effective error handling practices.
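The retry mechanism for transient errors (item 8) might look like this minimal Python sketch; the helper, the backoff values, and the simulated failure are all illustrative assumptions:

```python
import time

def retry(operation, attempts=3, delay=0.01):
    # Retry a transient operation up to `attempts` times (item 8 above).
    # The delay doubles after each failure (simple exponential backoff).
    last_error = None
    for i in range(attempts):
        try:
            return operation()
        except Exception as exc:  # in real code, catch a narrower exception type
            last_error = exc
            time.sleep(delay * (2 ** i))
    raise last_error

# Simulated transient failure: fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

assert retry(flaky) == "ok"
```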
269
'Warnings' in software development
Cautionary messages about potential issues. The developer is notified, but the program is allowed to proceed.
270
'Errors' in software development
Critical issues that prevent normal program execution and require immediate attention and resolution.
271
'Exceptions' as a problem
Refers to an unexpected or exceptional condition that occurs during the execution of a program. This could be a divide-by-zero operation, an attempt to access an invalid memory location, or any situation that deviates from the normal flow of the program.
272
'Exceptions' as a mechanism
In some (but not all) programming languages, an "exception" is also a programming construct or mechanism provided by the language to deal with these exceptional conditions. It allows developers to write code that handles these unexpected conditions in a structured way, preventing the program from crashing and enabling recovery or appropriate actions.
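A minimal Python sketch of this mechanism, using the divide-by-zero condition mentioned earlier (the helper name and sentinel return value are assumptions):

```python
# The exceptional condition (division by zero) is caught by the language's
# exception mechanism and handled in a structured way instead of crashing.

def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Recover with a sentinel value rather than terminating the program.
        return None

assert safe_divide(10, 2) == 5.0
assert safe_divide(10, 0) is None
```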
273
'Exceptions' in software engineering
An "exception" can refer to an unexpected condition or problem that occurs during program execution. An "exception" can also refer to the mechanism provided by programming languages to handle and recover from these unexpected conditions in a structured and controlled manner. The context in which we use the term "exception" depends on whether we're discussing the problem itself or how the programming language allows us to handle and recover from such problems.
274
Error Handling
Error handling is the process of anticipating, detecting, and managing errors that may occur during the execution of a software program. The primary goal is to ensure that the software can respond to these exceptional conditions in a controlled and predictable manner (preventing undesirable consequences)
275
Things software engineers want to avoid
1) Crashes and application failures 2) Compromised data CIA (data confidentiality, data integrity, and data availability) 3) Corruption of data
276
Input Validation
Involves examining and verifying data provided by a user, a system, or another application before it is processed. The objective of input validation is to ensure that the data meets specified criteria, conforms to expected formats, and is safe for further use within the software system
277
Mechanisms for Input Validation
1) Implementing validation both on the client side AND the server side 2) Detecting and preventing injection attacks, such as SQL injection, XML injection, or command injection 3) Data Sanitization: Cleaning and removing any potentially harmful or unnecessary characters, scripts, or tags from the input to prevent cross-site scripting (XSS) attacks or SQL injection attacks 4) "Whitelisting": allowing only approved characters or patterns 5) "Blacklisting": disallowing specific characters or patterns known to be harmful or potentially problematic 6) "Sanity Checking" the user input: -Correct format? -Correct datatype? -Correct length/size? -Within expected range of values?
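Sanity checking and whitelisting can be sketched together in a few lines of Python; the field name, length limits, and allowed-character rule are all illustrative assumptions:

```python
import re

def validate_username(value):
    # Sanity checks from the list above, applied in order.
    if not isinstance(value, str):
        return False                  # correct datatype?
    if not (3 <= len(value) <= 20):
        return False                  # correct length/size?
    # Whitelisting: allow only letters, digits, and underscores.
    return re.fullmatch(r"[A-Za-z0-9_]+", value) is not None

assert validate_username("alice_01") is True
assert validate_username("ab") is False               # too short
assert validate_username("bad;drop table") is False   # disallowed characters
```

Whitelisting is generally preferred over blacklisting here: it is easier to enumerate what is allowed than to anticipate every harmful pattern.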
278
Boundary Checks
Verifying and validating boundaries and constraints for data, arrays, and other data structures to prevent issues like buffer overflows or out-of-bounds access, improving program robustness.
279
DevOps
DevOps is a cultural and organizational approach that aims to bridge the gap between development (Dev) and operations (Ops) teams. It promotes collaboration, communication, and integration between these traditionally separate teams to streamline the software development lifecycle, improve efficiency, and accelerate delivery. Focus: DevOps primarily focuses on integrating development and operations, enhancing collaboration, automation, and continuous delivery practices to achieve faster software releases. Goals: Goals of DevOps include faster delivery, more frequent releases, improved quality, and a more efficient and collaborative development process. Security Integration: While security is considered in DevOps, it's not the primary focus. Security measures are integrated as part of the development process but may not be as comprehensive or deeply ingrained compared to DevSecOps.
280
DevSecOps
DevSecOps extends the DevOps philosophy by incorporating security principles and practices directly into the software development lifecycle. It emphasizes "security as code" and shifts security left, integrating it from the very beginning of the development process. Focus: DevSecOps places equal importance on integrating security into the DevOps model, ensuring security measures are integrated early and throughout the development lifecycle. Goals: Goals of DevSecOps include not only faster delivery and improved quality but also enhanced security, reduced vulnerabilities, and more proactive threat detection and mitigation. Security Integration: Security is a core and integral component of the development process. Automated security checks, testing, and secure coding practices are incorporated from the outset.
281
DevSecCompliance
DevSecCompliance expands on DevSecOps by emphasizing compliance with regulatory requirements and industry standards. It ensures that software development not only incorporates security but also adheres to relevant compliance standards. Focus: DevSecCompliance adds a layer of compliance-focused practices to DevSecOps, ensuring that security measures meet regulatory and compliance requirements. Goals: Goals include aligning development and security practices with compliance mandates, reducing legal and regulatory risks, and ensuring that the software complies with industry standards. Security Integration: Security is tightly integrated, addressing both security concerns and compliance requirements throughout the development lifecycle.
282
"Edit time" in software engineering
Phase in which the programmer is writing and editing the source code, designing the architecture, and planning the software.
283
"Build Time" in software engineering
Phase in which the programmer is compiling the source code into machine code or an intermediate form (e.g., bytecode in languages like Java). Linking various code modules, libraries, and dependencies to create the final executable or deployable unit.
284
Eight Steps of "Edit Time" In Software Engineering
1) Requirement Analysis: Gather and analyze software requirements to understand what the software needs to achieve. 2) Specification and Planning: Define specifications based on requirements and plan the development process, including milestones, timelines, and resources. 3) System Design: Design the overall system architecture, including high-level design, module interactions, and technology choices. 4) Detailed Design: Create detailed designs for each component, defining algorithms, data structures, interfaces, and interactions. 5) Coding: Write the actual source code based on the designs, following coding standards and best practices. 6) Unit Testing: Develop and run tests to verify that individual units (e.g., functions, methods) of code function correctly. 7) Integration Testing: Test the integration of multiple units or components to ensure they work together as intended. 8) Code Review: Conduct peer reviews to ensure code quality, adherence to coding standards, and identify potential issues.
285
Four steps of the Build Time process
1) Setup and Configuration: Prepare the development environment, configure build tools, and set up necessary dependencies. 2) Compilation: Use compilers or interpreters to translate the source code into machine-readable instructions (e.g., machine code, bytecode). 3) Linking: Link various compiled modules, libraries, and dependencies to create the final executable or deployable unit. 4) Artifact Generation: Generate deployable artifacts, which could be executable files, libraries, or packages, based on the linked components.
286
Six steps of the Runtime process
1) Loading: Load the compiled code and required data into memory to prepare for execution. 2) Initialization: Initialize necessary variables, data structures, and components before the main execution begins. 3) Execution: Execute the software, allowing users to interact with the application and perform desired tasks. 4) Error Handling: Implement mechanisms to handle errors and unexpected situations that may occur during runtime. 5) Performance Monitoring: Monitor the software's performance, resource usage, and responsiveness during execution. 6) Shutdown and Cleanup: Gracefully shut down the application, release resources, and perform necessary cleanup tasks before exiting.
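These runtime steps can be sketched as a single Python entry point; the doubling task stands in for real work and everything here is an illustrative assumption:

```python
# Hedged sketch mapping a tiny program onto the runtime steps above.

def run_app(inputs):
    log = []
    state = {"results": []}          # loading and initialization
    try:
        for x in inputs:             # execution: the application's real work
            state["results"].append(x * 2)
        log.append("ok")
    except TypeError:                # error handling for unexpected input
        log.append("error")
    finally:                         # shutdown and cleanup always run
        log.append("cleanup")
    return state["results"], log

assert run_app([1, 2, 3]) == ([2, 4, 6], ["ok", "cleanup"])
assert run_app([1, None]) == ([2], ["error", "cleanup"])
```

The `finally` block is the key design choice: cleanup runs on both the success path and the error path, matching the "shutdown and cleanup" step.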
287
Dynamic Application Security Testing (DAST)
- DAST involves testing an application during RUNTIME (while it's running). It interacts with the live application, sending various requests and inputs to identify vulnerabilities and security flaws and simulate real-world attacks -It can mimic SQL injection, cross-site scripting (XSS), cross-site request forgery (CSRF), and more. -It dynamically analyzes the application's responses to the simulated attacks, looking for signs of vulnerabilities. - The vulnerabilities and security issues identified during DAST provide valuable feedback to developers, enabling them to fix the detected problems and improve the application's security.
288
Continuous Integration (CI)
A software development practice that involves the frequent and automated integration of code changes into a shared repository, typically several times a day. Each integration triggers an automated build and a suite of automated tests to ensure that the code changes don't break the application. The primary goal of continuous integration is to enable early detection of integration issues and to ensure that the software remains functional and maintainable throughout the development process.
289
Continuous Integration/Continuous Deployment (CI/CD) pipeline
An automated process that facilitates the integration of code changes, automated testing, and continuous deployment of software to production. It's a set of principles and practices aimed at automating and streamlining the software delivery process, from development to production, to achieve faster, more reliable releases.
290
Continuous Integration/Continuous Deployment (CI/CD) pipelines stages
1) Code Integration (Continuous Integration): Developers integrate their code changes into a shared version control repository, triggering an automated build process. 2) Automated Build and Testing: The CI server automatically builds the code, compiles it, and runs automated tests to ensure that the changes didn't introduce any errors or regressions. 3) Artifact Generation: The CI process generates deployable artifacts, which could be executable files, binaries, container images, or any other deployable package. 4) Deployment to Staging/Testing Environment (Continuous Deployment): The generated artifacts are deployed to a staging or testing environment, allowing further testing in an environment that closely resembles the production setup. 5) Automated Testing (Continuous Testing): The application undergoes various automated tests, including unit tests, integration tests, performance tests, security tests, and other types of checks to validate its functionality and quality. 6) Deployment to Production (Continuous Deployment): If all tests pass successfully and the code is deemed ready, the CI/CD pipeline automatically deploys the artifacts to the production environment. 7) Monitoring and Feedback: Once in production, the application is closely monitored to ensure it performs as expected. Monitoring data and user feedback are collected to guide future development and improvements.
291
Key benefits of a CI/CD pipeline
1) Faster Delivery: Automation reduces manual intervention, allowing for quicker and more frequent releases. 2) Early Bug Detection: Automated tests catch bugs early in the development cycle, reducing the cost of fixing issues. 3) Consistency and Reliability: Automation ensures a consistent and reliable deployment process every time. 4) Increased Collaboration: Teams collaborate more effectively since everyone works on a shared and consistent integration process. 5) Rapid Feedback: Developers receive immediate feedback on their code changes, encouraging continuous improvement. 6) Efficiency and Cost Reduction: Automation and streamlined processes reduce manual effort and operational costs.
292
Static Application Security Testing (SAST)
Testing from the "inside out": looking at the source code and predicating the output or behavior It's an analysis of a computer software performed without actually executing anything. Literally just looking at the source code -Examines the code files for security vulnerabilities and coding errors. The analysis includes reviewing the syntax, structure, and logic of the code. -Use predefined security patterns, rules, and signatures to identify known security issues. These patterns encompass common coding mistakes, insecure coding practices, and vulnerabilities like SQL injection, cross-site scripting, etc. -Analyze the paths that the application's control flow can take, searching for vulnerabilities related to incorrect or insecure control flow. -Trace the flow of data within the application to identify potential security risks related to how sensitive information is handled and manipulated
293
Dynamic Application Security Testing (DAST)
Testing from the "outside in": looking at the outputs and behavior and drawing conclusions about the source code It's an analysis of a computer software that tests the application by executing the it, with NO knowledge of the code/technologies/frameworks -The DAST tool navigates through the application, simulating how a user would interact with it, injecting various inputs to test for vulnerabilities like SQL injection, brute force attacks, cross-site scripting (XSS), and other security weaknesses. -It analyzes the application's responses, looking for signs of vulnerabilities based on deviations from expected behavior.
294
Docker Image
A Docker image is an immutable (unchangeable) file that contains the source code, libraries, dependencies, tools, and other files needed for an application to run. Key components of a Docker image: Filesystem Snapshot: An image is essentially a filesystem snapshot, capturing all the files and configurations needed to run an application. Metadata and Configuration: It includes metadata and configuration settings specifying how the application should run, what processes to start, which ports to expose, and other runtime settings. Layers: Docker images are composed of layers. Each layer represents a specific change or addition to the filesystem. Layers are additive and can be shared between multiple images, improving storage and download efficiency. Due to their read-only quality, these images are sometimes referred to as snapshots. They represent an application and its virtual environment at a specific point in time. This consistency is one of the great features of Docker: it allows developers to test and experiment with software in stable, uniform conditions. Since images are, in a way, just templates, you cannot start or run them. What you can do is use that template as a base to build a container.
295
Benefits of Using a Docker Image
Portability: Docker images are portable and can be run on any system that supports Docker, ensuring consistency across different environments. Reproducibility: Docker images ensure that the application runs in the same way regardless of where it's deployed. Efficiency: Images are built using layers, allowing for efficient storage, caching, and distribution of common layers. Isolation: Containers created from images are isolated from the host system and other containers, enhancing security and minimizing conflicts.
296
Docker Container v. Docker Image
Definition and Purpose: A Docker image is a lightweight, standalone, and executable software package that includes the application code, runtime, system libraries, dependencies, and configurations. It's a static and read-only snapshot of an application and its environment at a specific point in time. But a Docker container is a runnable instance of a Docker image. It's a lightweight and portable executable that contains the application and its dependencies, running in an isolated and consistent environment. Containers are dynamic and can be started, stopped, deleted, and managed. Immutability: Docker images are immutable and cannot be changed after creation. Any changes result in the creation of a new image. Containers on the other hand can be modified during their runtime; however, these modifications are not persisted by default. If you want to keep the changes, you can commit the container to create a new image. Lifecycle: Images serve as a blueprint for containers. They are used to create and run containers, which are the runtime instances of images. Containers are created from images and can be started, stopped, deleted, or restarted. Usage: Images are used for distribution, sharing, and deployment. Developers create and share images, and these images serve as a foundation for creating containers, which are the actual executable units that run applications and services. Containers encapsulate the runtime environment and allow applications to run consistently across different environments. Persistence: Docker images are read-only and cannot be modified. Changes in the image result in the creation of a new image layer. Containers however can write data, modify the file system, and make changes during their runtime. However, these changes are lost by default when the container is removed, unless data persistence mechanisms are used (e.g., volumes).
297
Docker Container
A Docker container is a runnable instance of a Docker image: a lightweight and portable executable that contains the application and its dependencies, running in an isolated and consistent environment. Containers are dynamic and can be started, stopped, deleted, and managed. This means that containers can write data, modify the file system, and make changes during their runtime; however, these modifications are not persisted by default and are lost when the container is removed, unless data persistence mechanisms are used (e.g., volumes). If you want to keep the changes, you can commit the container to create a new image. Containers are the actual executable units that run applications and services. They encapsulate the runtime environment and allow applications to run consistently across different environments.
298
Docker
Docker is an open-source platform and a set of tools designed to automate the deployment, scaling, and management of applications using containers. It allows you to package an application and its dependencies into a standardized unit called a container, ensuring that the application runs consistently across various environments. Docker is widely used in software development, testing, and production environments. It simplifies the process of setting up development environments, accelerates continuous integration and continuous deployment (CI/CD), and enhances the scalability and reliability of applications.
299
Dockerfile
A Dockerfile is a simple text-based configuration file used to define the instructions and steps needed to create a Docker image. It specifies the base image, dependencies, environment setup, and how the application should run.
300
Docker Engine
The Docker Engine is the core of the Docker platform. It's a client-server application that manages and orchestrates containers. It includes a daemon (server) that manages the container lifecycle and a command-line interface (CLI) that allows users to interact with Docker.
301
Docker Hub
Docker Hub is a cloud-based registry that hosts a vast collection of pre-built Docker images. It allows users to share, distribute, and collaborate on Docker images. Users can also publish their own images for public or private use.
302
Continuous Deployment
Continuous Deployment is a software delivery practice where every code change that passes automated testing is automatically deployed to production or a production-like environment without manual intervention. The aim of CD is to release reliable and deployable software increments to production at any point, enabling a streamlined and automated release process.
303
Continuous Integration (CI) v. Continuous Deployment (CD)
1) Scope: CI focuses on integrating code changes and running automated tests to ensure the changes don't break the application, but it does not automatically deploy the code to production. CD includes CI but goes further by automating the deployment of code changes that pass testing directly to production or production-like environments.
2) Automated Deployment: CI does not involve automated deployment to production; it's primarily concerned with integration and testing. CD automates the deployment process, ensuring that every successful integration can potentially be released to production.
3) Deployment Decision: In CI, the decision to deploy to production is manual and separate from the integration process. In CD, once the code passes tests, the decision to deploy is automated, aiming for immediate or near-immediate production release.
4) Frequency of Deployment: CI does not dictate how often deployments should happen; it's primarily about integrating code frequently. CD advocates for automating deployments as frequently as possible, often after every successful integration.
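The distinction can be sketched as a hypothetical GitHub Actions-style pipeline (job names, the `make test` target, and the `deploy.sh` script are illustrative assumptions). CI ends after the test job; CD adds a deploy job gated only on the tests passing, with no manual approval step:

```
name: ci-cd
on: [push]
jobs:
  test:               # CI: integrate and verify every change
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
  deploy:             # CD: automatic release once tests pass
    needs: test       # the only gate is the successful test job
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh production   # hypothetical deploy script
```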
304
"Secure Defaults"
One of the main elements of the Zero-Trust security principle: systems, applications, or devices are designed and configured with the most secure settings by default. These defaults provide a strong baseline level of security from the moment a system or application is deployed, minimizing potential vulnerabilities and risks.
305
"Fail Securely"
Principle in system design that emphasizes ensuring that when failures or errors occur within a system, they do so in a manner that minimizes potential harm, damage, or security risks. The objective is to have a system fail in a predictable, controlled, and safe manner to protect users, data, and the overall system's integrity.
306
"Secured environment" in software development
Refers to a controlled and protected computing environment where security measures are implemented to safeguard data. The primary goal of a secured environment is to ensure the confidentiality, integrity, and availability of information and services while minimizing the risk of security breaches. When a failure occurs, the impact on users should be minimized: critical services should still function, and users should be provided with clear and informative messages about the issue.
307
"Graceful degradation" in software development
The system should degrade gracefully in the face of failure, allowing essential functionalities to continue operating even if non-critical components or features are unavailable.
308
"Defense in Depth"
Involves implementing multiple layers of redundant security measures to protect an organization's data, so that if one layer fails, the remaining layers continue to provide protection.
309
Perimeter Security
An antiquated cybersecurity strategy built around the concept of a trusted internal network protected by a strong perimeter defense (often referred to as the "castle and moat" approach). With the evolution of remote work, cloud computing, and the rise of sophisticated cyber threats, the perimeter-based security model proved inadequate: attackers found ways to bypass perimeter defenses through means like phishing, social engineering, and exploiting vulnerabilities.
310
Zero-Trust v. Perimeter Security
Perimeter security relies heavily on fortifying the network boundary, assuming that threats are external. Once inside the network, users and devices are often given more trust, potentially creating vulnerabilities if a threat penetrates the perimeter.
Zero Trust security:
- Assumes that threats can come from both external and internal sources
- Emphasizes strict access controls and verification for every user/device/application, regardless of location
- Never assumes trust, even for those inside the network
311
Benefits of the K.I.S.S. principle in software design
1) Fewer Interfaces/Attack Surfaces
2) Easier/Faster Code Review and Analysis
3) Reduced potential for mistakes/errors
4) Easier to maintain/update
5) Faster Incident Response/Mitigation
6) Easier Compliance with standards/regulations
7) Easier to use (easier to train on)
8) Reduced need for third-party solutions
312
Principle of Least Astonishment
Suggests that a system, particularly its user interface, should behave in a way that minimizes surprise or astonishment among users when they interact with it. Indicates that systems should: - Be Consistent and Predictable - Have an Intuitive Design - Avoid Misleading Elements - Help Minimize Cognitive Load
313
K.I.S.S. principle to simplify outsourcing in software development
Find a versatile vendor that can do many of the things you are looking for instead of insisting on the best vendor for each individual need (best-in-suite over best-in-breed). The fewer third-party vendors, the better: each third-party vendor you use introduces more attack surfaces into your system.
314
Security Information Event Management (SIEM)
A comprehensive approach to security management that involves collecting, correlating, and analyzing security-related data from various sources across an organization's IT infrastructure. SIEM systems help in detecting and responding to security incidents by providing real-time insights into security events and incidents occurring within the network.
315
Spiral Process
Software development process that allows for multiple iterations of the waterfall process; each "loop" in the process represents the development of a new prototype. This solves a major criticism of the waterfall model: it allows developers to regularly return to the planning stage so that they can adapt to changing requirements. The key word here is "iterative."
316
Capability Maturity Model (CMM)
A framework used in software engineering and development to evaluate and improve the processes and practices of an organization. The ultimate goal is enhancing the quality of software products and the efficiency of development. The CMM consists of five maturity levels, each representing a different stage in the organization's software development process improvement journey:
Initial (Level 1): Processes are ad hoc and often chaotic. There is no defined process and success depends on individual efforts.
Repeatable (Level 2): Basic project management processes are established to track cost, schedule, and functionality. Processes are defined enough to repeat past successes.
Defined (Level 3): Processes are well characterized and understood. Detailed guidelines and procedures are established to standardize the software development lifecycle.
Managed (Level 4): Quantitative metrics are used to manage the software development process. Process performance is measured and controlled.
Optimizing (Level 5): Continuous process improvement is enabled by quantitative feedback and process change management. The focus is on optimizing processes based on quantitative understanding.
317
Level 1 of CMM
INITIAL: Processes are ad hoc and often chaotic. There is no defined process and success depends on individual efforts.
318
Level 2 of CMM
REPEATABLE: Basic project management processes are established to track cost, schedule, and functionality. Processes are defined enough to repeat past successes.
319
Level 3 of CMM
DEFINED: Processes are well characterized and understood. Detailed guidelines and procedures are established to standardize the software development lifecycle.
320
Level 4 of CMM
MANAGED: Quantitative metrics are used to manage the software development process. Process performance is measured and controlled.
321
Level 5 of CMM
OPTIMIZING: Continuous process improvement is enabled by quantitative feedback and process change management. The focus is on optimizing processes based on quantitative understanding.
322
IDEAL Model of Software Development
A framework used in software development for continuous process improvement. It stands for Initiating, Diagnosing, Establishing, Acting, and Learning. The IDEAL model guides organizations through a cycle of steps to drive improvement in their software development processes.
323
The "Initiate" step of the IDEAL model
Objective: Identify and initiate the need for process improvement.
Activities:
1) Recognize Improvement Opportunities: Identify areas where processes can be enhanced or streamlined.
2) Create a Vision for Improvement: Define a clear vision of what the improved processes should look like.
3) Obtain Management Support: Secure commitment and support from management for the improvement initiative.
Outcome: A clear understanding of the need for improvement and the initiation of the improvement process.
324
The "Diagnosing" step of the IDEAL model
Objective: Assess the current state of the processes and performance.
Activities:
1) Analyze Current Processes: Evaluate existing processes to identify strengths, weaknesses, and areas for improvement.
2) Collect Data and Feedback: Gather data, feedback, and insights from relevant stakeholders.
3) Benchmarking: Compare the organization's processes with industry standards or best practices.
Outcome: A thorough diagnosis of the current state of processes, identifying areas for improvement.
325
The "Establish" step of the IDEAL model
Objective: Establish specific goals and targets for process improvement.
Activities:
1) Set Improvement Goals: Define achievable and measurable improvement goals that align with the organization's objectives.
2) Plan Improvement Activities: Develop detailed plans for implementing changes and achieving the defined goals.
3) Define Metrics and Measurement Plans: Establish metrics to measure progress and success in achieving the improvement goals.
Outcome: Clear, defined improvement goals and a structured plan to achieve them.
326
The "Act" step of the IDEAL model
Objective: Implement the planned improvements and measure their effectiveness.
Activities:
1) Implement Changes: Execute the planned process improvements and changes.
2) Collect Data and Measure Performance: Gather data during and after the changes to measure process performance and improvements.
3) Analyze Results: Evaluate the outcomes of the improvements and assess their impact on processes and product quality.
Outcome: Implemented improvements and measured impact, providing valuable insights.
327
The "Learn" step of the IDEAL model
Objective: Learn from the improvements and experiences to refine the process and plan for the next cycle.
Activities:
1) Review and Evaluate: Review the results of the improvement efforts and evaluate the overall process changes.
2) Document Lessons Learned: Document lessons learned and best practices for future reference and improvement.
3) Update Processes and Plans: Use the knowledge gained to update processes and plans for the next improvement cycle.
Outcome: Insights and knowledge to further refine processes and plan for the next round of improvements.
328
Request Control
Objective: Managing incoming requests for changes or enhancements to the web application.
Scenario:
1) Request for Feature Addition: A stakeholder submits a request for adding a new feature to the web application, such as a real-time chat feature for improved user engagement.
2) Request Evaluation: The development team evaluates the request in terms of feasibility, impact on existing functionality, resource availability, and alignment with business objectives.
3) Decision: Based on the evaluation, the team decides whether to accept, decline, or defer the request.
Outcome: Effective management and assessment of requests, ensuring alignment with project goals.
329
Change Control
Objective: Managing changes to the software and ensuring they are controlled, tested, and documented.
Scenario:
1) Change Request Approval: After evaluating the request, if approved, it is considered for implementation.
2) Change Implementation: The development team implements the approved change in a controlled manner, following established procedures and guidelines.
3) Testing and Verification: The change undergoes thorough testing to ensure it doesn't introduce bugs or adversely affect existing functionality.
4) Documentation: Detailed documentation of the change, including code modifications, is recorded for future reference and audits.
Outcome: Controlled and structured application of changes, maintaining system stability and reliability.
330
Release Control
Objective: Managing the release of new versions or updates of the software to production environments.
Scenario:
1) Release Planning: The team plans the release, considering factors like feature completeness, bug fixes, and stakeholder expectations.
2) Versioning and Tagging: The release is assigned a version number, and the codebase is tagged to ensure a specific set of changes is bundled together for deployment.
3) Deployment and Monitoring: The new version is deployed to the production environment, and the team closely monitors its performance and user feedback.
4) Rollback Plan: A rollback plan is prepared in case any critical issues arise post-release, ensuring a swift return to the previous stable version if needed.
Outcome: Controlled and successful deployment of software updates with minimal disruptions to users.
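The versioning, tagging, and rollback steps can be sketched with standard git commands (the version numbers and the deploy script are illustrative assumptions):

```
# Tag the release so this exact set of changes can be redeployed or audited later
git tag -a v2.3.0 -m "Release 2.3.0: chat feature, bug fixes"
git push origin v2.3.0

# Rollback plan: check out and redeploy the previous stable tag if critical issues appear
git checkout v2.2.1
./deploy.sh production   # hypothetical deploy script
```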
331
Four main propagation techniques of viruses
1) File Infection 2) Service Injection 3) Boot Sector Infection 4) Macro Infection
332
Best way for a software application to defend itself from malware
Build virus-detection algorithms into the software. These algorithms work by detecting the ways in which viruses spread through the system: 1) File Infection 2) Service Injection 3) Boot Sector Infection 4) Macro Infection
333
Virus propagation via File Infection
In this method, the virus attaches itself to executable files or program files. When a user runs an infected program, the virus activates and replicates by attaching itself to other executable files on the system. The infected files then spread the virus to other devices when shared or transferred.
334
Virus propagation via Service Injection
Service injection involves injecting malicious code into system processes or services running in the background. The virus modifies system executables or libraries, allowing it to execute whenever the affected service is initiated. This method is harder to detect and remove since the virus is integrated into critical system components.
335
Virus Propagation via Boot Sector Infection
Viruses that use boot sector infection target the boot sector of storage devices like hard drives, SSDs, or USB drives. When an infected device is booted, the virus loads into memory and runs before the operating system starts. The virus can spread to other devices when bootable media (like a USB drive) is connected to an infected system.
336
Virus Propagation via Macro Infection
Macro viruses utilize macros, which are sequences of instructions in a document or file format like Microsoft Word or Excel. When a user opens an infected document containing macros, the virus executes and can infect the system. These viruses can replicate by attaching to other documents or spreadsheets and spreading when shared or sent via email.
337
Importance of threat modeling in software development
This is the first step in building malware protection into any software project. Five major frameworks: 1) PASTA 2) STRIDE 3) VAST 4) DREAD 5) TRIKE
338
PASTA Threat modeling framework
One of the strategies for planning built-in malware protection.
Stage 1) Definition of Objectives
Stage 2) Definition of Technical Scope
Stage 3) App Decomposition & Analysis
Stage 4) Threat Analysis
Stage 5) Weakness and Vulnerability Analysis
Stage 6) Attack Modeling & Simulation
Stage 7) Risk Analysis and Management
339
STRIDE Threat modeling framework
Helps you remember the categories of attacks you need to be aware of:
Spoofing
Tampering
Repudiation
Information disclosure
Denial of service
Elevation of privilege