Pocket Prep 18 Flashcards

1
Q

Client care representatives in your firm are now permitted to access and view customer accounts. For added protection, you’d like to build a feature that obscures a portion of the data when a customer support representative reviews a customer’s account. What type of data protection is your firm attempting to implement?

A. Encryption
B. Obfuscation
C. Tokenization
D. Masking

A

D. Masking

Explanation:
The organization is trying to deploy masking. Masking obscures data by displaying, for example, only the last four or five digits of a Social Security or credit card number. The rest of the value is replaced on screen by asterisks or dots, so the data is incomplete without the blocked or removed content.
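
A minimal sketch of this kind of display masking in Python (the function name and the sample values are invented for illustration):

```python
def mask_value(value: str, visible: int = 4, mask_char: str = "*") -> str:
    """Show only the last `visible` characters; mask the rest."""
    if len(value) <= visible:
        return value
    return mask_char * (len(value) - visible) + value[-visible:]

print(mask_value("4111111111111111"))   # credit card -> ************1111
print(mask_value("123-45-6789"))        # SSN         -> *******6789
```

Note the original data still exists in storage; only its presentation to the representative is obscured.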

Tokenization is the process of removing the data entirely and placing a token in its place. Because the scenario requires part of the original data to remain visible, tokenization does not fit.

Encryption renders data unreadable without the key. It is unusual to encrypt only part of a value, such as the first digits of a credit card number, so encryption does not fit either.

Obfuscation means to obscure or “confuse”: data that has been obfuscated leaves an attacker confused when looking at it. Encryption is one way to obfuscate data, but there are others. It does not fit here because the representative sees asterisks or dots in place of part of the value, and that is masking, not obfuscation in general.

2
Q

Which of the following types of testing focuses on software’s interfaces and the experience of the consumer?

A. Integration Testing
B. Regression Testing
C. Usability Testing
D. Unit Testing

A

C. Usability Testing

Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:

Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.
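
As an illustration of the first and last items in the list, a unit test exercises one component in isolation, and rerunning the same suite after a change serves as regression testing. A minimal sketch (the function and tests are invented for illustration):

```python
def normalize_email(address: str) -> str:
    """Hypothetical unit under test: trim and lowercase an email address."""
    return address.strip().lower()

# Unit tests: each verifies one behavior of this single component.
def test_lowercases():
    assert normalize_email("User@Example.COM") == "user@example.com"

def test_strips_whitespace():
    assert normalize_email("  a@b.com ") == "a@b.com"

# Rerunning the same suite after every code change is regression testing:
# it verifies the change has not broken existing behavior.
test_lowercases()
test_strips_whitespace()
print("all tests passed")
```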

Non-functional testing assesses the quality of the software and verifies properties that are not explicitly listed in the requirements. Load and stress testing, or verifying that sensitive data is properly secured and encrypted, are examples of non-functional testing.

3
Q

Which of the following is NOT one of the critical elements of a management plane?

A. Management
B. Orchestration
C. Scheduling
D. Monitoring

A

D. Monitoring

Explanation:
According to the CCSP, the three critical elements of a management plane are scheduling, orchestration, and management. Monitoring is not an element of the management plane.

4
Q

Compliance with which of the following standards is OPTIONAL for cloud consumers and cloud service providers working in the relevant industry?

A. G-Cloud
B. PCI DSS
C. ISO/IEC 27017
D. FedRAMP

A

C. ISO/IEC 27017

Explanation:
Cloud service providers may have their environments verified against certain standards, including:

ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud. These standards are optional but considered best practices.
PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources. Compliance with these standards is mandatory for working with these governments.
5
Q

Which of the following types of SOC reports provides high-level information about an organization’s controls intended for public dissemination?

A. SOC 1
B. SOC 3
C. SOC 2 Type II
D. SOC 2 Type I

A

B. SOC 3

Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:

SOC 1: SOC 1 reports focus on an organization’s internal controls relevant to financial reporting.
SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, or Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.

SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs at a point in time but does not test the controls themselves. A Type II report is more comprehensive, as it tests the operating effectiveness of the controls over an extended audit period.
6
Q

Which stage of the IAM process relies heavily on logging and similar processes?

A. Identification
B. Authorization
C. Accountability
D. Authentication

A

C. Accountability

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
7
Q

Your organization wants to address baseline monitoring and compliance by restricting the duration of a host’s non-compliant condition. When the application is deployed again, the organization would like to decommission the old host and replace it with a new Virtual Machine (VM) constructed from the standard baseline image.

What functionality is described here?

A. Blockchain
B. Infrastructure as Code (IaC)
C. Virtual architecture
D. Immutable architecture

A

D. Immutable architecture

Explanation:
Immutable means unchanging over time, or unable to be changed. Immutability of cloud infrastructure is a preferred state. In cloud settings, it is feasible to easily decommission all virtual infrastructure components used by an older version of software and deploy a new virtual infrastructure in its place. Immutable infrastructure is therefore a solution to the problem of systems deviating from baseline settings over time: each virtual machine is started fresh from a golden (standard baseline) image.
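
The decommission-and-replace pattern can be sketched as a toy simulation (all names and fields are invented; a real environment would use the cloud provider’s tooling):

```python
import copy

# Golden (standard baseline) image: every new VM starts from this exact state.
GOLDEN_IMAGE = {"os": "linux", "packages": {"nginx": "1.24"}, "drifted": False}

fleet = {}  # host name -> running VM state

def deploy(host: str, app_version: str) -> None:
    """Immutable deployment: never patch in place. Decommission the old
    VM and replace it with a fresh one built from the golden image."""
    vm = copy.deepcopy(GOLDEN_IMAGE)
    vm["app_version"] = app_version
    fleet[host] = vm  # the old, possibly non-compliant host is discarded

deploy("web-01", "1.0.0")
fleet["web-01"]["drifted"] = True     # host drifts from baseline over time

deploy("web-01", "1.1.0")             # redeploying wipes out the drift
print(fleet["web-01"]["drifted"], fleet["web-01"]["app_version"])
```

The key design choice is that `deploy` never mutates an existing VM, so a host can only be non-compliant until the next deployment.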

With Infrastructure as Code (IaC), infrastructure is defined and deployed as code rather than configured by hand; the routers, switches, and servers are virtual rather than physical. That could also be called a virtual architecture, although IaC is the more common term today.

Blockchain technology has an immutable element: it is, or should be, impossible to alter the record of ownership, as with cryptocurrency. (The FBI has nonetheless been able to recover stolen bitcoins and return them to their rightful owners.) It is a distributed ledger, not a mechanism for redeploying hosts from a baseline image.

8
Q

Information Rights Management (IRM) is the tool that a large manufacturing company has decided to use for their classroom books. It is important for them to control who has access to their content. One of the features that they are most interested in is the ability to recall their books and replace them with up-to-date content, since their training is very technical and they want to ensure that only their customers have access to the books.

Which phase of the data lifecycle does IRM fit best into?

A. Archive phase
B. Use phase
C. Share phase
D. Create phase

A

B. Use phase

Explanation:
IRM fits best into the use phase. It controls who has access to the content and allows controls on copy, paste, print, and other features. It also allows content to be expired, withdrawn from use, or replaced. IRM is about controlling how the content is used by the customer.

The create phase does not fit the scenario because this phase is where the content is originally created by the authors. Once the data is created, it can be shared with the customers.

The share phase is not the primary phase of IRM. The exchange between the company and the user of the content is the share phase, but that is not where IRM focuses. The primary phase is the use phase because that is where the features fit.

The archive phase is incorrect because this phase is about long-term storage. IRM controls when the user is accessing it.

9
Q

A cloud provider needs to ensure that the data of each tenant in their multitenant environment is only visible to authorized parties and not to the other tenants in the environment. Which of the following can the cloud provider implement to ensure this?

A. Network security groups (NSG)
B. Physical network segregation
C. Geofencing
D. Hypervisor tenant isolation

A

D. Hypervisor tenant isolation

Explanation:
In a cloud environment, physical network segregation is not possible unless a private cloud is built that way. However, it is important for cloud providers to ensure separation and isolation between tenants in a multitenant cloud. To achieve this, the hypervisor is responsible for isolating the tenants that share each physical machine.

An NSG acts as a set of firewall rules protecting a virtual network segment, which is beneficial to use. It is used to control traffic within a tenant or from the internet to that tenant, not between tenants.

Geofencing is used to control where a user can connect from. It does not isolate tenants from each other. Rather, it restricts access from countries that you do not expect access to come from.
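
A minimal sketch of a geofencing check (the country codes and policy are invented for illustration):

```python
# Example policy: logins are only expected from these countries.
ALLOWED_COUNTRIES = {"CA", "US"}

def geofence_allows(country_code: str) -> bool:
    """Permit a connection only from expected countries. Note this controls
    where users connect from; it does nothing to isolate tenants."""
    return country_code.upper() in ALLOWED_COUNTRIES

print(geofence_allows("ca"))  # True
print(geofence_allows("RU"))  # False
```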

10
Q

Which of the following SOC duties involves coordinating with a team focused on a particular task?

A. Threat Detection
B. Threat Prevention
C. Incident Management
D. Quality Assurance

A

C. Incident Management

Explanation:
The security operations center (SOC) is responsible for managing an organization’s cybersecurity. Some of the key duties of the SOC include:

Threat Prevention: Threat prevention involves implementing processes and security controls designed to close potential attack vectors and security gaps before they can be exploited by an attacker.
Threat Detection: SOC analysts use Security Information and Event Management (SIEM) solutions and various other security tools to identify, triage, and investigate potential security incidents to detect real threats to the organization.
Incident Management: If an incident has occurred, the SOC may work with the incident response team (IRT) to contain, investigate, remediate, and recover from the identified incident.

Quality Assurance is not a core SOC responsibility.

11
Q

Ocean is the information security manager for a new company. They work with the company to ensure that it is in compliance with the specific laws that the company is worried about. Which of the following would they and the company be the least concerned with?

A. Data location
B. Containers
C. Type of data
D. Multi-tenancy

A

B. Containers

Explanation:
Containers are a form of operating-system-level virtualization and do not, on their own, present significant compliance concerns.

For regulated customers, type of data, data location, and multi-tenancy are frequently the primary compliance concerns. GDPR is a good example of this scenario.

12
Q

A cloud provider has the capability to use a large pool of resources for numerous client hosts and applications. They are able to offer scalability and on-demand self-service. Which technology makes all this possible?

A. Software defined networking
B. Virtual media
C. Virtualization
D. Guest operating systems

A

C. Virtualization

Explanation:
Without virtualization, cloud environments as we know them would not be possible. This is because cloud environments are built on virtualization technology. It is virtualization that allows cloud providers to leverage a pool of resources for various customers and the ability to offer such scalability and on-demand self-service.

A Virtual Machine (VM) runs a guest operating system on top of a hypervisor on the host. This is just one part of what virtualization allows.

Virtual media, such as virtual Hard Disk Drives (HDD) or virtual Solid State Drives (SSD), is another benefit of virtualization rather than the enabling technology itself.

Software Defined Networking (SDN) is an advance in networking technology that complements virtualization, but on its own it does not provide the pooled, on-demand resources described in the question.

13
Q

Halo, a cloud information security specialist, is working with the cloud data architect to design a secure environment for the corporation’s data in the cloud. They have decided, based on latency issues, that they are going to build a Storage Area Network (SAN) using Fibre Channel. Halo is working to identify the security mechanisms that need to be configured with the SAN.

Which of the following are security features that they should use to protect the storage controllers and all the sensitive data?

A. Authentication, LUN Masking, Transport Layer Security (TLS)
B. Authentication, Internet Protocol Security (IPSec), LUN masking
C. Authentication, LUN Masking, Secure Shell (SSH)
D. Kerberos, Internet Protocol Security (IPSec), LUN zoning

A

B. Authentication, Internet Protocol Security (IPSec), LUN masking

Explanation:
Security mechanisms that should be added to Fibre Channel include:

LUN Masking: Logical Unit Number (LUN) masking is another mechanism used to control access to storage devices on the SAN. LUN masking ensures that only authorized devices can access a particular LUN, helping to prevent unauthorized access or data theft.
Authentication: Fibre Channel supports several methods of authentication, including Challenge-Handshake Authentication Protocol (CHAP) and Remote Authentication Dial-In User Service (RADIUS). These protocols ensure that only authorized devices are allowed to connect to the SAN.
Encryption: Fibre Channel traffic can be encrypted using IPsec or other encryption protocols. Encryption helps to protect data in transit against eavesdropping or interception by unauthorized parties.
Zoning: Fibre Channel switches support the concept of zoning, which allows administrators to control which devices can communicate with each other on the SAN. Zoning can be based on port, WWN (World Wide Name), or a combination of both.
Auditing and logging: Fibre Channel switches and devices should be configured to generate logs and audit trails of all SAN activity. This can help identify potential security incidents or anomalies and provide a record of activity for compliance purposes.
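
A toy model of how LUN masking works at the storage controller (the WWNs and LUN assignments are invented for illustration):

```python
# LUN masking table: which initiator WWNs may see which LUNs.
lun_masks = {
    "21:00:00:e0:8b:05:05:04": {0, 1},  # app server sees LUNs 0 and 1
    "21:00:00:e0:8b:05:05:05": {2},     # backup server sees LUN 2 only
}

def can_access(initiator_wwn: str, lun: int) -> bool:
    """Storage-controller check: initiators not in the mask see nothing."""
    return lun in lun_masks.get(initiator_wwn, set())

print(can_access("21:00:00:e0:8b:05:05:04", 1))  # True
print(can_access("21:00:00:e0:8b:05:05:05", 1))  # False
```

Zoning works at the switch level to decide which devices can talk at all; masking, as above, decides which LUNs an already-connected initiator may see.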

Kerberos is not commonly used to authenticate to a Fibre Channel SAN; CHAP is more likely.

LUN masking is the correct term, not "LUN zoning"; zoning on its own is a separate Fibre Channel mechanism.

TLS and SSH are not commonly used to encrypt Fibre Channel traffic. TLS can be used when the traffic is tunneled over IP, as with Fibre Channel over IP (FCIP).

14
Q

Frankie has been tasked with finding and understanding how certain data keeps being leaked. She is analyzing the circumstances under which data is being used and transmitted. What type of analysis is she doing as part of Data Loss Prevention (DLP)?

A. Contextual analysis
B. Data permanence
C. Data classification
D. Content analysis

A

A. Contextual analysis

Explanation:
Data Loss Prevention (DLP) is the set of tools, technologies, and policies that a business can use to protect its sensitive data from being sent to the wrong people or used in the wrong place.

Content analysis is when the DLP tools are looking for keywords, patterns, metadata, or anything to identify sensitive data, such as social security numbers or credit card numbers.

Contextual analysis is the analysis of the circumstances (context) in which data is being used. For example, an email being transmitted or received from an external system versus internal.
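
The content-versus-context distinction can be sketched in a few lines of Python (the pattern, domain, and addresses are invented for illustration):

```python
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
INTERNAL_DOMAIN = "example.com"

def content_flags(message: str) -> bool:
    """Content analysis: inspect the data itself for sensitive patterns."""
    return bool(SSN_PATTERN.search(message))

def context_flags(sender: str, recipient: str) -> bool:
    """Contextual analysis: inspect the circumstances, such as whether
    the mail is leaving the organization for an external address."""
    return not recipient.endswith("@" + INTERNAL_DOMAIN)

msg = "Employee SSN is 123-45-6789"
print(content_flags(msg))                                # True
print(context_flags("hr@example.com", "x@gmail.com"))    # True
print(context_flags("hr@example.com", "it@example.com")) # False
```

A real DLP product combines both: what the data is, and the circumstances in which it is moving.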

Data classification is understanding, labeling, and applying policies based on how sensitive data is.

Data permanence would be simply how long the data exists. It is not a formal term.

15
Q

Mateo is working with the cloud provider that his business has chosen to provide Platform as a Service (PaaS) for their server-based needs. It is necessary to specify the Central Processing Unit (CPU) requirements to ensure that this solution works as they require. CPU needs would be specified within the what?

A. Business Associate Agreement (BAA)
B. Service Level Agreement (SLA)
C. Master Services Agreement (MSA)
D. Privacy Level Agreement (PLA)

A

B. Service Level Agreement (SLA)

Explanation:
A Service Level Agreement (SLA) specifies the conditions of service that will be provided by the cloud provider, such as uptime or CPU needs.

The MSA is a document that covers the basic relationship between the two parties. In this case, the customer and the cloud provider. It does not specify metrics such as CPU needs.

The BAA is found within HIPAA. It informs the cloud provider of the requirements to protect health data. In Europe, this would be called a Data Processing Agreement (DPA) under GDPR. More generically, this would be called a Privacy Level Agreement (PLA).
16
Q

An organization has a team working on an audit plan. They have just effectively defined all the objectives needed to set the groundwork for the audit plan. What is the NEXT step for this team to complete?

A. Perform the audit
B. Define scope
C. Review previous audits
D. Conduct market research

A

B. Define scope

Explanation:
Audit planning is made up of four main steps, which occur in the following order:

Define objectives
Define scope
Conduct the audit
Lessons learned and analysis
17
Q

Which of the following events is likely to cause the initiation of a Disaster Recovery (DR) plan?

A. A failure in the supply chain for the manufacturing process
B. The loss of the Chief Executive Officer (CEO) and Chief Financial Officer (CFO) in a plane crash
C. A fire in the primary data center
D. The main Internet Service Provider (ISP) experiences a fiber cut

A

C. A fire in the primary data center

Explanation:
In NIST SP 800-34, NIST defines a Disaster Recovery Plan as a written plan for recovering one or more information systems at an alternate facility in response to a major hardware or software failure or the destruction of a facility. A fire destroying the primary data center fits that definition, so it is the correct answer.

NIST defines a Business Continuity Plan as a written document with instructions or procedures that describe how an organization’s mission/business processes will be sustained during and after a significant disruption. Because the question is looking for a disaster, the answer about a fire is a better answer here. That has the potential of destroying the facility or at least causing damage to the hardware and software in the data center.

If the ISP has a fiber cut, that would disrupt communications to the data center. If this happens, it is unlikely to require a move to an alternate site. This is a BC issue, not a disaster, at least according to the NIST definitions.

These NIST definitions work well for this exam. If you disagree with them or use the terms in another way, that is fine; just know that the exam may look at the terms this way.

If the CEO’s and CFO’s lives are lost, that is a sad event for their families and for the business. A succession plan should be created if this is a concern for a business.

A potential failure in the supply chain is something that needs to be managed. ISO/IEC 28000 is a useful standard for beginning that work. However, a DR plan is not needed for this; perhaps a BC plan would be useful, though.

18
Q

While building their virtualized data center in a public cloud Infrastructure as a Service (IaaS), a real estate corporation operating in Canada knows that they must be careful to care for all the data and personal information that they will be storing within the cloud. Since it is critical to protect the data that is in their possession, they are working to control access.

Which of the following is NOT a protection technique that they can use for their systems?

A. Privileged access
B. Standard configurations
C. Separation of duty
D. Least privilege

A

A. Privileged access

Explanation:
Privileged access is what must be strictly limited, by enforcing least privilege and separation of duty; it is the thing being controlled, not itself a protection technique for virtualization systems.

Least privilege means that a user of any kind should only be given as little access as possible. They should have access to what they need to access with only the permissions that they require and nothing more. This is a great idea to pursue but difficult to achieve in reality.

Separation of duty is the idea of breaking a task down into specific steps and assigning those steps to at least two different people, each of whom must perform their part for the task to be completed. The purpose is to force collusion: committing fraud would require convincing someone else to help rather than acting alone.
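
A minimal sketch of enforcing separation of duty in code (the payment scenario and names are invented for illustration):

```python
def approve_payment(amount: float, initiator: str, approver: str) -> bool:
    """The payment task is split into two steps owned by different people;
    no one may approve a payment they initiated, so fraud needs collusion."""
    if initiator == approver:
        raise PermissionError("initiator may not approve their own payment")
    return True

print(approve_payment(5000.0, "alice", "bob"))  # True
```

Calling `approve_payment(5000.0, "alice", "alice")` raises `PermissionError`, which is the control in action.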

Standard configurations are agreed-upon baselines and aid in managing change, which provides protection for virtualization systems.

19
Q

A large social media company that relies on public Infrastructure as a Service (IaaS) for their virtual Data Center (vDC) had an outage. They could not be located through Domain Name System (DNS) queries midafternoon one Thursday because a configuration in their virtual routers had been altered incorrectly. What did they fail to manage properly?

A. Input validation
B. Service level management
C. User training
D. Change enablement practice

A

D. Change enablement practice

Explanation:
ITIL defines change enablement practice as the practice of ensuring that risks are properly assessed, authorizing changes to proceed, and managing a change schedule to maximize the number of successful service and product changes. This is what happened to Facebook/Instagram/WhatsApp/Meta. They have their own network, but the effect would have been the same using AWS as an IaaS. This is change management.

Service level management is defined in ITIL as the practice of setting clear business-based targets for service performance so that the delivery of a service can be properly assessed, monitored, and managed against these targets.

Input validation needs to be performed by software to ensure that the values entered by the users are correct. The main goal of input validation is to prevent the submission of incorrect or malicious data and ensure that the software functions as intended. By checking for errors or malicious input, input validation helps to increase the security and reliability of software.
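
A minimal sketch of input validation (the port-parsing example is invented for illustration):

```python
def parse_port(raw: str) -> int:
    """Reject malformed or malicious input before the value is ever used."""
    if not raw.isdigit():
        raise ValueError(f"not a number: {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port("443"))  # 443
for bad in ("-1", "70000", "80; rm -rf /"):
    try:
        parse_port(bad)
    except ValueError as err:
        print("rejected:", err)
```

The injection-style string is rejected by the same simple check that catches ordinary typos, which is the point of validating at the boundary.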

User training can help reduce the likelihood of errors occurring while using the software. By teaching users how to properly use the software, they become more aware of potential mistakes that may occur and can take measures to prevent them. This can help reduce the occurrence of mistakes, leading to less downtime, more accurate work, and improved outcomes.

20
Q

Software Configuration Management (SCM) is widely used in all software development environments today. There are many practices that are part of a secure SCM environment. What are some of these practices?

A. Version control, build automation, release management, issue tracking
B. Secure Software Development Lifecycle, build automation, release management, issue tracking
C. Version control, build automation, Secure Software Development Lifecycle
D. Version control, release management, testing and tracking tools

A

A. Version control, build automation, release management, issue tracking

Explanation:
Software Configuration Management (SCM) has many practices. Some common activities include:

Version control: This allows developers to track changes to code, collaborate, and revert changes if needed.
Build automation: This automates the compiling of the source code into executable software. Jenkins and Travis CI are common build tools today.
Release management: This helps to automate the process of deploying software and ensures releases are tested and approved.
Issue tracking: This is used to track and manage bugs, feature requests, and other issues. Jira, Trello, and Asana are common tools.

Secure Software Development Lifecycle (SSDLC) is a related yet distinct process. SSDLC is about developing software with security in mind. SCM can support the SSDLC process, but SSDLC is not a practice of SCM.

Testing and tracking tools are used within SCM, but on their own they do not describe its core practices.

21
Q

Kathleen works at a large financial institution that has a growing software development group. They have a desire to “shift left” in their thinking as they build their Platform as a Service (PaaS) environment. The development and operations (DevOps) teams are now working together to build and deploy a strong and secure cloud environment that will contain a Software as a Service (SaaS) product to be used by the financial analysts. To ensure the software can withstand the attack attempts that will surely happen, they need to hypothesize what could happen so that they can do their best to prevent it.

What would you recommend that they do?

A. Threat modeling using both Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD) and vulnerability assessment
B. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
C. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and open box testing
D. Threat modeling using both Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD) and penetration testing

A

B. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)

Explanation:
Threat modeling is the process of identifying the threats and risks that an application or system will face once it has gone live. This is an ongoing process that changes as the risk landscape changes and is, therefore, an activity that is never fully completed. DREAD and STRIDE, both conceptualized at Microsoft, are two prominent models recommended by OWASP. Together, they look at what could happen (STRIDE) and how bad it could be (DREAD).

Threat modeling techniques include STRIDE, DREAD, PASTA, ATASM, TRIKE, and a few others.
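
As a rough illustration of how DREAD turns “how bad could it be” into a number: one common convention rates each of the five categories from 0 to 10 and averages them. The ratings below are invented:

```python
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Average the five DREAD category ratings into a single risk score."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if any(not 0 <= r <= 10 for r in ratings):
        raise ValueError("each rating must be between 0 and 10")
    return sum(ratings) / len(ratings)

# Hypothetical threat identified via STRIDE (e.g., tampering with a config):
print(dread_score(8, 10, 7, 9, 6))  # 8.0
```

The score is then used to rank the threats STRIDE surfaced, so remediation effort goes to the worst ones first.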

Open box testing is a type of software testing in which the code is known to the tester. It is not threat modeling; it tests actual code, whereas threat modeling is predictive, trying to understand how threats could be realized in the future.

The same is true with vulnerability assessment and penetration testing. They are looking for problems that exist, not predicting what could happen in the future.

22
Q

Which of the following attributes of evidence deals with the fact that it is real and relevant to the investigation?

A. Convincing
B. Admissible
C. Authentic
D. Complete

A

C. Authentic

Explanation:
Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:

Authentic: The evidence must be real and relevant to the incident being investigated.
Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).
23
Q

Which of the following risks associated with PaaS environments includes hypervisor attacks and VM escapes?

A. Interoperability Issues
B. Persistent Backdoors
C. Virtualization
D. Resource Sharing

A

C. Virtualization

Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:

Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e. backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
24
Q

At which stage of the process of developing a BCP should an organization plan out personnel and resource requirements?

A. Implementation
B. Testing
C. Auditing
D. Creation

A

A. Implementation

Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:

Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via single sign-on (SSO), then SSO should be restored before those applications. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.

Auditing is not one of the three stages of developing a BCP/DRP.

25
Q

A cloud security engineer working for a financial institution needs to determine how long specific financial records must be stored and preserved. Which of the following specifies how long financial records must be preserved?

A. Gramm-Leach-Bliley Act (GLBA)
B. Privacy Act of 1988
C. General Data Protection Regulation (GDPR)
D. Sarbanes-Oxley (SOX)

A

D. Sarbanes-Oxley (SOX)

Explanation:
The Sarbanes-Oxley Act (SOX) regulates how long financial records must be kept. SOX is enforced by the Securities and Exchange Commission (SEC). It was passed to protect stakeholders and shareholders from improper practices and errors, in the wake of the fraudulent financial reporting at Enron.

GDPR is the European Union’s (EU) requirement for member countries, including Germany, to protect personal data in their possession.

GLBA is a U.S. law that requires financial institutions to protect their customers' personal data; it governs data privacy rather than how long financial records must be preserved.

The Privacy Act of 1988 is an Australian law that requires personal data to be protected.

26
Q

A SOC report is MOST related to which of the following common contractual terms?

A. Litigation
B. Right to Audit
C. Metrics
D. Compliance

A

B. Right to Audit

Explanation:
A contract between a customer and a vendor can have various terms. Some of the most common include:

Right to Audit: CSPs rarely allow customers to perform their own audits, but contracts commonly include acceptance of a third-party audit in the form of a SOC 2 or ISO 27001 certification.
Metrics: The contract may define metrics used to measure the service provided and assess compliance with service level agreements (SLAs).
Definitions: Contracts will define various relevant terms (security, privacy, breach notification requirements, etc.) to ensure a common understanding between the two parties.
Termination: The contract will define the terms by which it may be ended, including failure to provide service, failure to pay, a set duration, or with a certain amount of notice.
Litigation: Contracts may include litigation terms such as requiring arbitration rather than a trial in court.
Assurance: Assurance requirements set expectations for both parties. For example, the provider may be required to provide an annual SOC 2 audit report to demonstrate the effectiveness of its controls.
Compliance: Cloud providers will need to have controls in place and undergo audits to ensure that their systems meet the compliance requirements of regulations and standards that apply to their customers.
Access to Cloud/Data: Contracts may ensure access to services and data to protect a customer against vendor lock-in.
27
Q

Occhave is consulting on a new project for a company that is expanding its retail operations to vacation destinations around the world. Some of these remote vacation spots have limited internet access at times. Their primary concern pertaining to network access is credit card processing.

If they do not have internet access at a location, what options do they have?

A. Use Voice over Internet Protocol (VoIP) to contact the bank to confirm available funds
B. Capture the credit card details locally and wait for internet access to return
C. Access the local cloud server to process the credit card rather than the bank server
D. Use the internet access on their mobile cell phones until the internet is back

A

B. Capture the credit card details locally and wait for internet access to return

Explanation:
If there is no internet access, the only choice is to wait for that access to return before actually charging a credit card. There is a financial risk that a card may not be valid and that the customer will have already left the store with the product. However, if the company wants to make sales at those times, that is the risk it has to take. It could also capture the customer's address information to be able to connect with the customer later.

One of the conditions for cloud is broad network access. If you have access to the network (the internet), then you have access to the cloud. If you do not have access to the network, then there is no cloud.

No internet access means no internet access. So it is not possible to use the internet through the mobile phone.

There is no local cloud server since there is no internet. A private cloud could be built, but this company's presence in those remote locations is not large enough to justify building a private cloud everywhere the internet is unreliable.

VoIP is the transmission of a phone call over an IP-based network. For this remote location, the IP-based network would be the internet. In the scenario, there is no internet access at all, so this is not an option.

28
Q

A cloud information security manager is working with the data architect to determine the best way to implement encryption in a specific database. They are analyzing the data that is stored in this particular database, and they have discovered that there are a few fields with very sensitive data.

What type of encryption would work best to protect this data and not overwhelm the administrators and systems with too much work?

A. Application-level encryption
B. Fully Homomorphic Encryption
C. Column-level encryption
D. Transparent Data Encryption

A

C. Column-level encryption

Explanation:
Column-level encryption can be performed to encrypt the few columns (fields or attributes) that are particularly sensitive. This allows for granular control where the columns that contain sensitive information such as social security numbers or credit card numbers can be encrypted.

Application-level encryption could be used for this, but it does require more work to manage and maintain. This involves encrypting the data at the application layer before it is stored in the database.

Transparent Data Encryption (TDE) encrypts the entire database or specific columns, and the application code does not need to be changed when TDE is used. It is therefore a possible answer, but given a choice, the answer closest to the question is usually better. Since the question mentions fields, another name for columns (along with attributes), column-level encryption is the best answer here.

Fully Homomorphic Encryption (FHE) is a new and emerging technique to keep the data encrypted while it is in use. The three techniques above are for encrypting data at rest.
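
As a toy illustration of the idea (not any specific database's column-encryption feature), the Python sketch below stores only the sensitive `ssn` column as ciphertext while the other columns stay queryable in plaintext. The XOR cipher and the `customers` table are hypothetical stand-ins for a real encryption library and schema:

```python
import secrets
import sqlite3

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; a real system would use a vetted library
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(32)  # hypothetical key for the sensitive column

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn BLOB)")

ssn = b"123-45-6789"
# Only the sensitive column is encrypted before it reaches the database
conn.execute("INSERT INTO customers VALUES (?, ?)", ("Alice", xor(ssn, key)))

# Non-sensitive columns remain queryable in plaintext
name, enc_ssn = conn.execute("SELECT name, ssn FROM customers").fetchone()
assert name == "Alice" and enc_ssn != ssn
assert xor(enc_ssn, key) == ssn  # only key holders can read the column
```

The granularity is the point: administrators manage one key for one column instead of encrypting and re-keying the whole database.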

29
Q

Which of the following types of private data is protected by the GDPR, CCPA/CPRA, and similar data protection laws?

A. Personally Identifiable Information
B. Protected Health Information
C. Payment Data
D. Contractual Private Data

A

A. Personally Identifiable Information

Explanation:
Private data can be classified into a few different categories, including:

Personally Identifiable Information (PII): PII is data that can be used to uniquely identify an individual. Many laws, such as the GDPR and CCPA/CPRA, provide protection for PII.
Protected Health Information (PHI): PHI includes sensitive medical data collected regarding patients by healthcare providers. In the United States, HIPAA regulates the collection, use, and protection of PHI.
Payment Data: Payment data includes sensitive information used to make payments, including credit and debit card numbers, bank account numbers, etc. This information is protected under the Payment Card Industry Data Security Standard (PCI DSS).
Contractual Private Data: Contractual private data is sensitive data that is protected under a contract rather than a law or regulation. For example, intellectual property (IP) covered under a non-disclosure agreement (NDA) is contractual private data.
30
Q

A startup cloud provider is building their first Data Center (DC). They have been researching the constraints that the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends for temperature and humidity inside the DC. The DC will be close to a desert, and they are concerned about it being too dry in the DC. At the same time, they want to ensure that the moisture level is not too high in their data center.

What is the recommended maximum moisture level for a data center?

A. 60 percent relative humidity
B. 70 percent relative humidity
C. 50 percent relative humidity
D. 80 percent relative humidity

A

A. 60 percent relative humidity

Explanation:
The recommended maximum moisture level in a data center is 60 percent relative humidity.

The recommended minimum is 40 percent relative humidity. When there is too much moisture in the air, it can cause condensation to form, which may damage the systems. In addition, having the humidity levels too low may cause an excess of electrostatic discharge.

31
Q

Which of the following SIEM features may be necessary when dealing with data sets from various sources?

A. Automated Monitoring
B. Data Integrity
C. Normalization
D. Alerting

A

C. Normalization

Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:

Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats.
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
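
The normalization bullet above can be made concrete with a small sketch; the timestamp formats and the `normalize` helper are hypothetical examples, not features of any particular SIEM:

```python
from datetime import datetime

# Hypothetical raw timestamps as three different log sources might emit them
raw = ["2024-03-05 14:02:11", "05/Mar/2024:14:02:11", "Mar  5 14:02:11 2024"]

# Candidate formats to try, one per source style
formats = ["%Y-%m-%d %H:%M:%S", "%d/%b/%Y:%H:%M:%S", "%b %d %H:%M:%S %Y"]

def normalize(ts: str) -> str:
    """Convert any recognized timestamp into a single ISO 8601 format."""
    for fmt in formats:
        try:
            return datetime.strptime(ts, fmt).isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {ts}")

# All three sources collapse to one consistent representation
assert {normalize(t) for t in raw} == {"2024-03-05T14:02:11"}
```

Once every source uses the same representation, correlation and alerting rules can be written once instead of per source.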
32
Q

Amir is working for a large organization that has a Platform as a Service (PaaS) application that they created for their internal users. It is a web application that uses browser cookies for sessions and state. However, when the user logs out, the cookies are not properly destroyed. This has allowed another user, who had access to the same browser as the previous user, to log in using the cookies from the previous session.

What is this an example of?

A. Security misconfiguration
B. Sensitive data exposure
C. Broken authentication
D. Broken access control

A

C. Broken authentication

Explanation:
Broken authentication is one of the OWASP Top 10 vulnerabilities. Broken authentication occurs when an issue with a session token or cookie makes it possible for an attacker to gain unauthorized access to a web application. This can occur when session tokens are not properly validated, making it possible for an attacker to hijack the token and gain access. Another example of this can occur when cookies are not properly destroyed after a user logs out, making it possible for the next user to gain access with their cookies.

Security misconfiguration occurs when software is configured incorrectly, for example, because someone does not understand how to configure it or which settings need to be in place.

A great resource for the OWASP Top 10 can be found on OWASP's website. It is good to be familiar with the Top 10 and some of the solutions or fixes to prevent these vulnerabilities from occurring.

Broken access control is a separate item on the OWASP Top 10 list, but it is not what is described here. Broken access control occurs in a variety of ways, such as failing to set up access based on the principle of least privilege or allowing an average user to elevate their permissions when they should not be able to.
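
A minimal sketch of the fix, assuming a toy server-side session store (the `login`/`logout` helpers are hypothetical, not a real framework's API): destroying the session on logout means a stale cookie left in a shared browser can no longer be replayed:

```python
import secrets

sessions = {}  # server-side session store: token -> username

def login(user: str) -> str:
    # Issue an unpredictable session token and record it server-side
    token = secrets.token_hex(16)
    sessions[token] = user
    return token

def logout(token: str) -> None:
    # Destroying the session server-side is the key step: the cookie value
    # may linger in the browser, but it no longer maps to a valid session
    sessions.pop(token, None)

cookie = login("alice")
assert sessions.get(cookie) == "alice"  # valid session while logged in

logout(cookie)
assert sessions.get(cookie) is None     # replaying the old cookie now fails
```

In the broken-authentication scenario from the question, the logout step never invalidated the session, so the next browser user could reuse the leftover cookie.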
33
Q

JoAnn has been configuring a server that will handle all network forwarding decisions, which allows the network device to simply perform frame forwarding. This allows for dynamic changes to traffic flows based on customer needs and demands. What is the name of the network approach described here?

A. Domain Name System Security (DNSSec)
B. Virtual Private Cloud (VPC)
C. Software-defined networking (SDN)
D. Dynamic Host Configuration Protocol (DHCP)

A

C. Software-defined networking (SDN)

Explanation:
In software-defined networking, decisions regarding where traffic is filtered and sent are separated from the actual forwarding of the traffic. This separation allows network administrators to quickly and dynamically adjust network flows based on the needs of customers. Software-defined networking is often referred to as a Software-Defined Wide Area Network (SD-WAN) when it is used as the backbone network.

DNSSec is an extension to DNS. DNS converts domain names, such as pocketprep.com to IP addresses. DNS is a hierarchically organized set of servers within the internet and corporate networks. DNSSec adds authentication to allow verification of the source of DNS information.

DHCP is used to dynamically allocate IP addresses to devices when they join a network.

VPC is a simulation of a private cloud within a public cloud environment.

34
Q

An organization is looking to balance concerns about data security with the desire to leverage the scalability and cost savings of the cloud. Which of the following cloud models is the BEST choice for this?

A. Hybrid Cloud
B. Private Cloud
C. Public Cloud
D. Community Cloud

A

A. Hybrid Cloud

Explanation:
Cloud services are available under a few different deployment models, including:

Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them. For example, sensitive data can be stored on the private cloud, while less-sensitive applications can take advantage of the benefits of the public cloud.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
35
Q

A cloud provider has assembled all the cloud resources together, from routers to servers and switches, as well as the Central Processing Unit (CPU), Random Access Memory (RAM), and storage within the servers. Then they made them available for allocation to their customers. Which term BEST describes this process?

A. On-demand self-service
B. Data portability
C. Reversibility
D. Resource pooling

A

D. Resource pooling

Explanation:
Cloud providers may choose to do resource pooling, which is the process of aggregating all the cloud resources together and allocating them to their cloud customers. Physical equipment is pooled into the data center. Then, within each server, there is a pool of resources allocated to running virtual machines: the Central Processing Unit (CPU), the Random Access Memory (RAM), and the available network bandwidth.

Reversibility is the ability to get all the company’s artifacts out of the cloud provider’s equipment, and what is on the provider’s equipment is appropriately deleted.

Portability is the ability to move data from one provider to another without having to reenter the data.

On-demand self-service is the ability for the customer/tenant to use a portal to purchase and provision cloud resources without having much, if any, interaction with the cloud provider.

36
Q

Communication, consent, control, transparency, and independent yearly audits are the five key principles focused on by which of the following standards?

A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27001
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018
C. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27050
D. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 31000

A

B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018

Explanation:
ISO/IEC 27018 is a standard for providing security and privacy within cloud computing. It focuses on five key principles: communication (relaying information to cloud customers), consent (receiving permission before using customer data for any reason), control (cloud customers retain full control over their own data in the cloud), transparency (cloud providers inform customers of any potential exposure to support staff and contractors), and independent yearly audits (cloud providers must undergo yearly audits performed by a third party).

ISO/IEC 27001 covers information security, cybersecurity, and privacy protection, specifying requirements for an Information Security Management System (ISMS). It is well suited to building and auditing a corporation's ISMS.

ISO/IEC 27050 covers information technology: electronic discovery (e-discovery).

ISO/IEC 31000 provides risk management guidelines.

37
Q

Cryptoshredding falls under which classification in NIST’s methods of media sanitization?

A. Purge
B. Wipe
C. Clear
D. Destroy

A

A. Purge

Explanation:
When data is no longer needed, it should be disposed of using an approved and appropriate mechanism. NIST SP 800-88, Guidelines for Media Sanitization, defines three levels of data destruction:

Clear: Clearing is the least secure method of data destruction and involves using mechanisms like deleting files from the system and the Recycle Bin. These files still exist on the system but are not visible to the computer. This form of data destruction is inappropriate for sensitive information.
Purge: Purging destroys data by overwriting it with random or dummy data or performing cryptographic erasure (cryptoshredding). Often, purging is the only available option for sensitive data stored in the cloud, since an organization doesn’t have the ability to physically destroy the disks where their data is stored. However, in some cases, data can be recovered from media where sensitive data has just been overwritten with other data.
Destroy: Destroying damages the physical media in a way that makes it unusable and the data on it unreadable. The media could be pulverized, incinerated, shredded, dipped in acid, or undergo similar methods.

Wipe is not a NIST-defined method of media sanitization.
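
Cryptographic erasure (cryptoshredding) can be sketched in a few lines of Python; the one-time-pad XOR below is a toy stand-in for a real cipher, and the point is only that once the key is destroyed, the remaining ciphertext is unrecoverable:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key (a one-time pad); applying it twice decrypts
    return bytes(d ^ k for d, k in zip(data, key))

record = b"SSN: 123-45-6789"
key = secrets.token_bytes(len(record))   # per-record encryption key
ciphertext = otp(record, key)

assert otp(ciphertext, key) == record    # recoverable while the key exists

key = None  # cryptoshredding: destroy the key, not the media
# With the key gone, the ciphertext still sitting on the provider's disks
# reveals nothing about the record
```

This is why purging via cryptoshredding works in the cloud: the customer can destroy a key they control even though they can never physically destroy the provider's disks.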

38
Q

A multinational conglomerate manufactures smart appliances, including washing machines and espresso machines. Some of their products have ended up being used by a consulting firm. These products are in the buildings (lights and such) and in the breakrooms (refrigerators). They are connected to the network and send their logs to the Security Information and Event Manager (SIEM). An analyst in the Security Operations Center (SOC) has been analyzing an Indication of Compromise (IoC). The IoC correctly indicates that a bad actor has compromised a virtual desktop, which then led to a compromise of the database.

What does this say about the smart appliances?

A. True positive
B. True negative
C. False positive
D. False negative

A

B. True negative

Explanation:
To understand true negatives, it is essential to grasp the concept of a confusion matrix, which is a table that summarizes the performance of a classification model. The confusion matrix consists of four elements:

True Positives (TP): The model correctly predicts positive outcomes when the actual outcomes are indeed positive.
True Negatives (TN): The model correctly predicts negative outcomes when the actual outcomes are indeed negative.
False Positives (FP): The model incorrectly predicts positive outcomes when the actual outcomes are negative.
False Negatives (FN): The model incorrectly predicts negative outcomes when the actual outcomes are positive.

Because the analyst sees nothing about the smart appliances, and the compromise involved the virtual desktop and the database, there is no problem with the smart appliances. Therefore, it is true that there are no (negative) IoCs regarding the smart appliances being attacked.
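
The four elements above can be expressed as a small sketch; the `classify` helper and the sample events are hypothetical:

```python
# Each event pairs what actually happened with what the monitoring flagged
events = [
    {"actual": True,  "flagged": True},   # attack occurred, alert raised
    {"actual": False, "flagged": False},  # no attack, no alert (the appliances)
    {"actual": False, "flagged": True},   # no attack, but an alert fired
    {"actual": True,  "flagged": False},  # attack occurred, missed
]

def classify(actual: bool, flagged: bool) -> str:
    """Name the confusion-matrix cell for one event."""
    truth = "true" if actual == flagged else "false"
    sign = "positive" if flagged else "negative"
    return f"{truth} {sign}"

counts = {}
for e in events:
    label = classify(e["actual"], e["flagged"])
    counts[label] = counts.get(label, 0) + 1

assert counts == {"true positive": 1, "true negative": 1,
                  "false positive": 1, "false negative": 1}
```

The smart appliances fall in the `classify(False, False)` cell: nothing happened to them and nothing was flagged, which is a true negative.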

39
Q

Any information relating to past, present, or future medical status that can be tied to a specific individual is known as which of the following?

A. Payment Card Industry (PCI) information
B. Gramm Leach Bliley Act (GLBA)
C. Health Insurance Portability and Accountability Act
D. Protected Health Information (PHI)

A

D. Protected Health Information (PHI)

Explanation:
Protected Health Information (PHI) is a subset of Personally Identifiable Information (PII). PHI applies to any entity defined under the U.S. Health Insurance Portability and Accountability Act (HIPAA). Any information that can be tied to a unique individual as it relates to their past, current, or future health status is considered PHI.

The payment card industry defines the Data Security Standard (DSS), known in full as PCI DSS, which demands that payment card information be protected.

GLBA is a U.S. act that requires financial institutions to protect personal data belonging to their customers.

40
Q

Teo is concerned about future attacks and their ability to perform the forensic analysis that is required of him and his team for his corporation. If they move into the cloud, they are concerned they will not be able to obtain the forensic evidence that they require. Which standard provides guidelines for handling digital evidence?

A. ISO/IEC 27037
B. ISO/IEC 27036
C. ISO/IEC 27041
D. ISO/IEC 27042

A

A. ISO/IEC 27037

Explanation:
ISO/IEC 27037 provides guidelines for handling digital evidence. It has specific activity guidance for identification, collection, acquisition, and preservation of digital evidence that could be valuable as proof of bad activities.

ISO/IEC 27041 provides guidance for methods and processes used in investigations to make sure they are “fit for purpose.”

ISO/IEC 27042 provides guidance on analysis and interpretation of digital evidence.

ISO/IEC 27036 provides guidance on cybersecurity and supplier relationships. It is an overview intended to assist corporations in securing their information and systems when working with suppliers.

41
Q

Documents and emails are examples of which of the following types of data?

A. Unstructured
B. Semi-structured
C. Mostly structured
D. Structured

A

A. Unstructured

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.

Mostly structured is not a common classification for data.
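
A short sketch of why discovery difficulty differs by data type; the sample strings and the SSN regex are hypothetical:

```python
import json
import re

# Semi-structured: tags label the data, so discovery can key off field names
semi = '{"customer": "Alice", "ssn": "123-45-6789"}'
assert json.loads(semi)["ssn"] == "123-45-6789"

# Unstructured: no labels exist, so a discovery tool must find sensitive
# values entirely on its own, e.g. by pattern matching
unstructured = "Alice called; she read her SSN, 123-45-6789, over the phone."
match = re.search(r"\b\d{3}-\d{2}-\d{4}\b", unstructured)
assert match.group() == "123-45-6789"
```

With structured data in a database, the column name alone (`ssn`) identifies the sensitive field; with unstructured text, only heuristics like the regex above are available, which is why discovery there is the most complex.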

42
Q

Which of the following emerging technologies improves portability in the cloud?

A. Containers
B. DevSecOps
C. Confidential Computing
D. Edge Computing

A

A. Containers

Explanation:
Cloud computing is closely related to many emerging technologies. Some examples include:

Containers: Containerization packages an application along with all of the dependencies that it needs to run in a single package. This container can then be moved to any platform running the container software, including cloud platforms.
Edge and Fog Computing: Edge and fog computing move computations from centralized servers to devices at the network edge, enabling faster responses and less usage of bandwidth and computational power by cloud servers. Edge computing performs computing on IoT devices, while fog computing uses gateways at the edge to collect data from these devices and perform computation there.
Confidential Computing: While data is commonly encrypted at rest and in transit, it is often decrypted while in use, which creates security concerns. Confidential computing involves the use of trusted execution environments (TEEs) that protect and isolate sensitive data from potential threats while in use.
DevSecOps: DevSecOps is the practice of building security into automated DevOps workflows. DevSecOps can be used to secure cloud-hosted applications. Also, infrastructure as code (IaC) involves automating the configuration of cloud-based systems and servers to reduce errors and improve scalability.
43
Q

When constructing cloud data centers, it is necessary to control the temperature within the data center. Given the size of some data centers, it would be wise to manage the heating and air conditioning as efficiently as possible. If the data center is constructed with rows of equipment that have servers facing each other and then the backs of servers facing each other, the area that needs to be cooled is less than if all servers are oriented facing the same direction in the data center (e.g., facing north).

If the data center is constructed with cold air aisles, where does the cold air flow into the servers?

A. Cold air aisles have the cold air coming into the front of the server racks
B. Cold air aisles have the cold air coming into the back of the server racks
C. Hot air aisles have the hot air coming out the front of the server racks
D. Hot air aisles have the hot air coming out the back of the server racks

A

A. Cold air aisles have the cold air coming into the front of the server racks

Explanation:
Cold air aisles have the cold air coming into the front of the server racks, and hot air exits the back of the server racks. That means that two answers are true but only one is correct. The correct answer is the one that answers the question. The question is about where the cold air flows in, which is into the front.

The other two answers are backward: in a cold aisle, cold air enters the front of the server racks, while in a hot aisle, hot air exits the back of the racks into the aisle. The aisle's temperature (hot or cold) describes which side of the racks faces it: the fronts face the cold aisles and the backs face the hot aisles.

Based on the direction, the cold air can be directed to that area to ensure that the servers pull in cold air rather than hot air. Depending on the overall design of the data center and location on the planet (in a mountain or near a desert), the data center can be constructed as efficiently as possible. Either configuration could be the right configuration.

44
Q

A cloud security professional has been tasked with creating a logical network segregation and isolation of systems in a cloud environment. The systems that they need to isolate include a server and a database. The corporation wants these systems to be highly isolated with a custom firewall in front of them.

Which networking concept can be used to achieve this?

A. Micro-segmentation
B. Hypersegmentation
C. Containers
D. Virtualized networking

A

A. Micro-segmentation

Explanation:
Micro-segmentation is a method of isolating a very small number of machines, possibly even one, behind a custom firewall. This is one solution for the corporation. Another would have been a security group, but that is not an option in the answers listed.

Hypersegmentation isolates a traffic flow from end to end, which is similar to the concept of a Virtual Private Network (VPN), but the traffic flow would not be visible to anyone due to the capability of the hypervisors to isolate tenants.

Virtualized networking involves many different options, from Virtual Local Area Networks (VLANs) to the virtualization of a data center and its network in an Infrastructure as a Service (IaaS) deployment.

Containers are a way to create a virtualized environment that contains specific applications. They are often deployed and managed with Kubernetes (K8s). Containers isolate an application, not a server as the question requires.

45
Q

Quinn has been hired as the new information security manager at a regional hospital. He has been reviewing the hospital’s information security policies. In reviewing the data handling policies, he has discovered that it is necessary to redefine what data would be considered sensitive and require protection under the Health Insurance Portability and Accountability Act (HIPAA).

Of the following, which is considered sensitive data that must be protected as Protected Health Information (PHI)?

A. Political views
B. Demographic information
C. Current street address
D. Passport number

A

B. Demographic information

Explanation:
Protected Health Information (PHI) covers items such as demographic information, medical history, physical and mental health information, lab results, physician notes, and other health-related items.

Passport numbers, political views, and current street addresses would be considered Personally Identifiable Information (PII) rather than PHI.

46
Q

In a large organization, a recent attack occurred. The attack poisoned the Domain Name System (DNS) server. What would be the effect of this?

A. Flooding the systems on the network with traffic so that they can’t reply to legitimate traffic
B. Stealing personally identifiable information (PII)
C. Redirecting legitimate users to compromised or spoofed systems
D. All files on the network are encrypted by an attacker

A

C. Redirecting legitimate users to compromised or spoofed systems

Explanation:
DNS servers provide the information that converts domain names, such as PocketPrep.com, to Internet Protocol (IP) addresses. If a server is poisoned, the wrong information is provided to users, allowing the attacker to redirect legitimate users to compromised or spoofed systems.
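As a toy sketch of why poisoning works: resolution is, at its heart, a name-to-address lookup, so corrupting the stored record silently redirects everyone who trusts it. The names and addresses below are illustrative only, not a real DNS implementation:

```python
# Toy illustration: a resolver is, at its core, a mapping from
# domain names to IP addresses.
legitimate_records = {"pocketprep.com": "203.0.113.10"}  # example IP

def resolve(records, name):
    """Return the IP address the resolver hands back for a name."""
    return records.get(name)

# A poisoning attack replaces the record with an attacker-controlled IP.
poisoned_records = dict(legitimate_records)
poisoned_records["pocketprep.com"] = "198.51.100.66"  # attacker's host

# Users still type the correct name, but traffic now goes elsewhere.
print(resolve(legitimate_records, "pocketprep.com"))  # 203.0.113.10
print(resolve(poisoned_records, "pocketprep.com"))    # 198.51.100.66
```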

Stealing PII would be a data breach. This is when information ends up in the wrong hands.

Flooding the systems on the network with traffic so that they can’t reply to legitimate traffic is a Denial of Service (DoS) attack. It could also be a Distributed DoS (DDoS). In a DDoS, the attack comes from many sources simultaneously.

If all files on the network are encrypted by an attacker, that would likely be a ransomware attack.

47
Q

Thian is performing a risk assessment with his information security team for a hospital. They have determined the likelihood and probable impact level of the most serious problems they believe they are susceptible to. Which of the following statements regarding responding to risk is FALSE?

A. An organization can transfer risk via insurance policies to cover financial costs of successful exploits
B. There is never an appropriate scenario in which to accept a risk
C. Risk mitigation typically depends on the results of a cost-benefit analysis
D. Organizations may opt to implement procedures and controls to ensure that a specific risk is never realized

A

B. There is never an appropriate scenario in which to accept a risk

Explanation:
There are times when a company may choose to simply accept a risk rather than do anything to deal with it. This is often done when the cost of mitigating the risk outweighs the cost of simply dealing with the consequences if the risk were to occur.

A company can opt to implement procedures and controls to ensure that a specific risk is never realized. Nothing is ever perfect, but that can certainly be their goal.

Insurance policies are a common risk transference option, intended to cover the financial costs of successful exploits. It is necessary to understand the conditions of the policy: it may well state that if the company does not implement the appropriate controls, the insurer will not cover the costs of the exploitation.

Performing a risk assessment allows a corporation to understand the likelihood and expected impact of specific scenarios. Based on that, an appropriate risk mitigation can be chosen based on a cost-benefit analysis.

48
Q

Which of the following types of testing looks for vulnerabilities that could cause the software to exhibit unexpected behavior?

A. Regression Testing
B. Integration Testing
C. Unit Testing
D. Abuse Testing

A

D. Abuse Testing

Explanation:
Abuse testing is when software is tested to see that it properly handles unexpected, malformed, or malicious inputs. It verifies that software not only performs correctly when used correctly but also is secure and robust when something unexpected happens.

Unit, integration, and regression testing verify that the software meets requirements and exhibits desirable behavior when used correctly.

49
Q

A cloud administrator needs to implement network segmentation but cannot use physical segmentation methods since they are using Platform as a Service (PaaS) cloud technology. Which type of segmentation can be used to ensure the isolation of a critical server protected by a very tailored firewall?

A. Virtual extensible segmentation
B. Micro segmentation
C. Hyper segmentation
D. Virtual segmentation

A

B. Micro segmentation

Explanation:
Micro segmentation allows very small network segments to be constructed—as small as one virtual machine. That one virtual machine would be protected by a firewall tailored to the network and software on that virtual machine.

Hyper segmentation is utilizing the capability of hypervisors to isolate virtual machines to ensure that the transmission of a specific traffic flow cannot be seen by anyone who should not be able to see it.

Virtual segmentation and virtual extensible segmentation are methods of building virtual Local Area Networks (LAN).

50
Q

The information security team responsible for Identity and Access Management (IAM) at a medium-sized business, which is looking to expand through the use of contractors, wants to implement a way for all users to need only a single set of authentication credentials to access all the organization's resources.

What technology would allow for that to happen?

A. Open Identification (OpenID)
B. Single Sign On (SSO)
C. Security Assertion Markup Language (SAML)
D. Federated Identity Management (FIM)

A

D. Federated Identity Management (FIM)

Explanation:
Federated Identity Management (FIM), also known as Federated Identity, allows for single sign-on across disparate organizations. In the cloud, FIM can use SAML, OAuth, and OpenID, among other technologies. Because the question involves a company and its contractors, who work for other businesses, FIM is a better answer than SSO. If FIM were not an option, SSO would be a good answer.

SAML or OpenID could be a good answer since they are technologies used for FIM in the cloud. However, nothing in the question points to SAML versus OpenID, so the more generic answer of FIM works better.

FIM does not mean that the user will have no other accounts, though. It can be used to provide SSO.

Single Sign-On (SSO) allows an individual to authenticate once using a single set of authentication credentials and be given access to other independent systems.

51
Q

A Cloud Service Provider (CSP) must always be looking for ways to manage and grow their data centers as needed for their customers. What is the name of the ITIL practice that has the CSP aligning their services with changing business needs?

A. Service-level management
B. Continual service improvement management
C. Capacity management
D. Availability management

A

B. Continual service improvement management

Explanation:
Continual Service Improvement (CSI) is the practice within the ITIL framework that focuses on continually improving the quality of IT services delivered to customers. It is an iterative process that aims to enhance service performance, efficiency, and effectiveness over time. CSI involves monitoring, analyzing, and making changes to IT services, processes, and strategies to align them with changing business needs and goals.

Service Level Management (SLM) is a process within the ITIL framework that focuses on establishing and maintaining appropriate service levels to meet the needs of customers and align with business objectives. SLM aims to define, negotiate, and manage Service-Level Agreements (SLAs) between the service provider and the customer.

Availability management is a process within the ITIL framework that focuses on ensuring that IT services meet agreed-upon availability targets to support business operations. The primary goal of Availability management is to optimize the availability of IT services by identifying and addressing potential vulnerabilities and risks that may affect service availability.

Capacity management is a process within the ITIL framework that focuses on ensuring that IT resources are effectively and efficiently utilized to meet current and future business requirements. It involves the proactive management of resources to ensure that they are available when needed without underutilization or overutilization.

52
Q

Which of the following terms is MOST related to the chain of custody?

A. Confidentiality
B. Availability
C. Integrity
D. Non-repudiation

A

D. Non-repudiation

Explanation:
Non-repudiation refers to a person’s inability to deny that they took a particular action. Chain of custody helps to enforce non-repudiation because it demonstrates that the evidence has not been tampered with in a way that could enable someone to deny their actions.

Confidentiality, integrity, and availability are the “CIA triad” that describes the main goals of security.

53
Q

Rashid has been working with his customer to understand the Indicators of Compromise (IoC) that they have seen within their Security Information and Event Manager (SIEM). The logs show that a bad actor infiltrated their organization through a phishing email. Once in, the bad actor traversed the network until they gained access to a firewall. From the firewall, the bad actor assumed the role the firewall had to access the database. The database was then copied by the bad actor.

This is an example of which type of threat?

A. Command injection
B. Data breach
C. Account hijacking
D. Advanced persistent threat (APT)

A

B. Data breach

Explanation:
A data breach occurs when data is leaked or stolen, either intentionally or unintentionally. This is not an Advanced Persistent Threat (APT). An APT requires an advanced level of skill from bad actors, who are usually attacking on behalf of one nation-state against another.

Account hijacking is a step along the way when the bad actor assumed the role that the firewall had to access the database. The whole attack was for the purpose of stealing the data, which is a data breach.

Command injection occurs when a bad actor types a command into a field that is interpreted by the server. This is similar to an SQL injection.
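A minimal sketch of the idea (the command strings are hypothetical, and nothing is actually executed here): concatenating user input into one shell string lets a ';' smuggle in a second command, while passing arguments as a list keeps the input inert:

```python
# Toy illustration of why command injection works and how to avoid it.
# We only build the command; no shell is invoked.
def unsafe_command(hostname):
    # VULNERABLE pattern: concatenating user input into one shell string.
    # Input like "example.com; cat /etc/passwd" smuggles in a second
    # command when the string is later run with shell=True.
    return "nslookup " + hostname

def safe_command(hostname):
    # Safer pattern: pass the program and its arguments as a list, so the
    # entire user input is one argument, never interpreted as shell syntax.
    return ["nslookup", hostname]

malicious = "example.com; cat /etc/passwd"
print(unsafe_command(malicious))  # the ';' would start an injected command
print(safe_command(malicious))    # a single argument; nothing to inject
```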

54
Q

Dylan works for a medium-sized business that has been moving its processing and data storage to the public cloud using Platform as a Service (PaaS). She now has 100 virtual machines running and is starting to worry about keeping everything up to date with security patches in particular. What can be used in the cloud to facilitate patch management?

A. Orchestration
B. Correlation
C. Live migration
D. Message queues

A

A. Orchestration

Explanation:
Orchestration enables the automation of patch management tasks. It can automatically scan cloud environments for vulnerabilities, identify systems that require patches, and initiate the patching process based on predefined rules or policies. This reduces manual effort, minimizes human errors, and ensures consistent patch deployment across the cloud infrastructure.
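As a rough sketch of such an orchestration rule (the inventory model and the patch ID below are invented for illustration, not any real cloud API):

```python
# Hypothetical orchestration rule for patch management: scan an inventory,
# identify hosts missing a required patch, and queue one patch job each.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    installed_patches: set = field(default_factory=set)

REQUIRED_PATCH = "KB-2024-001"  # illustrative patch identifier

def hosts_needing_patch(inventory, patch=REQUIRED_PATCH):
    """Scan step: return hosts where the required patch is missing."""
    return [h for h in inventory if patch not in h.installed_patches]

def build_patch_jobs(inventory):
    """Orchestration step: emit one patch job per non-compliant host."""
    return [{"host": h.name, "action": "apply", "patch": REQUIRED_PATCH}
            for h in hosts_needing_patch(inventory)]

fleet = [Host("vm-01", {"KB-2024-001"}), Host("vm-02"), Host("vm-03")]
print(build_patch_jobs(fleet))  # jobs only for vm-02 and vm-03
```

Running the rule on a schedule is what replaces the manual effort of checking 100 virtual machines one by one.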

Live migration is a technique used in virtualization environments that allows a running Virtual Machine (VM) to be moved from one physical host to another without interrupting its operation or impacting the services running on the VM. It enables administrators to dynamically migrate VMs across different physical hosts for various purposes, such as load balancing, hardware maintenance, energy optimization, or improving resource utilization.

Cloud message queues are a type of asynchronous communication service provided by cloud platforms that enable the decoupling of components within distributed systems or microservice architectures. They provide a reliable and scalable way to send, store, and retrieve messages between different parts of an application or between different applications running in the cloud.

Security Information and Event Management (SIEM) correlation is a process of analyzing and correlating security events and log data from various sources within a network or system to identify patterns, relationships, and potential security incidents. SIEM correlation helps organizations gain better visibility into their security posture, detect threats, and respond effectively to security incidents.

55
Q

A medium-sized corporation has been looking for a technology that would aid their business in its decision-making processes. They are trying to learn from all the data that they have so that they can make better decisions. In particular, they are trying to figure out how to reduce the costs in their manufacturing processes. What could they use?

A. Descriptive analytics
B. Prescriptive analytics
C. Machine learning
D. Predictive analytics

A

B. Prescriptive analytics

Explanation:
Prescriptive analytics uses mathematical models and optimization techniques to recommend actions or decisions based on a given set of constraints, objectives, and possible outcomes (such as how to gain a competitive advantage, reduce costs, and improve performance).

Predictive analytics focuses on analyzing historical data and predicting future outcomes.

Descriptive analytics works to describe the data. It looks to analyze the information and provide details regarding that data.

Machine learning is designed to apply the principles of data science to uncover hidden knowledge in our data.

The other three answer options only begin to help the corporation. Prescriptive analytics takes it a step further with information on what they should do about this knowledge.

56
Q

Which of the following is used to mitigate and control customer requests for resources if the environment doesn’t have enough resources available to meet the requests?

A. Reservations
B. Objects
C. Limits
D. Shares

A

D. Shares

Explanation:
The concept of shares is that spare CPU and memory resources are available for virtual machines that need them. The flexibility of the shared environment that is the cloud makes it possible for virtual machines to expand to meet users' needs and shrink back; this is the elasticity of the cloud. It is the shared space that they expand into, generally on a first-come, first-served basis.

Reservations set aside some of that CPU and memory for a specific virtual machine.

A limit restricts a virtual machine’s ability to expand past a certain point.

An object is a file of some type. It could be a virtual machine image, video, document, spreadsheet, and so on.
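The interplay of reservations, limits, and shares can be sketched with hypothetical numbers (an illustration of the concepts only, not any provider's actual scheduler):

```python
# Illustrative sketch of carving up a host's CPU among tenants:
# reservations are granted first, the leftover is split by share weight,
# and no tenant may exceed its limit.
HOST_CPU = 32  # total vCPUs in the pool (example value)

tenants = {
    # name: reservation = guaranteed minimum, limit = hard cap, shares = weight
    "org-a": {"reservation": 8, "limit": 16, "shares": 2},
    "org-b": {"reservation": 4, "limit": 24, "shares": 1},
}

def allocate(host_cpu, tenants):
    """Grant reservations, then divide the leftover by share weight,
    capping each tenant at its limit."""
    alloc = {name: t["reservation"] for name, t in tenants.items()}
    leftover = host_cpu - sum(alloc.values())
    total_shares = sum(t["shares"] for t in tenants.values())
    for name, t in tenants.items():
        extra = leftover * t["shares"] // total_shares
        alloc[name] = min(t["limit"], alloc[name] + extra)
    return alloc

print(allocate(HOST_CPU, tenants))  # {'org-a': 16, 'org-b': 10}
```

Even if org-b spikes, org-a never drops below its reservation of 8, which is exactly the guarantee the reservations question later in this deck describes.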

57
Q

A cloud operator is working on a current issue that has been identified within their Infrastructure as a Service (IaaS) public cloud deployment. After an Indicator of Compromise (IoC) from the Security Information and Event Manager (SIEM) pointed to a possible compromise, the Incident Response (IR) team was called in. The analysis of the alerts has led the network cloud operator to require visibility into the packets passing through their border router for what appears to be an ongoing attack.

What will this likely require?

A. A written contract and written permission
B. Input and involvement from the cloud provider
C. No special requirements are needed
D. Permission from all cloud customers

A

C. No special requirements are needed

Explanation:
This is an IaaS deployment. The edge router in the cloud is virtual within the IaaS deployment. The physical edge router in the cloud provider's network is not visible to the customer. The virtual router belongs to the customer and runs the customer's own operating system. They are the only ones who have traffic passing through that router, unless, of course, a bad actor is sending data through it.

In both Platform as a Service (PaaS) and Software as a Service (SaaS) deployments, the routers, virtual or physical, are not visible to the cloud customer. If customers do want to see that traffic, they would need to work with the cloud provider, because the routers belong to the cloud provider and carry traffic from other cloud customers. For this reason, "permission from all cloud customers," "a written contract and written permission," and "input and involvement from the cloud provider" are not correct.

58
Q

A covert government agency has hired highly skilled software developers to create a tool to infiltrate and control the power grid of an enemy state. The software is designed to slowly cause damage to the programmable logic computers (PLC) that control the physical systems of the power station. The software is also designed to send false information to the monitoring devices to reduce the chance that the damage will be noticed until it is too late.

What type of threat is this?

A. Command injection attack
B. Malicious insider
C. Advanced Persistent Threat (APT)
D. Denial of Service (DoS) attack

A

C. Advanced Persistent Threat (APT)

Explanation:
Advanced Persistent Threat (APT) is an attack, which aims to gain access to the network and systems while staying undetected. APTs will try not to do anything that could be disruptive, as their goal is to maintain access for as long as possible without raising any red flags.

The fundamental goal of the attack in the question is to take down the power grid. However, DoS is not the best answer because of who is attacking and how: the attack is far more sophisticated than a simple DoS attack.

Since this is nation state against nation state and the attack is slow, combined with the goal of not being detected, APT is the best answer.

A command injection is when an Operating System (OS) command is entered into a field that is used by users during normal application activity.

This is not a malicious insider. For that to be the answer, the question would need to describe an agent hired by the opposition government and planted inside the target organization with the intention of remaining there and causing damage from within.

59
Q

Tristan is the cloud information security manager working for a pharmaceutical company. They have connected to the community cloud that was built by the government health agency to advance science, diagnosis, and patient care. They also have stored their own data with a public cloud provider in the format of both databases and data lakes.

What have they built?

A. Hybrid cloud
B. Private cloud
C. Public cloud
D. Storage area network

A

A. Hybrid cloud

Explanation:
A hybrid cloud deployment model is a combination of two or more of the three options: public, private, and community. It could be public and private, private and community, or public and community, as in the question. A public cloud example is Amazon Web Services (AWS). A private cloud is built for a single company; fundamentally, it means that all the tenants on a single server are from the same company. A community example is the National Institutes of Health (NIH), which built a community cloud to advance science, diagnosis, and patient care.

A Storage Area Network (SAN) is the physical and virtual structure that holds data at rest. SAN protocols include Fibre Channel and iSCSI.

60
Q

Organization A and Organization B are both cloud customers using the same cloud provider and are even sharing resources on the same physical server. Organization B was hit with a Denial-of-Service (DoS) attack, causing them to use more resources than they would normally need. Fortunately, Organization A will always receive, at least, the minimum Central Processing Unit (CPU) and memory resources that they need to operate their services. Organization B’s DoS attack will also not knock them out of service.

Which concept guarantees that Organization A will always receive the amount of resources needed to run their services?

A. Pooling
B. Shares
C. Reservations
D. Limits

A

C. Reservations

Explanation:
Reservations refer to the minimum guaranteed amount of resources that a cloud customer will receive, regardless of the resources being used by other cloud customers. This guarantees that the cloud customer will always have, at the very least, the minimum amount of resources needed to power and operate their services. Because of ideas such as multitenancy, in which many cloud customers are utilizing the same pool of resources, reservations protect cloud customers in the event that a neighboring cloud customer experiences an attack that causes them to overuse resources, making them limited.

A limit is the maximum amount of resources a Virtual Machine (VM), application, container, etc. is allowed to use. This is actually a good thing to do so that something like this attack would not cause a serious financial strain on Organization B.

Pooling is the collection of the resources that a VM, application, container, etc. can pull resources from. These pools would be inside of a single server. It is the CPU, the memory, the storage, and the network capacity that must be divided among the current tenants.

Shares determine how whatever is left of the pool after the reservations are assigned gets divided.

61
Q

A cloud provider has received a notice that one of their customers is having trouble with a bad actor. Law enforcement has been involved in the investigation, and they have come to believe that there is critical information in the cloud provider’s logs. Because the logs contain sensitive information, potentially about many different customers, it is necessary for a judge to sign a warrant after hearing the facts of the case so far. In the meantime, it is critical that the cloud provider protects those logs to ensure that they will be available if the judge does sign the warrant.

What concept is used to ensure the existence of the information?

A. Subpoena
B. Stored communication
C. Legal hold
D. Warrant

A

C. Legal hold

Explanation:
Organizations or individuals may need to archive and retain data that meets specific requirements to be used in legal court proceedings. This type of data retention is known as legal hold. A legal hold occurs because law enforcement, a lawyer, or the regulators issue a document that says that the data must be protected and not destroyed.

A subpoena is defined in the Oxford dictionary as “a writ ordering a person to attend a court.”

A warrant is defined in the Oxford dictionary as “a document issued by a legal or government official authorizing the police or some other body to make an arrest, search premises, or carry out some other action relating to the administration of justice.”

The Stored Communications Act is a U.S. law that compels third parties to turn over "stored wire and electronic communications and transactional records." This applies to phone companies, service providers, and the like.

62
Q

Mordecai works within the Security Operations Center (SOC) for a marketing company. One of his team members has just reported that they have found an issue that needs to be addressed as soon as possible. They have discovered that bad actors could gain access to one of their critical systems because it is missing a critical security patch.

What threat is this?

A. System vulnerabilities
B. Accidental cloud data disclosure
C. Insecure Interfaces and Application Programming Interfaces (API)
D. Cloud storage data exfiltration

A

A. System vulnerabilities

Explanation:
The Cloud Security Alliance (CSA) lists four main categories of system vulnerabilities. They are as follows:

Zero-day vulnerabilities
Missing security patches
Configuration-based vulnerabilities
Weak or default credentials

Vulnerabilities are flaws. They can exist in any architecture and any service model and could be the responsibility of the customer or the cloud provider.

If the bad actors did access the data, it could be an accidental cloud data disclosure issue. But the team is reporting that the bad actor could gain access, not that they did gain access. This is a small detail, but that is what you need to look for in this test.

Since the bad actor did not get the data, the cloud storage data exfiltration option is also incorrect.

It is plausible that the system vulnerability is in the API. The question does not specify that, so the more generic answer of system vulnerabilities is better.

More details can be found in the CSA Pandemic 11 document, which is recommended reading for this test.

63
Q

Which SIEM feature is MOST vital to maintaining complete visibility in a multi-cloud environment?

A. Normalization
B. Log Centralization and Aggregation
C. Automated Monitoring
D. Investigative Monitoring

A

B. Log Centralization and Aggregation

Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:

Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats.
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
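The normalization step, for example, can be sketched as converting timestamps from several source formats into one canonical form (the format list below is an assumption for illustration):

```python
# Hypothetical sketch of SIEM normalization: log sources report
# timestamps in different formats, and the SIEM converts them all
# to one consistent representation (ISO 8601 here).
from datetime import datetime

KNOWN_FORMATS = [
    "%d/%b/%Y:%H:%M:%S",   # web-server style: 21/Mar/2024:10:05:00
    "%Y-%m-%d %H:%M:%S",   # database style:   2024-03-21 10:05:00
    "%b %d %H:%M:%S %Y",   # syslog style:     Mar 21 10:05:00 2024
]

def normalize_timestamp(raw):
    """Try each known source format and emit a single canonical form."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw!r}")

print(normalize_timestamp("21/Mar/2024:10:05:00"))  # 2024-03-21T10:05:00
print(normalize_timestamp("2024-03-21 10:05:00"))   # 2024-03-21T10:05:00
```

Once every source's events share one timestamp format, correlation across clouds becomes a simple sort-and-compare.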
64
Q

Jorge is working with a cloud provider in their data center. This data center has 1,240 servers using type 1 hypervisors to provide Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) to their customers. In addition to the servers, they have routers, switches, firewalls, and a Storage Area Network (SAN). They have Uninterruptible Power Supplies (UPS) and generators to maintain power to the server racks if there's a power outage. To supply fuel to the generators, they also have full fuel tanks. To ensure that their data center does not overheat, they also have chillers and cooling units. What tier data center is this?

A. Tier 2
B. Tier 4
C. Tier 3
D. Tier 1

A

A. Tier 2

Explanation:
The Uptime Institute publishes the most widely used standard for data center topologies. The standard is based on a series of four tiers. The standard also incorporates compliance tests. A tier 2 data center has generators, UPS devices, pumps, and fuel tanks to ensure continued operations within the datacenter.

A tier 1 data center has exactly what it needs: a space dedicated to IT operations with its own dedicated cooling systems.

A tier 3 data center adds a redundant distribution path, the path the power takes. It also moves up to concurrently maintainable infrastructure. The servers and other equipment have the capacity to be hot swappable. It is not necessary to shut down equipment to swap out, for example, a power supply or line card. It is also often described as 2n, meaning it has double the equipment that it needs.

A tier 4 data center adds critical fault tolerance to the IT infrastructure. It is also often described as 2n+1. It has more than double the equipment needed for normal operations.

65
Q

Nabil works for a pharmaceutical company and is responsible for the protection of customers’ data. They have many drug trials that they perform as they are developing new drugs. The test results and comments from the patients who are trying these new drugs do need to be shared with the researchers. To protect the patients, the data must be anonymized.

What exactly must be removed?

A. Direct and indirect identifiers
B. All identification
C. Direct identifiers only
D. Indirect identifiers only

A

A. Direct and indirect identifiers

Explanation:
Anonymization is removing both direct and indirect identifiers. If only the direct identifiers are removed, it is called de-identification. Removing all the identifiers is basically what is being done, but the answer with direct and indirect identifiers is more technically accurate.

Direct identifiers in the context of Personally Identifiable Information (PII) refer to specific pieces of information that can be used to directly identify an individual. These identifiers are unique to an individual and can be linked directly to their identity without any further information.

Indirect identifiers in the context of PII refer to pieces of information that, on their own, may not directly identify an individual but can still be used to indirectly identify or link to an individual when combined with other data or in conjunction with additional information.

What counts as direct versus indirect is somewhat debatable; it often depends on the applicable laws and regulations and how they have been written. Some examples of direct identifiers are full name, Social Security Number (SSN), and passport number. Some examples that are probably considered indirect are Vehicle Identification Number (VIN), religious beliefs, gender, and educational background.
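A minimal sketch of the difference (the identifier lists here are example assumptions; real classifications come from the applicable law):

```python
# Illustrative only: which fields count as direct vs. indirect is set by
# the applicable regulation, not by this demo.
DIRECT_IDENTIFIERS = {"full_name", "ssn", "passport_number"}
INDIRECT_IDENTIFIERS = {"vin", "religion", "gender", "education"}

def de_identify(record):
    """Remove only direct identifiers (de-identification)."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

def anonymize(record):
    """Remove both direct and indirect identifiers (anonymization)."""
    banned = DIRECT_IDENTIFIERS | INDIRECT_IDENTIFIERS
    return {k: v for k, v in record.items() if k not in banned}

patient = {"full_name": "A. Example", "ssn": "000-00-0000",
           "gender": "F", "trial_result": "improved"}
print(de_identify(patient))  # gender (indirect) survives
print(anonymize(patient))    # only the trial data survives
```

The de-identified record still carries indirect identifiers that could be re-linked to a patient, which is exactly why anonymization must strip both kinds.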

66
Q

An application utilizes a browser token to maintain state, but it doesn't have any validation processes in place to ensure that the token is submitted by the original, valid holder of the token. An attacker was able to hijack a browser token and gain unauthorized access to an application.

Which of the OWASP Top 10 vulnerabilities is this an example of?

A. Security misconfiguration
B. Identification and authentication failures
C. Software and data integrity failures
D. Injection

A

B. Identification and authentication failures

Explanation:
Identification and authentication failures (formerly Broken authentication) occur when applications do not have the proper controls or processes in place to secure their authentication and session tokens. This type of vulnerability allows for attackers to hijack session tokens and use them for their own nefarious purposes.
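One common mitigation can be sketched as binding each token to attributes of the client that obtained it and rejecting replays from elsewhere (an assumed design for illustration, not code prescribed by OWASP):

```python
# Minimal sketch: bind each session token to a fingerprint of the client
# that obtained it, and reject the token when replayed by another client.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # per-deployment secret
sessions = {}  # token -> expected client fingerprint

def client_fingerprint(ip, user_agent):
    data = f"{ip}|{user_agent}".encode()
    return hmac.new(SERVER_KEY, data, hashlib.sha256).hexdigest()

def issue_token(ip, user_agent):
    token = secrets.token_urlsafe(32)  # unguessable session token
    sessions[token] = client_fingerprint(ip, user_agent)
    return token

def validate_token(token, ip, user_agent):
    expected = sessions.get(token)
    return expected is not None and hmac.compare_digest(
        expected, client_fingerprint(ip, user_agent))

tok = issue_token("198.51.100.7", "Mozilla/5.0")
print(validate_token(tok, "198.51.100.7", "Mozilla/5.0"))  # True
print(validate_token(tok, "203.0.113.9", "curl/8.0"))      # False (hijack)
```

A stolen token presented from a different client no longer validates, which closes the specific gap the question describes.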

Injection refers to a class of vulnerabilities that occur when untrusted data is sent to an interpreter or a command execution component without proper validation or sanitization. This includes Structured Query Language (SQL) and Operating System (OS) command injections.

Software and data integrity failures occur when the integrity of software or data is compromised, leading to unauthorized modifications, data corruption, or the introduction of malicious code.

Security misconfiguration, which now includes XML External Entities (XXE), refers to the improper configuration or implementation of security controls and mechanisms, leaving vulnerabilities that can be exploited by attackers. This includes things like leaving default passwords and configurations in place in production.

67
Q

At which stage of the incident response process might the SOC transfer responsibility over to the IRT?

A. Respond
B. Detect
C. Post-Incident
D. Recover

A

B. Detect

Explanation:
An incident response plan (IRP) should lay out the steps that the incident response team (IRT) should carry out during each phase of the incident management process. This process is commonly broken up into several phases, including:

Prepare: During the preparation stage, the organization develops and tests the IRP and forms the IRT.
Detect: Often, detection is performed by the security operations center (SOC), which performs ongoing security monitoring and alerts the IRT if an issue is discovered. Issues may also be raised by users, security researchers, or other third parties.
Respond: At this point, the IRT investigates the incident and develops a remediation strategy. This phase will also involve containing the incident and notifying relevant stakeholders.
Recover: During the recovery phase, the IRT takes steps to restore the organization to a secure state. This could include changing compromised passwords and similar steps. Additionally, the IRT works to address and remediate the underlying cause of the incident to ensure that it is completely fixed.
Post-Incident: After the incident, the IRT should document everything and perform a retrospective to identify potential room for improvement and try to identify and remediate the root cause to stop future incidents from happening.

68
Q

During the development/coding phase of the software development lifecycle, testing should begin. What type of testing is likely to be done in this phase?

A. Interactive Application Security Testing (IAST)
B. Quality assurance testing
C. Functional and nonfunctional testing
D. Dynamic Application Security Testing (DAST)

A

C. Functional and nonfunctional testing

Explanation:
As each portion of code is created and completed, functional testing is done on it by the development team. This testing is done to ensure that it compiles correctly and operates as intended.

DAST requires the application to be functional. At this point, the code is still being written, so it is too early. The same would be true for quality assurance testing and IAST. DAST is a method of testing to look for any possible attack that can be done from the user’s interface. IAST is a method of testing that analyzes the code as the program is run. Quality assurance really is a combination of all these methods of testing and more.
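For contrast with the later testing types, a unit test written by a developer during the coding phase might look like this minimal sketch (the function and its rule are hypothetical):

```python
# A unit test exercises one component in isolation during the coding phase.
def normalize_email(address):
    # Trim whitespace and lowercase so comparisons are consistent.
    return address.strip().lower()

def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@example.com") == "bob@example.com"
```

Tests like this run as each piece of code is completed, long before the full application exists for DAST or IAST to exercise.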

69
Q

Which of the following enables a cloud provider to offer services on a pay-by-usage basis?

A. Resource Pooling
B. Multitenancy
C. Metered Service
D. On-Demand Self-Service

A

C. Metered Service

Explanation:
The six common characteristics of cloud computing include:

Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure as needed, leasing additional storage, processing power, or specialized components and gaining access to them on demand.
Resource Pooling: Cloud customers lease resources as needed from a shared pool maintained by the cloud provider. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint as needed, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
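To illustrate metered service, the bill is simply measured usage multiplied by per-unit rates. The rates and resource names below are hypothetical:

```python
# Hypothetical per-unit rates; metered service bills only measured usage.
RATES = {"compute_hours": 0.25, "storage_gb": 0.50}

def monthly_bill(usage):
    # Sum rate * measured amount for each metered resource.
    return sum(RATES[resource] * amount for resource, amount in usage.items())
```

A customer who used 100 compute hours and 10 GB of storage would be billed 100 * 0.25 + 10 * 0.50 = 30.0 under these rates.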


70
Q

Chantria is working with the software development team as part of the security team. The application they are building uses many different open-source software (OSS) components from GitHub. Open-source software has become so prevalent in today’s software development that it is critical that…

A. It is integrated properly into the software being developed
B. It is developed by trained software developers
C. It is tested by the source DevOps environment
D. Vendor management is done

A

D. Vendor management is done

Explanation:
The widespread use of open-source software has caused organizations to focus more on vendor management. The Log4j vulnerability is an example of what can happen when that management is neglected. Integrating the OSS code properly is certainly something that needs to be done, but a bigger security concern is the source of the OSS, the updates to that code, and other concerns of that nature.

Having trained software developers who test the open-source code would be ideal, but you cannot rely on that being done. The work must be done by Chantria and the software developers and testers.

71
Q

Which of these cloud-related factors has the biggest influence on vendor lock-in?

A. Portability
B. Reversibility
C. Resiliency
D. Interoperability

A

D. Interoperability

Explanation:
Interoperability is defined, in ISO/IEC 17788, as “the ability of two or more systems or applications to exchange information and to mutually use the information that has been exchanged.” If a specific cloud provider uses a proprietary format for virtual machine images [in Infrastructure as a Service (IaaS)] or for how data is stored in Platform or Software as a Service (PaaS or SaaS), then it will be difficult to take those images or that data to another cloud provider. That is a vendor lock-in problem.

Portability is the ability to transfer data from one system to another without having to re-enter the data. That is very close to a correct answer to the question. However, interoperability is the immediate cause of vendor lock-in; portability is about moving the data.

Reversibility is defined in ISO/IEC 17788 as the “process for cloud service customers to retrieve their cloud service customer data and application artifacts and for the cloud service provider to delete all cloud service customer data as well as contractually specified cloud service derived data after an agreed period.” So reversibility is about getting out.

Resiliency is about maintaining a specific level of service. That is the furthest from vendor lock-in.

72
Q

Sebastian is working on the contract negotiations with a cloud provider. One of the concerns that they have is the division of responsibility between them, the Cloud Customer (CC) and the Cloud Service Provider (CSP). One of the options that they are looking at is Platform as a Service (PaaS).

In the PaaS deployment model, who would be responsible for network controls?

A. Cloud Customer (CC)
B. Cloud Service Provider (CSP)
C. Both customer and provider
D. Cloud regulators

A

B. Cloud Service Provider (CSP)

Explanation:
In both the server-based and the server-less deployment options within PaaS, the CSP is responsible for the network controls. This is also true for Software as a Service (SaaS).

In Infrastructure as a Service (IaaS), it would be shared. The CSP is responsible for the physical network controls. The CC is responsible for the virtual network controls.

Cloud regulators are not responsible. They may assess controls. They may assess fines. But they are not responsible.

73
Q

A corporation that is considered heavily regulated would need to be in compliance with what U.S. standards?

A. Sarbanes Oxley (SOX)
B. Personal Information Protection and Electronic Documents Act (PIPEDA)
C. Gramm-Leach-Bliley Act (GLBA)
D. North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP)

A

D. North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP)

Explanation:

The North American Electric Reliability Corporation (NERC) is a not-for-profit organization that collaborates with industry stakeholders to set and enforce the Critical Infrastructure Protection (CIP) standards for the operation and monitoring of the power system. National infrastructure (power, water, etc.) is heavily regulated because the country relies on it.

GLBA is a U.S. regulation that requires the protection of personal information held by financial institutions. It is related to Sarbanes-Oxley (SOX), which requires the protection of financial data.

PIPEDA is a Canadian regulation that requires the protection of personal information.

74
Q

A cloud provider has received a notice that one of their customers is having trouble with a bad actor. Law enforcement has been involved in the investigation, and they have come to believe that there is critical information in the cloud provider’s logs. Because the logs contain sensitive information, potentially about many different customers, it is necessary for a judge to sign a warrant after hearing the facts of the case so far. In the meantime, it is critical that the cloud provider protects those logs to ensure that they will be available if the judge does sign the warrant.

What concept is used to ensure the existence of the information?

A. Subpoena
B. Stored communication
C. Legal hold
D. Warrant

A

C. Legal hold

Explanation:
Organizations or individuals may need to archive and retain data that meets specific requirements to be used in legal court proceedings. This type of data retention is known as legal hold. A legal hold occurs because law enforcement, a lawyer, or the regulators issue a document that says that the data must be protected and not destroyed.

A subpoena is defined in the Oxford dictionary as “a writ ordering a person to attend a court.”

A warrant is defined in the Oxford dictionary as “a document issued by a legal or government official authorizing the police or some other body to make an arrest, search premises, or carry out some other action relating to the administration of justice.”

The Stored Communications Act is a U.S. law that compels third parties to turn over “stored wire and electronic communications and transactional records.” This applies to phone companies, service providers, and the like.

75
Q

An organization is in the middle of creating a new cloud-based application that will use Application Programming Interfaces (APIs) to communicate with their partner companies. Due to the design of the application, they need to use multiple data formats, including both JavaScript Object Notation (JSON) and eXtensible Markup Language (XML), in their cloud deployment.

Which API type should they use?

A. Representational State Transfer (REST)
B. SOAP (formerly Simple Object Access Protocol)
C. Remote Procedure Call (RPC)
D. JavaScript Object Notation- Remote Procedure Call (JSON-RPC)

A

A. Representational State Transfer (REST)

Explanation:
REpresentational State Transfer (REST) is a software architectural scheme that supports multiple data types, including both JSON and XML.
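REST itself does not mandate a data format; a RESTful service can render the same resource as JSON or XML, typically based on the client's Accept header. A minimal serialization sketch with hypothetical field names:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical resource a REST endpoint might return.
record = {"id": "42", "status": "active"}

def to_json(data):
    # Render the resource as JSON (e.g., for Accept: application/json).
    return json.dumps(data)

def to_xml(data, root_tag="resource"):
    # Render the same resource as XML (e.g., for Accept: application/xml).
    root = ET.Element(root_tag)
    for key, value in data.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")
```

The same record round-trips through either representation, which is exactly the flexibility SOAP (XML only) lacks.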

SOAP supports only the use of XML-formatted data types, so it would not work for the organization.

RPC can be considered an API style. It is oriented toward invoking commands, whereas REST makes Create, Read, Update, Delete (CRUD) operations available to your application.

JSON-RPC uses just JSON, not XML, which is what the question is asking for.

The website for Smashing Magazine had a good write-up about some API implementations since there is very little in the ISC2 books.

76
Q

John, an information security manager, has been advised by an ethical hacker that they have discovered a problem. The ethical hacker found the problem while trying to log in to the software developers’ portal. The ethical hacker, let’s call them Jay, was trying to recall whether they had a login at all. As Jay was trying to log in, they stumbled on a “feature” of the login: the site was performing a username look-up for every character they typed on the signup page. As it turns out, the site did not use cookies with the connection, and Jay was able to determine 20% of the Fortune 1000’s login usernames.

Which one of the Pandemic 11’s threats is this?

A. Lack of cloud security architecture and strategy
B. Insecure interfaces and Application Programming Interfaces (API)
C. Misconfiguration and exploitation of serverless and container workloads
D. Misconfigurations and inadequate change control

A

B. Insecure interfaces and Application Programming Interfaces (API)

Explanation:
The Cloud Security Alliance’s (CSA) Pandemic 11 includes Insecure interfaces and Application Programming Interfaces (API) as the number two threat today. The API was not authenticating; it was simply performing lookups as the login name was being typed. This scenario was played out by an ethical hacker against John Deere’s web-based login site, which allowed customers to manage their million-dollar machinery. For more information, refer to the Pandemic 11 anecdotes and examples section.

Lack of cloud security architecture and strategy may be partly to blame for this failure, but that threat is about a lack of planning, whereas this scenario is a specific software-level problem.

Misconfigurations and inadequate change control are the setups of the computing assets (for example, giving excessive permissions, using default credentials, or not patching systems). Again, the scenario is a code-level problem.

Misconfiguration and exploitation of serverless and container workloads refer to issues with the Cloud Service Provider (CSP) failing to secure the virtual environments that functions and containers are executed in.

77
Q

What is the FIRST stage in the Secure Software Development LifeCycle (SSDLC) in which cloud information security specialists should be involved when software is being developed for a Platform as a Service (PaaS) environment?

A. Testing
B. Requirement gathering and feasibility
C. Design
D. Development/coding

A

B. Requirement gathering and feasibility

Explanation:
Cloud information security specialists should be a part of every single phase of the SSDLC, including the first stage: requirement gathering and feasibility. It is much more efficient to build security into an application as it’s being developed than to attempt to add security features later (after it’s in production). During the requirement gathering and feasibility stage of the SSDLC, cloud information security specialists look at the risks associated with the project. The mention of the PaaS environment in the question does not affect this decision.

78
Q

Virtual Update Manager (VUM) was developed by which of the following?

A. Linux
B. Apple
C. Microsoft
D. VMware

A

D. VMware

Explanation:
Virtual Update Manager, more formally vSphere Update Manager (VUM), was developed by VMware. It is used to update both the vSphere (ESXi) hosts and the virtual machines that run on them.

(You don’t need to know anything vendor specific for the exam, but this information might be useful at your work.)

79
Q

In the shared responsibility model, the consumer will always be responsible for which of the following across the Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) service models?

A. Application security
B. Identity and access management
C. Governance, Risk management, and Compliance (GRC)
D. Platform security

A

C. Governance, Risk management, and Compliance (GRC)

Explanation:
In any cloud service model (IaaS, PaaS, or SaaS), the cloud consumer is responsible for control over the data they store in the cloud. This requires that they do their own Governance, Risk management, and Compliance (GRC).

Application security is shared between the customer and the cloud provider and includes setting up and managing identity and access management.

Platform security is the responsibility of the provider in SaaS. It is a shared responsibility in PaaS and the customer’s responsibility in IaaS.

80
Q

An information security professional who is working with the software development teams is responsible for ensuring security features are in place to protect a web application. What tool can be used to prevent possible injection attacks?

A. Security patching
B. Antimalware programs
C. Proper logging
D. Input validation

A

D. Input validation

Explanation:
Input validation is a crucial aspect of secure software development and refers to the process of validating and sanitizing user input to ensure its integrity, reliability, and security. The goal of input validation is to prevent malicious or unintended data from being processed or executed by a system, thereby reducing the risk of various vulnerabilities, such as injection attacks (e.g., SQL injection, XSS).
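A common input validation approach is allowlisting: accept only input that matches the expected format and reject everything else. A minimal sketch with a hypothetical username rule:

```python
import re

# Allowlist: 3-20 characters, letters, digits, and underscore only.
# The specific rule is a hypothetical example.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def is_valid_username(value):
    # fullmatch ensures the whole string conforms, not just a substring.
    return bool(USERNAME_RE.fullmatch(value))
```

Typical injection payloads contain quotes, spaces, or angle brackets, so they fail the allowlist before ever reaching the interpreter.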

Proper logging is an essential practice in software development and system administration that involves recording relevant events, activities, and errors in a structured and systematic manner. Logging serves several important purposes, including troubleshooting, debugging, performance analysis, security monitoring, and compliance auditing. But it cannot prevent an injection attack.

Security patching, also known as software patching or patch management, is the process of applying updates or fixes to software systems, applications, and devices to address identified security vulnerabilities. It is a critical practice for maintaining the security and integrity of systems and protecting them from potential exploits and attacks.

Antimalware programs, also known as antivirus software or security software, are designed to detect, prevent, and remove malicious software, commonly referred to as malware, from computers and other devices. These programs play a crucial role in safeguarding systems and networks against various types of malware, such as viruses, worms, Trojans, ransomware, spyware, and adware.

81
Q

You are a cloud administrator and are configuring live migration for a Virtual Machine (VM). What does live migration provide?

A. Redundant hardware
B. Fault tolerance
C. Trusted Platform Module
D. Data replication

A

B. Fault tolerance

Explanation:
Virtualization technologies play a crucial role in cloud environments, enabling the creation and management of Virtual Machines (VMs) or containers. Fault tolerance mechanisms like live migration and high availability clusters are implemented in the virtualization layer. Live migration allows VMs to be moved to another physical host without disruption, ensuring continuity of service during hardware maintenance or failures. High availability clusters involve replicating VMs across multiple hosts, so if one host fails, the workload automatically fails over to another host.

TPM stands for Trusted Platform Module. It is a specialized hardware component that provides secure storage and processing capabilities for cryptographic keys, measurements, and other security-related functions. TPMs are typically integrated into computer systems, including desktops, laptops, servers, and IoT devices, to enhance their security and enable trusted computing features. But they do not provide a backup or alternative chip.

Data replication and backup strategies are implemented to create multiple copies of data across geographically distributed storage systems, providing resilience against hardware failures and disasters. This is not what live migration provides.

Redundant hardware refers to the physical servers themselves. Live migration relies on redundant hardware (a VM can only move dynamically to another physical server if one has been built), but it does not provide it.

82
Q

Monaco has been working with a cloud provider as one of the cloud operators for some time now. Her job is the management of the virtual functionality in one of their data centers. One of the cloud provider’s customers uses a Platform as a Service (PaaS) implementation. This customer has built and sells a Software as a Service (SaaS) cloud offering. All these companies and elements have something in common with each other.

What is the key functionality of applications and the management of the cloud that they have in common?

A. Trusted Platform Modules (TPMs)
B. Hypervisors
C. REpresentational State Transfer (REST)
D. Application Programming Interfaces (API)

A

D. Application Programming Interfaces (API)

Explanation:
In a cloud environment, the key functionality of applications and the management of the cloud are based on APIs. It’s very important that APIs are implemented in a secure and appropriate manner. When possible, encryption should be used for API communication. One of the API options is REpresentational State Transfer (REST). There is also SOAP, RPC, and GraphQL.

Trusted Platform Modules (TPMs) are a specialized hardware component that provides secure storage and cryptographic functions to enhance the security of computing systems. TPMs are typically integrated into the motherboard or added as a separate chip.

A hypervisor, also known as a Virtual Machine Monitor (VMM), is a software, firmware, or hardware layer that enables the virtualization of computer hardware resources, allowing multiple Operating Systems (OS) or Virtual Machines (VMs) to run simultaneously on a single physical machine.

83
Q

Which of the following is NOT true of a legal hold?

A. It can be initiated by an investigation by a regulator or law enforcement
B. It can be initiated by a lawsuit by a private entity
C. It requires the organization to cease all data destruction efforts
D. It requires the organization to retain all relevant data

A

C. It requires the organization to cease all data destruction efforts

Explanation:

A legal hold can be initiated by an investigation by law enforcement or a regulator or a lawsuit brought by a private entity. It requires the organization to retain all data relevant to the investigation or lawsuit, but the organization can continue to delete any unrelated data.

84
Q

Multivendor network connectivity is MOST related to which of the following risk considerations of cloud computing?

A. Data Center Location
B. Compliance
C. Downtime
D. General Technology Risks

A

C. Downtime

Explanation:
Cloud computing risks can depend on the cloud service model used. Some risks common to all cloud services include:

CSP Data Center Location: The location of a CSP’s data center may impact its exposure to natural disasters or the risk of regulatory issues. Cloud customers should verify that a CSP’s locations are resilient against applicable natural disasters and consider potential regulatory issues.
Downtime: If a CSP’s network provider is down, then its services are unavailable to its customers. CSPs should use multivendor network connectivity to improve network resiliency.
Compliance: Certain types of data are protected by law and may have mandatory security controls or jurisdictional limitations. These restrictions may affect the choice of a cloud service model or CSP.
General Technology Risks: CSPs are a big target for attackers, who might exploit vulnerabilities or design flaws to attack CSPs and their customers.


85
Q

An organization has implemented a Security Information and Event Manager (SIEM) solution in the cloud as a result of years of Information Technology (IT) and Security Operation Center (SOC) experience. What is the main security benefit of SIEM technology?

A. To automatically block traffic that appears suspicious
B. Enhanced analysis capabilities
C. To send alerts to administrators about suspicious activity
D. To encrypt data to the servers

A

B. Enhanced analysis capabilities

Explanation:
The main benefit of a SIEM is its ability to correlate and analyze the volume of logs that comes from all the IT technology, from firewalls to Intrusion Detection Systems (IDS). As the cloud amplifies the number of devices (they become virtual as well), the number of logs will only increase, and it is not humanly possible to read them all. Having technology that analyzes the logs to find Indicators of Compromise (IoC) so that alerts can be sent to administrators is essential today.

A SIEM is a system that other devices send their logs to. It cannot block traffic. A firewall or Intrusion Prevention System (IPS) is needed for that.

A SIEM cannot encrypt traffic. The logs should be encrypted for transmission to the SIEM. Once the logs arrive at the SIEM, they are decrypted. This would normally be done with Transport Layer Security (TLS). TLS only protects the transmission of the data, not the storage.
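As a simplified illustration of the correlation a SIEM performs, the sketch below (hypothetical event format and threshold) counts failed logins per source IP and flags those exceeding a limit as a potential Indicator of Compromise:

```python
from collections import Counter

def failed_login_iocs(events, threshold=5):
    # Correlate failed-login events by source IP across all log sources;
    # flag any IP whose failure count reaches the threshold.
    failures = Counter(
        e["src_ip"] for e in events if e["action"] == "login_failed"
    )
    return {ip for ip, count in failures.items() if count >= threshold}
```

A real SIEM does this across millions of events from many device types, which is exactly the volume no human could read manually.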

86
Q

Rafferty is a cloud information security manager who is working with the Incident Response Team (IRT). They have just detected a possible compromise of one of their systems. An Indication of Compromise (IoC) has been reported by the Security Information and Event Manager (SIEM). What the SIEM has seen indicates that a user has clicked on a Uniform Resource Locator (URL) that contains malicious script.

What type of attack is this?

A. Identification and authentication failures
B. Security misconfiguration
C. Insecure design
D. Cross-site scripting

A

D. Cross-site scripting

Explanation:
This is a Cross-Site Scripting (XSS) attack. XSS was merged into the injection category in the OWASP 2021 list of threats. Cross-site scripting is a type of injection attack in which a malicious actor can send data to a user’s browser without going through proper validation. There are three different types of XSS attacks, and this scenario is specifically a reflected XSS. The three types are:

Reflected XSS: The injected script is embedded in a URL or input field and then reflected back in the response from the server. The attacker typically tricks the victim into clicking a malicious link containing the injected script.
DOM-based XSS: This type of XSS occurs when the vulnerability exists in the client-side code (typically JavaScript) that manipulates the Document Object Model (DOM) of the webpage. The attack targets the client-side code directly, modifying the DOM to execute malicious actions.
Stored XSS: The injected malicious script is permanently stored on the target server and served to multiple users when they access the affected page or view the compromised content.
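The standard defense against reflected and stored XSS is output encoding: user-supplied text is escaped before it is placed into HTML, so an embedded script is rendered as inert text. A minimal sketch using Python's standard library:

```python
import html

def render_comment(user_input):
    # Encode user-controlled text so any embedded <script> tag is
    # displayed literally instead of executed by the browser.
    return "<p>" + html.escape(user_input) + "</p>"
```

Combined with input validation, this breaks the reflection step that the attack in the question relies on.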

Insecure design is from the beginning of the software development lifecycle. It involves performing threat modeling and other actions, not an actual attack as described in the question.

Security misconfiguration is when software has insecure configurations or is left with its default configuration. It describes how the developers or operations teams configured the software, not the attack itself as in the question.

Identification and authentication failures include things like not adhering to the least privilege principle or leaving default accounts and default passwords on systems.

87
Q

Cloud security is a difficult task, made all the more difficult by laws and regulations imposing restrictions on cross-border data transfers. The actual hardware in the cloud can be located anywhere, so it is critical to understand where your data resides. Which of the following statements is true regarding who is responsible for the data?

A. The cloud service provider (CSP) retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
B. The cloud administrator retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
C. Both the cloud service provider (CSP) and the cloud service customer (CSC) retain responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
D. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed

A

D. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed

Explanation:
Regardless of whether cloud or non-cloud services are utilized, the data controller [the Cloud Service Customer (CSC)] is ultimately responsible for the data’s security. Cloud security encompasses more than data protection; it also encompasses applications and infrastructure.

According to the European Union (EU) General Data Protection Regulation (GDPR) requirements, the cloud provider is responsible for protecting the data in its care. The reason the answer that says “both” is not correct is that the correct answer contains the word ultimate. Ultimately, the cloud customer is always responsible for their data.

This question also does not mention GDPR, so it is difficult to determine if there is a legal responsibility for the data while it is in the cloud provider’s care, as we do not actually know where on the planet the question is referring to.

88
Q

Jia Li is working with the cloud data architect to design the storage types that they will be using in their new cloud service for their company. They need a storage type for all their documents, photos, and spreadsheets. Which type of storage system places files in a flat organization of containers and uses unique identifiers to retrieve them?

A. Object storage
B. Software-defined storage
C. Data lake
D. Volume storage

A

A. Object storage

Explanation:
Object storage utilizes a flat system and assigns files and objects a key value (ID) that can be used to retrieve the files later. This differs from traditional storage, which uses a directory and tree structure.
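Conceptually, object storage behaves like a flat key-value store: each object receives a unique identifier at write time, and that identifier is the only handle used to retrieve it, with no directory tree involved. A minimal in-memory sketch (hypothetical function names):

```python
import uuid

# Flat object store sketch: no folders, just unique IDs mapped to objects.
store = {}

def put_object(data):
    # Assign a unique identifier (the "key value") to the stored object.
    object_id = str(uuid.uuid4())
    store[object_id] = data
    return object_id

def get_object(object_id):
    # Retrieval is by ID only; there is no path or directory lookup.
    return store[object_id]
```

This is the model behind services like S3-style buckets, in contrast to the hierarchical folders of volume storage.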

Volume storage appears to the user of a virtual machine as a virtual drive. It is very similar in presentation to the hard drive on your personal computer. You can create folders to store your files/objects within them.

Software-defined storage is exactly that. There is software that defines the use of the actual storage. It decouples the storage software from the hardware. You can then add more drives quickly and easily without having to redesign. The software just adds that space to the available storage space.

A data lake is similar to a data warehouse. They are both created to facilitate the analysis of data to make business decisions. A data lake is unstructured, like big data. A data warehouse is structured, like databases.

89
Q

Which of the following techniques for data discovery in unstructured data looks for sensitive data with a known structure?

A. Pattern Matching
B. Lexical Analysis
C. Hashing
D. Schema Analysis

A

A. Pattern Matching

Explanation:
When working with unstructured data, there are a few different techniques that a data discovery tool can use:

Pattern Matching: Pattern matching looks for data formats common to sensitive data, often using regular expressions. For example, the tool might look for 16-digit credit card numbers or numbers structured as XXX-XX-XXXX, which are likely US Social Security Numbers (SSNs).
Lexical Analysis: Lexical analysis uses natural language processing (NLP) to analyze the meaning and context of text and identify sensitive data. For example, a discussion of “payment details” or “card numbers” could include a credit card number.
Hashing: Hashing can be used to identify known-sensitive files that change infrequently. For example, a DLP solution may have a database of hashes for files containing corporate trade secrets or company applications.

Schema analysis can’t be used with unstructured data because only structured databases have schemas.

90
Q

A corporation is planning their move to the cloud. They have decided to use a cloud provided by a Managed Service Provider (MSP). The MSP will retain ownership and management of the cloud and all the infrastructure. The cloud will be more expensive than a Cloud Service Provider (CSP). However, the level of control that this cloud will offer is expanded from that with a CSP.

What type of cloud have they selected?

A. Public cloud
B. Community cloud
C. Private cloud
D. Hybrid cloud

A

C. Private cloud

Explanation:
A private cloud can be located at the service provider’s site or the customer’s. It can be owned by either the cloud provider or the customer. It can be managed by either the cloud provider or the customer. It could be hosted with an MSP or a CSP.

If it is with a CSP, it could be public, private, or community. If it is with an MSP, it could be either private or community. As it is just one company in the question, it is not a community, so it must be a private cloud. For more data on this, the Cloud Security Alliance (CSA) guidance 4.0 (or 5 if it has been released) would be a great read.

A hybrid cloud is usually a combination of public and private clouds. The question describes only a private cloud, though, so this is not the best answer.

91
Q

Which of the following is MOST relevant when an organization is scoping its resource requirements for its BCP?

A. MTD
B. RPO
C. RTO
D. RSL

A

D. RSL

Explanation:
A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:

Recovery Time Objective (RTO): The RTO is the amount of time that an organization is willing to have a particular system be down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations.
Recovery Service Level (RSL): The RSL measures the percentage of compute resources needed to keep production environments running while shutting down development, testing, etc.
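As a rough illustration of how these metrics relate, consider this small sketch with hypothetical figures:

```python
# Hypothetical figures for one system, in hours
mtd = 24             # maximum tolerable downtime before serious harm
rto = 8              # target time to restore the system
backup_interval = 4  # backups run every 4 hours

# The RTO must fit inside the MTD, or the plan cannot meet the business need
assert rto < mtd

# Worst-case data loss equals the age of the last backup, so the RPO
# cannot be tighter than the backup interval supports
rpo = backup_interval
print(f"RTO={rto}h (must be < MTD={mtd}h); worst-case data loss ~{rpo}h")
```

The takeaway is that RTO and MTD bound downtime, the RPO bounds data loss, and the RSL (a percentage of production compute) is the one metric most relevant to scoping *resource* requirements.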

92
Q

Your organization currently hosts its cloud environment in the organization’s data center. The organization utilizes a provider for their backup solution in accordance with their business continuity plan. Which configuration BEST describes their deployment?

A. Cloud service backup, third-party backup
B. Cloud service backup, private backup
C. Private cloud, cloud service backup
D. Private cloud, private backup

A

C. Private cloud, cloud service backup

Explanation:
The organization hosts the cloud in its own data center, which makes it a private cloud. A private cloud does not have to reside in the organization’s own data center, but that is the arrangement described in the question.

They are then backing up their cloud using a public provider, which is the cloud service backup answer option. Therefore, answers that have private backup are not correct.

The answer that pairs cloud service backup with third-party backup is not correct because the first element must describe the hosting environment, which the question states is a private cloud, not a backup type.

93
Q

Which of the following potential risks is CREATED by a failure to properly reconfigure applications when moving them from PaaS-based development environments to production?

A. Interoperability Issues
B. Resource Sharing
C. Persistent Backdoors
D. Virtualization

A

C. Persistent Backdoors

Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:

Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e. backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.

94
Q

To access their cloud environment remotely, a cloud administrator sets up a web server in a demilitarized zone (DMZ) that is publicly accessible from the internet. She hardened the server to prevent attacks. Which of the following did the cloud administrator create?

A. Micro-segmentation
B. Virtual Private Cloud (VPC)
C. Bastion host
D. Firewall

A

C. Bastion host

Explanation:
A bastion host is a hardened and fortified device. To harden, you change the default password, close unnecessary ports, disable unnecessary services, etc.

A VPC is a virtualized environment that is isolated to make it harder for bad actors to interfere with business processes.

Micro-segmentation creates small virtual network segments, each with one or just a few virtual machines behind its own firewall.

A firewall is a security device that blocks or allows traffic. It should be a hardened device as well, hopefully by design.

95
Q

Wyatt is an information security manager on a DevSecOps team. The team is at the beginning stage of creating a new application for a customer who needs the software as soon as possible. The customer understands that rushing has its consequences, but the team has promised continuous integration and delivery of new features and fixes as necessary. Wyatt and his team have therefore chosen to host this software as a Software as a Service (SaaS) deployment.

What software development model would fit this scenario the best?

A. Agile software development
B. Waterfall software development
C. Iterative software development
D. Extreme programming software development

A

A. Agile software development

Explanation:
Agile software development is an iterative and flexible approach to software development that prioritizes customer collaboration, adaptability, and the delivery of working software in shorter timeframes. It emerged as a response to the limitations of traditional waterfall development methods, which often struggled to meet changing requirements and deliver value in a timely manner.

Waterfall software development is a linear and sequential approach to software development that follows a predefined set of phases in a rigid manner. It is characterized by distinct and well-defined phases, with each phase typically dependent on the completion of the previous one. The waterfall model is often used in projects with stable requirements and predictable outcomes. So, this is going to take too long for this customer.

eXtreme Programming (XP) is an agile software development methodology that emphasizes close collaboration, continuous feedback, and a focus on delivering high-quality software. It aims to provide a flexible and adaptive approach to development while maintaining a strong emphasis on customer satisfaction. This is more than the customer needs. The customer is fine with the consequences of rushing a project, so there is not a focus on high-quality software.

Iterative software development is an approach that involves breaking down the software development process into smaller, incremental iterations. It is an alternative to the traditional waterfall model, where the entire development process is planned and executed sequentially. This is also close to what the customer needs, but sequential planning and execution is not the focus for this customer.

96
Q

Contractual agreements with customers are MOST relevant in which of the following areas?

A. Information Security Management
B. Service Level Management
C. Continual Service Improvement Management
D. Change Management

A

B. Service Level Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and the potential for improvement.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.

97
Q

Leodis is working for a large consulting firm that utilizes a particular Software as a Service (SaaS) solution from a public Cloud Service Provider (CSP). What he needs to do is collect metrics regarding the usage and performance of the SaaS application to verify that the CSP is providing the quality of product that they were promised.

What can he use to monitor this application?

A. Application Programming Interface gateway
B. Open source software
C. Supply chain manager
D. Application Programming Interface

A

D. Application Programming Interface

Explanation:
Many CSPs provide Application Programming Interfaces (APIs) that expose data related to the usage and performance of their SaaS applications. This can include information such as user activity, resource utilization, system health, response times, and other relevant performance indicators. By accessing these APIs, customers can monitor the behavior and performance of the SaaS product in real time or collect historical data for analysis and reporting.
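As a sketch of what such monitoring might look like, the snippet below polls a hypothetical metrics endpoint and checks the results against promised service levels. The URL, token, and response fields are invented for illustration; a real provider’s API will differ:

```python
import io
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint; a real CSP's metrics API has its own URL and fields
METRICS_URL = "https://api.example-saas.com/v1/metrics"

def fetch_metrics(token, opener=urlopen):
    """Fetch usage/performance metrics; 'opener' is injectable for testing."""
    req = Request(METRICS_URL, headers={"Authorization": f"Bearer {token}"})
    with opener(req) as resp:
        return json.load(resp)

def meets_sla(metrics, max_p95_ms=500, min_uptime=99.9):
    """Compare reported metrics against the promised service levels."""
    return (metrics["response_time_p95_ms"] <= max_p95_ms
            and metrics["uptime_percent"] >= min_uptime)

# Stubbed response so the sketch runs without a live API
fake = lambda req: io.BytesIO(
    b'{"response_time_p95_ms": 320, "uptime_percent": 99.95}')
print(meets_sla(fetch_metrics("demo-token", opener=fake)))  # -> True
```

Collected over time, results like these give Leodis the evidence needed to hold the CSP to its promised quality of service.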

A supply chain manager is responsible for overseeing and managing the entire process of the supply chain within an organization. They play a critical role in ensuring the smooth flow of goods, services, and information from the point of origin to the point of consumption or delivery.

Open source software refers to software whose source code is freely available and can be accessed, modified, and distributed by anyone. Open source software is typically developed in a collaborative manner, with a community of developers contributing to its creation and improvement. The API could be created with open source software, but it is the API that is needed in the question.

An API gateway is a server or software component that acts as an intermediary between clients (such as mobile apps, web applications, or other services) and backend services (such as databases, microservices, or legacy systems). It serves as a centralized entry point for API requests, providing various features and functionalities to enhance security, scalability, and manageability. The metrics needed come from within the application, not from what passes through the gateway, although this would be the second-best answer to the question.

98
Q

Which of the following could be used by a DLP tool to identify attempted exfiltration of unstructured files that rarely change?

A. Lexical Analysis
B. Pattern Matching
C. Hashing
D. Schema Analysis

A

C. Hashing

Explanation:
When working with unstructured data, there are a few different techniques that a data discovery tool can use:

Pattern Matching: Pattern matching looks for data formats common to sensitive data, often using regular expressions. For example, the tool might look for 16-digit credit card numbers or numbers structured as XXX-XX-XXXX, which are likely US Social Security Numbers (SSNs).
Lexical Analysis: Lexical analysis uses natural language processing (NLP) to analyze the meaning and context of text and identify sensitive data. For example, a discussion of “payment details” or “card numbers” could include a credit card number.
Hashing: Hashing can be used to identify known-sensitive files that change infrequently. For example, a DLP solution may have a database of hashes for files containing corporate trade secrets or company applications.

Schema analysis can’t be used with unstructured data because only structured databases have schemas.
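A minimal sketch of the hashing technique, assuming the tool keeps a set of digests for known-sensitive files (the file contents here are invented):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

# Digests of known-sensitive, rarely changing files (illustrative contents)
trade_secret = b"formula: 7 parts A, 3 parts B"
known_sensitive = {sha256_of(trade_secret)}

def is_exfiltration_attempt(outbound: bytes) -> bool:
    """Flag an outbound file whose digest matches a known-sensitive file."""
    return sha256_of(outbound) in known_sensitive

print(is_exfiltration_attempt(trade_secret))       # -> True: exact copy caught
print(is_exfiltration_attempt(b"quarterly memo"))  # -> False: not in database
```

Because a single changed byte produces a completely different digest, this technique only works well for files that rarely change, which is exactly why it is the answer here.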

99
Q

Pierre has been leading the team that is building the Business Continuity and Disaster Recovery Plans (BCP/DRP). This is not the first time that they have worked through the process of building plans for their business, but they do need to start at the top of the process to review current plans. The review could result in changes to the current plans or even the creation of new specific procedures for recovering specific critical assets.

What is the first step that Pierre and his team need to begin with?

A. Business Impact Analysis (BIA)
B. Develop the document
C. Recovery strategies
D. Project management and initiation

A

D. Project management and initiation

Explanation:
The Business Continuity Institute (BCI) has the following steps to plan for disasters or business continuity issues:

Policy
Project management and initiation
Business Impact Analysis (BIA)
Recovery strategies
Develop the documents
Implementation, Test, Update
Embed the plan in the user community

Since policy is not an option in this question, the first step they can take from the four answer options listed is project management and initiation. This is when the team is created and the team leader is selected. The anticipated budget is identified, along with the steering committee. This step is steeped in project management.

100
Q

Maxence is working with the data storage team to build a redundant and secure storage network for the corporation’s data. The corporation’s data includes personal data of their customers as well as credit card and account data. They are working to build a Storage Area Network (SAN) using Internet Small Computer System Interface (iSCSI) technology.

Which address is used with iSCSI to locate the end users’ data?

A. Media Access Control (MAC) and Port numbers
B. World Wide Node Number (WWNN)
C. Internet Protocol (IP), Transmission Control Protocol (TCP) port
D. World Wide Port Number (WWPN)

A

C. Internet Protocol (IP), Transmission Control Protocol (TCP) port

Explanation:
iSCSI uses a combination of pieces of information to locate users’ data. IP addresses and TCP port numbers are the beginning. There is also an iSCSI Qualified Name (IQN) and an Extended Unique Identifier (EUI).
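As a rough illustration, an iSCSI target is located by an IP address and a TCP port (3260 is the IANA-registered default) and identified by an iSCSI Qualified Name (IQN); the values below are made up:

```python
# Hypothetical iSCSI target locator: network address plus iSCSI name
target = {
    "ip": "10.0.4.20",
    "tcp_port": 3260,  # IANA-registered default port for iSCSI
    "iqn": "iqn.2024-01.com.example:storage.lun1",  # iSCSI Qualified Name
}

# An IQN follows the form: iqn.<yyyy-mm>.<reversed-domain>[:<identifier>]
assert target["iqn"].startswith("iqn.")
print(f"Connect to {target['ip']}:{target['tcp_port']} -> {target['iqn']}")
```

Contrast this with Fibre Channel, which identifies endpoints with WWNN/WWPN addresses rather than IP and TCP ports.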

Fibre Channel uses WWPN and WWNN addresses as well as Logical Unit Numbers (LUN).

MAC addresses are used by Layer 2 protocols such as IEEE 802.3 Ethernet. Port numbers could refer to physical switch ports or to TCP/UDP port numbers. This is not the right answer because iSCSI does not use MAC addresses to locate data.

101
Q

It is necessary within a business to control data at all stages of the lifecycle. Erika is working at a corporation to set up, deploy, and monitor a Data Loss Prevention (DLP) solution. Which component of DLP is involved in applying corporate policy regarding the storage of data?

A. Discovery
B. Identification
C. Enforcement
D. Monitoring

A

C. Enforcement

Explanation:
DLP is made up of three major components. They include discovery, monitoring, and enforcement. Enforcement is the final stage of DLP implementation. It is the enforcement component that applies policies and then takes actions, such as deleting data.
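The three components can be sketched as a toy pipeline in Python; the policy, classifications, and actions below are invented for illustration:

```python
# Toy DLP enforcement: discovery finds and classifies data, monitoring
# watches it, and enforcement applies the corporate policy action.
POLICY = {"ssn": "block", "public": "allow"}  # invented policy table

def enforce(classification: str) -> str:
    """Apply the corporate policy action for a classified item."""
    # Assumed default: unclassified data is quarantined for review
    return POLICY.get(classification, "quarantine")

print(enforce("ssn"))      # -> block
print(enforce("public"))   # -> allow
print(enforce("unknown"))  # -> quarantine
```

In a real product the actions would include alerting, encrypting, or deleting data; the point is that it is the enforcement component that turns policy into action.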

Identification is the first piece of IAAA and is the statement of who you claim to be, such as a user ID.

The CSA SecaaS Category 2 document is a good read on the topic of DLP and the cloud and is highly recommended.

102
Q
A