Pocket Prep 10 Flashcards

1
Q

An information security manager, Cat, has uncovered a problem within the Information Technology (IT) department. The staff have been moving new firewall equipment directly into production without proper testing in a lab or controlled environment. Because the firewalls were not properly tested, users have repeatedly encountered immediate issues. Cat is looking for a process to implement that would better control this in the future.

What would you recommend?

A. Capacity management
B. Continuity management
C. Deployment management
D. Change management

A

C. Deployment management

Explanation:
Deployment management is the process of moving new or changed hardware or software to a live environment.

Change management is the process of managing the addition, modification, or removal of anything that could have a direct or indirect impact on IT services. Change management could have been the answer, but since deployment management is an option, that is a more specific fit to the issue of adding hardware to a live environment.

Continuity management is the process of making sure that the services that IT provides will still be available in the event of a disaster. ITIL does not use the term Disaster Recovery (DR).

Capacity management is the process of ensuring that resources are managed in a way to meet the corporation’s demand for IT services.

2
Q

An auditor performing a manual audit pulls a registry file from a sample of Windows servers and compares it to a baseline. Where would they pull the baseline from?

A. Code repository
B. Configuration Management Database (CMDB)
C. Security Information & Event Manager (SIEM)
D. Information Security Management System (ISMS)

A

B. Configuration Management Database (CMDB)

Explanation:
The organization’s Configuration Management Database (CMDB) should capture all Configuration Items (CIs) that have been placed under configuration management. The CIs represent the required configuration. This database can be used for manual audits as well as automated scanning to identify systems that have drifted out of their secure state.
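
As an illustration only, such a drift check boils down to comparing key/value pairs. The following minimal Python sketch uses hypothetical registry value names and baseline values, not any specific CMDB product:

    # Baseline (required CI values) pulled from the CMDB -- hypothetical settings.
    baseline = {"LmCompatibilityLevel": 5, "SMB1": 0, "AuditLogMaxSizeKB": 4096}

    def find_drift(actual: dict) -> dict:
        """Return every setting whose value on the server differs from the baseline."""
        return {name: {"expected": want, "actual": actual.get(name)}
                for name, want in baseline.items()
                if actual.get(name) != want}

    # A server that quietly re-enabled SMB1 has drifted out of its secure state:
    print(find_drift({"LmCompatibilityLevel": 5, "SMB1": 1, "AuditLogMaxSizeKB": 4096}))
    # -> {'SMB1': {'expected': 0, 'actual': 1}}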

An ISMS is effectively the security program. The term ISMS comes from ISO/IEC 27001/2.

A SIEM is a system that collects the logs from devices across the network and then correlates the events to determine when there is an Indicator of Compromise (IoC) within the environment.

A code repository is a storage location for source code created within the business.

3
Q

Which of the following cloud environments has cloud servers used only by a single organization?

A. Hybrid Cloud
B. Community Cloud
C. Private Cloud
D. Public Cloud

A

C. Private Cloud

Explanation:
Cloud services are available under a few different deployment models, including:

Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
4
Q

A Hardware Security Module (HSM) vendor has had their product tested to ensure the physical security of the device. It has proven that it will overwrite the data and keys within if the box is ever opened. What certification would this be?

A. Federal Information Processing Standard (FIPS) 140-3 Level two
B. Common Criteria (CC/ISO/IEC 15408) Evaluation Assurance Level (EAL) three
C. Federal Information Processing Standard (FIPS) 140-3 Level three
D. Common Criteria (CC/ISO/IEC 15408) Evaluation Assurance Level (EAL) four

A

C. Federal Information Processing Standard (FIPS) 140-3 Level three

Explanation:
The Federal Information Processing Standard (FIPS) 140-3 tests the physical security of cryptographic modules. Level three includes zeroization of the keys and data if the box is tampered with. Level two provides only tamper evidence, such as a sticker or seal that must be broken to open the device.

Under ISO/IEC 15408 (Common Criteria), Evaluation Assurance Level (EAL) three is "methodically tested and checked," and EAL four is "methodically designed, tested, and reviewed." What is being tested is specified in the Protection Profile (PP) and the Security Target (ST). Common Criteria testing applies to security products in general, including HSMs.

5
Q

Which of the following is the MAIN consideration when determining retention periods for certain types of data such as personal data or company financial data?

A. Data Classification
B. Regulatory Requirements
C. Retention Requirements
D. BC/DR requirements

A

B. Regulatory Requirements

Explanation:
Data retention policies define how long an organization stores particular types of data. Some of the key considerations for data retention policies include:

Retention Periods: Defines how long data should be stored. This usually refers to archived data rather than data in active use.
Regulatory Requirements: Various regulations have rules regarding data retention. These may mandate a maximum period for which data may be retained or a minimum time for which it must be saved. Typically, the former applies to personal data, while the latter applies to business and financial data or security records.
Data Classification: The classification level of data may impact its retention period or the means by which the data should be stored and secured.
Retention Requirements: In some cases, specific requirements may exist for how data should be stored. For example, sensitive data should be encrypted at rest. Data retention may also be impacted by legal holds.
Archiving and Retrieval Procedures and Mechanisms: Different types of data may have different requirements for storage and retrieval. For example, data used as backups as part of a BC/DR policy may need to be more readily accessible than long-term records.
Monitoring, Maintenance, and Enforcement: Data retention policies should have rules regarding when and how the policies will be reviewed, updated, audited, and enforced.
6
Q

In which cloud service model is the cloud service provider (CSP) responsible for securing physical infrastructure for compute resources?

A. IaaS
B. PaaS
C. SaaS
D. All service models

A

D. All service models

Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.

However, at the software level, responsibility depends on the cloud service model in use, including:

Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer’s responsibility, not the CSP’s.
7
Q

Charlie is working with the developers as they build a new piece of software that will be able to store and retrieve data in the cloud. How does a piece of software access object, file, block, and database storage?

A. Application Programming Interface (API)
B. Internet Protocol Security (IPSec)
C. Transport Layer Security (TLS)
D. Security Assertion Markup Language (SAML)

A

A. Application Programming Interface (API)

Explanation:
Multiple types of cloud storage technologies use APIs to access data. Some common examples include:

Object Storage: Object storage systems like Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage
File Storage: Cloud file storage services, such as Amazon Elastic File System (EFS), Azure Files, and Google Cloud Filestore
Block Storage: Cloud block storage services like Amazon Elastic Block Store (EBS), Azure Disk Storage, and Google Cloud Persistent Disk
Database Storage: Cloud database services, such as Amazon Relational Database Service (RDS), Azure SQL Database, and Google Cloud SQL

TLS is used to encrypt the transmission. TLS can and should be used to protect a RESTful API, but it is not the access method that actually finds and retrieves a piece of data.

SAML can be used to authenticate the user before they are allowed to access the storage, but it too does not actually find and retrieve a piece of data.

Internet Protocol Security (IPSec) could be used to secure a Virtual Private Network (VPN) connection by encrypting the traffic, or it could be used to connect the router at the office to the edge router in the cloud. Either way, like TLS, it encrypts the data rather than finding and retrieving it.
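
As an illustrative sketch of what API-based access to object storage looks like in practice (assuming the boto3 AWS SDK and hypothetical bucket and key names; other providers' SDKs are similar):

    import boto3  # AWS SDK for Python: pip install boto3

    s3 = boto3.client("s3")

    # Write an object and read it back purely through API calls --
    # no file system mount or block device is involved.
    s3.put_object(Bucket="example-bucket", Key="reports/q1.json", Body=b'{"revenue": 100}')
    obj = s3.get_object(Bucket="example-bucket", Key="reports/q1.json")
    print(obj["Body"].read())  # b'{"revenue": 100}'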

8
Q

Which of the following cloud audit mechanisms may only be possible in an IaaS environment OR with the help of the cloud provider?

A. Access Controls
B. Correlation
C. Log Collection
D. Packet Capture

A

D. Packet Capture

Explanation:
Three essential audit mechanisms in cloud environments include:

Log Collection: Log files contain useful information about events that can be used for auditing and threat detection. In cloud environments, it is important to identify useful log files and collect this information for analysis. However, data overload is a common issue with log management, so it is important to collect only what is necessary and useful.
Correlation: Individual log files provide a partial picture of what is going on in a system. Correlation looks at relationships between multiple log files and events to identify potential trends or anomalies that could point to a security incident.
Packet Capture: Packet capture tools collect the traffic flowing over a network. This is often only possible in the cloud in an IaaS environment or using a vendor-provided network mirroring capability.

Access controls are important but not one of the three core audit mechanisms in cloud environments.

9
Q

Frederick works for a medium-sized company as the Chief Information Security Officer (CISO). They use a public Cloud Service Provider (CSP) for their Information Technology (IT) environment. They have built a large Infrastructure as a Service (IaaS) environment as a virtual Data Center (vDC). They did their due diligence and carefully constructed a contract with the CSP. They were able to determine who is responsible for Security Governance, Risk, and Compliance.

Who would that be?

A. Both the customer and the provider
B. Cloud service provider
C. Cloud service broker
D. Cloud service customer

A

D. Cloud service customer

Explanation:
In all cloud service types (IaaS, PaaS, SaaS), responsibility for Security Governance, Risk, and Compliance falls solely to the cloud service customer, not the CSP (see the (ISC)2 materials regarding responsibility and accountability in the cloud). The CSP does have to do its own Governance, Risk, and Compliance work, but that is not what the question asks. The exam will look from the customer’s perspective on their public cloud provider unless stated differently in the question (some questions will be from the provider’s perspective).

A Cloud Service Broker (CSB) is a third-party intermediary that facilitates the interaction between CSPs and cloud service consumers (organizations or individuals). The role of a cloud service broker is to add value to the cloud computing ecosystem by providing various services that help organizations effectively use and manage cloud services. They are not responsible for the Governance of the cloud service customer. They may assist at some point, but they are not accountable nor responsible.

10
Q

Which of the following types of data is regulated under PCI DSS?

A. Protected Health Information
B. Contractual Private Data
C. Personally Identifiable Information
D. Payment Data

A

D. Payment Data

Explanation:
Private data can be classified into a few different categories, including:

Personally Identifiable Information (PII): PII is data that can be used to uniquely identify an individual. Many laws, such as the GDPR and CCPA/CPRA, provide protection for PII.
Protected Health Information (PHI): PHI includes sensitive medical data collected regarding patients by healthcare providers. In the United States, HIPAA regulates the collection, use, and protection of PHI.
Payment Data: Payment data includes sensitive information used to make payments, including credit and debit card numbers, bank account numbers, etc. This information is protected under the Payment Card Industry Data Security Standard (PCI DSS).
Contractual Private Data: Contractual private data is sensitive data that is protected under a contract rather than a law or regulation. For example, intellectual property (IP) covered under a non-disclosure agreement (NDA) is contractual private data.
11
Q

Which of the following is a type of durable storage that may include immutable storage and integrity protections?

A. Raw
B. Ephemeral
C. Object
D. Long-term

A

D. Long-term

Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:

Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer’s virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling.
12
Q

Which of the following is NOT checked when using the DREAD threat model?

A. Measure of how easy it is to reproduce an exploit
B. Measure of damage to the system should a successful exploit occur
C. Measure of the restoration time needed after a successful exploit
D. Measure of the skill level or resources needed to successfully exploit a threat

A

C. Measure of the restoration time needed after a successful exploit

Explanation:
DREAD measures how severe an exploit could be. It does not consider how much time will be needed to restore systems afterward; restoration time does need to be managed, but that is a Disaster Recovery (DR) topic.

The DREAD threat model focuses on the quantification of risk and threat evaluation. DREAD is based on the equation below, which calculates the value based on risk quantification in specific categories, with a value ranging from 0 to 10:

Risk DREAD = (Damage + Reproducibility + Exploitability + Affected users + Discoverability) / 5
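
For example (illustrative scores only): a threat rated Damage = 8, Reproducibility = 6, Exploitability = 7, Affected users = 9, and Discoverability = 5 would score (8 + 6 + 7 + 9 + 5) / 5 = 7 out of 10.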

13
Q

A cloud architect has been working with operations to determine a few of the features that they should build into their Infrastructure as a Service (IaaS) deployment. One of their concerns is a user being directed to a malicious website due to Domain Name System (DNS) poisoning.

Which of the primary information security principles does DNS Security (DNSSec) primarily ensure?

A. Privacy
B. Confidentiality
C. Integrity
D. Availability

A

C. Integrity

Explanation:
DNSSec addresses DNS integrity. When a DNS server passes information to another DNS server about the Internet Protocol (IP) addresses at which a domain name can be found, it is necessary to confirm that the information comes from a trusted source. If a hacker can pretend to be a DNS server for just a moment and pass along incorrect information without authentication, the receiving DNS server will simply pass that information on to other DNS servers. DNSSec adds a digital signature (an RRSIG record, validated against the zone’s public DNSKEY record) to DNS information so that a DNS server can authenticate the source before adding or changing the information it already has.

Digital signatures provide information that confirms the identity of the sender. This is considered part of integrity.

Confidentiality keeps information out of view, but DNS information needs to be available to users, so there is no confidentiality built into the protocol. Privacy falls under the topic of confidentiality; most of the time, the word privacy refers to the need to keep personal information protected.

Availability of the DNS servers is necessary for functionality, but that does not address the problem in the question.

14
Q

Which of the following threat models was developed by Microsoft but has since fallen out of widespread use?

A. PASTA
B. STRIDE
C. ATASM
D. DREAD

A

D. DREAD

Explanation:
Several different threat models can be used in the cloud. Common examples include:

STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization’s attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that tries to look at infrastructure and applications from the viewpoint of an attacker.
15
Q

Which of the following types of SOC reports could include an extended assessment of the effectiveness of an organization’s security controls?

A. SOC 1
B. SOC 2 Type I
C. SOC 2 Type II
D. SOC 3

A

C. SOC 2 Type II

Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:

SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization’s financial stability.
SOC 2: SOC 2 reports assess an organization's controls in different areas: Security, Availability, Processing Integrity, Confidentiality, and Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.

SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.

16
Q

A hospital has discovered that one of their nurses has been breaking hospital policy. When the nurse has a few minutes of free time, they have a bad habit of browsing through patient records. As it turns out, the nurse has been using this information to blackmail some of these patients. What term is used to describe this nurse?

A. Malicious insider
B. Man-in-the-Middle (MitM)
C. Escalation of privilege
D. Advanced persistent threat

A

A. Malicious insider

Explanation:
A malicious insider is any user with legitimate network or system access who uses their access for purposes other than those authorized. Malicious insiders are regularly listed as one of the top sources of breaches and compromises. The best way to mitigate the risk of the malicious insider is to implement active monitoring and auditing.

An Advanced Persistent Threat (APT) is a serious threat that usually originates from one nation-state attacking another. The advanced part refers to the level of sophistication in the coding of the malicious software and its deployment. Persistent refers to the time that the malware is in place and functioning, which would be a long period of time. A commonly used example of an APT is Stuxnet.

A MitM would exist between the sender and receiver. The nurse is not between two points or two parties in a transmission. The nurse is browsing data.

Escalation of privilege is when a bad actor is able to use a user’s login and then issue a command such as su to gain superuser or administrator-level access. The nurse is using their normal level of access; they legitimately need access to patient records at some point, which is why they have it. The policy would state that they are not allowed to browse records for the sake of gaining information to blackmail people.

17
Q

Justine has been working on the contract between her company, the Cloud Service Customer (CSC), and the Cloud Service Provider (CSP) that they are purchasing a Software as a Service (SaaS) product from. In the shared responsibility model, who is responsible for protecting the application?

A. Only the CSP
B. Only the CSC
C. Cloud Service Partner
D. Both the CSC and the CSP

A

D. Both the CSC and the CSP

Explanation:
In a shared responsibility model, both the CSC and the CSP have a responsibility to protect the application itself. Exactly where the line of division occurs depends on the provider and the contract that Justine is working on.

The CSP is responsible for everything below the application, from the platform to the physical environment.

The CSC is responsible for their Governance, Risk management, and Compliance (GRC). Independently, the CSP has its own GRC that it is responsible for, but that does not show in the shared responsibility model.

18
Q

Multitenancy is BEST described as:

A. Multiple tenants on a single server. These tenants are different cloud customers of a single cloud provider.
B. The ability for two separate organizations to share an identity system while keeping autonomy
C. Multiple organizations that would be able to access the resources of a single cloud provider
D. Multiple tenants on a single server. This includes tenants within a single cloud customer or between multiple cloud customers

A

D. Multiple tenants on a single server. This includes tenants within a single cloud customer or between multiple cloud customers

Explanation:
Multitenancy refers to multiple cloud customers sharing the same server within a cloud provider. The hypervisor is responsible for isolating cloud tenants from each other. The tenants can be from different cloud customers or from a single customer. ISO/IEC 17788 states that cloud tenants can come from a single company but be different departments or different projects that need to be isolated from each other; for example, the sales department and the research and development department should not have access to each other’s data or servers.

Multiple organizations that would be able to access the resources of a single cloud provider is the nature of the public cloud.

It is possible with federated identity for two separate organizations to share an identity system.

Multiple tenants on a single server that are different cloud customers would be an example of multitenancy. However, they could come from the same company as well, so the correct answer is more accurate.

19
Q

A corporation has been expanding their current Business Continuity (BC) and Disaster Recovery (DR) capability. The DR team has recently been analyzing the plans that they have in place for a critical database. They have been looking at the time that the system could be offline and the consequences that would have for the business. They have also been able to determine the expected percentage of loss in a single event.

What is the CORRECT equation to use when determining annual loss expectancy (ALE)?

A. Take the Annual Rate of Occurrence (ARO) and divide it by the Single Loss Expectancy (SLE)
B. Take the Single Loss Expectancy (SLE) and add the Annual Rate of Occurrence (ARO)
C. Take the Single Loss Expectancy (SLE) and multiply it by the Annual Rate of Occurrence (ARO)
D. Take the Single Loss Expectancy (SLE) and subtract the Annual Rate of Occurrence (ARO)

A

C. Take the Single Loss Expectancy (SLE) and multiply it by the Annual Rate of Occurrence (ARO)

Explanation:
To find the annual loss expectancy (ALE), you must first know the single loss expectancy (SLE) and the annual rate of occurrence (ARO). The SLE is the asset value multiplied by the exposure factor (the expected percentage of loss in a single event). To determine the ALE, multiply the SLE by the ARO.
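
For example (illustrative numbers only): if the database asset is valued at $200,000 and a single outage is expected to cause a 10% loss, then SLE = $200,000 x 0.10 = $20,000. If such an outage is expected once every two years, the ARO is 0.5, so ALE = SLE x ARO = $20,000 x 0.5 = $10,000 per year.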

20
Q

At which stage of the incident management process should the Incident Response Team (IRT) ensure that documentation is complete and perform a root cause analysis to prevent future incidents from occurring?

A. Detect
B. Post-Incident
C. Respond
D. Recover

A

B. Post-Incident

Explanation:
An incident response plan (IRP) should lay out the steps that the incident response team (IRT) should carry out during each step of the incident management process. This process is commonly broken up into several steps, including:

Prepare: During the preparation stage, the organization develops and tests the IRP and forms the IRT.
Detect: Often, detection is performed by the security operations center (SOC), which performs ongoing security monitoring and alerts the IRT if an issue is discovered. Issues may also be raised by users, security researchers, or other third parties.
Respond: At this point, the IRT investigates the incident and develops a remediation strategy. This phase will also involve containing the incident and notifying relevant stakeholders.
Recover: During the recovery phase, the IRT takes steps to restore the organization to a secure state. This could include changing compromised passwords and similar steps. Additionally, the IRT works to address and remediate the underlying cause of the incident to ensure that it is completely fixed.
Post-Incident: After the incident, the IRT should document everything and perform a retrospective to identify potential room for improvement and try to identify and remediate the root cause to stop future incidents from happening.
21
Q

Which organization produced the “Data Center Design and Implementation Best Practices” standard, which includes specifications for items such as hot/cold aisle setups?

A. National Institute of Standards & Technology (NIST)
B. International Data Center Authority (IDCA)
C. National Fire Protection Association (NFPA)
D. Building Industry Consulting Service International (BICSI)

A

D. Building Industry Consulting Service International (BICSI)

Explanation:
Building Industry Consulting Service International (BICSI) has been around since 1977. Of all the standards that BICSI has developed, the ANSI/BICSI 002-2014 is the most prominent. This standard is “Data Center Design and Implementation Best Practices.” In this standard, items such as hot/cold aisle setups, power specifications, and energy efficiency are all covered.

The IDCA has issued the Infinity Paradigm® standards framework. The application, as the ultimate end-user of data, requires an ecosystem to perform and deliver its promise. IDCA Application Ecosystem® standards are inclusive of data center standards, cloud standards, application standards, and information technology standards.

The NFPA is a U.S. group that is a center for fire safety knowledge. Virtually every building, process, service, design, and installation is affected by NFPA’s 300+ codes and standards. Their codes and standards are all available for free online and reflect changing industry needs and evolving technologies, supported by research and development and practical experience.

The National Institute of Standards and Technology (NIST) was founded in 1901 and is now part of the U.S. Department of Commerce. NIST is one of the nation’s oldest physical science laboratories. Congress established the agency to remove a major challenge to U.S. industrial competitiveness at the time — a second-rate measurement infrastructure that lagged behind the capabilities of the United Kingdom, Germany, and other economic rivals.

From the smart electric power grid and electronic health records to atomic clocks, advanced nanomaterials, and computer chips, innumerable products and services rely in some way on technology, measurement, and standards provided by the National Institute of Standards and Technology.

22
Q

During which phase of the SDLC are the necessary security controls for risk mitigation/minimization integrated with the programming designs?

A. Design
B. Requirement gathering and feasibility
C. Maintenance
D. Development

A

A. Design

Explanation:
Once the risks are analyzed, prioritized, and mitigation strategies are defined, they are integrated into the system design. The design phase involves creating architectural designs and detailed system designs and specifying technical requirements. During this phase, the design should incorporate the necessary security controls, error handling mechanisms, fault tolerance measures, and other design elements to mitigate identified risks.

In the early stages of the SDLC, business analysts, stakeholders, and development teams collaborate to gather requirements. These requirements include functional and non-functional aspects of the software system. While capturing the functional requirements, the identification and analysis of potential risks should also take place.

The development phase is a crucial stage where the actual software solution is built based on the requirements and design specifications defined in the earlier phases. During this phase, the development team converts the design into a functional and operational software product.

The maintenance phase is the stage where the software system is actively used, monitored, and updated to ensure its smooth operation and address any issues or enhancements that may arise. This phase typically follows the completion of the development and deployment phases.

23
Q

Nacala is a cloud architect who is designing an Infrastructure as a Service (IaaS) environment for the corporation. The servers that she is designing around have a serious need for uptime; they cannot afford to have a particular server offline at any time. What configuration would be useful for her to use?

A. Server cluster
B. Content Distribution Network (CDN)
C. Redundant servers
D. Software Defined Network (SDN)

A

A. Server cluster

Explanation:
A server cluster is a group of hosts combined to achieve the same purpose, such as redundancy, configuration synchronization, failover, or minimized downtime. Clusters can be groups of hosts that are physically or logically grouped together. Clusters are handled as one unit, meaning that resources are pooled and shared between the hosts within the group. Server clusters are usually considered active-active.

Redundant servers are usually considered active-passive. The second server is not actively processing calls, data, or requests until the first fails.

SDN is a technology that improves how switches operate. It adds a controller that makes forwarding-path decisions and can be configured with policy information to tailor it further to the business’s needs.

CDNs utilize edge servers to cache content closer to the users. Netflix, for example, uses a CDN to push content out from its storage locations to edge servers for easy streaming to users.

24
Q

Karen is working to ensure that the cloud solution chosen for her banking company, as they move to a vendor-supplied Software as a Service (SaaS) solution, will protect them properly. They must remain in compliance with several regulations. Of the following, what is likely their highest security concern?

A. Ensuring the integrity of their data
B. Ensuring that cost is managed effectively
C. Preventing vendor lock-in
D. Ensuring 99.9999% uptime

A

A. Ensuring the integrity of their data

Explanation:
Security involves confidentiality, integrity, and availability according to most definitions. For banks, the greatest concern is integrity. They must ensure that the databases are accurate. That does not mean that they are not worried about confidentiality or availability. Regulations, such as Basel III, demand accuracy of financial data. SOX is another example that is similar in nature, even though it is not for banking. As further examples, service providers are more concerned about availability, and the government is more worried about confidentiality.

Cost is always a concern for a business, but it is not a security concern.

Given the question’s focus on regulations, vendor lock-in is not likely to be the highest concern. It is a concern, but not the greatest given the facts of the question.

Uptime is a concern for all businesses, but given the banking scenario, integrity is a greater concern: being able to access a bank account that holds the wrong values defeats the purpose of access in the first place. Bank regulations such as Basel III demand that integrity be protected. If you are unfamiliar with Basel III, consider SOX and its concerns as a similar type of regulation.

25
Q

The cloud administrators and operators working for a large Platform as a Service (PaaS) customer have thousands of running virtual machines that they need to take care of. It is necessary to ensure that all the virtual machines running specific operating systems (OS) and applications are always patched and configured properly. They use a tool that automates tasks such as provisioning, scaling, and allocating resources in their cloud environment.

What is this tool called?

A. Patch management
B. Distributed Resource Scheduling (DRS)
C. Dynamic Optimization (DO)
D. Orchestration

A

D. Orchestration

Explanation:
Orchestration is the term used to describe the large-scale use of automation in a cloud environment. The automation is used for tasks such as provisioning, scaling, allocating resources, and much more. Orchestration tools include Salt, Puppet, and Ansible. These tools greatly assist in the management of thousands of virtual machines.

DRS and DO perform similar functions. They are both used to dynamically place virtual machines on the best server available. The administrators can select the physical server they want virtual machines to load on, but it is usually better to allow the cloud to find the server that can best meet a virtual machine’s needs. Both also allow live migration of virtual machines to other physical servers when needed, for example, when a physical server requires maintenance that takes it offline. The difference between DRS and DO is that DO is used specifically for server clusters.

Patch management takes care of patching systems, which is part of what the question describes. However, orchestration is a better fit for the cloud: it is a more robust tool.

26
Q

Eloise works with the DevOps team in a security capacity as her company works its way to DevSecOps. The team has been building something that will be deployed on their cloud platform. It contains code, libraries, and dependencies. This product will be very portable and scalable. What are they building?

A. Virtual machine
B. Application virtualization
C. Orchestration
D. Container

A

D. Container

Explanation:
Containers are an isolated environment that includes everything that an application needs to run, including code, libraries, and dependencies. Containers are often compared to virtual machines, but they differ in their architecture and purpose. Containers are portable, scalable, and efficient, providing consistency for deployment. Dependencies refer to other software components or services that the application relies on to function properly. Libraries are pre-written code that can be used to perform specific functions, such as data processing or user interface design. The code refers to the actual program instructions that define the behavior of the application.

A virtual machine is a bit different. It is a complete operating system all the way down to the kernel (the heart of the operating system). Containers share the kernel of the physical servers, such as a Linux server.

Application virtualization allows applications to run on a machine that otherwise could not run them, such as WINE on a Mac, which allows Microsoft Windows .EXE files to run.

Orchestration refers to the automated process of managing and coordinating multiple systems or services to work together to achieve a common goal. In the context of cloud computing, orchestration is the process of automating the deployment, scaling, and management of software applications and infrastructure.

27
Q

What type of monitoring is required to identify issues such as dropped packets, excessive memory utilization, slow CPU reaction time, and high latency?

A. Hardware monitoring
B. Resource monitoring
C. Baseline monitoring
D. Performance monitoring

A

D. Performance monitoring

Explanation:
Performance monitoring is a continual process in which the CSP ensures that systems operate reliably and that customer service level agreements are met.

Baseline monitoring collects data on system resources, such as CPU usage, memory usage, disk I/O, and network traffic, during normal operations. This data is used to establish performance baselines, which can help detect anomalies, identify performance issues, and optimize system resources.

Monitoring resource utilization helps track how cloud resources are being used, including CPU utilization, memory usage, storage capacity, network bandwidth, and other relevant metrics. By analyzing this data, administrators can identify resource bottlenecks, optimize resource allocation, and make informed decisions about scaling resources up or down based on demand.

Hardware monitoring refers to the process of monitoring and managing the physical infrastructure components of a cloud computing environment. It involves monitoring the health, performance, and availability of hardware devices and components that support the cloud infrastructure, including servers, storage systems, networking equipment, and other hardware resources.
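
As a rough illustration of collecting such metrics, the following Python sketch uses the third-party psutil package (the metric fields are real psutil attributes; the reporting format is made up):

    import time
    import psutil  # third-party package: pip install psutil

    # Sample a few of the metrics that performance monitoring tracks.
    cpu_pct = psutil.cpu_percent(interval=1)        # CPU utilization over one second
    mem_pct = psutil.virtual_memory().percent       # memory utilization
    before = psutil.net_io_counters()
    time.sleep(1)
    after = psutil.net_io_counters()
    dropped_per_sec = after.dropin - before.dropin  # inbound packets dropped

    print(f"CPU {cpu_pct}% | memory {mem_pct}% | dropped packets/s {dropped_per_sec}")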

28
Q

The fact that a CSP is a large organization with many potentially valuable customers may increase its exposure to which of the following risks?

A. Data Center Location
B. Compliance
C. Downtime
D. General Technology Risks

A

D. General Technology Risks

Explanation:
Cloud computing risks can depend on the cloud service model used. Some risks common to all cloud services include:

CSP Data Center Location: The location of a CSP’s data center may impact its exposure to natural disasters or the risk of regulatory issues. Cloud customers should verify that a CSP’s locations are resilient against applicable natural disasters and consider potential regulatory issues.
Downtime: If a CSP’s network provider is down, then its services are unavailable to its customers. CSPs should use multivendor network connectivity to improve network resiliency.
Compliance: Certain types of data are protected by law and may have mandatory security controls or jurisdictional limitations. These restrictions may affect the choice of a cloud service model or CSP.
General Technology Risks: CSPs are a big target for attackers, who might exploit vulnerabilities or design flaws to attack CSPs and their customers.
29
Q

Yakov is a cloud data architect and is currently designing the storage structure for data in the company’s Infrastructure as a Service (IaaS) deployment. He has decided to use a structure that identifies data by the record and its fields. What is he using?

A. Relational database
B. File based
C. Object based
D. Data lake

A

A. Relational database

Explanation:
Relational databases store data in tables. Each row in the table is called a record or a tuple, and the columns are called fields or attributes.
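
A minimal sketch of the record/field structure using Python's built-in sqlite3 module, with a hypothetical table and values:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # Each column is a field (attribute); each inserted row is a record (tuple).
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")
    conn.execute("INSERT INTO customers (name, region) VALUES (?, ?)", ("Acme Ltd", "EU"))

    # A record is located by the values of its fields.
    row = conn.execute("SELECT id, name, region FROM customers WHERE name = ?",
                       ("Acme Ltd",)).fetchone()
    print(row)  # -> (1, 'Acme Ltd', 'EU')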

Object-based cloud storage is a storage architecture that organizes and manages data as discrete objects rather than traditional file hierarchies or blocks. In this model, each object is assigned a unique identifier and stored as a self-contained unit along with its metadata and attributes. This approach allows for scalable and efficient storage and retrieval of data in cloud environments.

A data lake is a centralized and scalable repository that stores raw, unprocessed, and heterogeneous data from various sources in its native format. It is designed to accommodate vast volumes of structured, semi-structured, and unstructured data without the need for predefined schemas or data transformations.

File-based cloud storage is a type of cloud storage service that allows users to store and access files in the cloud. This means that users can organize and access their files on a cloud-based file system that is similar to the file system they use on their local computer or network storage device.

30
Q

Concerns about vendor lock-in are MOST related to which of the following?

A. Resiliency
B. Portability
C. Performance
D. Interoperability

A

B. Portability

Explanation:
Some important cloud considerations have to do with its effects on operations. These include:

Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure.
Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
Reversibility: Reversibility refers to the ability to recover from a change that went wrong. For example, how difficult it is to restore on-site operations after a transition to an outsourced service (like a cloud provider).
Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization’s cloud environment relies too much on a provider’s unique implementation or the provider doesn’t offer easy export, the company may be stuck with that provider due to vendor lock-in.
Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers’ environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
Outsourcing: Using cloud environments requires handing over control of a portion of an organization’s infrastructure to a third party, which introduces operational and security concerns.
31
Q

Isabella is a cloud data architect who has been working with application developers. They are building a machine learning tool for their business. There is a great deal of concern about protecting some of the information because, if there is a breach, the implications are wide-ranging: the customers could lose confidence in the business, and the regulatory fines are quite high. So, they are interested in a technology that will allow the data to be used in machine learning, possibly through mathematical operations and Boolean logic, without revealing the actual values.

What technology do they need?

A. Symmetric encryption
B. Tokenization
C. Fully Homomorphic Encryption
D. Public key cryptography

A

C. Fully Homomorphic Encryption

Explanation:
Fully Homomorphic Encryption (FHE) allows the manipulation of encrypted data without needing to decrypt it first. FHE is definitely still evolving, although some FHE options do exist today. FHE preserves the mathematical structure of the data, allowing mathematical operations such as addition and multiplication, or Boolean operations such as AND and OR, to be performed on the ciphertext.
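
Stated informally, a scheme is homomorphic for an operation when Dec(Enc(a) ∘ Enc(b)) = a + b, so whoever performs the computation never sees a or b. Partially homomorphic schemes support one such operation; in the Paillier cryptosystem, for instance, multiplying two ciphertexts yields an encryption of the sum of the plaintexts. Fully homomorphic schemes support both addition and multiplication, which is enough to evaluate arbitrary functions on encrypted data.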

With symmetric encryption and public key cryptography, the data must be decrypted to be used. Symmetric encryption includes algorithms such as the Advanced Encryption Standard (AES), which uses a single key for both encryption and decryption.

Public key cryptography is sometimes called asymmetric encryption. It involves algorithms such as RSA and DH. When encrypting with RSA or other similar algorithms, two keys are used: a public key and a private key. When one key is used for encryption, the other is used for decryption.

Tokenization is the practice of utilizing a random or opaque value to replace what would otherwise be sensitive data. Tokenization satisfies the question’s need not to reveal the actual data, but the data cannot be used in its tokenized format; it must be returned to the original value for any operations to be performed.

32
Q

Which of the following development methods is LEAST like the others?

A. Kanban
B. Scrum
C. Waterfall
D. Agile

A

C. Waterfall

Explanation:
Software development teams can use various development methodologies. Some of the most common include:

Waterfall: The waterfall design methodology strictly enforces the steps of the SDLC. Generally, every part of each stage must be completed before moving on to the next; however, some versions allow stepping back to an earlier phase as needed or only addressing some of the software’s requirements in each go-through.
Agile: Agile development methodologies such as Scrum or Kanban differ from Waterfall in that they are iterative. During each iteration, the team identifies requirements and works to fulfill them in a set (short) period before moving on to the next phase. Shorter development cycles enable the team to adapt to changing requirements, and Agile practices commonly embrace automation to support repeated processes and security testing (DevSecOps) to streamline the development process.
33
Q

Which of the following describes how an organization plans to maintain operations during an incident?

A. DRP
B. DIA
C. BIA
D. BCP

A

D. BCP

Explanation:
A business continuity plan (BCP) sustains operations during a disruptive event, such as a natural disaster or network outage. It can also be called a continuity of operations plan (COOP).

A disaster recovery plan (DRP) works to restore the organization to normal operations after such an event has occurred.

The decision of what needs to be included in a business continuity plan is determined by a business impact assessment (BIA), which determines what is necessary for the business to function vs. “nice to have.”

34
Q

Which of the following would be the BEST way to mitigate the risk of cryptographic failures on web applications?

A. Sanitize and validate all client-supplied input data
B. Ensure up-to-date and strong standard algorithms, protocols, and keys
C. User-supplied data is not validated, filtered, or sanitized by the application
D. Establish and use a library of secure design patterns or paved road

A

B. Ensure up-to-date and strong standard algorithms, protocols, and keys

Explanation:
Web applications store a lot of sensitive data such as credit card information, authentication credentials, and Personally Identifiable Information (PII). It is important for web applications to keep their users’ information safe. One way to mitigate the risk of cryptographic failures (formerly sensitive data exposure in the OWASP Top 10 2017) is by implementing proper encryption technologies. One element of that is to ensure up-to-date and strong standard algorithms, protocols, and keys.
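
As an illustration of "up-to-date and strong standard algorithms," the following minimal Python sketch uses AES-256 in GCM mode via the third-party cryptography package (the plaintext is a made-up example):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    key = AESGCM.generate_key(bit_length=256)  # current-standard strong key size
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)  # 96-bit GCM nonce; must never be reused with the same key
    ciphertext = aesgcm.encrypt(nonce, b"4111 1111 1111 1111", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"4111 1111 1111 1111"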

Establishing and using a library of secure design patterns or a paved road is a solution to insecure design.

Sanitizing and validating all client-supplied input data is a way to prevent Server-Side Request Forgery (SSRF).

User-supplied data not being validated, filtered, or sanitized by the application is a common problem; if user-supplied data were validated, filtered, and/or sanitized, that would prevent injection attacks.

The (ISC)2 books do not cover these topics very well, and it is good to be familiar with the threats and the prevention methods. OWASP’s website is a great resource for studying these topics further.

35
Q

A business impact assessment (BIA) should be performed during which stage of creating a BCP/DRP?

A. Testing
B. Implementation
C. Creation
D. Auditing

A

C. Creation

Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:

Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before them. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.

Auditing is not one of the three stages of developing a BCP/DRP.

36
Q

Which of the following cloud models is the BEST choice for an organization looking to optimize its cloud environment for the various applications and data being hosted there?

A. Hybrid Cloud
B. Multi-Cloud
C. Public Cloud
D. Community Cloud

A

B. Multi-Cloud

Explanation:
Cloud services are available under a few different deployment models, including:

Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
37
Q

A global social media company has been collecting personal data of European Union (EU) citizens. The EU citizens’ personal data was being stored in Frankfurt, Germany. However, a corporate decision was made to move the data to a server in Seattle, Washington, USA. What issue have they now caused for themselves?

A. Malware that modifies data
B. Regulatory noncompliance
C. Improper credential management
D. Accidental deletion of data

A

B. Regulatory noncompliance

Explanation:
The General Data Protection Regulation (GDPR) from the EU requires that personal data collected from natural persons within the EU be stored within the EU. It can leave the EU for a variety of other countries that have been pre-approved by the EU, including countries like Argentina, Switzerland, Israel, Japan, etc. (not a bad thing to know before the test). The US is not on that list. The US Privacy Shield has been declared defunct by the European courts because the required protection mechanisms around that data are not being put in place in the US. This opens the company to lawsuits; Meta was recently fined 1.3 billion USD for doing exactly this.

Accidental deletion of data is exactly that, accidental deletion. This has been happening since users have had access to computers. The cloud providers finally figured this out, so when you delete anything, most cloud providers will store the data (of any kind) for an additional 30 days.

Malware has been modifying data since the 1980s. This includes everything from viruses to ransomware.

Improper credential management is a very common problem that has also been around for a very, very long time. This could involve anything from leaving the default account with the default password in place to not removing an account when a user leaves the business.

38
Q

Which of the following SOC duties involves continuous monitoring and investigation?

A. Threat Detection
B. Threat Prevention
C. Quality Assurance
D. Incident Management

A

A. Threat Detection

Explanation:
The security operations center (SOC) is responsible for managing an organization’s cybersecurity. Some of the key duties of the SOC include:

Threat Prevention: Threat prevention involves implementing processes and security controls designed to close potential attack vectors and security gaps before they can be exploited by an attacker.
Threat Detection: SOC analysts use Security Information and Event Management (SIEM) solutions and various other security tools to identify, triage, and investigate potential security incidents to detect real threats to the organization.
Incident Management: If an incident has occurred, the SOC may work with the incident response team (IRT) to contain, investigate, remediate, and recover from the identified incident.

Quality Assurance is not a core SOC responsibility.

39
Q

Which of the following describes the cloud’s ability to grow over time as demand increases?

A. Scalability
B. Mobility
C. Elasticity
D. Agility

A

A. Scalability

Explanation:
Scalability refers to a system's ability to grow over time as demand increases, which is what the question describes.
Elasticity refers to its ability to both grow and shrink dynamically to match current demand.
Agility and mobility are not the terms used to describe this cloud characteristic.

40
Q

Sabia works on the security side of a Development/Security/Operations (DevSecOps) team. The team needs to perform testing that will show whether all the modules will work together once they are combined. What type of testing needs to be done?

A. Usability testing
B. Integration testing
C. Regression testing
D. Unit testing

A

B. Integration testing

Explanation:
Integration testing is a testing process that aims to verify the interaction and compatibility of different software modules, components, or systems when combined or integrated together. It focuses on identifying defects or issues that may arise due to the interaction between these software elements.

Usability testing, also known as User eXperience (UX) testing or user acceptance testing, is a process of evaluating a software application’s ease of use, intuitiveness, and overall user satisfaction. The goal of usability testing is to assess how well the software meets the needs of its intended users and identify any usability issues or areas for improvement.

Unit testing is a testing approach that focuses on verifying the individual components or units of software in isolation. It involves testing the smallest testable parts of a software system, such as functions, methods, or classes, to ensure their correctness and proper functionality.

Regression testing is a testing process that verifies whether changes or updates to a software application have introduced new defects or caused previously functioning features to break. It aims to ensure that the existing functionality of the software remains intact after modifications or enhancements have been made.
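To make the distinction concrete, here is a minimal Python sketch; the modules and functions are hypothetical, invented only for illustration. The unit test checks one module in isolation, while the integration test verifies that the modules still work once combined:

import unittest

# Two hypothetical modules of a larger application.
def parse_order(raw: str) -> dict:
    """Parse a raw 'item:quantity' string into an order record."""
    item, qty = raw.split(":")
    return {"item": item, "quantity": int(qty)}

def price_order(order: dict, unit_price: float) -> float:
    """Compute the total price of a parsed order."""
    return order["quantity"] * unit_price

class UnitTests(unittest.TestCase):
    def test_parse_order(self):
        # Unit test: verifies parse_order by itself.
        self.assertEqual(parse_order("widget:3"), {"item": "widget", "quantity": 3})

class IntegrationTests(unittest.TestCase):
    def test_parse_then_price(self):
        # Integration test: verifies price_order accepts what parse_order produces.
        self.assertEqual(price_order(parse_order("widget:3"), 2.50), 7.50)

if __name__ == "__main__":
    unittest.main()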

41
Q

A small enterprise would like to move their environment from one cloud provider to another. However, the cloud provider implemented techniques that have made it very difficult to move their systems to a new provider. What is this an example of?

A. Interoperability
B. Vendor lock-in
C. Vendor lock-out
D. Portability

A

B. Vendor lock-in

Explanation:
Vendor lock-in is the term used to describe the scenario in which a cloud customer is stuck using one cloud provider for one reason or another. Vendor lock-in can occur when the cloud provider has implemented technologies that make it difficult for the customer to move their data to another provider without hassle. For example, early versions of Apple's iTunes imported music in the AAC format rather than the more widely supported MP3, making it harder to move a music library to another platform.

Interoperability means that data created on one type of system (for example, Microsoft 365 on a Mac) can be read on a different system (for example, Microsoft 365 on a Windows machine).

Portability is when data can be moved from one cloud provider to another without having to be recreated.

Vendor lock-out happens when the customer can no longer access their systems or data at all, most commonly when a cloud provider files for bankruptcy. If a Cloud Service Provider (CSP) goes bankrupt and its servers are turned off until the courts work through the case and determine what will happen to the corporation's assets, the cloud customer is locked out.

42
Q

Which of the following statements regarding General Data Protection Regulation (GDPR) is FALSE?

A. GDPR does offer some exemptions for national security agencies and law enforcement agencies
B. GDPR is focused on protecting the personal and private data of EU citizens
C. GDPR has no impact on organizations operating outside of the EU
D. Under Article 33 of the GDPR, data controllers have 72 hours to report a breach to the applicable agencies

A

C. GDPR has no impact on organizations operating outside of the EU

Explanation:
The General Data Protection Regulation (GDPR) is a regulation that focuses on protecting the data of EU citizens regardless of where the data was created, collected, processed, or stored. What matters is where the person is when their data is collected. For example, if a German citizen is sitting in Germany when their data is collected, they are protected, but if a German citizen is in the US when their data is collected, they are not protected under GDPR. This means that if a citizen of the EU, sitting in the EU, utilizes a website that is run by an organization outside of the EU, that organization is still required by law to adhere to GDPR.

If a corporation is breached, the 72-hour clock starts the moment the organization first becomes aware of the breach. Within that window, the data controller must perform a reasonable initial assessment and investigation and report what is known to the regulators.

GDPR does offer exemptions for law enforcement and national security agencies.

43
Q

Media sanitization is still a critical concept in the cloud. The media may be virtual for the customer, but it is physical in the provider’s possession. If the Cloud Service Provider (CSP) is going to manage the media properly to protect their customers and the data, they must ensure that it is erased/destroyed properly. If they have a need to purge the magnetic media, what would be an approved choice?

A. Degauss the magnetic media
B. Shred the magnetic media
C. Overwrite the media with all zeros
D. Incinerate the magnetic media

A

A. Degauss the magnetic media

Explanation:
For magnetic media, degaussing in an organizationally approved degausser rated at a minimum for the media is the approved choice.

NIST Special Publication 800-88 defines clear and purged as follows:

Clear: applies logical techniques to sanitize data in all user-addressable storage locations for protection against simple non-invasive data recovery techniques; typically applied through the standard Read and Write commands to the storage device, such as by rewriting with a new value or using a menu option to reset the device to the factory state (where rewriting is not supported).
Purge: applies physical or logical techniques that render target data recovery infeasible using state-of-the-art laboratory techniques.

To clear magnetic media, overwrite it with at least a single write pass using a fixed data value, such as all zeros. Multiple write passes or more complex values may optionally be used.

Destroy renders target data recovery infeasible using state-of-the-art laboratory techniques and results in the subsequent inability to use the media for storage of data.

To destroy magnetic media such as floppy disks and diskettes, incinerate them in a licensed incinerator or shred them.
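As a concrete illustration of the single-pass overwrite used to clear magnetic media, here is a minimal Python sketch that overwrites an ordinary file with zeros. The file name is hypothetical, clearing a raw device would also require elevated privileges, and on SSDs or virtualized cloud storage an overwrite may never reach every physical cell, which is one reason the purge and destroy options exist:

import os

def clear_overwrite(path: str, chunk_size: int = 1024 * 1024) -> None:
    """Overwrite every byte of a file with zeros in a single pass."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk_size, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # force the overwrite through OS caches to the media

clear_overwrite("old_backup.img")  # hypothetical file to be cleared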
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 55-56.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 93-94.

44
Q

Which of the following cloud data encryption solutions is MOST likely to use a customer-controlled key?

A. Storage-Level Encryption
B. File-Level Encryption
C. Object-Level Encryption
D. Volume-Level Encryption

A

D. Volume-Level Encryption

Explanation:
Data can be encrypted in the cloud in a few different ways. The main encryption options available in the cloud are:

Storage-Level Encryption: Data is encrypted as it is written to storage using keys known to/controlled by the CSP.
Volume-Level Encryption: Data is encrypted when it is written to a volume connected to a VM using keys controlled by the cloud customer.
Object-Level Encryption: Data written to object storage is encrypted using keys that are most likely controlled by the CSP.
File-Level Encryption: Applications like Microsoft Word and Adobe Acrobat can encrypt files using a user-provided password or a key controlled by an IRM solution.
Application-Level Encryption: An application encrypts its own data using keys provided to it before storing the data (typically in object storage). Keys may be provided by the customer or CSP.
Database-Level Encryption: Databases can be encrypted at the file level or use transparent encryption, which is built into the database software and encrypts specific tables, rows, or columns. These keys are usually controlled by the cloud customer.

Reference:

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 69-70.

45
Q

To ensure compliance with regulatory requirements, an organization must conduct an annual assessment of its negotiated service agreements with its present Cloud Service Provider (CSP). This year, the organization may decide to change their CSP due to cost concerns.

What is the most critical element to consider now, one that should also have been considered before they moved to the cloud in the first place?

A. Reversibility
B. Auditability
C. Resiliency
D. Interoperability

A

A. Reversibility

Explanation:
Reversibility refers to the ability for a customer to retrieve their data and all artifacts from a CSP. If a customer cannot get all their data and artifacts back from the CSP, it will be very difficult to leave.

Interoperability is defined in ISO/IEC 17788 as the ability for two systems to exchange and mutually use information. This is more important when building systems or determining where data will reside or who will receive the data.

Auditability is defined in ISO/IEC 17788 as the capability of collecting and making available necessary evidential information related to the operation and use of a cloud service, for the purpose of conducting an audit.

Resiliency is the ability for a system to maintain an acceptable level of service in the face of faults that affect normal service, as defined by ISO/IEC 17788.

ISO/IEC 17788 is one of the few documents that ISO makes available for free. It is a basic description of cloud computing and a good, short read.

46
Q

Having a proper mapping strategy will enable an organization to do which of the following?

A. Easily group together data of similar types and classification levels
B. Know when data is modified within an application
C. Classify data based on its structured or unstructured format
D. Know all the locations where data is stored

A

D. Know all the locations where data is stored

Explanation:
To implement security controls and policies, an organization must first know where data is stored. Having a proper mapping strategy enables an organization to know all the locations where data is stored. This knowledge goes a long way in creating effective security policies.

Data does not have to be grouped based on the type or the classification levels. Mapping is about knowing where the data exists.

Data is not classified based on whether it is structured or unstructured. Data is classified based on the sensitivity of the data.

Mapping data does not enable the administrators to know when someone has modified it through an application.

47
Q

At which phase of the cloud data lifecycle does encryption key rotation become an important consideration?

A. Archive
B. Use
C. Share
D. Create

A

A. Archive

Explanation:
The cloud data lifecycle has six phases, including:

Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data (see the sketch after this list).
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure/crypto shredding.
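Below is a minimal key-rotation sketch for the Archive phase, using the Fernet primitives from the third-party Python cryptography package; the record contents are invented for illustration. MultiFernet.rotate decrypts a token with any known key and re-encrypts it under the newest key, so archived data never lingers under an aging key:

from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
archived = old_key.encrypt(b"2021 transaction ledger")  # encrypted years ago

new_key = Fernet(Fernet.generate_key())
rotator = MultiFernet([new_key, old_key])  # first key in the list is used to encrypt

rotated = rotator.rotate(archived)  # decrypt with the old key, re-encrypt with the new
assert new_key.decrypt(rotated) == b"2021 transaction ledger"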
48
Q

In a cloud-based Data Loss Prevention (DLP) solution, it is critical to ensure discovery and monitoring are done well. Which of the following is a major concern when a DLP solution is monitoring traffic?

A. True positive
B. False positives
C. Data in motion
D. Data in use

A

B. False positives

Explanation:
A false positive in the monitoring state means that an alert and a log entry are created and must be investigated even though, in truth, nothing happened: "false" means the alert is not telling the truth, and "positive" means it is flagging an event. Someone's time and energy must be diverted to uncover what happened, time and energy that would be better spent elsewhere, which is why false positives are a major concern in DLP monitoring.

A true positive means that an alert is created for an actual loss event that needs to be addressed.
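A toy monitoring rule makes the difference concrete. This Python sketch assumes a naive regular-expression rule for US Social Security numbers; the pattern and messages are invented for illustration. Both messages trigger an alert, but only the first is a real loss event; the second is a false positive an analyst still has to triage:

import re

SSN_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive SSN pattern

messages = [
    "Employee SSN is 123-45-6789",        # true positive: real sensitive data
    "Order tracking code: 555-12-3456",   # false positive: merely matches the pattern
]

for msg in messages:
    if SSN_RULE.search(msg):
        print("DLP alert:", msg)  # both fire; one wastes an analyst's time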

Data in motion is probably the most common place DLP solutions are deployed, but that is simply a statement of fact about DLP monitoring, not a concern.

Data in use is another area of concern for protecting data, but the topic of the question is data loss, meaning data going somewhere it should not. Data in use means the data has already arrived, which is too late for a DLP solution; other controls are needed there.

A good read on this topic is the CSA SecaaS Category 2 DLP document.

49
Q

Which of the following roles ensures that data’s context and meaning are understood and that it is used properly?

A. Data Steward
B. Data Owner
C. Data Processor
D. Data Custodian

A

A. Data Steward

Explanation:
There are several roles and responsibilities related to data ownership, including:

Data Owner: The data owner creates or collects the data and is responsible for it.
Data Custodian: A data custodian is responsible for maintaining or administrating the data. This includes securing the data based on instructions from the data owner.
Data Steward: The data steward ensures that the data’s context and meaning are understood and that it is used properly.
Data Processor: A data processor uses the data, including manipulating, storing, or moving it. Cloud providers are data processors.
50
Q

Cloud service providers have a shared responsibility model that explains the distribution of tasks and responsibilities between the customer and the provider. A customer has subscribed to a server-based Platform as a Service (PaaS) and used it for the creation of their new application, which is running on another server-based subscription.

Who is responsible for the protection of the personal data that the application will store?

A. Data custodian
B. Data processor
C. Cloud customer
D. Cloud provider

A

C. Cloud customer

Explanation:
The cloud customer is always responsible for protecting their data, no matter whose hands it’s in.

Under the European Union (EU) General Data Protection Regulation (GDPR), a data processor is an organization that processes (which includes storing) personal data on behalf of another, and it does have responsibilities for protecting the personal data in its possession. However, nothing in the question indicates that this scenario falls under the GDPR, so data processor is not the best answer here. If the scenario involved the personal data of people in the EU (and a few more conditions were met), then the cloud provider would be a data processor required to protect the data in its possession.

The cloud provider should protect the data in their possession, but the customer is always responsible for their data, no matter where it is stored or processed.

The data custodian is the one who is in possession of the data, which could include the end users, the Information Technology (IT) department, the cloud service provider, and so on. Whoever is handling data should take care to protect the data in their possession. However, the customer is always responsible for their data, no matter who’s in possession of it.

51
Q

Which of the following measures the amount of time that a company is willing to accept a given system being down after a disruptive event?

A. MTD
B. RPO
C. RSL
D. RTO

A

D. RTO

Explanation:
A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:

Recovery Time Objective (RTO): The RTO is the amount of time that an organization is willing to have a particular system be down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations.
Recovery Service Level (RSL): The RSL measures the percentage of compute resources needed to keep production environments running while shutting down development, testing, etc.
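A short worked example with illustrative numbers (none of them from the card) shows how these metrics relate in a plan:

# Illustrative numbers only.
mtd_hours = 8   # business tolerates at most 8 hours of downtime
rto_hours = 4   # objective: restore the system within 4 hours
backup_interval_hours = 6

assert rto_hours < mtd_hours  # the RTO must stay below the MTD

# Worst case, the most recent backup is one full interval old,
# so the RPO can be no better than the backup interval.
rpo_hours = backup_interval_hours
print(f"Plan accepts up to {rpo_hours}h of data loss and {rto_hours}h of downtime")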
52
Q

Sehaj is building a new server-based Platform as a Service (PaaS) Virtual Machine (VM). As a customer, he is concerned about whether the VM will be able to expand to match users' needs over time. To make sure that the VM is placed on the best server, which tool would he be interested in knowing is in use?

A. Distributed resource scheduling
B. Maintenance mode
C. Dynamic optimization
D. High availability

A

C. Dynamic optimization

Explanation:
Dynamic optimization is the process in which cloud environments are constantly monitored and maintained to ensure that the resources are available when needed and that nodes share the load equally so that one node doesn’t become overloaded. This process ensures that VMs will be added to the best physical server based on the requirements in the configuration and the current load on the server.
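In the spirit of that description, here is a toy placement decision in Python; the host names, capacities, and the "most free resources" policy are invented for illustration and are far simpler than any real scheduler:

vm_need = {"cpu": 4, "ram_gb": 16}

hosts = {
    "host-a": {"free_cpu": 8,  "free_ram_gb": 32},
    "host-b": {"free_cpu": 2,  "free_ram_gb": 8},   # too loaded to take the VM
    "host-c": {"free_cpu": 16, "free_ram_gb": 64},
}

# Keep only hosts with enough headroom, then pick the least-loaded one.
candidates = [name for name, c in hosts.items()
              if c["free_cpu"] >= vm_need["cpu"] and c["free_ram_gb"] >= vm_need["ram_gb"]]
best = max(candidates, key=lambda name: (hosts[name]["free_cpu"], hosts[name]["free_ram_gb"]))
print("Place the new VM on", best)  # host-c has the most free resources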

Distributed resource scheduling is a method for providing high availability, workload distribution, and the balancing of jobs in a cluster.

When a host is in maintenance mode, the virtual machines are suspended. This allows for patching, configuration, and saving of images of virtual machines.

High availability is the concept that systems experience little to no downtime. High availability mode is usually found between two or more devices. It allows the devices to communicate with each other and pick up the traffic for the other if one of the devices fails. It is often seen with devices such as firewalls.

53
Q

According to studies, the later in the software development lifecycle errors are discovered, the more expensive they are to remedy. What can be done to avert such problems?

A. Secure Software Development Lifecycle (SSDLC)
B. Dynamic code execution testing techniques
C. Interactive application security testing
D. Static code analysis techniques

A

A. Secure Software Development Lifecycle (SSDLC)

Explanation:
Including security in the Software Development Lifecycle (SDLC) aids in the creation of secure software. The Secure Software Development Lifecycle (SSDLC) is expected to yield software solutions that are more secure against attack, minimizing the risk of important business and consumer data being exposed.

An SSDLC includes testing throughout development, which encompasses the other three answer options.

Static code analysis is the review of the source code, looking for flaws or errors.

Dynamic code execution testing is an analysis of the application in a running environment. This could be use cases, misuse cases, and so on.

Interactive Application Security Testing (IAST) is the analysis of the running application with the source code visible beside it.
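To make the static-analysis point concrete, here is the classic kind of flaw such tools flag, sketched with Python's built-in sqlite3 module; the table and function names are invented for illustration:

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Flagged by static analyzers: attacker-controlled input spliced into SQL.
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver handles escaping, closing the injection hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

Catching the unsafe version during development is far cheaper than remediating it after release, which is the point of building security into the lifecycle.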

54
Q

Which of the following statements is TRUE regarding a compromised hypervisor?

A. A compromised hypervisor is only a threat to the virtual machines hosted on it and not other hypervisors in the environment
B. A compromised hypervisor can be used to attack all virtual machines on that hypervisor and also be used to attack other hypervisors
C. A compromised hypervisor is only a threat to other hypervisors in the environment but not a threat to the actual virtual machines
D. A compromised hypervisor can be used to attack network devices, but it can’t be used to attack other hypervisors in the environment

A

B. A compromised hypervisor can be used to attack all virtual machines on that hypervisor and also be used to attack other hypervisors

Explanation:
A compromised hypervisor can have serious consequences. If an attacker can compromise a hypervisor, they will then have access to all the virtual machines that are hosted on that hypervisor. In addition, the attacker could use the hypervisor as a launching pad for additional attacks on other hypervisors since each hypervisor plays a central role in the cloud environment.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 203-204.

55
Q

Alicia made the decision to create virtual machines through the use of a hypervisor. The hypervisor will be installed after a full Operating System (OS) is deployed. What type of hypervisor is this?

A. Software-based
B. Bare metal
C. Type 1
D. Software as a Service (SaaS)

A

A. Software-based

Explanation:
There are two types of hypervisors: Type 1 and Type 2. A Type 2 hypervisor is loaded on top of a full OS, so it is often referred to as a software-based hypervisor, which is what the question describes.

Type 1 is effectively the OS for the machine, so it is often referred to as a bare-metal hypervisor.

SaaS is a cloud service model that provides software to the customer. Because cloud services are offered on a pay-as-you-go basis, you are effectively renting the software rather than buying it. Simple examples today include Microsoft 365, Gmail, and Dropbox.

56
Q

An organization may communicate with which of the following to define SLAs and report outages that have affected the organization's operations?

A. Partners
B. Vendors
C. Regulators
D. Consumers

A

B. Vendors

Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:

Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect brand awareness.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.

Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.

57
Q

Kelly is working with software developers planning the security controls to be added to their software. They have been discussing the different forms of data protection that can be added to the software. They are looking for something that can be used to protect the software code. Of the following, what can be added to protect the code from reverse engineering?

A. Tokenization
B. Anonymization
C. Obfuscation
D. Hashing

A

C. Obfuscation

Explanation:
Obfuscation means to confuse or disguise, and it can be used in many places and in many ways. Encryption can be considered a form of obfuscation, but not all obfuscation is encryption. Obfuscation can be as simple as transmitting something in Base64 rather than Base16, or it can be used to hide the actual nature of code to complicate reverse engineering.
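As a concrete example of the Base64 case, this Python sketch shows why simple encoding only obscures rather than protects: it is trivially reversible without any key (the encoded value is invented for illustration):

import base64

secret = b"api_key=12345"                      # illustrative value
obfuscated = base64.b64encode(secret)          # b'YXBpX2tleT0xMjM0NQ=='

print(obfuscated.decode())
assert base64.b64decode(obfuscated) == secret  # reversed with no key at all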

Hashing is used to create a value that represents a piece of data that can be used to verify the integrity of data.

Tokenization replaces a piece of data with another value (a token) that can later be mapped back to the original.

Anonymization is removing direct and indirect personal identifiers permanently.

58
Q

Anyang works for a software development company and is identifying the security requirements for a new project. The company is developing a piece of software for a customer who is particularly concerned about the quality of their data. The transactions the software performs are critical to the customer's business, and the team is working to ensure that the stored results of those transactions are properly maintained.

What tool can be used to verify the integrity of the stored transaction?

A. Encryption
B. Hashing
C. Obfuscation
D. Parity

A

B. Hashing

Explanation:
Hashing creates a fixed-size fingerprint or checksum (known as the hash value) of the original data object. As long as the same hashing algorithm is used and the data is unchanged, the computed hash value will always be the same; any change to the data produces a different hash value.
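A minimal sketch with Python's hashlib, using an invented transaction record: the hash computed at storage time is compared against a recomputation later, and any tampering changes the digest:

import hashlib

record = b"2024-06-01,acct:1234,amount:250.00"   # illustrative stored transaction

stored_digest = hashlib.sha256(record).hexdigest()  # saved alongside the record

# Later: recompute and compare to verify integrity.
assert hashlib.sha256(record).hexdigest() == stored_digest

tampered = b"2024-06-01,acct:1234,amount:950.00"
assert hashlib.sha256(tampered).hexdigest() != stored_digest  # change detected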

Encryption is a reversible process (decryption) that renders data unreadable to protect confidentiality. It provides at best a weak integrity check; hashing is what is designed for integrity checks.

Parity serves a similar function to hashing, but it was not built to handle data the size of a transaction database. It is still commonly used in storage technology (e.g., RAID and erasure coding).

Obfuscation renders data unclear. It misrepresents the data to make it difficult for someone who is not supposed to read it to do so.

59
Q

Ngoni and his information security team are working with the Information Technology (IT) team to determine if they should move from an on-premises data center into an Infrastructure as a Service (IaaS) virtual data center. Of the following, which is critical to consider in the early stages of this process?

A. Create a cloud committee
B. Hire a team of cloud experts
C. Cost-benefit analysis
D. Proof of concept

A

C. Cost-benefit analysis

Explanation:
Any organization that is considering a move from an on-premises solution to the cloud should first perform a cost-benefit analysis to ensure that the decision makes sense for the company.

If the cost-benefit analysis looks good, the other answer options can follow. It is arguable that cloud experts are needed to perform a proper cost-benefit analysis; however, this option says to hire a team, which is probably more than is needed before the initial analysis.

If the cost-benefit analysis is favorable, cloud experts can then put together a proof-of-concept trial to ensure that the technology will work properly for the business.

A cloud committee could be put together if all the above look good and the business is going to make the move to the cloud.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 20-22.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 45.

60
Q
A