Pocket Prep 8 Flashcards

1
Q

A college student is looking to set up their own cloud server so that they can install a few programs and create a lab. They need a cloud option that is cost-effective and will allow them to only pay for what they need. They don’t have the funds to purchase and maintain their own hardware.

Which cloud model would suit this student’s needs the BEST?

A. Public cloud
B. Private cloud
C. Community cloud
D. Hybrid cloud

A

A. Public cloud

Explanation:
A public cloud would be the best option for this student because it is the least expensive and will allow them to pay only for the resources that they use. Since the student is planning to use the server as a lab environment, it’s unlikely that it will cost much money. In fact, the big public cloud providers even have a free tier of services.

Community cloud is a close second for this question: a university could set up a community cloud, shared among colleges and universities, to provide services to specific types of students. However, the question mentions nothing of the kind. You could even look at the question from a selfish perspective: the college student wants to set something up just for themselves, so community is not the right answer.

A private cloud is cost-prohibitive. The server, wherever it is located, would be dedicated to this student, which would cost more than most university students could ever afford.

Since community and private are not viable options here, hybrid is not the correct answer either. A hybrid cloud is a combination of two or more of the other models: public, private, and community.

2
Q

If a data center has:

Fault tolerant Uninterruptible Power Supply (UPS)
Dual, diverse power path
12-hour fault tolerant standby power
Standby power has a continuous or unlimited runtime rating
Fault tolerant standby power generation

What tier data center would you have?

A. Tier II
B. Tier IV
C. Tier III
D. Tier I

A

B. Tier IV

Explanation:

A Tier IV data center is fault tolerant; several items in that list confirm this level.

A Tier I data center has the basic capacity and some UPS devices.

A Tier II data center has redundant power and cooling capability.

A Tier III data center has multiple power paths that provide a concurrently maintainable environment. Repair work should not affect production systems.

Switch has patented a Tier V data center standard, which you can find more information about on their website. There is a link under their awards to an interesting video that gives you a great idea of what the data center tiers are all about.

3
Q

Donna is working with her customer to set up the security controls they need to be able to share their content. They have case studies and suggested configurations for their products. They want to ensure that only their existing customers can access these files. What tool would you recommend?

A. Secure Shell (SSH)
B. Information Rights Management (IRM)
C. Data Rights Management (DRM)
D. Transport Layer Security (TLS)

A

B. Information Rights Management (IRM)

Explanation:

First, this question is not meant to trick you if you answered DRM; it is meant to point out one view of the terms IRM and DRM. DRM is often associated with public content such as Kindle, iTunes, and Netflix, while IRM more often protects corporate information intended for customers in some manner. Cisco uses Locklizard as its IRM of choice for classroom courseware. (ISC)2 uses another IRM tool.

IRM and DRM control how content can be used, from how long you have access to a file to whether you can print it or take screenshots.

TLS and SSH are not suitable answers because they would only encrypt information in transit. The question asks about security controls related to sharing, which is a bigger topic and requires IRM or DRM. In this case, IRM, because the sharing is corporate, not public.

4
Q

Which of the following is concerned with the proper restoration of systems after a disaster or unexpected outage?

A. Information security management
B. Continuity management
C. Incident management
D. Change management

A

B. Continuity management

Explanation:

Continuity management, sometimes known as business continuity management, is concerned with restoring systems and devices after a disaster or unexpected outage has occurred. Business Continuity and Disaster Recovery (BCDR) plans are a part of continuity management. ITIL defines continuity management as the preparation for large-scale interruptions. Many people outside the ITIL discipline would call this Disaster Recovery (DR).

Change management is the process of tracking and managing a change throughout its entire lifecycle.

Incident management is the process of responding to an adverse event.

Information security management is arguably all that is done by information security managers within a business, including the cloud and standard data centers.

5
Q

Service level management is concerned with the oversight of service level agreements (SLAs). SLAs typically are used in contracts between service providers, such as cloud service providers, and their customers, especially when the provider is a public cloud provider. If the Information Technology (IT) department is building a private cloud in their on-premises data center, what would be the equivalent term used between IT and the business units?

A. Annual Rate of Occurrence (ARO)
B. Operational Level Agreement (OLA)
C. Master Services Agreement (MSA)
D. Recovery Time Objectives (RTO)

A

B. Operational Level Agreement (OLA)

Explanation:

Operational Level Agreements (OLAs) are similar to SLAs, but rather than existing between a customer and an external provider, OLAs exist between two units within the same organization.

MSAs are less specific than SLAs. SLAs typically define items such as uptime or bandwidth requirements. MSAs are used to define the relationships between the two parties in the contract.

AROs define the expected number of times that an incident or disaster could happen within a given year. The RTO is the window of time allocated for the recovery team to bring backup solutions operational.

6
Q

Simone has been working within the Information Technology (IT) department, analyzing the golden images used to start new virtual servers. Using a software tool against an image in a running virtual environment, they detected two fixes that need to be applied as a result of a recently released CVE notice. It has been determined that there is a fix from the vendor that they can apply.

What would be the next action they should take?

A. Store a new golden image
B. Confirm the Common Vulnerability Score
C. Patching
D. Run a new scan

A

C. Patching

Explanation:
Patching is used to fix bugs found in software, apply security vulnerability fixes, introduce new software features, and more. Before patches are applied, they should be properly tested and validated. In this particular scenario, however, there is no option to test the patch, so of these options, patching is the logical next step. In fact, you cannot test the patch until it has been applied, so patching the golden image remains the next step before the new image is stored.

Once it is patched, it is good to test it to ensure that everything is still working as it should be. Part of this could involve running a new scan. Once it is verified as good, a new golden image is stored and made available for use.

When a CVE is released, a score is given to it based on the Common Vulnerability Scoring System (CVSS). This is arguably a good thing to check, but the software tool that pointed to the CVE should also show the CVSS score.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 221-223.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 186.

7
Q

A cloud customer who has been using a Hardware Security Module (HSM) is migrating to a newer model. They want to ensure that their keys can never be recovered by anyone, so they are taking steps to guarantee that. One of those steps is to overwrite the erased data with arbitrary data and zero values.

What are they doing?

A. Zeroization
B. Cryptographic erasure
C. Degaussing
D. Data hijacking

A

A. Zeroization

Explanation:

Zeroization is another term for overwriting. In this process, erased data is overwritten with arbitrary data and zero values as a means of data sanitization.
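
A minimal sketch of the overwrite idea in Python follows (illustrative only: filesystems and SSDs may remap blocks, so this is not a substitute for proper media sanitization procedures):

```python
import os
import secrets

def zeroize_file(path: str) -> None:
    """Overwrite a file with arbitrary (random) data, then zeros, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))  # pass 1: arbitrary data
        f.flush()
        os.fsync(f.fileno())
        f.seek(0)
        f.write(b"\x00" * size)             # pass 2: zero values
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```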

Cryptographic erasure is when data is encrypted and the key that was used is then destroyed. This is a practical option for customers who do not have access to the physical drives their data resides on, which is true for PaaS, SaaS, and possibly IaaS in public clouds.

Degaussing disrupts the magnetic state of magnetic drives, i.e., Hard Disk Drives (HDDs), and renders the drive unusable. The cloud provider could perform this sanitization, as could the operator of a private cloud.

Data hijacking is not a standard term; it describes bad actors taking control of someone's data, as ransomware does.

8
Q

Your organization would like to automate a process that involves two applications. The data that moves between the applications must be synced in real time, and one system needs to boot up before the other. What can be used to synchronize the operations of these applications?

A. Application Programming Interface (API) Gateway
B. Tokenization
C. Sandboxing
D. Orchestration

A

D. Orchestration

Explanation:
Orchestration is a technique for synchronizing and coordinating the operations of multiple applications that work together to complete a business activity. These are managed groups of applications, and their actions are choreographed based on the rules you establish.

Tokenization is a process that replaces a piece of sensitive data, such as a credit card number, with another unrelated value. For credit cards, the bank keeps a separate database that lets it map the token back to the card number. This prevents card numbers from needing to be sent across the internet; it is how PayPal, Apple Pay, and others work.

An API gateway is a piece of software that enables APIs to be processed through a single entry point when they are actually all processed by different backend services. It also provides threat protection. Gateways in general can be thought of as layer 7 firewalls.

Sandboxing, or process isolation, is a tool used to contain a piece of code, an application, a virtual machine, or others. It isolates it so that it cannot interact out of the sandbox or into the sandbox except as designed.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 78-79, 158.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 13.

9
Q

Which of the following is NOT true of the responsibility for securing network and communication infrastructure in the cloud?

A. The cloud provider is responsible for securing the links between the customer’s and the provider’s environments.
B. The cloud provider offers tools for securing cloud environments, but the customer is responsible for properly configuring and using them.
C. The cloud provider is responsible for securing the physical infrastructure in its environment.
D. The cloud customer is responsible for securing the physical infrastructure in its data center.

A

A. The cloud provider is responsible for securing the links between the customer’s and the provider’s environments.

Explanation:

The responsibility for securing network and communication infrastructure is typically shared between the CSP and the cloud customer. The CSP and cloud customer are each responsible for the security of the infrastructure within their facilities, and they share responsibility for securing traffic between them (over the Internet). This is often accomplished using cryptography, with the CSP offering secure protocols and the customer using them. Also, in cloud environments, the CSP is responsible for offering the tools needed to secure an environment (encryption, logging, etc.), but the customer is responsible for configuring and using these tools properly.

10
Q

Which of the following network security controls involves the principle of least privilege and individually evaluating each access request?

A. Network Security Groups
B. Traffic Inspection
C. Zero Trust Network
D. Geofencing

A

C. Zero Trust Network

Explanation:
Network security controls that are common in cloud environments include:

Network Security Groups: Network security groups (NSGs) limit access to certain resources, such as firewalls or sensitive VMs or databases. This makes it more difficult for an attacker to access these resources during their attacks.
Traffic Inspection: In the cloud, traffic monitoring can be complex since traffic is often sent directly to virtual interfaces. Many cloud environments have traffic mirroring solutions that allow an organization to see and analyze all traffic to its cloud-based resources.
Geofencing: Geofencing limits the locations from which a resource can be accessed. This is a helpful security control in the cloud, which is accessible from anywhere.
Zero Trust Network: Zero trust networks apply the principle of least privilege, where users, applications, systems, etc. are only granted the access and permissions that they need for their jobs. All requests for access to resources are individually evaluated, so an entity can only access those resources for which it has the proper permissions (a toy sketch follows this list).
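
Here is that sketch; the grant table and all names are hypothetical:

```python
# Hypothetical grant table: every (entity, resource) pair lists only the
# actions explicitly granted, in line with least privilege.
GRANTS = {
    ("alice", "payroll-db"): {"read"},
    ("backup-svc", "payroll-db"): {"read", "snapshot"},
}

def authorize(entity: str, resource: str, action: str) -> bool:
    """Zero trust style check: each request is evaluated individually,
    and anything not explicitly granted is denied."""
    return action in GRANTS.get((entity, resource), set())

assert authorize("alice", "payroll-db", "read")
assert not authorize("alice", "payroll-db", "write")  # not granted, so denied
```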

Reference:

(ISC)2 CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 126-127.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 37-39.

11
Q

ITIL provides many different processes that can be instituted within the Information Technology (IT) department. Which management strategy is focused on preventing issues from occurring within a system or process in a proactive manner by looking for the root cause of previous bad events?

A. Release management
B. Availability management
C. Problem management
D. Incident management

A

C. Problem management

Explanation:
Problem management focuses on proactively preventing issues from occurring within a system or process. It is usually performed after an incident has had a big impact on the business: the root cause of the incident is sought so that the incident does not occur again.

Incident management is reactive in nature. This process is run when something does happen that is causing problems or even damage to systems, applications, data, or even the business.

Release management is the practice that makes available for use new or changed services and features.

Availability management is the process that is followed to ensure that services deliver agreed levels of availability (e.g., 99.99% uptime or 25 CPU hours/month) to the users.

12
Q

Which of the following is NOT a physical or environmental control?

A. Biometric lock
B. Intrusion Detection System (IDS)
C. Intrusion Prevention System (IPS)
D. Uninterruptible Power Supply (UPS)

A

C. Intrusion Prevention System (IPS)

Explanation:
An intrusion prevention system helps protect a network from malicious activity and intrusions, and therefore, is not considered a physical or environmental control.

IDSs actually do exist in physical security; a sensor on a door or window that alerts when it is opened is a type of IDS. A biometric lock is a physical control even though it involves biometrics. A UPS is a battery that provides a power source during a power outage, which is considered a physical control.

13
Q

Which type of security testing impersonates a normal user with no special knowledge or access to the system?

A. White-box
B. Gray-box
C. Clear-box
D. Black-box

A

D. Black-box

Explanation:
Software testing can be classified as one of a few different types, including:

White-box: In white-box or clear-box testing, the tester has full access to the software and its source code and documentation. Static application security testing (SAST) is an example of this technique.
Gray-box: The tester has partial knowledge of and access to the software. For example, they may have access to user documentation and high-level architectural information.
Black-box: In this test, the attacker has no specialized knowledge or access. Dynamic application security testing (DAST) is an example of this form of testing.
14
Q

Sachio is working for a financial trading corporation and has been working with the Incident Response Team (IRT) to do practice drills for a variety of different incidents that they have prepared for. As they move through the phases, there is a point when they should take actions to prevent further damage from this incident to the corporation.

Which phase would this be?

A. Respond phase
B. Post-incident phase
C. Prepare phase
D. Recover phase

A

A. Respond phase

Explanation:
The first step in the respond phase is containment. The purpose of containment is to protect an organization from further damage caused by a known incident. Disconnecting affected systems, disabling hardware, and disconnecting storage are only a few of the responsibilities.

The prepare phase is the process of building plans to be able to respond to incidents as they happen. It is a crucial step for determining and building teams and for writing the procedural documents that will be used when an incident does happen. The question says to "prevent further damage from this incident," so we are in an incident response, not preparing for one. The same reasoning rules out the post-incident and recover phases.

The recover phase is when actions are taken to return things to a normal condition. More controls could be added, or configurations of existing controls could be changed; however, those changes are aimed at future events.

The post-incident phase primarily involves breaking down the steps that were just taken in a specific incident. The point is to find what needs to be improved, not point fingers, and get better at their response capability.

15
Q

An information security manager is concerned about the security of portable devices in the organization which have been given access to corporate resources. What can this information security manager implement to manage and maintain the devices?

A. Symmetric encryption technology
B. Mobile Device Management (MDM)
C. Remote control to be able to delete files
D. Remote control to be able to disable it

A

B. Mobile Device Management (MDM)

Explanation:
Mobile Device Management (MDM) is the term used to describe the management and maintenance of mobile devices (such as tablets and mobile phones) that have access to corporate resources. Usually, MDM software will be installed on the devices so that the IT staff can manage the devices remotely in the case of a lost or stolen device.

MDM software usually has the following:

Symmetric encryption technology for the drive on the mobile device
Remote control to be able to disable or even 'brick' the device if it is lost or stolen
Remote control to be able to delete files in the event the phone is lost or stolen

All the answer options are good, but MDM is the all-inclusive answer, which makes it the best choice.

16
Q

Sade, the information security manager, has been working with the software development team to ensure that the customer’s desired functionality has been fully defined while ensuring that security is also taken into consideration. Which software development phase are they in?

A. Coding
B. Requirements
C. Design
D. Planning

A

B. Requirements

Explanation:
The requirements phase of the SDLC is when the desired functionality is defined. This plan will outline the specifications for the features and functionality of the software or application to be created.

The planning phase is where the idea of developing a specific piece of software is considered based on feasibility and costs.

The design phase takes the desired functionality and defines the architecture, integration points, data flows, and so on, producing a design that can be followed in the next phase, the coding phase.

The coding phase is where the software developers write the lines of code that will become the application. There is usually functional testing that begins in this phase as well as Static Application Security Testing (SAST).

17
Q

Which of the following BEST defines the Annual Rate of Occurrence (ARO)?

A. Jenna has determined that the server that could be hit by ransomware is valued at two million USD
B. Jenna has been able to determine that if the server is offline, it must be restored to a functional state within three hours
C. Jenna has been able to determine that ransomware is likely to occur once every three years
D. Jenna has determined that the cost of a ransomware event will likely be five million USD

A

C. Jenna has been able to determine that ransomware is likely to occur once every three years

Explanation:
ARO stands for Annualized Rate of Occurrence, which is defined by the estimated number of times a threat will successfully exploit a vulnerability in a given year. By multiplying the Single Loss Expectancy (SLE) by the ARO, you are able to determine the Annual Loss Expectancy (ALE).

ARO = Jenna has been able to determine that ransomware is likely to occur once every three years.

SLE = Jenna has determined that the cost of a ransomware event will likely be five million USD.

Asset value = Jenna has determined that the server that could be hit by ransomware is valued at two million USD.

Maximum Tolerable Downtime (MTD) = Jenna has been able to determine that if the server is offline, it must be restored to a functional state within three hours.
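
Plugging the scenario's numbers into the standard quantitative risk formulas shows how the values relate (a minimal sketch; the figures come from the question):

```python
# Quantitative risk values from the scenario (USD).
asset_value = 2_000_000  # server value
sle = 5_000_000          # Single Loss Expectancy per ransomware event
aro = 1 / 3              # once every three years, about 0.33 per year

ale = sle * aro          # Annual Loss Expectancy = SLE x ARO
print(f"ALE = {ale:,.0f} USD/year")  # ALE = 1,666,667 USD/year
```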

18
Q

Your organization must be able to rapidly scale resources up or down, as required, to meet future needs and from a variety of cloud geographical regions. Which cloud characteristic is required in this scenario?

A. Elasticity
B. Resource pooling
C. On-demand
D. High availability

A

A. Elasticity

Explanation:
Elasticity increases and decreases resources as needed; unlike scalability, this happens automatically. Resources are added or removed dynamically, based on current needs, from a variety of geographical locations.
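
As an illustration of the automatic scale-out and scale-in decision an elastic platform makes, here is a minimal sketch; the capacity figure and fleet limits are made-up assumptions:

```python
import math

def desired_instances(current_rps: float, rps_per_instance: float = 100,
                      min_n: int = 1, max_n: int = 20) -> int:
    """Size the fleet to current demand, automatically and in both directions."""
    n = math.ceil(current_rps / rps_per_instance)
    return max(min_n, min(max_n, n))

print(desired_instances(950))  # high load: scale out to 10 instances
print(desired_instances(40))   # low load: scale back in to 1 instance
```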

On-demand is the cloud characteristic in which you will be able to access a web page and request, configure, and build a cloud service without having to interact with the cloud provider.

High availability is a characteristic often found with firewalls and with other similar devices that enable communication between redundant devices so that if one fails, the other can continue the communication without disruption to the user.

Resource pooling includes the pooling of resources within a server such as a CPU and memory as well as the pool of resources within a data center, for example, servers.

19
Q

Which of the following organizations publishes Top 10 lists describing the most common vulnerabilities in various types of software (web apps, web APIs, etc.)?

A. NIST
B. OWASP
C. SANS
D. CSA

A

B. OWASP

Explanation:
Several organizations provide resources designed to teach about the most common vulnerabilities in different environments. Some examples include:

Cloud Security Alliance Top Threats to Cloud Computing: These lists name the most common threats, such as the Egregious 11. According to the CSA, the top cloud security threats include data breaches; misconfiguration and inadequate change control; lack of cloud security architecture and strategy; and insufficient identity, credential access, and key management.
OWASP Top 10: The Open Web Application Security Project (OWASP) maintains multiple top 10 lists, but its web application list is the most famous and is updated every few years. The top four threats in the 2021 list were broken access control, cryptographic failures, injection, and insecure design.
SANS CWE Top 25: The Common Weakness Enumeration (CWE), a MITRE-maintained catalog of common security errors, feeds the Top 25 list (long published with SANS) that highlights the most dangerous and impactful weaknesses each year. In 2021, the top four were out-of-bounds write, improper neutralization of input during web page generation (cross-site scripting), out-of-bounds read, and improper input validation.

NIST doesn’t publish regular lists of top vulnerabilities.

20
Q

There are many reasons a company must work to ensure that the information that they possess is managed and handled properly. One of the key elements of the European Union’s (EU) General Data Protection Regulation (GDPR) is that data shall not be stored longer than needed.

This speaks to the requirement in information security to create which of the following?

A. Data classification policy
B. Retention periods
C. Retention policy
D. Archiving and retrieval procedures

A

C. Retention policy

Explanation:
Businesses need to create a retention policy as part of information security. That policy specifies how long data can be retained and is often connected to data classification. Retention policy is the best answer because the question asks how long the data should be stored, and that is specified by the retention policy.

The retention period is what the GDPR requirement points to, but the period is specified within the retention policy. The wording also does not fit: you do not "create" a retention period; you specify it.

Archiving and retrieval procedures should also be spelled out with details on handling data.

21
Q

A large financial institution is using a Platform as a Service (PaaS) deployment on a public Cloud Service Provider's (CSP) network. The company has been carefully moving their systems from a traditional data center to the CSP over the last year. They are now going to deploy a new service that will handle sensitive data, so they need to ensure that the information is properly protected. The first protection that they are setting up is encryption. A cloud administrator has been tasked with safeguarding the encryption keys in a centralized setting.

What tool can be used for this?

A. Key Management Interoperability Protocol (KMIP)
B. Key Management Service (KMS)
C. Public-Key Cryptography Standards (PKCS)
D. Tokenization

A

B. Key Management Service (KMS)

Explanation:
Key Management Service (KMS) is a cloud-based service that provides secure and centralized management of cryptographic keys for encrypting and decrypting sensitive data in cloud environments. It is designed to simplify the process of key management and enhance the security of data at rest or in transit.
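
As a rough sketch of how an application might use such a service for envelope encryption, assuming an AWS-style KMS reached through the boto3 and cryptography packages (the key alias is a placeholder, not something from the question):

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# The KMS issues a data key; the master key never leaves the service.
resp = kms.generate_data_key(KeyId="alias/example-app-key", KeySpec="AES_256")
plaintext_key, wrapped_key = resp["Plaintext"], resp["CiphertextBlob"]

# Encrypt locally with the data key, then keep only the wrapped copy of it.
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"sensitive record", None)

# Store ciphertext, nonce, and wrapped_key; to decrypt later,
# kms.decrypt(CiphertextBlob=wrapped_key) returns the data key.
```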

Key Management Interoperability Protocol (KMIP) is a communication protocol that enables secure and standardized management of cryptographic keys and related objects across different key management products. It allows organizations to centralize and streamline their key management operations, ensuring interoperability and ease of integration between different key management solutions. This is the second-best answer behind KMS: the focus of the question is centralizing the storage of the keys, which is the KMS; KMIP then works with the KMS.

PKCS stands for Public-Key Cryptography Standards, a set of standards developed by RSA Laboratories to facilitate the secure use and implementation of public-key cryptography. The PKCS documents cover various aspects of public-key cryptography, including key management, digital signatures, encryption, and certificate handling.

Tokenization is a data protection technique that replaces sensitive information, such as credit card numbers or Personally Identifiable Information (PII) with a unique identifier called a token. The original data is securely stored and replaced with a randomly generated token that has no meaningful correlation to the original data.

22
Q

Ada has been tasked with implementing a solution for her organization that will assist with the Incident Response (IR) process. She is looking for a tool that will help analyze all the logs that are coming in from all the different virtual network devices, security products, and end systems.

Which of the following is a solution that Ada could implement?

A. Security Information Event Manager (SIEM)
B. Data Loss Prevention (DLP)
C. Intrusion Detection System (IDS)
D. Intrusion Prevention System (IPS)

A

A. Security Information Event Manager (SIEM)

Explanation:
The product that is useful for incident response, as well as general management of a network, is a SIEM. A SIEM collects logs from all the products in the network, cloud or traditional. These products include routers, switches, servers, virtual servers, firewalls, IDS, IPS, DLP, and more, whether virtual or physical. The SIEM then correlates all these events and looks for Indications of Compromise (IoC). These IoCs are then analyzed, typically by a Security Operations Center (SOC). If a compromise is found, the IR team can be activated.

A DLP is a tool that was originally developed to watch the network for traffic that was leaving that would be considered a leak or a loss. It can do more now, such as watching emails for phishing attacks.

IDS and IPS products watch for intrusive traffic on either a network or end system.

23
Q

A cloud provider is offering an online data storage solution in a Software as a Service (SaaS) environment. As a customer, there is a Graphical User Interface (GUI) that allows the user to store data and easily access it when needed. The Cloud Service Provider (CSP) actually stores the data within a database that they maintain.

What type of storage is described here?

A. Block storage
B. Object storage
C. Ephemeral storage
D. Information storage and management

A

D. Information storage and management

Explanation:
Information storage and management is the classic form of storing data within databases that the application uses and maintains. This storage method is used in Software as a Service (SaaS) offerings.

Ephemeral storage is temporary storage used by virtual machines. When a Windows server runs in a virtual environment, it must believe that it has local storage; that is how the server software was written, so until there is a major rewrite, we must give it temporary storage to use while it is running. Anything stored only in ephemeral storage will be lost if the virtual machine shuts down before the data is moved to persistent storage.

Block storage is a term used in Infrastructure as a Service (IaaS). Blocks are assigned in set amounts of space at a time (e.g., 1 terabyte of space that could be increased by 1 terabyte at a time). A block is like a drive on a traditional computer: it is fixed in size, although in the cloud it can be expanded. Inside blocks you find volumes. A volume is like a file folder.

Inside volumes you find objects. An object is a file. The file could be a document, a movie, a sound file, etc.

24
Q

Many CSPs that want to work with US government contractors have cloud services that are audited against which of the following?

A. G-Cloud
B. ISO/IEC 27017
C. PCI DSS
D. FedRAMP

A

D. FedRAMP

Explanation:
Cloud service providers may have their environments verified against certain standards, including:

ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud.
PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources.
25
Q

When is the MOST optimal time to determine if data needs to be classified?

A. Use phase
B. Archive phase
C. Create phase
D. Store phase

A

C. Create phase

Explanation:
When data is created during the create phase, the sensitivity of the data is known, or at least should be. It should then be handled properly from the beginning, as all additional phases will build off of the create phase. The create phase includes the alteration or modification of data.

Storage should happen shortly after the create phase. If it does not, it is very possible that data could be lost.

The use phase is when data is being utilized at a later point.

Archive is when data is being moved into a long-term storage.

Use and archive are too late for classification. In practice, data sometimes does get classified in those phases, but it should be done in the create phase.

26
Q

A public cloud provider has been building data centers around the world. They now have a data center on most of the continents. They have not built any in Antarctica yet. What has driven the cloud provider to build so many data centers?

A. Reduce resilience and redundancy
B. Centralized control of Information Technology
C. Easier communication throughout the company
D. Improved performance and scalability

A

D. Improved performance and scalability

Explanation:
With a distributed IT model, there are many benefits. As a cloud provider, it is more efficient to have data centers closer to where the users are. Microsoft has even experimented with a data center in a tube off the northern coast of Scotland. The location was not the key; it was an experiment to see if it would work. More people on the planet live closer to an ocean than not.

By building many data centers, resilience and redundancy are actually improved, not reduced.

Easier communication throughout the company may be something that they want, but a distributed IT model does not have that as a goal or a benefit. That is fundamentally a different topic.

By building so many different data centers, or a distributed IT environment, control of Information Technology (IT) can be localized. The laws of the country the data center is in can be managed by the people in that country. The goal is not to centralize control of IT.

27
Q

According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), what is the ideal humidity level for a data center?

A. 20-40 percent relative humidity
B. 40-60 percent relative humidity
C. 50-70 percent relative humidity
D. 45-65 percent relative humidity

A

B. 40-60 percent relative humidity

Explanation:
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends that data centers maintain 40-60 percent relative humidity. It is important to ensure that both the temperature and humidity are at ideal levels within a data center. Whenever humidity levels are too high, condensation can form and cause water damage to systems. Whenever humidity levels are too low, electrostatic discharge is more likely to occur and damage systems.

28
Q

It is vital for your firm to implement a solution that satisfies compliance and regulatory requirements. To prevent destructive commands from being executed on your organization's data storage, it is necessary to monitor for suspicious activity and send notifications when anomalies are discovered.

Which of the following security controls should the organization consider implementing?

A. Web Application Firewall (WAF)
B. Extensible Markup Language (XML) Gateway
C. Application Programming Interface (API) Gateway
D. Database Activity Monitor (DAM)

A

D. Database Activity Monitor (DAM)

Explanation:
The first clue in the question is that data storage is being protected. It would be nice if the question actually said database; however, the rules that (ISC)2 follows include one stating that the answer cannot contain words from the question, and a database is a data storage technology. It is possible to detect malicious commands being executed on your organization's database and to prevent them from being executed with the help of a Database Activity Monitor (DAM). In addition, suspicious activity is monitored, and alerts are sent out when anomalies are discovered.
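
As a toy illustration of the kind of rule a DAM enforces (hypothetical patterns and names; real DAM products are far more sophisticated):

```python
import re

# Flag destructive statements before they reach the database and raise an alert.
DESTRUCTIVE = re.compile(r"^\s*(DROP|TRUNCATE|DELETE\s+FROM)\b", re.IGNORECASE)

def inspect_statement(sql: str, user: str) -> bool:
    """Return True if the statement may run; alert and block otherwise."""
    if DESTRUCTIVE.match(sql):
        print(f"ALERT: destructive statement blocked for {user!r}: {sql!r}")
        return False
    return True

inspect_statement("DELETE FROM accounts", "web-app")    # blocked, alert raised
inspect_statement("SELECT * FROM accounts", "web-app")  # allowed
```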

A WAF watches the web activity. This includes Hyper Text Markup Language (HTML), Hyper Text Transfer Protocol (HTTP), and so on. There is nothing in the question regarding web access for the customers. This is likely, but that is not the focus of the question.

An API gateway is a type of firewall that inspects APIs, including Representational State Transfer (REST) and SOAP traffic. That is also something this company would need, but again, there is nothing in the question pointing to an API of any type.

The XML gateway is very similar to the API gateway. However, it is just looking at the eXtensible Markup Language (XML) traffic.

29
Q

Pawel is working for a financial institution that needs to respond as quickly as possible to security breaches. They have deployed Intrusion Detection Systems (IDS), firewalls, Database Activity Monitors (DAM), and so on. To be able to respond quickly, they are looking for a control to correlate logs and create reports about network activity.

What would you recommend?

A. Security Information and Event Management (SIEM)
B. Next Generation Firewall (NGFW)
C. File Activity Monitor (FAM)
D. Transport Layer Security (TLS)

A

A. Security Information and Event Management (SIEM)

Explanation:
Security Information and Event Management (SIEM) systems provide a great number of functions, including log correlation and reporting.

The question implies that there are already NGFWs in place because it mentions firewalls. Firewalls will block or allow traffic and create logs.

FAMs monitor user activity and generate logs. Neither NGFW nor FAMs can correlate logs. They send their logs to the SIEM for that work to be done.

TLS is used to encrypt transmissions of traffic, which can include encrypting the transmission of the logs, but again, this is not used to correlate logs.

30
Q

A corporation has submitted their product, a Hardware Security Module (HSM), for testing. They need to prove to their customers that it is going to be able to protect itself from physical tampering. The tester has proven that their product will detect tampering attacks and overwrite the stored data with zeros. What have they achieved?

A. FIPS 140-3 Level 3
B. FIPS 140-3 Level 2
C. Common Criteria Level 4
D. Common Criteria Level 3

A

A. FIPS 140-3 Level 3

Explanation:
The National Institute of Standards and Technology (NIST) Federal Information Processing Standard (FIPS) 140-2/140-3 is a standard for the physical security of cryptographic modules. A Level 2 certification means there will be evidence of tampering, such as a cut piece of tape. Level 3 products must be able to detect tampering and respond by zeroizing the data, including the crypto keys stored on the device.

Common Criteria is International Organization for Standardization (ISO)/IEC 15408. Its Evaluation Assurance Level (EAL) 3 is methodically tested and checked; EAL4 is methodically designed, tested, and reviewed.

31
Q

Which of the following organizations developed the Application Security Verification Standard (ASVS)?

A. NIST
B. SAFECode
C. SANS
D. OWASP

A

D. OWASP

Explanation:
The Application Security Verification Standard (ASVS) was developed by OWASP and provides resources for testing secure coding. It specifies 14 control objectives that can be used to guide tests and organize reporting.

32
Q

Company A and Company B have both purchased cloud services from a Cloud Service Provider (CSP). Company A and Company B are both sharing access to a pool of resources owned by the CSP from the multiple servers that are in the data center to the Random Access Memory (RAM) within the server. Because the two companies could be on the same server, sharing the same physical RAM of that server, the hypervisor has a particular responsibility.

What is that responsibility?

A. Logging
B. Isolation
C. Containers
D. Virtualization

A

B. Isolation

Explanation:
When there are multiple tenants on a single physical server, they must be isolated from each other. Those tenants can be different companies, like in the question, in a public cloud. In a private cloud, the tenants are different departments or on different projects.

There is logging that needs to be done at the hypervisor level, but the question is driving at the two companies that need to be protected. So, they need to be isolated.

If you are building containers, you do not need a hypervisor. They do not operate the same way. Virtual machines are built on hypervisors. Containers are built on top of the Operating System (OS), usually Linux.

Virtualization is the foundation idea that allows clouds to be built. Hypervisors allow for the virtualization of servers, desktops, etc., but again the question is trying to get the two companies isolated from each other.

33
Q

What type of testing is performed during the maintenance phase of software development to guarantee that changes to the software program do NOT destroy existing functionality, introduce new vulnerabilities, or resurface previously resolved vulnerabilities?

A. Unit testing
B. Integration testing
C. Regression testing
D. Usability testing

A

C. Regression testing

Explanation:
This is a good definition of regression testing. Regression testing is responsible for ensuring that the functionality of the existing features remains when software is being updated.
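
A minimal pytest-style sketch of the idea, using a hypothetical parse_amount function: the tests pin existing behavior and a previously fixed bug so that later changes cannot silently reintroduce either problem.

```python
import pytest

def parse_amount(text: str) -> int:
    """Existing behavior: parse a whole-dollar string such as '$1,250'."""
    return int(text.replace("$", "").replace(",", ""))

def test_parse_amount_regression():
    # Cases that worked before a change must keep working after it.
    assert parse_amount("$1,250") == 1250
    assert parse_amount("0") == 0

def test_rejects_garbage_regression():
    # A previously resolved bug stays resolved: non-numeric input must raise.
    with pytest.raises(ValueError):
        parse_amount("not-a-number")
```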

Unit testing focuses on individual units or components of the software.

Integration testing is when individual software modules are combined and tested as a group.

Usability testing is done to evaluate the user experience with the software.

34
Q

A medical corporation that develops pharmaceutical drugs treating bacterial infections, cancers, and other medical conditions, and that operates in many different countries, faces a few legal challenges. When storing data, they know that they must protect the data appropriately. They have been working with legal counsel familiar with the local laws in each country in which they operate. One of those laws is the United States (U.S.) Health Insurance Portability and Accountability Act (HIPAA).

Under HIPAA, can they store data outside of the U.S. in a cloud provider’s network?

A. Yes, it is possible if data is protected properly
B. No, it is not allowed under any condition
C. Yes, it is possible and standard security controls are sufficient
D. No, it is not allowed because it is impossible to control the cloud

A

A. Yes, it is possible if data is protected properly

Explanation:
It is possible for a covered entity to store protected health information outside of the United States. It is allowed if the security rule is known and followed. The covered entity should establish a Business Associate Agreement (BAA) with their cloud provider. This is essentially a Privacy Level Agreement (PLA) generically or a Data Processing Agreement (DPA) under the European Union’s (EU) General Data Protection Regulation (GDPR).

35
Q

Belle is a cloud data architect and has decided to organize some of their data in an object-relational database. Data that is easily searchable and organized within a database is known as:

A. Semi-structured data
B. Relational data
C. Structured data
D. Unstructured data

A

C. Structured data

Explanation:
Structured data is data that is predictable. In a relational database (in an object-relational database, the backend is still relational), the data is predictable: every row or record has exactly the same fields as all the other records, and the size of each field is the same in each record. This makes it structured.

Unstructured data is not predictable. The size of the next Word document that you create is unpredictable: will it have photos, graphs, GIFs, or just plain text? That makes it unstructured. If your brain is fighting that, it is likely because you are thinking about how Microsoft Word formats the file so that the Word software can read it. That is looking at the file from the inside; here we are looking at it from the outside, where the storage space it will take is not predictable.

Semi-structured data is unstructured data stored in a structure such as a database: a file, a picture, a recording, or something similar stored in a field of a database. Traditional database fields were fixed in size because of old programming constraints, but nothing limits a modern software developer to those ideas; hence, we now have semi-structured data.
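
A small sketch of the distinction, using SQLite from the Python standard library: the fixed columns are structured, while the free-form JSON document stored in a field makes the record semi-structured.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patients (id INTEGER, name TEXT, notes_json TEXT)")

# id and name are predictable, searchable fields; notes_json holds an
# unpredictable document, making the record semi-structured overall.
notes = {"visit": "2024-01-15", "attachments": ["scan.png"], "free_text": "..."}
db.execute("INSERT INTO patients VALUES (?, ?, ?)",
           (1, "Belle", json.dumps(notes)))

row = db.execute("SELECT name, notes_json FROM patients WHERE id = 1").fetchone()
print(row[0], json.loads(row[1])["visit"])
```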

Relational data is not really a term. More properly, it is a relational database.

36
Q

Although the cloud data lifecycle is not necessarily iterative, it does have distinct phases. What is the proper sequence of the data lifecycle phases?

A. Create, Use, Share, Store, Archive, Destroy
B. Create, Use, Store, Share, Archive, Destroy
C. Create, Store, Use, Share, Archive, Destroy
D. Create, Store, Share, Use, Archive, Destroy

A

C. Create, Store, Use, Share, Archive, Destroy

Explanation:
Create, Store, Use, Share, Archive, Destroy are the phases in the cloud data lifecycle in the correct order.

All other options are in the incorrect order.

37
Q

Which cloud characteristic is discussed when the Central Processing Unit (CPU), memory, network capacity, and other things are allocated to customer virtual machines as they are needed?

A. On-demand self-service
B. Broad network access
C. Resource pooling
D. Interoperability

A

C. Resource pooling

Explanation:
When the resources, such as CPU, memory, and network, are gathered into a pool and allocated to running virtual machines as needed, resource pooling is being discussed.

On-demand self-service is when the cloud provider (private, public, or community) provides a web interface to navigate offerings and purchase them, such as AWS.amazon.com.

Interoperability is the ability of two different systems to share a piece of data and use it, such as a Word document created on a Mac, sent to a Windows device, and still readable there.

Broad network access is the requirement that the cloud be reachable over the network; when network access is available, the customer can use the cloud.

38
Q

You are assessing a Hardware Security Module (HSM) for your Infrastructure as a Service (IaaS) cloud implementation to protect your data properly. You have found several different vendors, and they are rated against the FIPS 140-3 standard. Of the four levels, which provides the HIGHEST level of security and tamper protection?

A. Level 1
B. Level 4
C. Level 2
D. Level 3

A

B. Level 4

Explanation:
The Federal Information Processing Standard (FIPS) 140-3 standard defines four levels of security. Level 1 is the lowest level of security and level 4 provides the highest level of security and tamper protection. Levels 2 and 3 are in between.

Level 1 has no physical security to it, only logical security.

Level 2 has tamper evidence. This is done through the use of seals of some kind.

Level 3 is tamper resistant. The device should react to any tampering by zeroing out all data. It provides a high probability of detecting and responding to the intrusion.

Level 4 also detects and reacts to an intrusion by zeroizing the data. Level 3 detects when the device is opened; Level 4 additionally detects environmental manipulation, such as temperature fluctuations, that an attacker could use to attack the device.

39
Q

Which of the following data security methods requires secure random number generation?

A. Hashing
B. Encryption
C. Tokenization
D. Masking

A

B. Encryption

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access (see the sketch after this list). FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is the US government standard for hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
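
Minimal standard-library sketches of several of these techniques (illustrative only, not production-grade cryptography):

```python
import hashlib
import secrets

key = secrets.token_bytes(32)  # encryption requires a securely random key

digest = hashlib.sha256(b"report.pdf contents").hexdigest()  # hashing: one-way

card = "4111111111111111"
masked = "*" * 12 + card[-4:]  # masking: reveal only the last four digits

token_vault = {}               # tokenization: token -> original value
token = secrets.token_hex(8)
token_vault[token] = card      # the vault lives only on a trusted system

print(masked, token, digest[:16])
```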
40
Q

A software development company is looking to purchase a cloud service. They need the ability to develop and maintain their applications in the cloud without needing to manage and maintain the servers and network equipment that keep the applications running. Which of the following cloud service types BEST fits the needs of the software development company?

A. Infrastructure as a Service (IaaS)
B. Database as a Service (DBaaS)
C. Platform as a Service (PaaS)
D. Software as a Service (SaaS)

A

C. Platform as a Service (PaaS)

Explanation:

Platform as a Service (PaaS) provides organizations with a place to develop and maintain software and applications without needing to maintain the infrastructure. This allows the developers and programmers to focus strictly on the tasks that they excel at, such as creating new applications.

IaaS requires the management of the virtual infrastructure, which is more than what is needed to meet the needs of the question.

SaaS is not enough for the software developers. Hotmail, Gmail, Google Docs, and Microsoft 365 (M365) are all examples of SaaS. SaaS does not provide developers with a place to create an application, because it is the application.

DBaaS is a place to store data. The developers need to be able to create applications, not store data.

41
Q

A security incident occurred within an organization that affected numerous servers and network devices. A security engineer was able to use the Security Information Event Manager (SIEM) to see all the logs pertaining to that event, even though they occurred on different devices, by using the IP address of the source.

Which function of a SIEM is being described in this scenario?

A. Correlation
B. Aggregation
C. Analysis
D. Reporting

A

A. Correlation

Explanation:
Security Information and Event Management (SIEM) systems are very useful because they are able to correlate data. This means that not only can the data be stored in one place through aggregation but also be searched using specific items such as an IP address or timestamp.
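
A minimal sketch of that correlation step, grouping aggregated log entries from different devices by source IP and ordering them by timestamp (the entries are made up):

```python
from collections import defaultdict

logs = [
    {"ts": "09:01", "device": "firewall", "src_ip": "10.0.0.7", "msg": "port scan"},
    {"ts": "09:03", "device": "server-12", "src_ip": "10.0.0.7", "msg": "failed login"},
    {"ts": "09:02", "device": "ids", "src_ip": "192.168.1.4", "msg": "normal"},
]

# Correlate: group by a shared field, then order events in time.
by_source = defaultdict(list)
for entry in logs:
    by_source[entry["src_ip"]].append(entry)

for entry in sorted(by_source["10.0.0.7"], key=lambda e: e["ts"]):
    print(entry["ts"], entry["device"], entry["msg"])
```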

Reporting is the SIEM function that creates alerts and reports regarding suspicious activity and Indications of Compromise (IoC).

Analysis is a reasonable word here, but regarding SIEMs, the correct term is correlation.

42
Q

Thorn is the information security manager working for a company that has a subscription service that allows users to watch popular TV shows. He is looking for a storage technology that would be the most effective. What form of storage is used when content is saved in object storage and then dispersed to multiple geographical hosts to increase internet consumption speed?

A. Content Delivery Network (CDN)
B. Storage Area Network (SAN)
C. Software Defined Network (SDN)
D. Software-Defined Storage (SDS)

A

A. Content Delivery Network (CDN)

Explanation:
A Content Delivery Network (CDN) provides globally-distributed object storage, allowing an organization to keep data as close to users as possible. As a result, end users benefit from reduced bandwidth consumption and decreased latency because they can pull from a server closer to their geographic location, an edge server.

SDS allows for the abstraction of the storage that exists within a physical server. Once abstracted, it can be allocated using the software that is the cloud.

SDN is a method of managing a switch-based network in a more efficient and effective manner. It adds a controller to the network that can plan all traffic flows more effectively than the switch can. It also allows administrators to add rules to control traffic flows according to corporate policies.

A SAN is a Local Area Network (LAN) that is comprised of servers that have storage as their primary function.

43
Q

Of the following types of cloud deployments, which is MOST susceptible to virtual machine and virtual switch attacks?

A. Infrastructure as a Service (IaaS)
B. Database as a Service (DBaaS)
C. Platform as a Service (PaaS)
D. Software as a Service (SaaS)

A

A. Infrastructure as a Service (IaaS)

Explanation:
Two special security considerations applicable to IaaS cloud environments are virtual switch attacks and virtual machine attacks. In an IaaS deployment, the customer loads all the Operating Systems (OSs), including those that act as the virtual routers, switches, firewalls, Intrusion Detection Systems (IDSs), and so on that make up a virtual data center. This is Infrastructure as Code (IaC): the infrastructure is not physical, it is virtual, and that includes the virtual switches.

PaaS is effectively the next level up. The customer does not need to worry about the infrastructure. There is no concern over switches or virtual switches. The construction of the cloud environment begins with the virtual server in a server-based PaaS and even higher if it is a serverless deployment. DBaaS is a PaaS deployment.

SaaS is the highest, where the customer does not even see the server. The only visibility is the software, not the server or switch.

43
Q

You have been creating the documentation needed for your corporation regarding the long-term storage and recovery of data. You know that it is necessary to retain some information for a certain period of time, while some information can only be stored for a limited amount of time. You have been working with the legal department to ensure that all laws related to their particular data types are followed.

Which section of a data retention policy would outline the steps involved in this process of storage and recovery of the data?

A. Retention formats
B. Archiving and retrieval procedures
C. Retention period
D. Data classification

A

B. Archiving and retrieval procedures

Explanation:
The data retention policy's archiving and retrieval procedures detail how data should be stored and how it can later be recovered.

Data classification is the process of identifying the sensitivity level of information and labeling it. This can have an impact on data retention, but the question is asking about the storage and recovery of data. That is archiving and retrieval.

The retention period is how long data needs to be stored, either because of a maximum time limit or a minimum time limit. However, again, the question is asking about the process, not the time limit.

Retention formats are critical to watch. If data is stored on old media, a couple of different problems can arise. The media can degrade and become unusable, leaving the data unrecoverable. The other problem is no longer having hardware or software capable of reading the data because, for example, it was written by an old computer that no longer works.

44
Q

Batu works with the DevOps team. He is an information security professional who has been tasked with ensuring that the software is properly tested. They have added Open Source Software (OSS) to their application. What is the best way to test and validate this OSS?

A. Static Application Security Testing (SAST) tools only
B. Interactive Application Security Testing (IAST) tools only
C. Dynamic Application Security Testing (DAST) tools in conjunction with Runtime Application Self-Protection (RASP) tools
D. Static Application Security Testing (SAST) tools in conjunction with Interactive Application Security Testing (IAST) tools

A

D. Static Application Security Testing (SAST) tools in conjunction with Interactive Application Security Testing (IAST) tools

Explanation:
Given that the application incorporates Open Source Software (OSS), performing Static Application Security Testing (SAST) to identify vulnerabilities in the source code and then implementing Interactive Application Security Testing (IAST) to detect additional security issues at runtime would be the best of these options.

SAST will analyze the lines of code, which is possible since it is open source. This alone is not as good as combining it with IAST.

IAST analyzes the running application while also having visibility into the lines of code being executed. This alone is not as good as combining it with SAST.

RASP is self-protection that is added to the application. It is not a testing method.

DAST is analyzing the running application for vulnerabilities visible to the user and therefore possibly exploitable by the bad actor. This is good to do. However, combining it with RASP for the sake of testing is not the best combination, since RASP is not testing.

45
Q

Grazing is working with the cloud security architect on the design of the encryption that will be used to protect the data they need to store within the public cloud. The corporation previously decided to store their data within the public cloud because it is much cheaper than building the physical Storage Area Networks (SAN) they would need within their own data center.

Of the following, which is the MOST important to consider when planning encryption?

A. Data format requirements
B. Key storage location
C. Regulatory requirements
D. Encryption algorithms

A

B. Key storage location

Explanation:
While there are many factors that come into play when setting up encryption, the most important factor to consider when placing data within the cloud is the storage location of the key. It is important that the key is not stored with the data and the encryption/decryption software. If it is stored with the cloud provider, it should be stored within a Key Management System (KMS) at least, but preferably within a Hardware Security Module (HSM). The encryption algorithm is important, but most cloud providers already offer only strong algorithms.

Regulatory requirements are important, but the law rarely demands specific algorithms, key sizes, key storage locations, or other similar factors. Laws and regulations demand that data be encrypted; at most, they may specify that current best practices be followed.

The data format is not a part of determining the encryption algorithm or key size. It is also unlikely to be a factor in the key storage location determination.
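
A minimal sketch of the key-separation principle, assuming the third-party Python cryptography package: the ciphertext can sit in cloud storage, while the key must live elsewhere (ideally a KMS or HSM).

    from cryptography.fernet import Fernet

    # Generate a data encryption key; in practice this would be created and
    # held in a KMS or HSM, never stored beside the data it protects.
    key = Fernet.generate_key()

    ciphertext = Fernet(key).encrypt(b"customer records")

    # The ciphertext alone (e.g., in a cloud object store) is unreadable;
    # only the holder of the separately stored key can recover the data.
    print(Fernet(key).decrypt(ciphertext))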

46
Q

Which of the following types of data uses tags to identify the purpose of a piece of data?

A. Unstructured
B. Structured
C. Loosely Structured
D. Semi-Structured

A

D. Semi-Structured

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.

Loosely structured is not a common classification for data.
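
A small illustration of what "tags" mean in semi-structured data, using Python's standard json module; the record and its field values are invented.

    import json

    # JSON keys act as tags that identify each value's purpose, even though
    # there is no rigid schema as there would be in a database table.
    record = '{"name": "Alice", "email": "alice@example.com", "ssn": "123-45-6789"}'

    for tag, value in json.loads(record).items():
        # A discovery tool can match tags such as "ssn" or "email"
        print(f"{tag}: {value}")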

47
Q

Which of the following is NOT a type of requirement that would be developed by the team during the Requirements phase of the SDLC?

A. Non-functional
B. Security
C. Business
D. Functional

A

A. Non-functional

Explanation:
The Software Development Lifecycle (SDLC) describes the main phases of software development from initial planning to end-of-life. While definitions of the phases differ, one commonly used description includes these phases:

Requirements: During the requirements phase, the team identifies the software's role and the applicable requirements. This includes business, functional, and security requirements.
Design: During this phase, the team creates a plan for the software that fulfills the previously identified requirements. Often, this is an iterative process as the design moves from high-level plans to specific ones. Also, the team may develop test cases during this phase to verify the software against requirements.
Development: This phase is when the software is written. It includes everything up to the actual build of the software, and unit testing should be performed regularly through the development phase to verify that individual components meet requirements.
Testing: After the software has been built, it undergoes more extensive testing. This should verify the software against all test cases and ensure that they map back to and fulfill all of the software’s requirements.
Deployment: During the deployment phase, the software moves from development to release. During this phase, the default configurations of the software are defined and reviewed to ensure that they are secure and hardened against potential attacks.
Operations and Maintenance (O&M): The O&M phase covers the software from release to end-of-life. During O&M, the software should undergo regular monitoring, testing, etc., to ensure that it remains secure and fit for purpose.
48
Q

A cloud operator is working in a data center and notices that the temperature of the data center is 71.7 degrees Fahrenheit/22 degrees Celsius and the humidity level is at 35 percent. Which of the statements is TRUE regarding this data center?

A. The temperature is ideal, but the humidity level is too low
B. The temperature is too high, and the humidity level is ideal
C. Both temperature and humidity are within the ideal ranges
D. The temperature is too high, and the humidity level is too low

A

A. The temperature is ideal, but the humidity level is too low

Explanation:
It’s very important to ensure that both temperature and humidity levels are ideal within a data center. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends that data centers maintain a temperature between 64.4-80.6 degrees Fahrenheit/18-27 degrees Celsius and a humidity level between 40-60 percent relative humidity. A humidity level of 35 percent is too low and could lead to excess electrostatic discharge.
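
A minimal Python sketch of the ASHRAE range checks described above, applied to the scenario's readings:

    # ASHRAE recommended ranges for data centers
    def check_environment(temp_f, humidity_pct):
        temp_ok = 64.4 <= temp_f <= 80.6        # degrees Fahrenheit
        humidity_ok = 40 <= humidity_pct <= 60  # percent relative humidity
        return temp_ok, humidity_ok

    # (True, False): the temperature is ideal, but 35% humidity is too low
    print(check_environment(71.7, 35))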

49
Q

As the information security manager, Ren has been working with the business continuity planning team to determine if their plan is ready. They have just performed a test that exercises everything except the actual switchover from the production environment to the backup cloud environment. What type of test have they performed?

A. Full interruption
B. Simulation
C. Tabletop
D. Parallel

A

D. Parallel

Explanation:
Parallel is the fourth level of testing within the test phase (step 6 in the list below).

The phases of developing a BC/DR plan are shown here:

Policy
Project Management & Initiation
Business Impact Analysis (BIA)
Develop Strategies
Document
Implement, Test, and Update
Embed in the user communities

In the test phase, there are about five basic levels of testing. The most basic is a checklist or desk check. This serves to ensure that the document contains all the pieces that they have been able to identify and develop. It is possible to perform this test using a list of commonly forgotten items from BC/DR plans.

The second level is historically called a structured walkthrough, although today it is more commonly called a tabletop. This allows the team to talk through the plan in a logical order to see if the pieces all fit as they believe they should.

The third level is a simulation, which does not fit well into cloud discussions. A good example is a fire drill.

The fourth is then a parallel test. This brings the backup environment to a functional state but does not take the production environment offline or cause a failover.

To test the failover capability, the final test is a full interruption.

There are alternative names to these tests that some people use. There is no one correct list because ISC2 does not follow a single standard for their exams.

50
Q

A real estate corporation is utilizing file storage in a Platform as a Service (PaaS). They are consulting with their lawyers to determine how data must be protected given the laws that they must comply with. If the data is stored within the European Union (EU) region, what law or regulation would the data be governed by?

A. The nation where the business is registered
B. The user must specify where data is to be located and stored
C. The nation where the data is stored
D. The nation where the data is collected

A

D. The nation where the data is collected

Explanation:
Data sovereignty refers to the concept that data is subject to a nation’s laws and regulations. The laws governing the data sovereignty of the country where the data is collected should be followed. If you are required to comply with a data sovereignty obligation regarding the placement of your data, global CSPs will offer locations that may satisfy these criteria.

Storing data in the EU does not mean that it must comply with the EU General Data Protection Regulation (GDPR). GDPR protects natural persons within the EU when their data is collected.

The nation the business is registered in is incorrect. If a company is registered in Argentina but operating in the EU, they need to comply with the EU GDPR.

Depending on the law, the user may need to opt in to the data being collected and stored, but the user does not specify where data is to be located and stored.

51
Q

Which one is the correct order of the SDLC?

A. Design, development, requirements, testing, operation & maintenance, deployment
B. Requirements, design, development, testing, deployment, operations & maintenance
C. Requirements, development, deployment, design, operations & maintenance, testing
D. Design, requirements, development, deployment, testing, operations & maintenance

A

B. Requirements, design, development, testing, deployment, operations & maintenance

Explanation:
The Software Development Lifecycle (SDLC) is a process/framework for development and coding. The SDLC is made up of six phases, carried out in the following order:

Requirements
Design
Development
Testing
Deployment
Operations & Maintenance

There are many variations of the SDLC, but this is the one in the ISC2 book, so it works fine here. The logical flow does not change if you use a different set of names.

52
Q

There are a few different methods of authenticating a user. Leith needs to build an Identity and Access Management (IAM) system for a cloud database. The requirement is that it must use Multi-Factor Authentication (MFA). Of the following, which would be a possible answer for his database?

A. WS-Federation and a Security Assertion Markup Language (SAML) token
B. A password with a Personal Identification Number (PIN)
C. A Security Assertion Markup Language (SAML) token with Open Authorization (OAuth)
D. A fingerprint and a Security Assertion Markup Language (SAML) token

A

D. A fingerprint and a Security Assertion Markup Language (SAML) token

Explanation:
In Multi-Factor Authentication (MFA), users are required to use two or more types of authentication factors. Factor types include something the user knows (type 1: PINs, passwords), something the user has (type 2: RSA token, key card), and something the user is (type 3: retina scan, fingerprint scan).

A fingerprint is type 3 (something the user is), and a Security Assertion Markup Language (SAML) token is type 2 (something the user has). Two distinct factor types: that is MFA.

A password is type 1, and a PIN is ambiguous. If it is like the PIN used with a bank debit card, then it is type 1 as well. If it is a user identifier, then it is not a factor at all. Either way, it is not MFA, as there is only one possible factor type here.

OAuth is authorization, not authentication. If paired with SAML or something similar, it could be counted toward type 2. However, SAML and OAuth in a single answer still cover only one factor type.

The same is true of WS-Federation with SAML: that combination includes only type 2 by itself.
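
A minimal sketch of the factor-type logic, with an invented mapping of factors to types; MFA requires at least two distinct types, not merely two factors.

    # Invented mapping of authentication factors to their factor types
    FACTOR_TYPES = {
        "password": "knowledge",     # type 1: something you know
        "pin": "knowledge",          # type 1
        "saml_token": "possession",  # type 2: something you have
        "fingerprint": "inherence",  # type 3: something you are
    }

    def is_mfa(factors):
        # True only if the factors span two or more distinct types
        return len({FACTOR_TYPES[f] for f in factors}) >= 2

    print(is_mfa(["fingerprint", "saml_token"]))  # True: types 3 and 2
    print(is_mfa(["password", "pin"]))            # False: both type 1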

53
Q

Threat modeling is critical in the process of securing software. There are many different threat modeling techniques available. Which method involves analyzing risks by assigning a score on a scale of 1-10?

A. Process for attack simulation and threat analysis (PASTA)
B. Architecture, threats, attack surfaces and mitigations (ATASM)
C. Spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege (STRIDE)
D. Damage, reproducibility, exploitability, affected users, discoverability (DREAD)

A

D. Damage, reproducibility, exploitability, affected users, discoverability (DREAD)

Explanation:
The DREAD threat model looks at five categories, including damage potential, reproducibility, exploitability, affected users, and discoverability. DREAD was invented at Microsoft and has been largely abandoned, but it does give a score to each of those categories on a scale of 1-10.

STRIDE was also invented at Microsoft to standardize the way that they describe threats.

ATASM focuses on examining the potential threats to the software architecture.

PASTA takes business objectives into consideration when using threat intelligence to uncover potential threats.
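
A minimal sketch of DREAD's arithmetic, with hypothetical ratings: each of the five categories gets a 1-10 score, and the overall risk is commonly taken as the average.

    def dread_score(damage, reproducibility, exploitability,
                    affected_users, discoverability):
        ratings = [damage, reproducibility, exploitability,
                   affected_users, discoverability]
        if not all(1 <= r <= 10 for r in ratings):
            raise ValueError("DREAD ratings are on a 1-10 scale")
        return sum(ratings) / len(ratings)

    # Hypothetical ratings for a single threat
    print(dread_score(8, 6, 7, 9, 5))  # 7.0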

54
Q

Albert is working on the team that is building Disaster Recovery (DR) plans for their business. They have servers that their customers use in their Platform as a Service (PaaS). They are working on determining the impact to their business if it goes offline. They have determined that they will have thirty hours to perform the actions needed to recover their service on a different provider in the event of a disaster.

What is this time frame called?

A. Recovery Time Objective (RTO)
B. Recovery Service Level (RSL)
C. Recovery Point Objective (RPO)
D. Mean Time to Repair (MTR)

A

A. Recovery Time Objective (RTO)

Explanation:
The RTO, or Recovery Time Objective, is the measurement of how long administrators and operators have to recover operations after a disaster has occurred. The full window of time that the service could be offline is actually the Maximum Tolerable Downtime (MTD). Part of that window is allocated to the chaos that occurs when something major happens.

The RPO is the recovery point for data. It defines how much data can be lost as a unit of time.

The RSL is the percentage of functionality that the business must achieve when operating in the alternative configuration.

The MTR, also called the Mean Time To Repair (MTTR), is the average time it takes to repair something.
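
A small sketch of how these time windows relate; the MTD and chaos-window figures are assumed for illustration, while the 30-hour RTO comes from the scenario.

    mtd_hours = 36    # Maximum Tolerable Downtime (assumed figure)
    chaos_hours = 6   # time lost to initial confusion after the disaster (assumed)
    rto_hours = 30    # from the scenario: thirty hours to recover the service

    # The recovery work plus the initial chaos must fit inside the MTD
    assert chaos_hours + rto_hours <= mtd_hours, "RTO does not fit within the MTD"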

55
Q

In multitenant cloud environments, it is the cloud service provider’s responsibility to implement logical controls to protect against which of the following threats?

A. Unauthorized Provisioning
B. Improper Disposal
C. Theft or Media Loss
D. Unauthorized Access

A

D. Unauthorized Access

Explanation:
Data storage in the cloud faces various potential threats, including:

Unauthorized Access: Cloud customers should implement access controls to prevent unauthorized users from accessing data. Also, a cloud service provider (CSP) should implement controls to prevent data leakage in multitenant environments.
Unauthorized Provisioning: The ease of setting up cloud data storage may lead to shadow IT, where cloud resources are provisioned outside of the oversight of the IT department. This can incur additional costs to the organization and creates security and compliance challenges since the security team can’t secure data that they don’t know exists.
Regulatory Non-Compliance: Various regulations mandate security controls and other requirements for certain types of data. A failure to comply with these requirements — by failing to protect data or allowing it to flow outside of jurisdictional boundaries — could result in fines, legal action, or a suspension of the business’s ability to operate.
Jurisdictional Issues: Different jurisdictions have different laws and regulations regarding data security, usage, and transfer. Many CSPs have locations around the world, which can violate these laws if data is improperly protected or stored in an unauthorized location.
Denial of Service: Cloud environments are publicly accessible and largely accessible via the Internet. This creates the risk of Denial of Service attacks if the CSP does not have adequate protections in place.
Data Corruption or Destruction: Data stored in the cloud can be corrupted or destroyed by accident, malicious intent, or natural disasters.
Theft or Media Loss: CSPs are responsible for the physical security of their data centers. If these security controls fail, an attacker may be able to steal the physical media storing an organization’s data.
Malware: Ransomware and other malware increasingly target cloud environments as well as local storage. Access controls, secure backups, and anti-malware solutions are essential to protecting cloud data against theft or corruption.
Improper Disposal: The CSP is responsible for ensuring that physical media is disposed of correctly at the end of life. Cloud customers can also protect their data by using encryption to make the data stored on a drive unreadable.
56
Q

Cloud is built from compute, network, and storage elements. When purchasing compute capability, such as a virtual server on a cloud provider, what elements do you need to define within the self-service portal?

A. Storage and Central Processing Unit (CPU)
B. Storage and Random Access Memory (RAM)
C. Random Access Memory (RAM) and Central Processing Unit (CPU)
D. Shares and storage

A

C. Random Access Memory (RAM) and Central Processing Unit (CPU)

Explanation:
Compute and processing capability is defined by the Random Access Memory (RAM) and Central Processing Unit (CPU) of the system and environment. This is the same in both cloud environments and traditional data centers; however, the management of these resources differs between the two.

Storage requirements need to be provisioned as well, but that is the storage element, not the compute element.

Shares refer to a tenant's portion of the pool of resources from which RAM and CPU are drawn. The actual RAM and CPU capacity of the physical server is what becomes the pool of resources shared among the tenants.
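
A hedged sketch assuming AWS and its boto3 SDK: in most public-cloud portals and APIs, picking an instance type is how the CPU and RAM selection is expressed (t3.medium bundles 2 vCPUs with 4 GiB of RAM). The AMI ID below is a hypothetical placeholder, and running this would launch a real, billable instance.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical placeholder AMI
        InstanceType="t3.medium",         # selects vCPU and RAM together
        MinCount=1,
        MaxCount=1,
    )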

57
Q

A cloud administrator would like to access an Infrastructure as a Service (IaaS) Structured Query Language (SQL) server. Which technology can be used to connect to it?

A. Secure Shell (SSH)
B. Internet Protocol Security (IPSec)
C. Transport Layer Security (TLS)
D. Keyboard, Video, Mouse (KVM)

A

A. Secure Shell (SSH)

Explanation:
SSH is a layer 5 protocol that creates an encrypted tunnel that is very commonly used for administrators and operators to connect remotely to devices, real or virtual. So, SSH can be used to connect to servers, routers, switches, virtual servers, and so on.
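A hedged sketch of such an administrative connection, assuming the third-party paramiko library; the hostname, username, and key path are hypothetical placeholders.

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # Connect over the encrypted SSH tunnel to the IaaS server
    client.connect("sql-server.example.com", username="cloudadmin",
                   key_filename="/home/cloudadmin/.ssh/id_ed25519")

    _, stdout, _ = client.exec_command("uptime")  # run an administrative command
    print(stdout.read().decode())
    client.close()
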

TLS is a layer 4 protocol that also creates an encrypted tunnel. However, it is more commonly used for user connections to websites.

IPSec is a layer 3 protocol that also creates an encrypted tunnel. However, this is more commonly used for site to site / router to router connections. It could be used to connect from the router at the corporate site to the edge router within the public cloud deployment.

KVM is used by network professionals when they deploy new physical equipment into a data center. When a router (or any physical device) is placed into a rack and physically connected to the network using copper or fiber, it must be given an initial configuration so that it can connect to the network. Once that is done, an administrator can connect to it remotely and complete its configuration.

Be careful with the acronym KVM. This is just one of its uses; there is also a type 1 hypervisor called KVM (Kernel-based Virtual Machine). A great deal of confusion occurs because of this acronym overlap.

58
Q

Reservations in cloud environments are critical to ensure that cloud customers have what they need. For which of the following problems or attacks are they most essential?

A. Resource exhaustion
B. Weak control plane
C. Ransomware
D. Applistructure failure

A

A. Resource exhaustion

Explanation:
Resource exhaustion occurs when the tenants on a specific server increase their use of that server's CPU and memory until capacity runs out. A reservation holds a certain amount of CPU and memory for a specific customer. The reservation level should be set so that the company and its users can continue to work.

Applistructure failures are within the application itself. The reservation at a server level will not have much effect on this problem.

The control plane, or management plane, is the connection point for operations staff to log in and configure the cloud. It should be protected by multi-factor authentication. A reservation has to do with users being able to function, not with the operations side.

In a ransomware attack, bad actors encrypt the user's data. Once the data is encrypted, the resources used within the cloud are likely to be reduced; the problem is the encrypted data itself. Reservations on memory or CPU will not help in this case.
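
A minimal arithmetic sketch of what a reservation guarantees on a shared host; the capacity figures and tenant names are invented, and real hypervisors enforce this in their schedulers.

    host_cpu_ghz = 32.0  # invented total CPU capacity of the physical host

    # Guaranteed minimums held for each tenant (the reservations)
    reservations = {"tenant-a": 8.0, "tenant-b": 4.0}

    reserved = sum(reservations.values())
    contended = host_cpu_ghz - reserved  # what noisy neighbors compete over

    print(f"Guaranteed: {reserved} GHz, contended pool: {contended} GHz")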

59
Q

An organization within the European Union experienced a data breach. During the breach, personally identifiable data was stolen by the attackers. Under which regulation is this organization required to notify the applicable government agencies of the breach within 72 hours?

A. Sarbanes Oxley (SOX)
B. General Data Protection Regulation (GDPR)
C. Asia Pacific Economic Cooperation (APEC)
D. Gramm-Leach-Bliley Act (GLBA)

A

B. General Data Protection Regulation (GDPR)

Explanation:
The European Union implemented the General Data Protection Regulation (GDPR), which covers the entire European Union and the European Economic Area. GDPR focuses on the protection of private and personal data for natural persons within the EU, regardless of where the data is created, collected, processed, or stored. Any organization that has a data breach in which protected or private user information is viewed or stolen by an attacker must report it to the applicable government agencies within 72 hours.

APEC is a cooperative forum of 21 Pacific Rim member economies. Its basic goal is cooperation for the purpose of trade.

GLBA is a U.S. law that requires financial institutions to protect their customers' personal information.

SOX is a U.S. regulation that requires publicly traded companies to report their financial status accurately.