Pocket Prep 20 Flashcards

1
Q

A financial organization has purchased an Infrastructure as a Service (IaaS) cloud service from their cloud provider. They are consolidating and migrating their on-prem data centers (DC) into the cloud. Once they are set up in the cloud, they will have their servers, routers, and switches configured as needed, along with all of the network-based security appliances such as firewalls and Intrusion Detection Systems (IDS).

What type of billing model should this organization expect to see?

A. Locked-in monthly payment that never changes
B. Up-front equipment purchase, then a locked-in monthly fee afterward
C. Metered usage that changes based upon resource utilization
D. One up-front cost to purchase cloud equipment

A

C. Metered usage that changes based upon resource utilization

Explanation:
In an IaaS environment (and Platform as a Service (PaaS) as well as Software as a Service (SaaS)), the customer can expect to pay only for the resources that they are using. This is far more cost-effective and allows for greater scalability. However, this type of billing does mean that the price is not locked in, and it could change as the need for resources either increases or decreases from month to month.

There is no equipment to purchase with cloud services (IaaS, PaaS, or SaaS). You could purchase equipment if you want to build a private cloud. However, there is no mention of that in the question. The standard cloud definition excludes the “locked-in monthly payment.” A company could offer that, but it is outside of the cloud as defined in NIST SP 800-145 and ISO/IEC 17788.
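The metered model described above can be sketched in a few lines. The unit rates and resource names below are hypothetical, not any provider's actual pricing; the point is that the bill tracks consumption rather than a fixed fee:

```python
# Hypothetical metered-billing calculation: the monthly bill varies
# with actual resource consumption rather than being a locked-in fee.

# Assumed (made-up) unit rates, in dollars.
RATES = {
    "vcpu_hours": 0.04,       # per vCPU-hour of compute
    "gb_storage_month": 0.02, # per GB stored for the month
    "gb_egress": 0.09,        # per GB of outbound traffic
}

def monthly_bill(usage: dict) -> float:
    """Sum metered charges; usage maps resource name -> quantity consumed."""
    return round(sum(RATES[r] * qty for r, qty in usage.items()), 2)

# Usage rises in a busy month, and the bill rises with it.
quiet_month = {"vcpu_hours": 1440, "gb_storage_month": 500, "gb_egress": 100}
busy_month  = {"vcpu_hours": 2880, "gb_storage_month": 500, "gb_egress": 400}

print(monthly_bill(quiet_month))  # 76.6
print(monthly_bill(busy_month))   # 161.2
```

Doubling the compute hours and quadrupling the egress roughly doubles the bill, which is exactly the behavior the question's correct answer describes.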

2
Q

Cloud Service Providers (CSP) and virtualization technologies offer a form of backup that captures all the data on a drive at a point in time and freezes it. What type of backup is this?

A. Data replication
B. Guest OS image
C. Snapshot
D. Incremental backup

A

C. Snapshot

Explanation:
CSPs and virtualization technologies offer snapshots as a form of backup. A snapshot will capture all the data on a drive at a point in time and freeze it. The snapshot can be used for a number of reasons, including rolling back or restoring a virtual machine to its snapshot state, creating a new virtual machine from the snapshot that serves as an exact replica of the original server, and copying the snapshot to object storage for eventual recovery.

A guest OS image is a file that, when spun up or run on a hypervisor, becomes the running virtual machine.

Incremental backups capture only the changes made since the last backup of any kind. The last backup could be a full or an incremental backup. So, an incremental backup basically captures only “today’s changes” (assuming that backups are done once a day).

Data replication usually writes data to multiple places nearly simultaneously. That way, if one copy of the data is lost, another still exists.
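The incremental-backup idea above ("only today's changes") can be sketched as a simple selection by modification time. The file names and timestamps here are invented for illustration:

```python
# Toy sketch of incremental backup selection: only items changed since
# the last backup of any kind (full or incremental) are captured.

def incremental_backup(files: dict, last_backup_time: float) -> dict:
    """Return only the files modified after the previous backup ran."""
    return {name: mtime for name, mtime in files.items()
            if mtime > last_backup_time}

files = {
    "orders.db": 1000.0,   # unchanged since the last backup
    "report.pdf": 2500.0,  # modified today
    "notes.txt": 3000.0,   # modified today
}

# The last backup (full or incremental) ran at t=2000.
changed = incremental_backup(files, last_backup_time=2000.0)
print(sorted(changed))  # ['notes.txt', 'report.pdf']
```

Only the two files touched after the previous backup are selected, which is what keeps incremental backups small.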

3
Q

Shai has been working with the Disaster Recovery (DR) teams to build the DR Plans (DRP) for their critical transaction database. They process a great number of commercial transactions per hour. They have determined that they need a configuration that will nearly eliminate the risk of losing any transactions that are performed. They have a Recovery Point Objective (RPO) of under one second.

What technology should they implement?

A. Load balancers that span multiple servers in a single data center
B. A server cluster that spans multiple availability zones with load balancers
C. Redundant servers that are served through multiple data centers
D. A set of redundant servers across multiple availability zones

A

B. A server cluster that spans multiple availability zones with load balancers

Explanation:
The Recovery Point Objective (RPO) is defined as the maximum period of time over which an organization is willing to accept the risk of losing transactions. With server clusters in a cloud environment that span multiple availability zones and are fronted by load balancers, it is unlikely that a single completed transaction would be lost. Incomplete transactions may be lost, but that is probably acceptable for this business.

Redundant servers are not as robust as clusters. In a cluster, all the servers are active all the time. A redundant server does not actively process data; only when the primary fails does it take over. Redundant servers are often described as active-passive, whereas clusters are active-active.

Having multiple servers in a single data center is not as robust as having them in different availability zones. If a massive fire happens in one data center, the customer would be offline for a while (depending on additional configurations) and would likely lose some transactions. There are configurations to help with this, but they are missing from the answers, so they cannot be assumed to be there. One such configuration would be data mirroring or database shadowing, which provides nearly instantaneous backups to another server or drive.
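The sub-one-second RPO requirement can be expressed as a simple check against worst-case replication lag. The lag figures below are hypothetical, just to contrast synchronous cross-zone replication with periodic backups:

```python
# Sketch: does a given replication strategy satisfy a sub-one-second
# Recovery Point Objective (RPO)?

RPO_SECONDS = 1.0  # maximum tolerable window of lost transactions

def meets_rpo(replication_lag_seconds: float) -> bool:
    """True if the worst-case data-loss window stays inside the RPO."""
    return replication_lag_seconds < RPO_SECONDS

# Synchronous replication across availability zones: milliseconds of lag.
print(meets_rpo(0.05))   # True
# Nightly backups alone: up to a day of potential loss, far outside the RPO.
print(meets_rpo(86400))  # False
```

This is why the scenario points at an active-active cluster with near-instantaneous replication rather than a backup-and-restore approach.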

4
Q

Rhonda is working with the company’s public cloud provider to determine what technologies and tools they will need to set up to ensure they have a functional configuration. The topic she is currently working on is the connection from users at the office to their new Platform as a Service (PaaS) server-based solution. Her concern is that others could see and access their sensitive corporate data in transit between the office and the cloud.

What solution would work best for this scenario?

A. Distributed Resource Scheduling (DRS)
B. Virtual Private Network (VPN)
C. Software Defined Network (SDN)
D. Advanced Encryption Standard (AES)

A

B. Virtual Private Network (VPN)

Explanation:
A VPN is an encrypted tunnel or connection between two points. It could be used site-to-site or client-to-server. AES would likely be the encryption algorithm used within the VPN. However, the question is looking for a solution, and that makes the VPN the better answer.

DRS is used to automatically (as opposed to manually) find the best servers to start Virtual Machines (VM) on.

SDN is a technology to more effectively use switches and routers within a network.

5
Q

Rhonda is working with her team to determine if they should code their own API, use an open-source one, or buy one from a vendor. Which of the following is true about the benefits of each of the API options?

A. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have greater control.
B. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have to do all the reviews yourself.
C. A vendor API code is not open, so you do not need to review it. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have greater control.
D. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you know who is behind it and that they are updating it. And coding your own API means you have greater control.

A

A. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have greater control.

Explanation:
With a vendor’s API, it is good to have a company with formal processes behind it so that you know it will be managed and patched by that vendor. A disadvantage is that you cannot see the code for reviewing yourself.

An open-source API has the benefit that you can see the code for review purposes. You do not know for sure who is behind it or whether they are updating it.

And coding your own API means you have greater control. You do also have to do all the review and testing, but that is not necessarily a benefit. It depends on how you look at it.

Keep in mind that this is a theoretical exam. Each of these statements could be, and needs to be, debated in the real world, but this is a good starting point, which is usually where the exam sits.
6
Q

Noah needs to take a backup of a virtual machine. As the cloud operator, he knows that the most common method for doing this is to use which of the following backup solutions?

A. Agent based
B. Agentless
C. Differential
D. Snapshots

A

B. Agentless

Explanation:
Taking a snapshot to back up a Virtual Machine (VM) typically uses an agentless approach. In most virtualization environments, VM snapshots are performed at the hypervisor level without requiring any specific software or agents installed within the VM itself.

Agentless backups first take a snapshot, which serves as a point-in-time copy. After that, only the changes are tracked, reducing the storage space needed for future backups.

Agent-based backup products require the installation of a lightweight piece of software on each virtual machine. The agent software lives at the kernel level in a protected system, so it can easily detect block-level changes on the machine. If these agents are installed on a large number of VMs, they become more difficult to manage.

7
Q

Yamin works for a Cloud Service Provider (CSP) as a technician in one of their data centers. She has been setting up the Fibre Channel equipment over the last week. What part of the cloud is she building?

A. Cabling
B. Network
C. Compute
D. Storage

A

D. Storage

Explanation:
The three things that must be built to create a cloud data center are Compute, Network, and Storage. Storage is where the data will be at rest. This involves building Storage Area Networks (SAN). There are two primary SAN protocols: one is Fibre Channel and the other is the IP-based Internet Small Computer Systems Interface (iSCSI). What also needs to be determined is how the storage is allocated. Will it be allocated as block storage, file storage, raw storage, etc.?

Compute is the computation capability that comes along with a computer. That could be a virtual server or a virtual desktop.

The network element is the ability to transmit data to or from storage, to or from the compute elements, and out of the cloud to other destinations. This involves both physical networks and the virtual networks created within a server.

Cables are needed to connect all the physical equipment together. There are even virtual cables found within Infrastructure as a Service (IaaS) environments. Cabling is considered part of the network element, though.

8
Q

Which of the following is MOST closely related to data loss prevention (DLP)?

A. Denial-of-Service Prevention
B. Security Function Isolation
C. Boundary Protection
D. Separation of System and User Functionality

A

C. Boundary Protection

Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, defines 51 security controls for system and communications protection. Among these are:

Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
9
Q

Volume and object are the names of the storage types used in which cloud service model?

A. Platform as a Service (PaaS)
B. Database as a Service (DBaaS)
C. Infrastructure as a Service (IaaS)
D. Software as a Service (SaaS)

A

C. Infrastructure as a Service (IaaS)

Explanation:
Each cloud service model uses different types of storage as shown below:

Infrastructure as a Service (IaaS): Volume, Object
Platform as a Service (PaaS): Structured, Unstructured
Software as a Service (SaaS): Content and file storage

DBaaS is not a term used by NIST SP 800-145 or ISO/IEC 17788. However, database would be the storage type. Or more specifically, we could say that it is structured data.

The question is really based on the Cloud Security Alliance guidance 4.0 document. If you read the OSG or the CBK, you will see completely different descriptions in each book. What is wise for the exam is to be familiar with the names and what they mean, not how they link to IaaS, PaaS, or SaaS.

10
Q

Imani is working with their cloud data architect to design a Storage Area Network (SAN) that will provide the cloud storage needed by the users. They want users to be able to have access to mountable volumes within the Fibre Channel (FC) SAN.

Of the following, which term describes the allocated storage space that is presented to the user as a mountable drive?

A. Logical Unit Number (LUN)
B. World Wide Port Name (WWPN)
C. World Wide Node Name (WWNN)
D. World Wide Names (WWN)

A

A. Logical Unit Number (LUN)

Explanation:
Storage management is a complex topic that is worth learning more about beyond (ISC)2 books for the exam. When building SANs that are accessed with Fibre Channel (FC) or iSCSI protocols, the server is often built with Redundant Array of Independent Disks (RAID), and it is necessary to slice storage into pieces that are visible to individuals, groups, applications, and so on. The mechanism for identifying the space allocated to users that present as mountable drives is called a Logical Unit Number (LUN).

Fibre Channel also uses additional addressing: World Wide Names (WWN). A WWN is allocated to FC devices on the SAN. A WWN that has been allocated to a node is a WWNN; a WWN allocated to a port is a WWPN.

IBM is a good source of information for study.

11
Q

Celiene is a cloud data architect. She has been designing cloud solutions for years and has worked with databases, big data, object storage, and so on. One of the types of data that she works with provides the context and details about another data asset. What would that be called?

A. Data Lake
B. Unstructured data
C. Structured data
D. Metadata

A

D. Metadata

Explanation:
Metadata refers to descriptive information that provides context and details about a particular data asset. It gives information about the characteristics, attributes, and properties of the data, enabling better understanding, organization, and management of the data. Metadata can be found in various domains, including digital content, databases, documents, and research materials.

Data lakes and structured and unstructured data are the data assets that metadata can describe.

A data lake is a centralized repository that stores large volumes of structured, semi-structured, and unstructured data in its raw, unprocessed form. It is designed to handle massive amounts of data from diverse sources, such as transactional systems, log files, social media feeds, IoT devices, and more. Unlike traditional data warehouses that require data to be structured and organized upfront, data lakes store data in its native format, allowing for flexible analysis and exploration.

Structured data refers to data that is organized and formatted according to a predefined data model or schema. It follows a strict and consistent structure, where each data element is assigned a specific data type and resides in a well-defined field or column within a table or database. Structured data is typically stored in relational databases or data warehouses.

Unstructured data refers to data that does not have a predefined or organized format. It lacks a specific data model or schema and does not fit neatly into traditional rows and columns like structured data. Unstructured data is typically found in various forms, such as text documents, images, videos, audio files, social media posts, emails, sensor data, and more.

12
Q

According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), what is the ideal temperature range for a data center?

A. 64.4 - 80.6 degrees F/ 18 - 27 degrees C
B. 70.2 - 85.0 degrees F / 21.2 - 29.4 degrees C
C. 55.7 - 78.5 degrees F / 13.1 - 25.8 degrees C
D. 49.8 - 70.6 degrees F / 9.3 - 21.4 degrees C

A

A. 64.4 - 80.6 degrees F/ 18 - 27 degrees C

Explanation:
Due to the number of systems running, data centers produce a lot of heat. If the systems in the data center overheat, it could damage them and make them unusable. To protect the systems, adequate and redundant cooling systems are needed. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends that the ideal temperature range for a data center is 64.4 - 80.6 degrees F / 18 - 27 degrees C.
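The recommended range lends itself to a simple monitoring check. The sensor readings below are invented; the thresholds are the 18-27 degrees C figures from the explanation:

```python
# Check hypothetical data center temperature readings against the
# ASHRAE-recommended range cited above (18-27 degrees C).

ASHRAE_MIN_C = 18.0
ASHRAE_MAX_C = 27.0

def f_to_c(temp_f: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

def in_ashrae_range(temp_c: float) -> bool:
    """True if the reading falls inside the recommended band."""
    return ASHRAE_MIN_C <= temp_c <= ASHRAE_MAX_C

print(in_ashrae_range(22.0))          # True: comfortably in range
print(in_ashrae_range(f_to_c(90.0)))  # False: 32.2 C is too hot
```

Note that 64.4 F converts to exactly 18 C and 80.6 F to exactly 27 C, which is why both unit systems appear in the answer choices.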

13
Q

During which phase of the software development lifecycle should testing needs be defined?

A. Requirements definition
B. Testing
C. Coding
D. Design

A

A. Requirements definition

Explanation:
During the first phase of the software development lifecycle, sometimes referred to as requirements gathering and feasibility or requirements definition, the testing requirements are defined. Having these requirements in place before development and testing even begin helps to ensure the success of the project.

The coding phase is when the developers are building the application, which is too late to define the test requirements. Testing is when the requirements are verified and, again, is too late. The design phase is also a little late; it is a better answer than coding or testing, but requirements definition is the best answer.

14
Q

An information security architect is developing a business disaster recovery plan (DRP) for her organization. They have been progressing through the steps to develop the plans that will be utilized in the event of major disruptions to their private cloud data center. They have just finished developing the procedural documentation.

What is the next step for them to take?

A. Develop recovery strategies
B. Implement the plan
C. Conduct the Business Impact Analysis (BIA)
D. Test the plan

A

B. Implement the plan

Explanation:
When developing a Disaster Recovery Plan (DRP), the following order should be followed:

Project management and initiation
Business Impact Analysis
Develop recovery strategies
Develop the documentation
Implement the plan
Test the plan
Report and revise
Embed the plan in the user community

As they have just developed the plan's documentation, the next step is to implement the plan. The instinct for most people is to move to testing so that the plan can then be implemented. However, since these are the steps to be taken after a significant failure, it is necessary to build the alternative environment to fail over into before the plan can be tested.

15
Q

A medium-sized business is looking to utilize a Storage Area Network (SAN) in their private cloud. They are looking for the easiest route to build the SAN, utilizing the existing, traditional Local Area Network (LAN) technology that they already have. Which storage protocol would you recommend?

A. Fibre Channel
B. iSCSI
C. HyperSCSI
D. NVMe over Fabrics

A

B. iSCSI

Explanation:
iSCSI allows for the transmission of SCSI commands over a TCP-based network. It lets systems use block-level storage that behaves like a SAN on physical servers but leverages the TCP network within a virtualized environment. iSCSI is the most commonly used communications protocol for network-based storage.

Fibre Channel (FC) requires a change of cabling from wire to fiber. It also involves different equipment to connect the storage devices for transmission.

HyperSCSI is a proprietary protocol developed by some storage vendors that allows SCSI commands and data to be transmitted directly over Ethernet. Since it is proprietary, it is probably not a good choice. We do not have any information in the question other than that they are using traditional LAN technology.

NVMe over Fabrics (NVMe-oF) is an emerging protocol that allows direct access to Non-Volatile Memory Express (NVMe) storage devices over a network. NVMe-oF enables low-latency, high-performance storage connectivity by leveraging high-speed interconnects like Fibre Channel, Ethernet, or InfiniBand. This can use Ethernet, which is the traditional LAN layer 2 protocol, but the storage devices are different.

16
Q

Daria is working with the software developers to ensure that the sensitive data the application handles will be well protected. The application they are creating will contain information about their customers, their orders, conversations that have occurred, and so on. What she needs to make sure of is that the payment card numbers stored by the application are properly protected, because the company must comply with the Payment Card Industry Data Security Standard (PCI DSS). When customer service representatives are on the phone with a customer and access that customer’s account, they are not allowed to see the full payment card number.

What mechanism can be used to protect the card number?

A. Tokenization
B. Anonymization
C. Masking
D. Obfuscation

A

C. Masking

Explanation:
Masking is the common way that card numbers are protected in this scenario. Instead of seeing the full card number, the customer service representative would only see stars (*) plus the last four or five digits of the card number. Masking is also used to protect passwords as a user types them into an application, guarding against shoulder surfing.

Masking is a term that is not defined by ISO, NIST, or the CSA. It is a term that software developers use in other ways than in the previous paragraph. However, that use is not consistent. Using the term masking as a way to cover or hide information should be sufficient for this test.

Tokenization is to replace data. So, if a credit card number were being transmitted across a network, where it could be compromised, it would be better to replace that number with a token. The bank would need a database to convert that token back to the card number to determine whether a purchase can be made. This is how Apple Pay, Google Pay, PayPal, and other such payment methods work.

Obfuscation is to confuse. You can say that encryption is a form of obfuscation: if you were looking at the encrypted text, it would not make sense, and you would be left “confused” by what the text actually says. However, not all obfuscation is encryption. There are other methods. As an example, go to Microsoft Word and change the font to “Wingdings.” It is arguable that that is a form of obfuscation.

Anonymization is to remove. Anonymization removes the direct and indirect identifiers from data. Once it is removed, it cannot be recovered. So, if a lab was researching a new medicine, and they needed to review medical records to see how things were working, it would be good to remove the Personally Identifiable Information (PII) from their view, or anonymize it.
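The masking described above (stars plus the last four digits) is straightforward to sketch. The function name and the sample card number are invented for illustration:

```python
# Minimal masking sketch: hide all but the last four digits of a
# payment card number, as a customer service screen might display it.

def mask_card(card_number: str, visible: int = 4) -> str:
    """Replace every digit except the trailing `visible` ones with '*'."""
    digits = card_number.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - visible) + digits[-visible:]

print(mask_card("4111 1111 1111 1234"))  # ************1234
```

Unlike tokenization, nothing here is reversible or stored elsewhere; the representative simply never receives the hidden digits.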

17
Q

An organization’s communications with which of the following is MOST likely to include information about planned and unplanned outages and other information designed to protect the brand image?

A. Regulators
B. Partners
C. Vendors
D. Customers

A

D. Customers

Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:

Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect the brand image.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.

Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.

18
Q

During which phase of the cloud data lifecycle would data undergo cryptographic erasure?

A. Use
B. Archive
C. Destroy
D. Store

A

C. Destroy

Explanation:
As the name suggests, the destroy phase is where data is removed completely from a system (or “destroyed”) and should be unable to be recovered. In cloud environments, methods such as degaussing and shredding can’t be used because they require physical access to the hardware. Instead, in cloud environments, cryptographic erasure can be used by a PaaS or IaaS customer to destroy the data. The cloud provider can also use cryptographic erasure or any of the traditional destruction techniques like shredding. That would be good to inquire into when searching for cloud providers.

The use phase is when someone goes back to existing data and utilizes it in some way. Your reading of this question right now would actually fit into the use phase.

Store and archive are two different stages of holding data. Store is normal storage on persistent drives like HDDs or SSDs. Archival is long-term storage: for example, holding on to your tax records for the next seven years just in case you need them. (The IRS actually says three, not seven, but consult your accountant or lawyer.)
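The idea behind cryptographic erasure can be shown with a toy one-time-pad XOR. This is only a sketch: a real system would use a vetted cipher such as AES, and key destruction would be handled by a key management service, but the principle (destroy the key, and the remaining ciphertext is useless) is the same:

```python
import os

# Toy illustration of cryptographic erasure ("crypto-shredding"): data
# is stored only in encrypted form, so destroying the key renders the
# remaining ciphertext permanently unrecoverable.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; encrypting and decrypting are the same op."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"customer record"
key = os.urandom(len(plaintext))        # key kept separate from the data
ciphertext = xor_bytes(plaintext, key)  # only this is stored at rest

# Normal use: with the key, the original data comes back.
print(xor_bytes(ciphertext, key) == plaintext)  # True

# Destroy phase: delete the key. Only ciphertext remains, and without
# the key there is no way to turn it back into the plaintext.
del key
```

This is why crypto-shredding works even when the provider's physical disks are out of the customer's reach.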

19
Q

Which of the following storage types acts like a physical hard drive connected to a VM?

A. Ephemeral
B. Volume
C. Raw
D. Long-Term

A

B. Volume

Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:

Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer’s virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling.
20
Q

Which of the following is MOST relevant in public cloud environments?

A. Access Controls
B. Tenant Partitioning
C. HVAC
D. Multivendor Pathway Connectivity

A

B. Tenant Partitioning

Explanation:
Multitenant public cloud environments run the security risk of one tenant being able to access or affect another’s data, applications, etc. Cloud providers enforce tenant partitioning using access controls and similar means, but cloud customers are responsible for protecting their data by using encryption and properly configuring CSP-provided security controls.

Access controls, multivendor pathway connectivity, and HVAC are important in any data center, regardless of who owns it.
21
Q

In their Infrastructure as a Service (IaaS) cloud environment, an organization encounters a catastrophic business impact event. The occurrence happened as a result of an outage in the eastern U.S. region, but the Cloud Service Provider’s (CSP) failover between availability zones was not triggered.

Who would be responsible for configuring the cloud-based resiliency functions?

A. Cloud service auditor
B. Cloud Service Provider (CSP)
C. Cloud service broker
D. Cloud Service Customer (CSC)

A

D. Cloud Service Customer (CSC)

Explanation:
The consumer will always be responsible for configuring resiliency functions such as automated data replication, failover between CSP availability zones, and network load balancing.

The CSP’s responsibility is to provide the capabilities that enable these solutions, but the consumer must configure their cloud system to suit their own resiliency requirements.

A cloud service broker is an intermediary between the CSC and the CSP. They can be used to negotiate contracts or manage the services.

Cloud service auditors are third parties that audit the CSP on behalf of the CSC’s interests.

22
Q

Haile is a cloud operator who has been reviewing the Indicators of Compromise (IoC) from the company’s Security Information and Event Manager (SIEM). The SIEM reviews log outputs to find these possible compromises. Where should detailed logging be in place within the cloud?

A. Only access to the hypervisor and the management plane
B. Wherever the client accesses the management plane only
C. Only specific levels of the virtualization structure
D. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane

A

D. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane

Explanation:
Logging is imperative for a cloud environment. Role-based access should be implemented, and logging should be done at each and every level of the virtualization infrastructure as well as wherever the client accesses the management plane (such as a web portal).

The SIEM cannot analyze the logs to find the possible compromise points unless logging is enabled, and the logs are delivered to that central point. This is necessary in case there is a compromise, which could happen anywhere within the cloud.
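As a rough illustration of "logging at every level," the sketch below wires hypothetical loggers for each virtualization layer and the management-plane portal to one central handler, standing in for the collection point a SIEM would read from.

```python
import logging

# Illustrative sketch: one logger per virtualization layer plus the
# management-plane portal, all forwarding to a single central handler
# (the stand-in for SIEM log collection). Layer names are hypothetical.
central = logging.StreamHandler()
central.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

for layer in ("hypervisor", "guest-os", "virtual-network", "mgmt-portal"):
    log = logging.getLogger(layer)
    log.setLevel(logging.INFO)
    log.addHandler(central)

# An event at any level reaches the same central collection point.
logging.getLogger("mgmt-portal").warning("failed login for user admin")
```

The point is structural: no matter which layer generates the event, it lands in one place where the SIEM can correlate it.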

23
Q

A security administrator has been asked to review the most critical web application security risks currently known. What could this engineer use to review these?

A. ITIL
B. OWASP Top 10
C. NIST SP 800-146
D. ISO/IEC 31000

A

B. OWASP Top 10

Explanation:
The Open Web Application Security Project (OWASP) Top 10 list identifies the 10 most critical web application security risks at a given time. The project is regularly updated to ensure that the list stays current. The 2021 Top 10 are as follows:

Broken access control
Cryptographic failures
Injection
Insecure design
Security misconfiguration
Vulnerable and outdated components
Identification and authentication failures
Software and data integrity failures
Security logging and monitoring failures
Server-Side Request Forgery (SSRF)

NIST SP 800-146 is the cloud computing synopsis and recommendations.

ISO/IEC 31000 is a risk management framework.

ITIL is a UK creation that is managed by the UK government and Axelos. It is a framework to standardize the overall lifecycle of IT services from planning to maintenance.
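To make the Injection entry concrete, here is a minimal sketch (using Python's built-in sqlite3) showing how a parameterized query neutralizes a classic injection payload; the table and payload are illustrative.

```python
import sqlite3

# Minimal illustration of the "Injection" risk: a parameterized query
# keeps attacker-supplied input from being interpreted as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Building the query by string concatenation would let the payload match
# every row; the ? placeholder treats it as a literal string instead.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matches nothing
```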

24
Q

In a tier two data center, it is necessary to build in some redundancy. Which of the following options would be appropriate at this tier?

A. Generators
B. Servers
C. Server drives
D. Data circuits

A

A. Generators

Explanation:
In a tier two data center, the power and cooling are made redundant. The path for the power and cooling is not redundant, but there can be Uninterruptible Power Supplies (UPS), chillers, and generators.

The data center tiers, as defined by the Uptime Institute, are focused on power and cooling. The servers will need a dual power supply at tier 3 but not at tier 2. The data circuits and server drives are not addressed by the tiers at all.

25
Q

Which of the following types of SOC audits includes a review of an organization’s control designs but nothing else?

A. SOC 3
B. SOC 1
C. SOC 2 Type I
D. SOC 2 Type II

A

C. SOC 2 Type I

Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:

SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization’s financial stability.
SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, or Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.

SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.

26
Q

Which of the following is used to mitigate and control customer requests for resources if the environment doesn’t have enough resources available to meet the requests?

A. Objects
B. Reservations
C. Shares
D. Limits

A

C. Shares

Explanation:
The concept of shares is that pooled CPU and memory resources are made available to the virtual machines that need them. The flexibility of the shared cloud environment allows virtual machines to expand to meet user demand and shrink back again; this is the elasticity of the cloud, and the shared space is what they expand into. When demand exceeds the resources available, share values act as weighted priorities that control how the contested resources are divided among customers.

Reservations set aside some of that CPU and memory for a specific virtual machine.

A limit restricts a virtual machine’s ability to expand past a certain point.

An object is a file of some type. It could be a virtual machine image, video, document, spreadsheet, and so on.

27
Q

Rachel, an information security manager at a Top 500 company that is expanding into Canada, has learned of the Personal Information Protection and Electronic Documents Act (PIPEDA). For their corporation, they regularly create polls and ask people questions about their political opinions.

Do they need to protect this information under privacy acts and if so, what type of identifier would it be?

A. No, since this is not considered personal information as an indirect identifier
B. Yes, this is considered personal information as a direct identifier
C. No, since this is not considered personal information as a direct identifier
D. Yes, this is considered personal information as an indirect identifier

A

D. Yes, this is considered personal information as an indirect identifier

Explanation:
Personally Identifiable Information (PII) is broken up into indirect identifiers and direct identifiers. Indirect identifiers are information about opinions, evaluations, and disciplinary actions.

Direct identifiers are name, address, birth date, gender, and zip code.

Both direct and indirect identifiers need to be protected under PIPEDA and other privacy laws.

28
Q

A cloud operator is working on a current issue that has been identified within their Infrastructure as a Service (IaaS) public cloud deployment. After an Indication of Compromise (IoC) from the Security Information and Event Manager (SIEM) pointed to a possible compromise, the Incident Response (IR) team was called in. The analysis of the alerts has led the network cloud operator to require visibility of the packets passing through their border router for what appears to be an ongoing attack.

What will this likely require?

A. Input and involvement from the cloud provider
B. A written contract and written permission
C. No special requirements are needed
D. Permission from all cloud customers

A

C. No special requirements are needed

Explanation:
This is an IaaS deployment. The edge router in the cloud is virtual within the IaaS deployment. The physical edge router that is in the cloud provider’s network is not visible to the customer. The virtual router belongs to the customer. It is their operating system. They are the only ones who have traffic passing through that router, unless, of course, there is a bad actor sending data through it.

In both Platform as a Service (PaaS) and Software as a Service (SaaS) deployments, the routers, virtual or physical, are not visible to the cloud customer. If the customer wants to see that traffic, they would need to work with the cloud provider, because the routers belong to the cloud provider and carry traffic from other cloud customers. For this reason, “permission from all cloud customers,” “a written contract and written permission,” and “input and involvement from the cloud provider” are not correct.

29
Q

A covert government agency has hired highly skilled software developers to create a tool to infiltrate and control the power grid of an enemy state. The software is designed to slowly cause damage to the programmable logic computers (PLC) that control the physical systems of the power station. The software is also designed to send false information to the monitoring devices to reduce the chance that the damage will be noticed until it is too late.

What type of threat is this?

A. Malicious insider
B. Denial of Service (DoS) attack
C. Command injection attack
D. Advanced Persistent Threat (APT)

A

D. Advanced Persistent Threat (APT)

Explanation:
An Advanced Persistent Threat (APT) is an attack that aims to gain access to networks and systems while staying undetected. APTs avoid doing anything disruptive, as their goal is to maintain access for as long as possible without raising any red flags.

The ultimate goal of the attack in the question is to damage the power grid, but DoS is not the best answer: the attacker profile and the slow, stealthy approach described go well beyond a simple DoS attack.

Since this is nation state against nation state and the attack is slow, combined with the goal of not being detected, APT is the best answer.

A command injection is when an Operating System (OS) command is entered into a field that is used by users during normal application activity.

This is not a malicious insider. For that to be the case, the attacker would have to be an agent planted inside the target government by the opposition, remaining there and causing damage from within.

30
Q

Clea has been working on configuring part of her Infrastructure as a Service (IaaS) virtual network. She has been configuring the switches and firewalls with information regarding the controller information. This will allow more effective policy-based control of the network. What has she been configuring?

A. Software Defined Storage (SDS)
B. Secure Shell (SSH)
C. Software Defined Networking (SDN)
D. Content Delivery Network (CDN)

A

C. Software Defined Networking (SDN)

Explanation:
Software-Defined Networking (SDN) allows a policy-based network through a controller node. This enables centralized management of the network. There is a lack of information in the OSG and the CBK, so research this to better prepare yourself.

Software Defined Storage (SDS) is a method of abstracting the software storage logic from the actual hardware. This makes management more effective. The OSG and CBK have no information about this, so research this to learn more.

Content Delivery Network (CDN) is a method of distributing content such as videos to the end users by storing the needed content on edge servers that are closer to the users. The content is only cached at the edge. When it is no longer needed, it is deleted.

SSH is an OSI model layer 5 encryption technology that is commonly used by network and server administrators for remote connection to devices such as switches and routers.

31
Q

What process is oriented around service delivery of the application service produced in modern DevOps / DevSecOps and occurs at all phases to provide continuous improvement and quality tracking?

A. Software configuration management
B. Threat modeling
C. Version control
D. Quality assurance

A

D. Quality assurance

Explanation:
Quality Assurance (QA) is centered on service delivery of the application service built in the modern DevOps/DevSecOps process, and it occurs at all phases to assure continuous improvement and quality tracking. QA is most effective when thorough functional and requirements testing is performed, and testing may be automated to make it even more efficient.

Threat modeling is a common process in DevOps/DevSecOps used to analyze how a piece of software could be attacked, by what means, and with what severity. It makes a product better and stronger, but it is not about service delivery, continuous improvement, or quality tracking.

Software Configuration Management (SCM) is primarily concerned with tracking and managing changes to software code and artifacts throughout the software development lifecycle. This includes version control, build automation, release management, and issue tracking. The goal of SCM is to ensure that software changes are made in a controlled and predictable way, while minimizing the risk of errors, inconsistencies, and conflicts.

Version Control Systems (VCS) are used to manage source code and other development artifacts over time. VCS allow developers to track changes to code, collaborate on code with others, and revert changes if necessary. Common VCS tools include Git, SVN, and Mercurial.

32
Q

Which of the following cloud service models has the FEWEST potential external risks and threats that the customer must consider?

A. Platform as a Service
B. Software as a Service
C. Function as a Service
D. Infrastructure as a Service

A

D. Infrastructure as a Service

Explanation:
In an Infrastructure as a Service (IaaS) environment, the customer has the greatest control over its infrastructure stack. This means that it needs to rely less on the service provider than in other service models and, therefore, has fewer potential external security risks and threats.

33
Q

A bad actor was able to gain access to a Virtual Machine (VM) in an Infrastructure as a Service (IaaS) environment. Once they were in the VM, they found a flaw that allowed them to break out of the VM and gain access to the hypervisor. What type of attack is being described here?

A. Man-in-the-Cloud (MitC)
B. Guest escape
C. Hyperjacking
D. Guest hopping

A

B. Guest escape

Explanation:
In a guest escape attack, the bad actor is able to break out of the VM and access the host OS on the server, which is the hypervisor.

Guest hopping is when the bad actor is able to move from one guest OS (which is the VM) to another. Usually the other guest OS will be part of another tenant’s infrastructure.

MitC is when the bad actor steals (copies) the authentication token that is stored on a machine that allows backups to occur automatically.

Hyperjacking (hypervisor hijacking) is when the bad actor gains access to the hypervisor within a server. This is close to the attack in the question, but hyperjacking does not specify how the hypervisor is reached, whereas the question explicitly describes breaking out of a VM. Guest escape is therefore the more specific answer.

34
Q

Organization A and Organization B are both cloud customers using the same cloud provider and are even sharing resources on the same physical server. Organization B was hit with a Denial-of-Service (DoS) attack, causing them to use more resources than they would normally need. Fortunately, Organization A will always receive, at least, the minimum Central Processing Unit (CPU) and memory resources that they need to operate their services. Organization B’s DoS attack will also not knock them out of service.

Which concept guarantees that Organization A will always receive the amount of resources needed to run their services?

A. Pooling
B. Reservations
C. Limits
D. Shares

A

B. Reservations

Explanation:
Reservations refer to the minimum guaranteed amount of resources that a cloud customer will receive, regardless of the resources being used by other cloud customers. This guarantees that the cloud customer will always have, at the very least, the minimum amount of resources needed to power and operate their services. Because of ideas such as multitenancy, in which many cloud customers are utilizing the same pool of resources, reservations protect cloud customers in the event that a neighboring cloud customer experiences an attack that causes them to overuse resources, making them limited.

A limit is the maximum amount of resources a Virtual Machine (VM), application, container, etc. is allowed to use. This is actually a good thing to do so that something like this attack would not cause a serious financial strain on Organization B.

Pooling is the collection of the resources that a VM, application, container, etc. can pull resources from. These pools would be inside of a single server. It is the CPU, the memory, the storage, and the network capacity that must be divided among the current tenants.

Shares are whatever is left of the pool after the reservations are assigned.
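One way to picture how reservations, limits, and the shared leftover capacity interact is the hypothetical allocator below; the unit values and policy details are illustrative, not any specific hypervisor's algorithm.

```python
# Hypothetical sketch of a hypervisor honoring reservations and limits
# while tenants compete for a pooled resource (units are arbitrary).

def allocate(pool, tenants):
    """tenants: dict of name -> (reservation, limit, demand)."""
    grants = {}
    # Reservations are satisfied first, regardless of other tenants' demand.
    for name, (reservation, limit, demand) in tenants.items():
        grants[name] = min(reservation, demand)
        pool -= grants[name]
    # Remaining (shared) capacity is handed out next, but never past
    # each tenant's limit or beyond what is left in the pool.
    for name, (reservation, limit, demand) in tenants.items():
        extra = max(min(demand - grants[name], limit - grants[name], pool), 0)
        grants[name] += extra
        pool -= extra
    return grants

# Org B is over-consuming (e.g. under DoS, demand 500 vs limit 90);
# Org A still receives its full reserved 30 units.
print(allocate(100, {"org_b": (20, 90, 500), "org_a": (30, 60, 30)}))
# {'org_b': 70, 'org_a': 30}
```

This mirrors the scenario in the question: the reservation guarantees Organization A its minimum even while Organization B's demand spikes.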

35
Q

An engineer, working for an insurance company, is beginning the process of configuring the company’s private cloud, building this on-prem at the headquarter’s data center. They have decided on the physical routers, switches, and servers that they are going to purchase. The next step is the hypervisor.

What kind of hypervisor should this engineer purchase?

A. Type 1 hypervisor
B. Type 1 nested hypervisor
C. Type 2 nested hypervisor
D. Type 2 hypervisor

A

A. Type 1 hypervisor

Explanation:
Type 1 hypervisors are called bare metal, embedded, or native hypervisors because they run directly on the physical hardware. Type 2 hypervisors are added after a host Operating System (OS). Type 2 hypervisors are more common on desktop or laptop machines. They can be used in a data center, but that would be an exception, not a rule. The question is starting with physical servers with no mention of an OS, so type 1 is more appropriate.

Hypervisors can be nested. It is possible to load a type 1 hypervisor, create a Windows server (for example) as a virtual device within an Infrastructure or Platform as a Service (IaaS or PaaS) environment, and then install a type 2 hypervisor inside that Windows server. It is unlikely, though, that a type 1 hypervisor would itself be nested.

36
Q

Mordecai works within the Security Operations Center (SOC) for a marketing company. One of his team members has just reported that they have found an issue that needs to be addressed as soon as possible. They have discovered that bad actors could gain access to one of their critical systems because it is missing a critical security patch.

What threat is this?

A. System vulnerabilities
B. Cloud storage data exfiltration
C. Insecure Interfaces and Application Programming Interfaces (API)
D. Accidental cloud data disclosure

A

A. System vulnerabilities

Explanation:
The Cloud Security Alliance (CSA) lists four main categories of system vulnerabilities. They are as follows:

Zero-day vulnerabilities
Missing security patches
Configuration-based vulnerabilities
Weak or default credentials

Vulnerabilities are flaws. They can exist in any architecture and any service model and could be the responsibility of the customer or the cloud provider.

If the bad actors did access the data, it could be an accidental cloud data disclosure issue. But the team is reporting that the bad actor could gain access, not that they did gain access. This is a small detail, but that is what you need to look for in this test.

Since the bad actor did not get the data, the cloud storage data exfiltration option is also incorrect.

It is plausible that the system vulnerability is in the API. The question does not specify that, so the more generic system vulnerabilities is a better answer.

More details can be found in the CSA Pandemic 11 document, which is recommended reading for this test.

37
Q

A software development company is looking for a way to be able to identify the third-party and open-source software components that are in their software. What can they use?

A. Application Security Verification Standard
B. Interactive Application Security Testing
C. Software Composition Analysis
D. Penetration testing

A

C. Software Composition Analysis

Explanation:
Software Composition Analysis (SCA) is a security practice that involves the identification and analysis of third-party and open-source software components used in a software application. SCA helps organizations understand and manage the security risks associated with the software components they utilize, including known vulnerabilities, licensing issues, and potential code vulnerabilities.

The Application Security Verification Standard (ASVS) is a comprehensive framework and set of guidelines developed by the Open Web Application Security Project (OWASP). ASVS provides a standardized methodology for evaluating the security of web applications and APIs. It serves as a valuable resource for organizations, security professionals, and developers to assess the security posture of their applications and implement necessary security controls.

Interactive Application Security Testing (IAST) is a dynamic application security testing technique that combines elements of both static analysis and dynamic analysis to identify security vulnerabilities in software applications. IAST aims to provide more accurate and comprehensive security testing by analyzing an application’s behavior during runtime.

Penetration testing, also known as ethical hacking or pen testing, is a security assessment technique that involves simulating real-world attacks on software systems to identify vulnerabilities and assess their potential impact. The goal of penetration testing is to identify security weaknesses before malicious attackers can exploit them.

38
Q

Which of the following SOC responsibilities is PROACTIVE in nature?

A. Threat Detection
B. Incident Management
C. Quality Assurance
D. Threat Prevention

A

D. Threat Prevention

Explanation:
The security operations center (SOC) is responsible for managing an organization’s cybersecurity. Some of the key duties of the SOC include:

Threat Prevention: Threat prevention involves implementing processes and security controls designed to close potential attack vectors and security gaps before they can be exploited by an attacker.
Threat Detection: SOC analysts use Security Information and Event Management (SIEM) solutions and various other security tools to identify, triage, and investigate potential security incidents to detect real threats to the organization.
Incident Management: If an incident has occurred, the SOC may work with the incident response team (IRT) to contain, investigate, remediate, and recover from the identified incident.

Quality Assurance is not a core SOC responsibility.

39
Q

Vaeda is an information security professional working with the risk management team for a medium-sized manufacturing company. They have spent a great deal of time performing quantitative and qualitative risk assessments. They are now in the risk mitigation phase. What they want to know is how can they ensure that all risk is fully mitigated?

A. Constantly monitoring all systems
B. Purchasing various insurance policies
C. Risk is never fully mitigated
D. Enforcing a strict risk management plan

A

C. Risk is never fully mitigated

Explanation:
There is no way to ensure that risk is fully mitigated in a system or application. Organizations should seek to lower the possibility of risk or lower the impact that a risk could have on systems, but there is no way to completely mitigate it. Any system that has users and access will always maintain some level of risk, even if the risk level is low.

40
Q

Emilia is a cloud security architect. She is designing a data storage solution that involves a database to store customer data after they have purchased a product from their e-commerce site. The database will store the name and address of the customer as well as the credit card number and associated data. Her concern is that the integrity of the information must be maintained within the database.

Which of the following technologies can help Emilia verify the integrity of customer data?

A. Data Security Standard (DSS)
B. Tokenization
C. Hashing
D. Encryption

A

C. Hashing

Explanation:
Hashing is a mathematical technique that can verify the integrity of data, whether it is at rest or in transit. Algorithms such as MD5 and SHA-1/2/3 are available today.

Encryption changes data to an unreadable state. It is arguable that if you can decrypt and read the data, then that helps to prove the integrity, but hashing is a more direct and specific answer. Symmetric encryption includes algorithms like AES and Blowfish. Asymmetric encryption includes algorithms like RSA and DH.

Tokenization is commonly used today with credit card transactions when PayPal, ApplePay, and Google Pay are used. It replaces the credit card with a different value. The original value is still available and can be found within a database that the bank should maintain.

The DSS is the standard level of security that must be applied to any corporation and system that is processing payment cards as specified by the Payment Card Industry (PCI).
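A minimal sketch of the hashing approach Emilia could use, with Python's hashlib; the record layout is illustrative.

```python
import hashlib

# Sketch of integrity verification with hashing: store a digest alongside
# the record, then recompute and compare on read. Record layout is made up.
record = b"name=Jane Doe;card=4111111111111111;exp=12/27"
stored_digest = hashlib.sha256(record).hexdigest()

# Later: verify the record has not been altered.
assert hashlib.sha256(record).hexdigest() == stored_digest  # intact

tampered = record.replace(b"12/27", b"12/99")
assert hashlib.sha256(tampered).hexdigest() != stored_digest  # change detected
```

Even a one-character change produces a completely different digest, which is what makes hashing the direct answer for integrity.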

41
Q

API and SSH keys are MOST relevant to which of the following?

A. Identity Providers
B. Secrets Management
C. Cloud Access Security Broker
D. Federated Identity

A

B. Secrets Management

Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
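As a small illustration of the "randomly generated" guidance for secrets, Python's standard-library secrets module can produce keys and tokens; in practice these would go into a vault or secret store, and the variable names below are hypothetical.

```python
import secrets

# Sketch: generating random secrets with the standard-library secrets
# module (cryptographically strong randomness, unlike the random module).
api_key = secrets.token_urlsafe(32)    # URL-safe API key, 43 characters
session_token = secrets.token_hex(16)  # 32 hex characters

print(len(api_key), len(session_token))  # 43 32
```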
42
Q

Communication, Consent, Control, Transparency, and Independent and yearly audits are the five key principles found in what standard that cloud providers should adhere to?

A. ISO/IEC 27001
B. General Data Protection Regulation (GDPR)
C. Privacy Management Framework (PMF)
D. ISO/IEC 27018

A

D. ISO/IEC 27018

Explanation:
ISO/IEC 27018 is a privacy standard for cloud service providers to adhere to. It is focused on five key principles: communication, consent, control, transparency, and independent and yearly audits. ISO/IEC 27018 is for cloud providers acting as Data Processors handling Personally Identifiable Information (PII). (A major clue in the question is cloud providers.)

The PMF, formerly known as the Generally Accepted Privacy Principles (GAPP), has nine core principles, one of which is agreement, notice, and communication; another is collection and creation. It is very similar, but the PMF is not specifically for cloud providers.

GDPR is a European Union requirement for member states to have a law to protect the personal data of natural persons.

ISO/IEC 27001 is an international standard that is used to create and audit Information Security Management Systems (ISMS).

43
Q

Which type of hypervisor is MOST likely to be used in a cloud service provider’s data center?

A. Type 1
B. Type 2
C. Type 3
D. Type 4

A

A. Type 1

Explanation:
Virtualization allows a single physical computer to host multiple different virtual machines (VMs). The guest computers are managed by a hypervisor, which makes it appear to each VM that it is running directly on physical hardware. The two types of hypervisors are:

Type 1: The hypervisor runs on bare metal and hosts virtual machines on top of it. Most data centers use Type 1 hypervisors.
Type 2: The hypervisor is a program that runs on top of a host operating system alongside other applications. Type 2 hypervisors like VirtualBox, Parallels, and VMware are often used on personal computers.

Type 3 and 4 hypervisors do not exist.

44
Q

When a quantitative risk assessment is performed, it is possible to determine how much a threat can cost a business over the course of a year. What term defines this?

A. Annual Rate of Occurrence (ARO)
B. Recovery Time Objective (RTO)
C. Annualized Loss Expectancy (ALE)
D. Single Loss Expectancy (SLE)

A

C. Annualized Loss Expectancy (ALE)

Explanation:
How much a single occurrence of a threat will cost a business is the SLE. The number of times the threat is expected to occur within a year is the ARO. The total cost of a threat over a year, the ALE, is therefore calculated by multiplying the SLE by the ARO.

The RTO is the amount of time that is given to the recovery team to perform the recovery actions after a disaster has been declared.
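The formulas above can be worked through with illustrative numbers:

```python
# Worked example of the quantitative risk formulas: ALE = SLE * ARO.
# All values are illustrative.

asset_value = 200_000        # cost if the threat destroys the asset entirely
exposure_factor = 0.25       # fraction of the asset value lost per occurrence

sle = asset_value * exposure_factor  # Single Loss Expectancy
aro = 2                              # expected occurrences per year
ale = sle * aro                      # Annualized Loss Expectancy

print(sle, ale)  # 50000.0 100000.0
```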

45
Q

Which of the following main objectives of IRM is associated with defining what users are permitted to do?

A. Enforcement
B. Access Models
C. Data Rights
D. Provisioning

A

C. Data Rights

Explanation:
Information rights management (IRM) involves controlling access to data, including implementing access controls and managing what users can do with the data. The three main objectives of IRM are:

Data Rights: Data rights define what users are permitted to do with data (read, write, execute, forward, etc.). It also deals with how those rights are defined, applied, changed, and revoked.
Provisioning: Provisioning is when users are onboarded to a system and rights are assigned to them. Often, this uses roles and groups to improve the consistency and scalability of rights management, as rights can be defined granularly for a particular role or group and then applied to everyone that fits in that group.
Access Models: Access models take the means by which data is accessed into account when defining rights. For example, data presented via a web application has different potential rights (read, copy-paste, etc.) than data provided in files (read, write, execute, delete, etc.).

Enforcement is not a main objective of IRM.

46
Q

Rhona is working within a database. The data in the database is encrypted, but Rhona doesn’t notice the encryption. Which method of encryption is integrated within the actual database processes and, therefore, not noticeable by the user?

A. Stream encryption
B. Asymmetrical encryption
C. Transparent encryption
D. Full disk encryption

A

C. Transparent encryption

Explanation:
Transparent encryption is a method of encryption that works by being integrated right into the database processes. In this way, the encryption is unnoticeable by the user.

Full disk encryption encrypts at the drive level, whether the drive is real or virtual.

Stream encryption is one of the core technologies within symmetric encryption. It encrypts the smallest unit possible, typically a single bit or byte at a time.

Asymmetrical encryption is encryption that uses two different keys: one for encryption and one for decryption. It can be used for exchanging the symmetric key, digitally signing something, or providing confidentiality of very small messages.

47
Q

A medium-sized business has been slowly moving its operations into the public cloud. The sales department has moved to an online tool for managing their contacts with their customers and prospective sales. The engineering department has been using a database to manage all their projects and test results. All business, except for marketing, has migrated to the cloud-based document creation tool. Through all this movement into the cloud, each department has made their own decisions about when and how to migrate to the cloud. The information security department has discovered that the reliability level of each of these services is lower than required by the company to maintain the level of customer service expected by upper management.

Where has the corporation likely failed in their move to the cloud?

A. Cloud governance
B. Risk management
C. Threat modeling
D. Board oversight

A

A. Cloud governance

Explanation:
Cloud governance is the subset of corporate governance. Governance is the oversight and management of the business. Security governance is the management of security based on corporate goals and objectives. Cloud governance is the management of the movement and existence in the cloud in support of corporate goals and objectives. Since each department has been doing their own thing, it is likely that there is a lack of central management of the cloud and its solutions.

The solutions chosen should be based on the threats and their impacts on the business. Risk management is the process of analyzing what could happen and how bad it could be for the corporation. So, risk management and threat modeling are both critical inputs to each choice of cloud solution, but they feed into cloud governance, which makes governance the better answer given the focus of the question.

Corporate governance includes board oversight. The board should be advised of cloud solutions and decisions. This is a critical element, but the focus of the question, that each department is doing its own thing, again points to cloud governance.

48
Q

Which international standard contains information about the architectures and security of Trusted Platform Modules (TPMs)?

A. ISO/IEC 27017
B. ISO/IEC 11889
C. ISO/IEC 27018
D. ISO/IEC 27050

A

B. ISO/IEC 11889

Explanation:
ISO/IEC 11889 specifies how various cryptographic techniques and architectural elements of the Trusted Platform Module (TPM) are to be implemented. It consists of four parts, covering an overview of the TPM architecture, design principles, commands, and supporting code.

ISO/IEC 27050 is a standard for e-Discovery.

ISO/IEC 27018 is a standard for cloud providers acting as data processors.

ISO/IEC 27017 is a standard for security controls for cloud environments.

49
Q

Which of the following attributes of evidence relates to supporting a certain conclusion?

A. Admissible
B. Convincing
C. Authentic
D. Accurate

A

B. Convincing

Explanation:
Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:

Authentic: The evidence must be real and relevant to the incident being investigated.
Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).

50
Q

Like the European Union (EU) and the United States, which other influential body has released privacy protections and regulations regarding data privacy?

A. Canadian Institute of Chartered Accountants (CICA)
B. Building Industry Consulting Services International (BICSI)
C. Asia Pacific Economic Cooperation (APEC)
D. National Fire Protection Association (NFPA)

A

C. Asia Pacific Economic Cooperation (APEC)

Explanation:
Both the United States and the European Union (EU) have established data privacy regulations, such as HIPAA and GDPR, respectively. In addition, the Asia-Pacific Economic Cooperation (APEC) is another influential body that has established regulations regarding data privacy. APEC developed the APEC Privacy Framework.

The BICSI has standards for designing, installing, and integrating Information Technology Systems (ITS), specifically for fiber and copper cabling.

The CICA is the Canadian organization for certified and practicing accountants. This is similar to the American AICPA.

NFPA is a fire professionals association. They are a source of information and resources regarding fires and fire protection and prevention.

51
Q

What steps can an organization take to ensure that electronically stored information (ESI) is preserved and collected in a defensible manner during eDiscovery?

A. Document the entire process
B. Always use police investigators
C. Never put data in the cloud
D. Always hire a contractor

A

A. Document the entire process

Explanation:
There are many things a corporation can do to ensure that it will have the evidence it needs from the cloud. The process should be researched and documented: document the step-by-step process for data preservation and collection, including who, what, when, where, and how, to ensure defensibility, sustainability, and auditability.

It is possible that hiring a contractor will be part of the process of collecting and analyzing evidence, but not always.

Putting data in the cloud is what we do today. So it is necessary to figure out how to work with the cloud, not avoid it. The most common use for the cloud is data storage.

A corporation could suffer a breach or attack that involves the police at some point, but an answer option containing the word "always" is unlikely to be correct.

52
Q

Which of the following is NOT one of the four main practices of IAM?

A. Authentication
B. Administration
C. Accountability
D. Authorization

A

B. Administration

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an Identity as a Service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permission models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
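As a rough illustration of how the four practices fit together, here is a minimal sketch with hypothetical names and in-memory stores; a real deployment would delegate these steps to an identity provider, an MFA service, a policy engine, and a log pipeline.

```python
import hashlib
import hmac
import logging

logging.basicConfig(level=logging.INFO)

# Toy identity store and authorization policy (illustration only)
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
PERMISSIONS = {"alice": {"read:reports"}}

def login(username: str, password: str) -> bool:
    # 1. Identification: is this a known identity?
    if username not in USERS:
        return False
    # 2. Authentication: does the proof match? (constant-time compare)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(supplied, USERS[username])

def access(username: str, permission: str) -> bool:
    # 3. Authorization: check assigned privileges and permissions
    allowed = permission in PERMISSIONS.get(username, set())
    # 4. Accountability: log the action for monitoring and auditing
    logging.info("user=%s perm=%s allowed=%s", username, permission, allowed)
    return allowed
```

Example usage: `login("alice", "s3cret")` succeeds, after which `access("alice", "read:reports")` is allowed and logged, while `access("alice", "write:reports")` is denied and logged.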

53
Q

A manufacturing business has been establishing its robust incident response plans for some time now. This was a wise step, because they were just hit by a cyberattack. In responding to this attack, they have just assigned Vaeda as the incident manager to coordinate further actions. What phase of the incident response lifecycle will Vaeda be in?

A. Performance
B. Privacy
C. Detection and Analysis
D. Governance

A

C. Detection and Analysis

Explanation:
The incident manager is in the second phase, detection and analysis.

The Cloud Security Alliance (CSA) guidance 4.0 document (watch for v5 as of May 2023) breaks the phases down in the following manner:

Preparation: Establishing an incident response capability so that the organization is ready to respond to incidents.

Process to handle the incidents
Handler communications and facilities
Incident analysis hardware and software
Internal documentation (port lists, asset lists, network diagrams, current baselines of network traffic)
Identifying training
Evaluating infrastructure by proactive scanning and network monitoring, vulnerability assessments, and performing risk assessments
Subscribing to third-party threat intelligence services

Detection and Analysis: This is really when managing a real incident begins. Preparation is getting ready; detection is when an incident has happened and we discover it in some way. The analysis is often described as triage: it is necessary to determine what is happening and what the team will handle first.

Alerts [endpoint protection, network security monitoring, host monitoring, account creation, privilege escalation, other indicators of compromise, SIEM, security analytics (baseline and anomaly detection), and user behavior analytics]
Validate alerts (reducing false positives) and escalation
Estimate the scope of the incident
Assign an Incident Manager who will coordinate further actions
Designate a person who will communicate the incident containment and recovery status to senior management
Build a timeline of the attack
Determine the extent of the potential data loss
Notification and coordination activities

Containment, Eradication, and Recovery: We often want to start containment as soon as we detect the incident, but we need enough analysis to determine the right containment step. This could occur minutes, hours, or days after the incident happens. Once the attack is contained, it is necessary to clean up and remove any remnants of the attack (e.g., a virus on a system that needs to be removed). Then it is necessary to return our environments to a normal condition. We may also need to change a control, alter a configuration, add a new control somewhere, etc., to ensure this does not happen again.

Containment: Taking systems offline. Considerations for data loss versus service availability. Ensuring systems don’t destroy themselves upon detection
Eradication and Recovery: Clean up compromised devices and restore systems to normal operation. Confirm systems are functioning properly. Deploy controls to prevent similar incidents
Documenting the incident and gathering evidence (chain of custody)

Post-mortem: Now that things are back to normal, what could have been handled differently that would have made things better? This is a meeting to work together to get better. It is not a finger-pointing exercise. Own what we did right and what we did wrong.

What could have been done better? Could the attack have been detected sooner? What additional data would have been helpful to isolate the attack faster? Does the IR process need to change? If so, how?

Governance is the oversight provided by the Board of Directors and the C-suite. There is corporate governance, security governance, and data governance.

Privacy is a critical topic these days. Many regulations are being created or updated to force companies to protect their customers' and employees' personal information.

Neither governance nor privacy is a phase of the incident response lifecycle. Depending on how you look at things, they are connected to incident response, but not directly.

54
Q

Which of the following test types provides the attacker with FULL knowledge and access to the software?

A. Black-box
B. White-box
C. Red-box
D. Gray-box

A

B. White-box

Explanation:
Software testing can be classified as one of a few different types, including:

White-box: In white-box or clear-box testing, the tester has full access to the software and its source code and documentation. Static application security testing (SAST) is an example of this technique.
Gray-box: The tester has partial knowledge of and access to the software. For example, they may have access to user documentation and high-level architectural information.
Black-box: In this test, the attacker has no specialized knowledge or access. Dynamic application security testing (DAST) is an example of this form of testing.

Red-box is not a classification of security testing.

55
Q

Lori is a cloud security architect who is planning how users will be authenticated when utilizing the new Software as a Service (SaaS) that her company is building. The primary concern that she and her team have is that she needs to be sure that the users are authenticated with a high degree of accuracy. They have decided that they will use multi-factor authentication that uses a retina scan as its second factor.

What type of authentication is that?

A. Something the user does
B. Something the user has
C. Something the user is
D. Something the user knows

A

C. Something the user is

Explanation:
In Multi-Factor Authentication (MFA), users are required to use two or more types of authentication factors. Authentication types include something the user knows (PINs, passwords), something the user has (RSA token, key card), and biometrics, which include something the user is (retina scan, fingerprint scan) and something the user does (behavioral traits such as voice prints).
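For comparison, a "something the user has" factor is commonly implemented as a Time-based One-Time Password (TOTP) generated by an authenticator app or hardware token. The sketch below follows RFC 6238 for illustration; the function names are assumptions, and a retina scan ("something the user is") would instead require biometric capture hardware.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP (SHA-1): derive a short code from a shared secret and time."""
    counter = int(at) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_mfa(password_ok: bool, secret: bytes, submitted: str) -> bool:
    # Both factors must pass: knowledge (password) AND possession (TOTP device)
    return password_ok and hmac.compare_digest(submitted, totp(secret, time.time()))
```

Because the code changes every 30 seconds and depends on a secret held by the device, possession of the device is what is being proven, independently of the password.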

56
Q
A