Pocket Prep 16 Flashcards

1
Q

Which of the following storage types acts like a physical hard drive connected to a VM?

A. Raw
B. Ephemeral
C. Long-Term
D. Volume

A

D. Volume

Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:

Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer’s virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling.
2
Q

Brocky has been working with a project team analyzing the risks that could occur as this project progresses. The analysis that their team has been performing used descriptive information rather than financial numbers. Which type of assessment have they been performing?

A. Quantitative assessment
B. Fault tree analysis
C. Qualitative assessment
D. Root cause analysis

A

C. Qualitative assessment

Explanation:
There are two main types of risk assessment: qualitative and quantitative. While quantitative assessments are data driven, focusing on items such as Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annual Loss Expectancy (ALE), qualitative assessments are descriptive in nature and not data driven.

Fault tree analysis is a combination of quantitative and qualitative techniques. Because the question asks for an analysis that uses descriptive information rather than financial numbers, the quantitative component makes fault tree analysis more than what the question describes.

Root cause analysis is performed in problem management from ITIL. It analyzes why a bad event happened so that the root cause can be found and fixed and the event does not happen again.

3
Q

Communication, Consent, Control, Transparency, and Independent and yearly audits are the five key principles found in what standard that cloud providers should adhere to?

A. General Data Protection Regulation (GDPR)
B. ISO/IEC 27018
C. Privacy Management Framework (PMF)
D. ISO/IEC 27001

A

B. ISO/IEC 27018

Explanation:
ISO/IEC 27018 is a privacy standard for cloud service providers to adhere to. It is focused on five key principles: communication, consent, control, transparency, and independent and yearly audits. ISO/IEC 27018 is for cloud providers acting as Data Processors handling Personally Identifiable Information (PII). (A major clue in the question is cloud providers.)

The PMF, formerly known as the Generally Accepted Privacy Principles (GAPP), has nine core principles, one of which is agreement, notice, and communication; another is collection and creation. It is very similar to ISO/IEC 27018. However, PMF is not specifically for cloud providers.

GDPR is a European Union requirement for member states to have a law to protect the personal data of natural persons.

ISO/IEC 27001 is an international standard that is used to create and audit Information Security Management Systems (ISMS).

4
Q

Orlando has determined that his organization is experiencing a lot of shadow IT. However, he is unsure which tool could be used to determine where the users are connecting. What tool is designed to assist with this process?

A. Data Leak Prevention (DLP)
B. Cloud Access Security Broker (CASB)
C. Cloud Posture Manager (CPM)
D. Cloud broker

A

B. Cloud Access Security Broker (CASB)

Explanation:
CASBs were originally designed to discover where users were connecting, revealing shadow IT. Shadow IT is technology that users have signed up for (in the cloud) without going through the regular acquisition procedures. Today, CASBs can do additional things like DLP.

DLP is designed to determine if a user has just sent (or is trying to send) data someplace it should not go or in a format it should not be in (e.g., not encrypted). It is not designed to determine what web addresses the users are accessing.

Cloud brokers are people/companies that help cloud customers and cloud providers in their negotiations.

CPM tools are even newer. Sometimes called Cloud Security Posture Manager (CSPM), these tools are designed to determine all the paths a user can take to get access to particular resources. It is normal in the cloud that there can be multiple paths to a piece of data. Also, it is normal to assume the role of one of the devices or applications temporarily, which could give a user more access than they should have.

5
Q

Riki and her team have been working with a Managed Service Provider (MSP) regarding their Infrastructure as a Service (IaaS) deployment. They are working to move an on-premise data center into the cloud. It is essential that they are clear about their availability expectations. These requirements should be spelled out in which part of the contract?

A. Business Associate Agreement (BAA)
B. Master Services Agreement (MSA)
C. Service Level Agreement (SLA)
D. Privacy Level Agreement (PLA)

A

C. Service Level Agreement (SLA)

Explanation:
The Service Level Agreement (SLA) is made between an organization and a third-party vendor (such as a cloud provider). Availability expectations and needs should be addressed in the SLA.

MSAs define the core responsibilities of each company within a contract. For example, the MSA could state that the provider is responsible for supplying the cloud environment and maintaining the physical data center while the customer builds and manages their IaaS. (The relationship does not have to be divided exactly this way.)

The PLA spells out the types of personal data that would be stored and processed within the cloud and what the expectation of the customer is for the cloud provider to protect that data. A BAA is essentially the same type of document, but it is specific to the US HIPAA regulation.

6
Q

Jamarcus is looking for a security control that can be used to protect a database within their Platform as a Service (PaaS). The business's concern is that the data must be protected: it cannot be viewed by anyone who is not approved, and it cannot be sent anywhere it should not be.

What tool can accomplish this?

A. Federated identification
B. Identity and Access Management (IAM)
C. Data Loss Prevention (DLP)
D. Transport Layer Security (TLS)

A

C. Data Loss Prevention (DLP)

Explanation:
Data loss prevention refers to a set of controls and practices put in place to ensure that data is only accessible to those authorized to access it. DLP also protects data from being lost or improperly used.

IAM is used to control what someone has access to and with what permissions. It is not used to control where data is sent.

TLS is used to encrypt data in transit so that it is not visible to someone who should not be able to see it. It does not control where data can be sent.

Federated identification is another way to control who has access to something. It is not used to control where data is sent.

So, the only tool here that does everything needed by the question is DLP.

7
Q

Which of the following regulations deals with law enforcement’s access to data that may be located in data centers in other jurisdictions?

A. SOX
B. SCA
C. GLBA
D. US CLOUD Act

A

D. US CLOUD Act

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border requests for data held by cloud providers. US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens’ data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes-Oxley (SOX): SOX is a US regulation that applies to publicly traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
8
Q

A cloud information security manager is building the policies and associated documents for handling cloud assets. She is currently detailing how assets will be understood or listed so that access can be controlled, alerts can be created, and billing can be tracked. What tool allows for this?

A. Key
B. Identifier
C. Tags
D. Value

A

C. Tags

Explanation:
Tags are pervasive in cloud deployments. It is crucial that the corporation builds a plan for how to tag assets; if tagging is not done consistently, it is not helpful. A tag is made up of two pieces: a key (or name) and a value. The key here is not a cryptographic key for encryption and decryption; it is simply the name portion of the tag.

You can think of the tag as a type of identifier, but the tool needed to manage assets is called a tag.
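To make the key/value structure concrete, here is a minimal sketch that applies a consistent tagging scheme using AWS's boto3 SDK. The instance ID and tag values are hypothetical placeholders, and other providers expose equivalent tagging APIs.

    # Minimal sketch: applying a consistent tagging scheme with AWS's boto3 SDK.
    # The instance ID and tag values are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    # A corporate tagging plan might standardize keys like these so that
    # access control, alerting, and billing reports can all filter on them.
    standard_tags = [
        {"Key": "Owner", "Value": "data-platform-team"},
        {"Key": "CostCenter", "Value": "CC-1234"},
        {"Key": "Environment", "Value": "production"},
    ]

    ec2.create_tags(Resources=["i-0123456789abcdef0"], Tags=standard_tags)

Because every asset carries the same keys, a billing report can group costs by CostCenter, and an alerting rule can target only Environment=production resources.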
9
Q

A cloud data center is being built by a new Cloud Service Provider (CSP). The CSP wants to build a data center with a level of resilience that will classify it as Tier III. At which tier are generators first expected to back up the power supply?

A. Tier I
B. Tier IV
C. Tier II
D. Tier III

A

A. Tier I

Explanation:
Generators are added to the requirements from the lowest level, Tier I.

Tier II and above also require those generators to be there. Tier I and II also require Uninterruptible Power Supply (UPS) units.

Tier III requires a redundant distribution path for the data.

Tier IV requires several independent and physically isolated power supplies.

10
Q

Abigail is designing the infrastructure of Identity and Access Management (IAM) for their future Platform as a Service (PaaS) environment. As she is setting up identities, she knows that which of the following is true of roles?

A. Roles are the same as user identities
B. Roles are temporarily assumed by another identity
C. Roles are assigned to specific users permanently and occasionally assumed
D. Roles are permanently assumed by a user or group

A

B. Roles are temporarily assumed by another identity

Explanation:
Roles in the cloud are not the same as roles in traditional data centers, although they are similar in that they grant a user or group a certain amount of access. The cloud's group construct is closer to what we traditionally called a role in Role-Based Access Control (RBAC). In the cloud, roles are assumed temporarily. Roles can be assumed in a variety of ways, but, again, the assumption is temporary.

The user is not permanently assigned a specific role. A user will log in with their user identity and then assume a role. The assumption is temporary (e.g., for a set number of hours or only for the life of that session).

Note the distinction between assigning and assuming roles — you might have access to certain permissions, but you only use the role and those permissions occasionally.

An additional resource for your review/study is on the AWS website. Look for the user guide regarding roles.
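To make the temporary nature concrete, here is a minimal sketch using AWS STS through the boto3 SDK. The role ARN and session name are hypothetical placeholders, and the allowed session duration depends on how the role is configured.

    # Minimal sketch: temporarily assuming a role with AWS STS via boto3.
    # The role ARN is a hypothetical placeholder. The returned credentials
    # expire when the session duration elapses, illustrating that roles are
    # assumed temporarily rather than assigned permanently.
    import boto3

    sts = boto3.client("sts")

    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ExampleAuditRole",
        RoleSessionName="abigail-audit-session",
        DurationSeconds=3600,  # credentials valid for one hour
    )

    creds = response["Credentials"]
    print("Temporary session expires at:", creds["Expiration"])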

11
Q

Which of the following BEST describes the types of applications that create risk in a cloud environment?

A. Small utility scripts
B. Software with administrator privileges
C. Full application suites
D. Every piece of software in the environment

A

D. Every piece of software in the environment

Explanation:
Any piece of software, from major software suites to small utility scripts, can have possible vulnerabilities. This means that every program and every piece of software in the environment carries an inherent amount of risk with it. Any software that is installed in a cloud environment should be properly vetted and regularly audited.

12
Q

Rufus is working for a growing manufacturing business. Over the years, they have been upgrading their manufacturing equipment to versions that include internet connectivity for maintenance and management information. This has increased the volume of logs that must be filtered, and that volume makes it challenging for his organization to perform log reviews efficiently and effectively.

What can his organization implement to help solve this issue?

A. Secure Shell (SSH)
B. Security Information and Event Manager (SIEM)
C. Data Loss Prevention (DLP)
D. System Logging protocol (syslog) server

A

B. Security Information and Event Manager (SIEM)

Explanation:
An organization's logs are valuable only if the organization makes use of them to identify unauthorized or compromising activity. Due to the volume of log data generated by systems, the organization can implement a Security Information and Event Manager (SIEM) to overcome these challenges. The SIEM provides the following:

Log centralization and aggregation
Data integrity
Normalization
Automated or continuous monitoring
Alerting
Investigative monitoring

A syslog server is a centralized logging system that collects, stores, and manages log messages generated by various devices and applications within a network. It provides a way to consolidate and analyze logs from different sources, allowing administrators to monitor system activity, troubleshoot issues, and maintain security. However, it does not help to correlate the logs as the SIEM does.

SSH is a networking protocol that encrypts transmissions. It works at layer 5 of the OSI model. It is commonly used to transmit logs to the syslog server. It is not helpful when analyzing logs. It only secures the transmission of the logs.

DLP tools are used to monitor and manage the transmission or storage of data to ensure that it is done properly. With DLP, the concern is that there will be a data breach/leak unintentionally by the users.

13
Q

Your company is looking for a way to ensure that their most critical servers are online when needed. They are exploring the options that their Platform as a Service (PaaS) cloud provider can offer them. The one that they are most interested in has the highest level of availability possible. After a cost-benefit analysis based on their threat assessment, they think that this will be the best option. The cloud provider describes the option as a grouping of resources with a coordinating software agent that facilitates communication, resource sharing, and routing of tasks.

What term matches this option?

A. Server cluster
B. Server redundancy
C. Storage controller
D. Security group

A

A. Server cluster

Explanation:
Server clusters are a collection of resources linked together by a software agent that enables communication, resource sharing, and task routing. Server clusters are considered active-active since they include at least two servers (and any other needed resources) that are both active at the same time.

Server redundancy is usually considered active-passive. Only one server is active at a time. The second waits for a failure to occur; then, it will take over.

Storage controllers are used for storage area networks. It is possible that the servers in the question are storage servers, but more likely they contain the applications that the users and/or the customers require. Therefore, server clustering is the correct answer.

Security groups are effectively virtual firewalls that control traffic to the resources associated with them.

14
Q

Which SIEM feature is MOST vital to maintaining complete visibility in a multi-cloud environment?

A. Automated Monitoring
B. Log Centralization and Aggregation
C. Investigative Monitoring
D. Normalization

A

B. Log Centralization and Aggregation

Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:

Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats (see the sketch after this list).
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
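To make the normalization feature concrete, here is a minimal sketch that converts timestamps from several assumed source formats into one consistent ISO 8601 form. The input formats are illustrative, not drawn from any particular product.

    # Minimal sketch of SIEM-style normalization: converting timestamps
    # from several source formats into one consistent ISO 8601 form.
    from datetime import datetime

    KNOWN_FORMATS = ["%Y-%m-%dT%H:%M:%S", "%d/%b/%Y %H:%M:%S", "%m-%d-%Y %H:%M"]

    def normalize_timestamp(raw: str) -> str:
        """Return the timestamp in ISO 8601, trying each known source format."""
        for fmt in KNOWN_FORMATS:
            try:
                return datetime.strptime(raw, fmt).isoformat()
            except ValueError:
                continue
        raise ValueError(f"Unrecognized timestamp format: {raw!r}")

    print(normalize_timestamp("12/Mar/2024 08:15:00"))  # -> 2024-03-12T08:15:00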
15
Q

When a quantitative risk assessment is performed, it is possible to determine how much a threat can cost a business over the course of a year. What term defines this?

A. Annualized Loss Expectancy (ALE)
B. Annual Rate of Occurrence (ARO)
C. Recovery Time Objective (RTO)
D. Single Loss Expectancy (SLE)

A

A. Annualized Loss Expectancy (ALE)

Explanation:
How much a single occurrence of a threat will cost a business is the SLE. The number of times this is expected to occur within a year is the ARO. So, the total cost of a threat over a year is calculated by multiplying the SLE by the ARO, which yields the ALE (ALE = SLE × ARO).

The RTO is the amount of time that is given to the recovery team to perform the recovery actions after a disaster has been declared.
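A short worked example of the calculation, using illustrative numbers:

    # Worked example of ALE = SLE x ARO (illustrative numbers only).
    sle = 25_000   # Single Loss Expectancy: cost of one occurrence, in dollars
    aro = 0.4      # Annual Rate of Occurrence: expected once every 2.5 years

    ale = sle * aro
    print(f"ALE = ${ale:,.2f}")  # ALE = $10,000.00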

16
Q

Cloud Service Providers (CSP) and virtualization technologies offer a form of backup that captures all the data on a drive at a point in time and freezes it. What type of backup is this?

A. Guest OS image
B. Incremental backup
C. Data replication
D. Snapshot

A

D. Snapshot

Explanation:
CSPs and virtualization technologies offer snapshots as a form of backup. A snapshot will capture all the data on a drive at a point in time and freeze it. The snapshot can be used for a number of reasons, including rolling back or restoring a virtual machine to its snapshot state, creating a new virtual machine from the snapshot that serves as an exact replica of the original server, and copying the snapshot to object storage for eventual recovery.

A guest OS image is a file that, when spun up on a hypervisor, becomes a running virtual machine.

Incremental backups contain only the changes since the last backup of any kind. The last backup could be a full or an incremental backup. So, an incremental backup basically captures only "today's changes" (assuming that backups are done once a day).

Data replication usually writes data to multiple places at the same time. That way, if one copy of the data is lost, another still exists.
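As one concrete illustration, here is a minimal sketch of requesting a snapshot with AWS's boto3 SDK. The volume ID is a hypothetical placeholder; other providers expose equivalent snapshot operations.

    # Minimal sketch: taking a point-in-time snapshot of a volume with boto3.
    # The volume ID is a hypothetical placeholder.
    import boto3

    ec2 = boto3.client("ec2")

    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Pre-upgrade snapshot of the application data volume",
    )
    print("Snapshot started:", snapshot["SnapshotId"])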

17
Q

Hemi is working for a New Zealand bank, and they are growing nicely. They really need to carefully address their information security program, especially as they grow into their virtual data center that they are building using Infrastructure as a Service (IaaS) technology. As they are planning their information security carefully to ensure they are in compliance with all relevant laws and they provide the level of service their customers have come to expect, they are looking for a document that contains best practices.

What would you recommend?

A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27017
C. Federal Information Processing Standard (FIPS) 140-2/3
D. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53

A

B. International Standards Organization/International Electrotechnical Commission (ISO/IEC) 27017

Explanation:
ISO/IEC 27017 is Information technology — Security techniques — Code of practice for information security controls based on ISO/IEC 27002 for cloud services. This document pulls the security controls from ISO/IEC 27002 that apply to the cloud.

ISO/IEC 27018 is Information technology — Security techniques — Code of practice for protection of Personally Identifiable Information (PII) in public clouds acting as PII processors. A processor is defined in the European Union (EU) General Data Protection Regulation (GDPR) as “a person who processes data solely on behalf of the controller, excluding the employees of the data controller.” Processing is defined to include storage of data, which then applies to cloud services.

NIST SP 800-53 is: Security and Privacy Controls for Information Systems and Organizations. It is effectively a list of security controls and is similar to ISO/IEC 27002.

FIPS 140-2/3 is Security Requirements for Cryptographic Modules. This is for products such as TPMs and HSMs that store cryptographic keys.

Since the question is about security in the cloud, ISO/IEC 27017 is the best fit of these four documents.

18
Q

A large social media company that relies on public Infrastructure as a Service (IaaS) for their virtual Data Center (vDC) had an outage. They were not locatable through Domain Name System (DNS) queries midafternoon one Thursday. A configuration in their virtual routers was altered incorrectly. What did they fail to manage properly?

A. Change enablement practice
B. Service level management
C. Input validation
D. User training

A

A. Change enablement practice

Explanation:
ITIL defines change enablement practice as the practice of ensuring that risks are properly assessed, authorizing changes to proceed, and managing a change schedule to maximize the number of successful service and product changes. This is what happened to Facebook/Instagram/WhatsApp/Meta. They have their own network, but the effect would have been the same using AWS as an IaaS. This is change management.

Service level management is defined in ITIL as the practice of setting clear business-based targets for service performance so that the delivery of a service can be properly assessed, monitored, and managed against these targets.

Input validation needs to be performed by software to ensure that the values entered by the users are correct. The main goal of input validation is to prevent the submission of incorrect or malicious data and ensure that the software functions as intended. By checking for errors or malicious input, input validation helps to increase the security and reliability of software.

User training can help reduce the likelihood of errors occurring while using the software. By teaching users how to properly use the software, they become more aware of potential mistakes that may occur and can take measures to prevent them. This can help reduce the occurrence of mistakes, leading to less downtime, more accurate work, and improved outcomes.

19
Q

U-Jin has been tasked with figuring out how the company should protect the personal information that they have collected about their customers. He knows that they have to be compliant with a couple of different laws from around the world due to the location of their customers.

Under the Payment Card Industry Data Security Standard (PCI DSS), which of the following data states requires encryption?

A. Data in use
B. Data at rest
C. Data in transit
D. Data in storage

A

C. Data in transit

Explanation:
The PCI DSS requires that data be encrypted when it is in transit across public networks.

Data must be protected when it is being stored, but PCI DSS does not state within its 12 requirements that stored data must be encrypted. When data is at rest or in storage, it is still good practice to encrypt it and to establish and control access through Identity and Access Management (IAM). Encrypting data in use is just emerging as an option and is certainly not a requirement, as it is not yet available in most situations.
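To illustrate what encryption in transit looks like in practice, here is a minimal sketch using Python's standard ssl module to open a TLS connection. The host name is a placeholder.

    # Minimal sketch: encrypting data in transit with TLS using Python's
    # standard library. The host is a placeholder.
    import socket
    import ssl

    host = "example.com"
    context = ssl.create_default_context()  # validates certificates by default

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Negotiated protocol:", tls_sock.version())  # e.g., TLSv1.3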
20
Q

Which of the following techniques uses context and the meaning of text to identify sensitive data in unstructured data?

A. Pattern Matching
B. Hashing
C. Lexical Analysis
D. Schema Analysis

A

C. Lexical Analysis

Explanation:
When working with unstructured data, there are a few different techniques that a data discovery tool can use:

Pattern Matching: Pattern matching looks for data formats common to sensitive data, often using regular expressions (see the sketch below). For example, the tool might look for 16-digit credit card numbers or numbers structured as XXX-XX-XXXX, which are likely US Social Security Numbers (SSNs).
Lexical Analysis: Lexical analysis uses natural language processing (NLP) to analyze the meaning and context of text and identify sensitive data. For example, a discussion of “payment details” or “card numbers” could include a credit card number.
Hashing: Hashing can be used to identify known-sensitive files that change infrequently. For example, a DLP solution may have a database of hashes for files containing corporate trade secrets or company applications.

Schema analysis can’t be used with unstructured data because only structured databases have schemas.
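Here is a minimal sketch of the pattern-matching technique using Python regular expressions for the two formats mentioned above. Real DLP tools typically add validation (such as check digits for card numbers) to reduce false positives.

    # Minimal sketch of pattern matching for data discovery: regular
    # expressions for 16-digit card numbers and SSN-formatted strings.
    import re

    CARD_PATTERN = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")  # 16 digits, optional separators
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # XXX-XX-XXXX

    sample = "Payment details: card 4111 1111 1111 1111, SSN 123-45-6789."
    print(CARD_PATTERN.findall(sample))  # ['4111 1111 1111 1111']
    print(SSN_PATTERN.findall(sample))   # ['123-45-6789']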

21
Q

Rise is working for a corporation as their cloud architect. He is designing how the Platform as a Service (PaaS) deployment will be used to store sensitive data for one particular application, and he is designing a trust zone inside which that data will be handled. Which of the following BEST defines a trust zone?

A. The ability to share pooled resources among different cloud customers
B. Sets of rules that define which employees have access to which resources
C. Physical, logical, or virtual boundaries around network resources
D. Virtual tunnels that connect resources at different locations

A

C. Physical, logical, or virtual boundaries around network resources

Explanation:
A cloud-based trust zone is a secure environment created within a cloud infrastructure where only authorized users or systems are allowed to access resources and data. This trust zone is typically created by configuring security measures such as firewalls, access controls, and encryption methods to ensure that only trusted sources can gain access to the data and applications within the zone. The goal of a cloud-based trust zone is to create a secure and reliable environment for sensitive data or critical applications by isolating them from potential threats and unauthorized access. This helps to ensure the confidentiality, integrity, and availability of the resources and data within the trust zone.

A virtual tunnel connecting to another location may be something that needs to be added, but it is not part of describing the zone itself.

Rules that define which employees have access to which resources are something that is needed by a business. This is Identity and Access Management (IAM). It should include information about the resources in a trust zone, but it does not define the actual zone.

The ability to share pooled resources is part of the definition of cloud. It is the opposite of a trust zone. Because resources are shared, many companies are very worried about using the cloud.

22
Q

Which of the following is a seven-step threat model that views things from the attacker’s perspective?

A. PASTA
B. STRIDE
C. ATASM
D. DREAD

A

A. PASTA

Explanation:
Several different threat models can be used in the cloud. Common examples include:

STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization’s attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that tries to look at infrastructure and applications from the viewpoint of an attacker.
23
Q

An organization’s communications with which of the following is MOST likely to include information about planned and unplanned outages and other information designed to protect the brand image?

A. Regulators
B. Customers
C. Vendors
D. Partners

A

B. Customers

Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:

Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect the brand image.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.

Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.

24
Q

A corporation is planning their move to the cloud. They have decided to use a cloud provided by a Managed Service Provider (MSP). The MSP will retain ownership and management of the cloud and all the infrastructure. The cloud will be more expensive than one from a Cloud Service Provider (CSP). However, this cloud will offer an expanded level of control compared to a CSP's.

What type of cloud have they selected?

A. Private cloud
B. Hybrid cloud
C. Community cloud
D. Public cloud

A

A. Private cloud

Explanation:
Private clouds can be located at the service provider's location or the customer's. They can be owned by either the cloud provider or the customer and managed by either as well. They can be run by an MSP or a CSP.

If it is with a CSP, it could be public, private, or community. If it is with an MSP, it could be either private or community. As it is just one company in the question, it is not a community, so it must be a private cloud. For more data on this, the Cloud Security Alliance (CSA) guidance 4.0 (or 5 if it has been released) would be a great read.

A hybrid cloud is usually a combination of public and private clouds. The question is specifically about the private cloud though, so this is not the best answer.

25
Q

Which of the following is a regulation designed to protect the US and Canadian power grids?

A. SCA
B. NERC/CIP
C. SOX
D. GLBA

A

B. NERC/CIP

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border requests for data held by cloud providers. US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens’ data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes-Oxley (SOX): SOX is a US regulation that applies to publicly traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
26
Q

Which of the following would benefit the MOST from using a hybrid cloud?

A. A small business that doesn’t have much sensitive data and is only looking to move email to the cloud
B. A group of organizations looking to create a shared service for all their customers to use
C. A healthcare company that needs to ensure that all their data is kept extremely secure and private, no matter the expense
D. An organization that only requires that certain items are kept very secure, but can’t afford a full private cloud

A

D. An organization that only requires that certain items are kept very secure, but can’t afford a full private cloud

Explanation:
Hybrid clouds are the best solution for any organization that requires the security of a private cloud for some, but not all, of their data. By only needing some of the data to be kept in a private cloud, the expense of building a full private cloud can be greatly reduced.

A small business that doesn’t have much sensitive data and only wants email could benefit from a public Software as a Service (SaaS).

A healthcare company, by that description, needs a private cloud.

A group of organizations looking to create a shared service is in need of a community cloud.

27
Q

Who should have access to the management plane in a cloud environment?

A. A highly vetted and limited set of administrators
B. A single, highly vetted administrator
C. Software developers deploying virtual machines
D. Security Operation Center (SOC) personnel

A

A. A highly vetted and limited set of administrators

Explanation:
If compromised, the management plane would provide full control of the cloud environment to an attacker. Due to this, only a highly vetted and limited set of administrators should have access to the management plane. However, you will want more than a single administrator. If the single administrator leaves or is no longer able to perform management duties, the ability of the business to manage their cloud environment would be compromised.

Software developers deploying virtual machines may need access, but they would be in the highly vetted group of administrators if that is the case. The same would be true for SOC personnel. They need to be vetted and trusted.

28
Q

A company using Platform as a Service (PaaS) has discovered that their computing environment has gotten very complex. They are looking for a technology that will assist them in managing deployment and provisioning of all the resources that they now have.

Which technology can this organization implement to assist the administrators in a more agile and efficient manner than manual management?

A. Controller plane
B. Dynamic Host Configuration Protocol
C. Orchestration
D. Management plane

A

C. Orchestration

Explanation:
Orchestration enables agile and efficient provisioning and management of resources on demand and at great scale. Common tools used today are Puppet, Chef, Ansible, and Salt.
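As a conceptual sketch only: the heart of what these tools automate is reconciling the environment's actual state with a declared desired state. The roles, counts, and provisioning functions below are hypothetical stand-ins, not any real tool's API.

    # Conceptual sketch of orchestration: reconcile actual state with a
    # declared desired state. Real tools (Ansible, Chef, Puppet, Salt) do
    # this declaratively and at scale.
    desired_state = {"web": 4, "worker": 2}  # instances wanted per role
    actual_state = {"web": 2, "worker": 3}   # instances currently running

    def provision(role: str) -> None:
        print(f"Provisioning one '{role}' instance")

    def deprovision(role: str) -> None:
        print(f"Deprovisioning one '{role}' instance")

    for role, wanted in desired_state.items():
        have = actual_state.get(role, 0)
        for _ in range(wanted - have):
            provision(role)    # scale up toward the desired state
        for _ in range(have - wanted):
            deprovision(role)  # scale down toward the desired state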

Dynamic Host Configuration Protocol (DHCP) served a similar purpose in the early days of local area networks. It allows computers to obtain IP addresses dynamically. This is still needed but is insufficient for managing and provisioning cloud assets.

The management plane is the administrators’ connection into the cloud. This allows them to configure and manage, but it is not going to automate anything. It is the equivalent of establishing an SSH connection to a router. It is simply a protected connection.

The controller plane is found within Software Defined Networking (SDN). It allows switches to communicate with the controller for forwarding decisions.

29
Q

Which of the following data classification labels might impact where data can be transferred or stored?

A. Type
B. Ownership
C. Criticality
D. Jurisdiction

A

D. Jurisdiction

Explanation:
Data owners are responsible for data classification, and data is classified based on organizational policies. Some of the criteria commonly used for data classification include:

Type: Specifies the type of data, including whether it has personally identifiable information (PII), intellectual property (IP), or other sensitive data protected by corporate policy or various laws.
Sensitivity: Sensitivity refers to the potential results if data is disclosed to an unauthorized party. The Unclassified, Confidential, Secret, and Top Secret labels used by the U.S. government are an example of sensitivity-based classifications.
Ownership: Identifies who owns the data if the data is shared across multiple organizations, departments, etc.
Jurisdiction: The location where data is collected, processed, or stored may impact which regulations apply to it. For example, GDPR protects the data of EU citizens.
Criticality: Criticality refers to how important data is to an organization’s operations.
30
Q

Which storage type is used by virtual machines as their local drive for processing purposes?

A. Ephemeral
B. Block storage
C. Unstructured storage
D. Raw storage

A

A. Ephemeral

Explanation:
Temporary storage and data are stored in ephemeral storage solely for processing purposes. Ephemeral storage is not intended to provide long-term data storage. Ephemeral storage is similar to Random Access Memory (RAM) and other non-permanent storage technologies. When a VM shuts down, anything stored in ephemeral storage will be lost.

Raw storage refers to the actual drive such as a Solid State Drive (SSD) or a Hard Disk Drive (HDD). It is not accessed by a virtual machine directly. It will be organized into structured or unstructured storage.

Block storage is another name for structured storage. It is a way to organize the drive space into blocks of space for specific virtual machines or applications. The block could appear as a volume, all depending on the actual allocation by the cloud provider.

Unstructured storage is another name for object storage. Objects are files and can include word documents, spreadsheets, databases stored as a file, and even virtual machine images.

31
Q

What is the final step in deploying a newly upgraded application into production?

A. Deployment management
B. Service level management
C. Continuity management
D. Configuration management

A

A. Deployment management

Explanation:
Deployment management includes moving new or changed hardware or software to production.

Configuration management involves managing configuration items or parameters. The question is about an upgraded application. It does not mention the configuration of it, so deployment management is a better fit to answer the question.

Continuity management involves maintaining availability during a disaster.

Service level management is about setting clear business targets for service performance. There is no mention of a service level in the question.

32
Q

Which of the following involves verifying that the software has completed all of its required tests and manages the logistics of moving it to the next step?

A. Deployment Management
B. Release Management
C. Configuration Management
D. Change Management

A

B. Release Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization's security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential progress.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
33
Q

An organization has just completed the design phase of developing their Business Continuity and Disaster Recovery (BC/DR) plan. What is the next step for this organization?

A. Revise
B. Test the plan
C. Assess risk
D. Implement the plan

A

D. Implement the plan

Explanation:
The steps of developing a BC/DR plan are as follows: define scope, gather requirements, analyze, assess risk, design, implement, test, report, and finally, revise. Once an organization has completed the design phase, they are ready to implement their BC/DR plan. Even though the plan has already gone through design, it will likely require some changes (both technical and policy-wise) during implementation. The key here is that the word implement is used in many different ways. To people who work in the production environment, it means that "it" is placed into the production environment, whatever "it" is. However, when dealing with BC/DR, the alternate site or cloud must be built before it can be tested, which hopefully all occurs before it is needed.

34
Q

Which of the following BEST describes the “create” phase of the cloud data lifecycle?

A. The creation of new or the alteration of existing content
B. The creation or modification of content stored onto a solid state drive (SSD)
C. The creation of new content
D. The creation of new content stored on a hard disk drive (HDD)

A

A. The creation of new or the alteration of existing content

Explanation:
The Cloud Security Alliance (CSA) defined the create phase of the data lifecycle as the creation of new or the alteration of existing content in their guidance 4.0 document. This exam is a joint venture between (ISC)2 and the CSA, so it is worth knowing what the CSA says. If you disagree with this definition, that is fine, but know that the CSA says this even though most people would put the alteration of content in the use phase. If you know both views, it will be possible to work through exam questions.

If data is stored on an HDD or SSD, it has moved from the create phase into the store phase. The question only involves the create phase.

35
Q

In log management, what defines which categories of events are and are NOT written into logs?

A. Quality level
B. Retention level
C. Transparency level
D. Clipping level

A

D. Clipping level

Explanation:
Many systems and apps allow you to customize what data is written to log files based on the importance of the data. The clipping level determines which events, such as user authentication events, informational system messages, and system restarts, are written in the logs and which are ignored. Clipping levels are used to ensure that the correct logs are being accounted for. They are commonly called thresholds.

Transparency is the visibility of something. Quality is how good something is. Retention is holding on to something. None of those three terms quite fits with the word level. So, this question is mainly about the term clipping level.
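Python's standard logging module implements the same threshold idea and makes a convenient illustration of a clipping level:

    # Minimal sketch: the logging level below acts like a clipping level --
    # a threshold below which events are not written.
    import logging

    logging.basicConfig(level=logging.WARNING)  # the "clipping level"/threshold

    logging.info("User viewed their dashboard")  # below threshold: ignored
    logging.warning("Repeated failed logins")    # at threshold: written
    logging.error("System restart required")     # above threshold: written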

36
Q

Which of the following is NOT one of the three main objectives of IRM?

A. Provisioning
B. Enforcement
C. Access Models
D. Data Rights

A

B. Enforcement

Explanation:
Information rights management (IRM) involves controlling access to data, including implementing access controls and managing what users can do with the data. The three main objectives of IRM are:

Data Rights: Data rights define what users are permitted to do with data (read, write, execute, forward, etc.). It also deals with how those rights are defined, applied, changed, and revoked.
Provisioning: Provisioning is when users are onboarded to a system and rights are assigned to them. Often, this uses roles and groups to improve the consistency and scalability of rights management, as rights can be defined granularly for a particular role or group and then applied to everyone that fits in that group.
Access Models: Access models take the means by which data is accessed into account when defining rights. For example, data presented via a web application has different potential rights (read, copy-paste, etc.) than data provided in files (read, write, execute, delete, etc.).

Enforcement is not a main objective of IRM.

37
Q

Which of the following is TRUE regarding virtualization?

A. Virtual images are susceptible to attacks only when they are online and running
B. It’s more important to secure the virtual images than the management plane in a virtualized environment
C. Virtual images are susceptible to attacks whether they are running or not
D. The most important component to secure in a virtualized environment is the hypervisor

A

C. Virtual images are susceptible to attacks whether they are running or not

Explanation:
Virtual images are susceptible to attacks, even when they are not running. Due to this, it’s extremely important to ensure the security of where the images are housed.

Ensuring that the management plane and the hypervisor are secured is the first step to ensuring the virtual images are secure. The management plane is the most important component to secure first because a compromise of the management plane would lead to a compromise of the entire environment.

Hypervisor security is critical, but the management plane is arguably more important. If the management plane is compromised, then everything a corporation has built can be deleted in a moment. If a hypervisor is compromised, it could cause problems for all customers of a cloud provider. It is, arguably, more likely that the management plane is a target.

38
Q

A DLP solution is inspecting the contents of an employee’s email. What stage of the DLP process is it MOST likely at?

A. Discovery
B. Mapping
C. Monitoring
D. Enforcement

A

C. Monitoring

Explanation:
Data loss prevention (DLP) solutions are designed to prevent sensitive data from being leaked or accessed by unauthorized users. In general, DLP solutions consist of three components:

Discovery: During the Discovery phase, the DLP solution identifies data that needs to be protected. Often, this is accomplished by looking for data stored in formats associated with sensitive data. For example, credit card numbers are usually 16 digits long, and US Social Security Numbers (SSNs) have the format XXX-XX-XXXX. The DLP will identify storage locations containing these types of data that require monitoring and protection.
Monitoring: After completing discovery, the DLP solution will perform ongoing monitoring of these identified locations. This includes inspecting access requests and data flows to identify potential violations. For example, a DLP solution may be integrated into email software to look for data leaks or monitor for sensitive data stored outside of approved locations.
Enforcement: If a DLP solution identifies a violation, it can take action. This may include generating an alert for security personnel to investigate and/or block the unapproved action.

Mapping is not a stage of the DLP process.

39
Q

Alison is concerned that a malicious individual has gained access to her online health account, in which her mental health history is listed. If this is true, what regulation is the health company in violation of?

A. Health Insurance Portability and Accountability Act (HIPAA)
B. Federal Information Security Management Act (FISMA)
C. General Data Protection Regulation (GDPR)
D. Personal Information Protection and Electronic Documents Act (PIPEDA)

A

A. Health Insurance Portability and Accountability Act (HIPAA)

Explanation:
HIPAA is a U.S. regulation that demands the protection of Protected Health Information (PHI), which includes mental health information, physical health information, medical history, and test and lab results, as well as a number of other items.

GDPR is a European Union (EU) regulation that requires the protection of personal data. Personal data is often referred to as Personally Identifiable Information (PII) outside of the EU and GDPR. It could include health information, but HIPAA is a more direct fit for the question.

FISMA is a U.S. law that requires U.S. government agencies to build and maintain information security programs.

PIPEDA is a Canadian law that requires the protection of personal data.

40
Q

Which of the following is a strategy for maintaining operations during a business-disrupting event?

A. Operational continuity plan
B. Disaster recovery plan
C. Business continuity plan
D. Ongoing operations plan

A

C. Business continuity plan

Explanation:
A business continuity plan is a strategy for maintaining operations during a business-disrupting event. A disaster recovery plan is a strategy for restoring normal operations after such an event.

Ongoing operations and operational continuity plans are fabricated terms.

41
Q

When a company is worried about its website being spoofed by a bad actor (for example, a bank worried that its users will be redirected to a bad actor's site), it needs to concern itself with the Domain Name System Security Extensions (DNSSEC). What feature of DNSSEC helps customers know they are connecting to their bank and not a bad actor's version?

A. Encrypting the DNS record for confidentiality and providing proof of the source of the record
B. Encrypting the DNS record for confidentiality purposes to protect the source of the record
C. Protecting the availability of DNS information with a second record that the first can be validated against
D. Having the DNS record digitally signed to prove the source of the record

A

D. Having the DNS record digitally signed to prove the source of the record

Explanation:
The Domain Name System Security Extensions (DNSSEC) are a set of specifications primarily aimed at reinforcing the integrity of DNS. DNSSEC provides cryptographic authentication of DNS data using digital signatures.

DNSSEC protects neither the confidentiality nor the availability of records. The digital signature is used to provide origin authentication, data integrity, and authenticated denial of existence.
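
The underlying mechanism is an ordinary digital signature. Below is a minimal Python sketch of the sign-and-verify idea using the cryptography library; the record content is illustrative, and real DNSSEC distributes keys and signatures through DNSKEY and RRSIG records rather than code like this.

from cryptography.hazmat.primitives.asymmetric import ed25519

# The zone operator signs the record data with its private key.
private_key = ed25519.Ed25519PrivateKey.generate()
record = b"bank.example. 3600 IN A 203.0.113.10"
signature = private_key.sign(record)

# A resolver verifies the signature against the zone's published public key;
# verify() raises InvalidSignature if the record was altered or forged.
public_key = private_key.public_key()
public_key.verify(signature, record)  # no exception: the record is authentic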

42
Q

Which of the following is NOT an example of an event attribute that could be used to uniquely identify a particular user in an event log?

A. IP address
B. Process ID
C. GUID
D. Username

A

A. IP address

Explanation:
An event is anything that happens on an IT system, and most IT systems are configured to record these events in various log files. When implementing logging and event monitoring, event logs should include the following attributes to identify the user:

User Identity: A username, user ID, globally unique identifier (GUID), process ID, or other value that uniquely identifies the user, application, etc. that performed an action on a system.
IP Address: The IP address of a system can help to identify the system associated with an event, especially if the address is a unique, internal one. With public-facing addresses, many systems may share the same address.
Geolocation: Geolocation information can be useful to capture in event logs because it helps to identify anomalous events. For example, a company that doesn’t allow remote work should have few (if any) attempts to access corporate resources from locations outside the country or region.

Of the attributes listed, the IP address is the one that cannot uniquely identify a particular user: as noted above, it identifies a system, and many users or systems may share a single address.
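
To make these attributes concrete, here is a minimal sketch of a structured (JSON) log event; the field names and values are illustrative, not from any particular logging standard.

import json, uuid
from datetime import datetime, timezone

event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user_id": str(uuid.uuid4()),   # GUID that uniquely identifies the user
    "username": "jsmith",           # hypothetical account name
    "process_id": 4182,             # ID of the process that performed the action
    "source_ip": "10.20.30.40",     # internal address, so likely unique
    "geo": "US-NY",                 # coarse geolocation for anomaly detection
    "action": "file_download",
}
print(json.dumps(event))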

43
Q

Chinelo has been working with the legal department to ensure that they are in compliance with appropriate laws. The business that he works for is a financial services company. As they are located in the US, which law must they be in compliance with?

A. Service Organization Control (SOC) 1® Type II
B. Sarbanes-Oxley (SOX)
C. Federal Information Security Management Act (FISMA)
D. Basel III

A

B. Sarbanes-Oxley (SOX)

Explanation:
SOX is an act passed to protect shareholders and stakeholders from improper practices and errors within financial reporting procedures. According to Cornell Law, SOX requires "public companies to adopt internal procedures for ensuring accuracy of financial statements and make the CEO and CFO directly responsible for the accuracy, documentation, and submission of the financial reports and internal control structure." A CEO who reports incorrectly can end up with jail time, as happened to Bernie Ebbers of WorldCom.

Basel III is an international banking regulatory framework developed by the Basel Committee on Banking Supervision.

FISMA is a U.S. government act requiring U.S. government agencies to create and maintain an information security program.

SOC 1® Type II is a type of audit report. As defined by the American Institute of Certified Public Accountants (AICPA), it is about reporting on an Examination of Controls at a Service Organization Relevant to User Entities’ Internal Control Over Financial Reporting (SOC 1®). Type II means that the audit reviews records for a period of time, for example, 1 week, 3 weeks, or 3 months. There is no defined period of time or a minimum amount of time because it is business and audit specific.

44
Q

A bank has to think very carefully about what systems, applications, and data they can store in a public cloud provider because of the laws and regulations they must comply with. Their concern is that if a bad actor were to gain access to the environment, data could be altered or stolen. If that occurs, they need the ability to prosecute the bad actors and hold them accountable for their misdeeds. The bank security staff understands that collecting evidence from a cloud is different from collecting evidence from a traditional data center.

What is the MAIN reason that eDiscovery is typically easier in a traditional data center than it is in a cloud environment?

A. There are no tools available to perform eDiscovery in a cloud environment
B. Cloud providers are often not willing to work with lawyers on legal matters
C. Organizations don’t own any of the data they store in the cloud
D. Systems aren’t able to be simply physically isolated and preserved in a cloud environment

A

D. Systems aren’t able to be simply physically isolated and preserved in a cloud environment

Explanation:
When eDiscovery must be done within a traditional data center, it’s possible to physically isolate the system and preserve the data. In a cloud environment, however, many cloud customers use the same hardware, so it’s not possible to physically isolate a system and preserve it. Instead, special measures must be taken to achieve eDiscovery in a cloud environment.

There are tools to perform eDiscovery. For example, hypervisors have a function called Virtual Machine Introspection (VMI) to pull the information from memory for a specific running virtual machine.

Cloud providers are willing to work with lawyers on legal matters. The bigger providers probably have lawyers on staff.

A company always owns its data. There could be an exception somewhere, but if the customer reads the contract and discovers that they are turning over ownership of their data to the cloud provider, they are likely not to use it.

45
Q

Under the Federal Information Security Management Act (FISMA), all U.S. government agencies are required to conduct risk assessments that align with which framework?

A. International Organization for Standardization (ISO) 31000
B. Federal Risk and Authorization Management Program (FedRAMP)
C. National Institute of Standards and Technology (NIST) Risk Management Framework (RMF)
D. National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF)

A

C. National Institute of Standards and Technology (NIST) Risk Management Framework (RMF)

Explanation:
The NIST Risk Management Framework acts as a guide for risk management practices used by United States federal agencies.

NIST developed the NIST CSF to assist commercial enterprises in developing and executing security strategies.

FedRAMP is a U.S. government program, based on NIST SP 800-53, that standardizes security assessment and authorization for cloud service providers.

ISO 31000, "Risk management - Guidelines," is a standard to be used during the risk management process.

46
Q

A media company has a hybrid cloud: some servers in their own data center use cloud-style virtualization to create a highly recoverable and redundant private cloud, and they also use a public cloud provider for their data storage. They have just discovered an Indicator of Compromise (IoC) within the application on their servers. It appears that a bad actor has infiltrated the environment and has been copying their data, then altering and deleting what they leave behind.

What security appliances and features will the team need in order to respond?

A. A Security Information Event Manager (SIEM) to correlate the logs and send the event information to the syslog server
B. Firewalls and servers that record the events from the Intrusion Detection Systems (IDS) that submit them to the syslog server
C. A Security Information Event Manager (SIEM) to collect logs from the network and correlate the events to enable the incident response team
D. Intrusion Detection Systems (IDS) that collect logs from the firewalls and servers that submit the events to the syslog server

A

C. A Security Information Event Manager (SIEM) to collect logs from the network and correlate the events to enable the incident response team

Explanation:
A Security Information Event Manager (SIEM) system is used to collect logs and store them in a centralized location. Having logs in one centralized location can make it easier to troubleshoot events as they occur. In addition, having the logs in a centralized location and not just on the device they originate from can prevent the risk of log manipulation. The SIEM will also correlate the events that produce the Indications of Compromise (IoC). The Incident Response team can then respond to the intrusion.

The firewalls, servers, IDS, and more send logs to the syslog server, which then sends the logs to the SIEM for analysis.
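
As a small illustration of the log flow described above, here is a minimal Python sketch of a host forwarding events to a central syslog server, which the SIEM would then ingest and correlate; the hostname and message are hypothetical.

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
# Send events over UDP to the central syslog server (hypothetical host).
logger.addHandler(SysLogHandler(address=("syslog.example.internal", 514)))

logger.info("login_failure user=jsmith src=198.51.100.7")  # one event among many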

47
Q

Which phase of the SDLC is the responsibility of the QA team?

A. Testing
B. Design
C. Development
D. Deployment

A

A. Testing

Explanation:
The Software Development Lifecycle (SDLC) describes the main phases of software development from initial planning to end-of-life. While definitions of the phases differ, one commonly used description includes these phases:

Requirements: During the requirements phase, the team identifies the software's role and the applicable requirements. This includes business, functional, and security requirements.
Design: During this phase, the team creates a plan for the software that fulfills the previously identified requirements. Often, this is an iterative process as the design moves from high-level plans to specific ones. Also, the team may develop test cases during this phase to verify the software against requirements.
Development: This phase is when the software is written. It includes everything up to the actual build of the software, and unit testing should be performed regularly throughout the development phase to verify that individual components meet requirements.
Testing: After the software has been built, it undergoes more extensive testing. The QA team should verify the software against all test cases and ensure that they map back to and fulfill all of the software’s requirements.
Deployment: During the deployment phase, the software moves from development to release. During this phase, the default configurations of the software are defined and reviewed to ensure that they are secure and hardened against potential attacks.
Operations and Maintenance (O&M): The O&M phase covers the software from release to end-of-life. During O&M, the software should undergo regular monitoring, testing, etc., to ensure that it remains secure and fit for purpose.
48
Q

Yamin has been hired by a large pharmaceutical company as their Information Security Manager. She has been working with the Information Technology department on their self-managed private cloud data center. They are going to add 200 servers to their data center this coming weekend.

Which management process is concerned with adding these new devices?

A. Continuity management
B. Release management
C. Deployment management
D. Change management

A

C. Deployment management

Explanation:
Information Technology Infrastructure Library (ITIL) Deployment Management is a key process within IT Service Management (ITSM) that focuses on the planning, coordination, and implementation of new or updated IT services and systems into the production environment. It ensures that the deployment process is efficient and controlled, and minimizes disruptions to ongoing operations. Because there are 200 servers in an actual physical data center, this is the best answer.

Information Technology Infrastructure Library (ITIL) Release Management is a key process within IT Service Management (ITSM) that focuses on the planning, coordination, and control of releases of new and changed services and features. It aims to ensure that changes to IT services and systems are delivered smoothly, with minimal disruption to ongoing operations.

Information Technology Infrastructure Library (ITIL) Change Management is a process within IT Service Management (ITSM) that focuses on controlling and managing changes to IT systems and services in a structured and coordinated manner. It aims to minimize risks, disruptions, and negative impacts associated with changes while ensuring that changes are implemented efficiently and effectively. This is a bit more generic, so if deployment management were not an answer option, this could have worked.

Information Technology Infrastructure Library (ITIL) Continuity Management, also known as IT Service Continuity Management (ITSCM), is a process within IT Service Management (ITSM) that focuses on ensuring the availability and continuity of IT services in the event of a disruption or disaster. It aims to minimize the impact of disruptions on business operations and ensure timely recovery.

49
Q

The American Institute of Certified Public Accountants (AICPA) Service Organization Controls (SOC) 2 is aimed at protecting the five trust principles. They are:

A. Security, Confidentiality, Processing Integrity, Availability, and Privacy
B. Security, Confidentiality, Integrity, Availability, and Classification
C. Availability, Processing Integrity, Sensitivity, Privacy, and Non-repudiation
D. Confidentiality, Integrity, Availability, Privacy, and Sensitivity

A

A. Security, Confidentiality, Processing Integrity, Availability, and Privacy

Explanation:
The AICPA defines the five trust principles as Security, Confidentiality, Processing Integrity, Availability, and Privacy.

Sensitivity and classification are subjects that businesses do need to concern themselves with. A classification, such as secret, indicates the level of sensitivity of a piece of data. This tells employees how they should protect that information, which would be defined within corporate policy.

Non-repudiation means that someone cannot plausibly deny the information or evidence showing that they did something. It is usually accomplished first by using asymmetric cryptographic systems that enable the user to digitally sign something that they created, such as a contract or an email. A digital signature by itself is not enough, though. There should also be the following:

Public Key Infrastructure (PKI)
Badge or otherwise controlled doors
Logins to the computer and/or network
Video cameras around the businesses
etc.
50
Q

A smart refrigerator that can send a grocery list to the owner via a push notification to their mobile phone is an example of what type of technology?

A. Containers
B. Virtual machines
C. Blockchain
D. Internet of Things (IoT)

A

D. Internet of Things (IoT)

Explanation:
The Internet of Things (IoT) refers to non-traditional devices (such as lamps, refrigerators, or machines in a manufacturing environment) having access to the internet to perform various processes.

Virtual machines are constructed within the cloud through the use of hypervisors on top of servers.

Containers package applications in a lighter-weight way than hypervisor-based virtual machines.

Blockchain is a technology that creates an immutable (unchangeable) record. It is used in things like cryptocurrency. It is possible to sell anything, though, and use the blockchain as a permanent record of the transaction.

51
Q

For the organization's cloud environment, they are using a Software as a Service (SaaS) Identity and Access Management (IAM) manager. Users will use single sign-on (SSO) across both the cloud and on-premise IAM systems. Given the risks this may present, what is an important component of the organization's cloud IAM strategy?

A. Cloud vendor due diligence
B. Vendor’s policies and processes
C. Cloud audit controls
D. User education

A

D. User education

Explanation:
In any sort of Identity and Access Management (IAM) system, whether through a SaaS provider solution or through federation with the organization's on-premise IAM manager, there are risks. In either case, user education is an important component of the organization's cloud IAM strategy.

It is important to always do vendor due diligence before choosing a vendor. However, the question is asking you to add to the IAM strategy. User education is a better fit for that. The question also says “they are using,” which implies the decision has already been made. If that is the case, it is a little late for due diligence.

Cloud audit controls would be an odd choice. Is it speaking to the cloud provider’s audits? Or the customer’s audits of their cloud? Either way, it is not something to add to the IAM strategy.

The vendor’s policies and processes are probably not within the reach or view of the customer. The customer needs to worry about their own policies and processes, especially those about the IAM.

52
Q

Which major piece of legislation focuses on the security controls and confidentiality of medical records?

A. General Data Protection Regulation (GDPR)
B. Gramm Leach & Bliley Act (GLBA)
C. Health Insurance Portability and Accountability Act (HIPAA)
D. Sarbanes-Oxley (SOX)

A

C. Health Insurance Portability and Accountability Act (HIPAA)

Explanation:
HIPAA is a major piece of US legislation from 1996 (you do not need to memorize years for the exam) focused on protecting Protected Health Information (PHI). HIPAA focuses on the security controls and confidentiality of medical records rather than on specific technologies being used.

GDPR is from the European Union (EU) and protects personal data. Most of the world refers to this type of data as Personally Identifiable Information (PII), but the EU clarifies that it is “any information relating to an identifiable natural person (data subject).” This could include health data, but HIPAA is specifically on that topic.

Sarbanes-Oxley requires publicly traded businesses to report their financial status accurately. The act is a direct result of the failure to do so at Enron.

According to the Federal Trade Commission (FTC) in the US, GLBA “requires financial institutions—companies that offer consumers financial products or services like loans, financial or investment advice, or insurance—to explain their information-sharing practices to their customers and to safeguard sensitive data.” This would be PII data.

53
Q

Yun is working with the application developers as they move through development and into operations with their new application. They are looking to add something to the application that can allow the application to protect itself.

Which of the following is a security mechanism that allows an application to protect itself by responding and reacting to ongoing events and threats?

A. Vulnerability scanning
B. Runtime Application Self-Protection (RASP)
C. Static Application Security Testing (SAST)
D. Dynamic Application Security Testing (DAST)

A

B. Runtime Application Self-Protection (RASP)

Explanation:
Runtime Application Self-Protection (RASP) is a security mechanism that runs on the server and starts when the application starts. RASP allows an application to protect itself by responding and reacting to ongoing events and threats in real time. RASP can monitor the application, continuously looking at its own behavior. This allows the application to detect malicious input or behavior and respond accordingly.

Dynamic Application Security Testing (DAST) is a type of security test that looks at the application in a dynamic or running state. This means that the tester can only use the application. They do not have the source code for the application. This can be used to test if the application behaves as needed or if it can be used maliciously by a bad actor.

Static Application Security Testing (SAST) is a type of test where the application is static or still. That means the application is not in a running state, so what the test has knowledge of and access to is the source code.

Vulnerability scanning is a test that is run on systems to ensure that the systems are properly hardened, and there are not any known vulnerabilities in the system.

54
Q

An information security manager works for a large pharmaceutical company on an Infrastructure as a Service (IaaS) project: a huge effort to move from a traditional data center to a cloud environment. Among the many concerns they are addressing, the current issue is how they are going to get all their data to the cloud. It is critical to plan for the exit before entry into the cloud: if the data goes in, it must have a way back out.

Which of the following is their specific concern?

A. Scalability
B. Interoperability
C. Reversibility
D. Portability

A

C. Reversibility

Explanation:
Many people believe that moving from a traditional data center to a cloud environment is a simple, easy, and seamless transition, but this is a common misconception. A great deal of work and rework must go into moving systems to the cloud.

Reversibility is the ability to get out of a cloud provider: the customer retrieves all of its data and artifacts, and the cloud provider appropriately deletes all remaining artifacts from its systems.

Portability is the ability to move data to another environment without having to re-enter it.

Reversibility and portability are similar but have nuanced differences. The key to the question is the word exit: reversibility is about undoing the move into the cloud. If the question were about moving to a different cloud provider, portability would be the critical concern for the data. These definitions come from ISO/IEC 17788, a critical cloud-definitions document. It is a publicly available ISO standard, which is unusual, and a good document to take a look at.

Scalability is the ability of systems in the cloud to expand the amount of CPU, RAM, storage, and network access as the customer needs more.

Interoperability is the ability of two different systems to exchange information and mutually use it.

55
Q

An attacker was able to gain access to a cloud environment due to a lack of security controls in place. Once they gained access to that environment, they used those resources to perform a distributed denial of service attack against another organization. What is this type of threat known as?

A. Insecure deserialization
B. Abuse or nefarious use of cloud services
C. Shared technology issues
D. Insufficient logging and monitoring

A

B. Abuse or nefarious use of cloud services

Explanation:
Abuse or nefarious use of cloud services is listed by the Cloud Security Alliance among the top threats to cloud environments in both the Treacherous 12 and the Egregious 11 (nefarious means wickedly evil). It occurs when an attacker is able to launch attacks from a cloud environment, either by gaining access to a poorly secured cloud or by using a free trial of a cloud service.

Shared technology issues speak to the concern that multiple customers/tenants are using the same physical server in the cloud. If the hypervisor or container system does not keep them properly separate, they could see each other’s data, or worse.

Insufficient logging and monitoring means exactly what it says: many customers should be collecting logs and monitoring them better than they are.

Insecure deserialization appears on the OWASP Top 10 2017 list; on the 2021 list, it was folded into Software and Data Integrity Failures. Insecure deserialization can allow remote code execution if an application accepts serialized objects. Serialization is used for Remote Procedure Calls (RPC), HTTP cookies, caching, and more.

56
Q

Mutual due diligence and security reviews are MOST likely when setting up a relationship with which of the following?

A. Vendors
B. Customers
C. Regulators
D. Partners

A

D. Partners

Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:

Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect the brand image.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.

Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.

57
Q

The information security staff has been building the management plan for operations in their cloud Infrastructure as a Service (IaaS) environment, making sure they have covered the fundamentals. They have planned the scheduling of when, how, and where any changes are made, and they know that orchestration is one of the most important aspects of managing a virtual environment.

What else are they missing?

A. Repudiation
B. Maintenance
C. Patching
D. Scanning

A

B. Maintenance

Explanation:
The management plan for operations in a cloud environment includes scheduling, orchestration, and maintenance. In a cloud environment, it’s vital to ensure that careful planning and management are put in place to operate systems effectively and efficiently for the business.

Repudiation is the ability to deny an action. For example, a user can deny having taken an action because they logged in with a shared group ID and password.

Patching is just one part of maintenance. There is more to taking care of systems than patching; configuration management, among other things, would also be part of it.

Scanning would be an action that needs to be taken to understand how things are configured and what the patch levels are.

58
Q

The Unified Extensible Firmware Interface (UEFI) replaces the traditional BIOS and incorporates numerous enhancements. What is the theoretical maximum capacity of a hard drive that UEFI can address?

A. 4.9 zettabytes
B. 10 zettabytes
C. 9.4 zettabytes
D. 4.4 zettabytes

A

C. 9.4 zettabytes

Explanation:
UEFI is a modern firmware interface that serves as a replacement for the traditional Basic Input/Output System (BIOS) found in older computer systems. UEFI provides a standardized interface between the operating system, firmware, and hardware components of a computer. It supports booting off of a drive that is, theoretically, 9.4 zettabytes in size.
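
The 9.4 zettabyte figure can be derived from the 64-bit logical block addressing of the GUID Partition Table (GPT) used by UEFI, assuming traditional 512-byte sectors; a quick illustrative calculation in Python:

max_bytes = 2**64 * 512      # 2^64 addressable sectors of 512 bytes each
print(max_bytes)             # 9444732965739290427392
print(max_bytes / 1e21)      # ~9.44, i.e., roughly 9.4 zettabytes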

59
Q

A pharmaceutical corporation is currently designing their data structure within the cloud. They have a lot of data regarding their formulas, past and present, for their drugs that they have developed. The data that they need to store varies in size and format. This data would be described as which of the following?

A. Semistructured data
B. Unstructured data
C. Correlated data
D. Structured data

A

B. Unstructured data

Explanation:
Unstructured data is data that is commonly referred to as big data. The five characteristics of big data are volume (size), variety (format), velocity, veracity, and variability.

Structured data is predictable in size and format. It fits very nicely within databases.

Semistructured data is stored in the format of a database but can hold unpredictably sized data as well. A field or attribute that accepts a variable amount of information is unstructured in nature, which makes the overall format semistructured.

Correlated data have a mutual relationship with each other. It could be used to describe the information within a table of a database.

60
Q

Riki and her team have been working with a Managed Service Provider (MSP) regarding their Infrastructure as a Service (IaaS) deployment. They are working to move an on-premise data center into the cloud. It is essential that they are clear about their availability expectations. These requirements should be spelled out in which part of the contract?

A. Business Associate Agreement (BAA)
B. Privacy Level Agreement (PLA)
C. Service Level Agreement (SLA)
D. Master Services Agreement (MSA)

A

C. Service Level Agreement (SLA)

Explanation:
The Service Level Agreement (SLA) is made between an organization and a third-party vendor (such as a cloud provider). Availability expectations and needs should be addressed in the SLA.

MSAs define the core responsibilities of each company within a contract. For example, under the MSA, the provider could be responsible for supplying the cloud environment and maintaining the physical data center while the customer builds and manages their IaaS. (It is not necessary for the relationship to be exactly this.)

The PLA spells out the types of personal data that would be stored and processed within the cloud and the customer's expectations for how the cloud provider will protect that data. A BAA is essentially the same type of document, but it is specific to the US HIPAA regulation.

61
Q

Nakia has been working with the developers to test the application that they are building and will be moving to operations soon. They have been reviewing the code while moving through the running application. What type of test are they performing?

A. Dynamic Application Security Test (DAST)
B. Runtime Application Self-Protection (RASP)
C. Interactive Application Security Test (IAST)
D. Static Application Security Test (SAST)

A

C. Interactive Application Security Test (IAST)

Explanation:
IAST assesses the source code while the application is being used to understand exactly what is happening when a vulnerability is found.

SAST just analyzes the code. There is no running application when SAST is performed.

DAST tests the running application. The source code is not available to the testers.

RASP is security built into the application that allows it to identify threats at runtime.

62
Q

Which of the following techniques uses context and the meaning of text to identify sensitive data in unstructured data?

A. Hashing
B. Pattern Matching
C. Schema Analysis
D. Lexical Analysis

A

D. Lexical Analysis

Explanation:
When working with unstructured data, there are a few different techniques that a data discovery tool can use:

Pattern Matching: Pattern matching looks for data formats common to sensitive data, often using regular expressions. For example, the tool might look for 16-digit credit card numbers or numbers structured as XXX-XX-XXXX, which are likely US Social Security Numbers (SSNs).
Lexical Analysis: Lexical analysis uses natural language processing (NLP) to analyze the meaning and context of text and identify sensitive data. For example, a discussion of “payment details” or “card numbers” could include a credit card number.
Hashing: Hashing can be used to identify known-sensitive files that change infrequently. For example, a DLP solution may have a database of hashes for files containing corporate trade secrets or company applications.

Schema analysis can’t be used with unstructured data because only structured databases have schemas.
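
As a small illustration of the hashing technique above, here is a minimal Python sketch that flags a file whose SHA-256 digest matches a registered known-sensitive document; the digest in the set is a hypothetical placeholder.

import hashlib

KNOWN_SENSITIVE_DIGESTS = {
    # Hypothetical digest of a registered trade-secret document
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_sensitive(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_SENSITIVE_DIGESTS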

63
Q

A team is working to create an application for smartphones that allows users to charge random purchases to a stored credit card. To ensure that the user's card data is protected as well as possible, so that their company is compliant with the Payment Card Industry Data Security Standard (PCI DSS), they are looking for a technology to minimize the exposure of card details.

Which technology would minimize the number of locations where the card data exists?

A. Tokenization
B. Obfuscation
C. Encryption
D. Masking

A

A. Tokenization

Explanation:
Tokenization is the practice of replacing what would otherwise be sensitive data with a random or opaque value. The card number is replaced with a random number that can be stored on the phone and transmitted to the bank to make a purchase. The actual card number is accessible to the bank by looking up the random number in a special, secured database.

Encryption only alters the card number for storage and transmission. The card number will still be present on the phone and then transmitted to the bank, even though it will be protected from view through encryption.

Obfuscation means to confuse: altering the format of the data to make the sensitive data (the card number in this case) more difficult to read. Encryption is one type of obfuscation.

Masking is to cover, such as the stars or dots you see when you type your password (hopefully on all platforms).

Encryption, obfuscation, and masking do not remove sensitive data. They disguise it in some way. Tokenization replaces the value completely with a random number.

The short version: Tokenization replaces. Obfuscation confuses. Masking covers.
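
A minimal Python sketch of the idea, with an in-memory dictionary standing in for the bank's secure token database:

import secrets

vault: dict[str, str] = {}   # stands in for the bank's secure token database

def tokenize(card_number: str) -> str:
    # Replace the card number with a random token; only the vault keeps the mapping.
    token = secrets.token_hex(8)
    vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    # Only the vault owner (the bank) can recover the real card number.
    return vault[token]

token = tokenize("4111111111111111")
print(token)              # random value; safe to store on the phone and transmit
print(detokenize(token))  # 4111111111111111, recovered on the bank side only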

64
Q

The Business Continuity/Disaster Recovery (BC/DR) team has been working for months to update their corporate DR plan. The PRIMARY goal of a DR test is to ensure which of the following?

A. Each administrator knows all the steps of the plan
B. Management is satisfied with the BC/DR plan
C. All production systems are brought back online
D. Recovery Time Objective (RTO) goals are met

A

D. Recovery Time Objective (RTO) goals are met

Explanation:
With any Business Continuity and Disaster Recovery (BCDR) test, the main goal and purpose is to ensure that Recovery Time Objective (RTO) and Recovery Point Objective (RPO) goals are met. When planning the test, staff should consider how to properly follow the objectives and decisions made as part of RPO and RTO analysis.

It is unlikely that all production systems will be brought back online in the event of a disaster. If the plan is simply switching the cloud setup from one region to another, all systems could be brought online. However, nothing in the question says that all production systems are in the cloud or what type of disaster this is. So, in traditional BC/DR planning, it is not expected that all production systems will be brought back online in the alternate configuration.

Management does need to be satisfied with the plans that are built, but the question is about the goal of the test. The test needs to show that the plan will work. That should make management happy. The immediate answer to the question is to match the RTO goals.

Administrators do not need to know every step of the plan. All an administrator needs to know is what applies to them, which would likely not be all the steps.

65
Q

Rafaela has been working to determine the correct configuration for a Platform as a Service (PaaS) server-based virtual machine that will be used to crunch numbers for their sales department. She is concerned that a bad actor may gain access to this system and use it for malicious purposes. Her second major concern is managing the budget. She needs to configure the virtual machine with constraints.

Which of the following terms is used to describe the maximum memory or processing utilization allowed by a cloud customer?

A. Shares
B. Reservations
C. Caps
D. Limits

A

D. Limits

Explanation:
The maximum amount of memory and processing utilization allowed by a cloud customer is known as a limit.

Limits are the opposite of reservations. Reservations are used to ensure that cloud customers have the minimum amount of resources needed to run their services.

Shares are weights that determine the relative priority of customers and workloads when resources are under contention.

Caps is not a term used in cloud computing at this time.
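
A purely illustrative sketch of how the three allocation settings relate (this is not any specific cloud provider's API; the names and values are hypothetical):

vm_resources = {
    "reservation_mb": 4096,   # minimum RAM guaranteed to this VM
    "limit_mb": 16384,        # maximum RAM it may ever consume (controls cost)
    "shares": 2000,           # relative priority when the host is under contention
}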

66
Q

Which of the following forms of data encryption is MOST likely to use a user-provided password?

A. Storage-Level Encryption
B. Object-Level Encryption
C. File-Level Encryption
D. Volume-Level Encryption

A

C. File-Level Encryption

Explanation:
Data can be encrypted in the cloud in a few different ways. The main encryption options available in the cloud are:

Storage-Level Encryption: Data is encrypted as it is written to storage using keys known to/controlled by the CSP.
Volume-Level Encryption: Data is encrypted when it is written to a volume connected to a VM using keys controlled by the cloud customer.
Object-Level Encryption: Data written to object storage is encrypted using keys that are most likely controlled by the CSP.
File-Level Encryption: Applications like Microsoft Word and Adobe Acrobat can encrypt files using a user-provided password or a key controlled by an IRM solution (a minimal sketch of password-based encryption follows this list).
Application-Level Encryption: An application encrypts its own data using keys provided to it before storing the data (typically in object storage). Keys may be provided by the customer or CSP.
Database-Level Encryption: Databases can be encrypted at the file level or use transparent encryption, which is built into the database software and encrypts specific tables, rows, or columns. These keys are usually controlled by the cloud customer.
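
Here is a minimal Python sketch of file-level encryption from a user-provided password, using the cryptography library (PBKDF2 to derive a key, Fernet to encrypt). Applications such as Word and Acrobat use their own formats, so this only illustrates the concept.

import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

password = b"correct horse battery staple"   # user-provided password
salt = os.urandom(16)                        # stored alongside the ciphertext

# Derive a 32-byte key from the password, then base64-encode it for Fernet.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(password))

ciphertext = Fernet(key).encrypt(b"contents of the file")
plaintext = Fernet(key).decrypt(ciphertext)  # needs the same password and salt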
67
Q

Ivan is working with a large corporation building their server-based Platform as a Service (PaaS) implementation. There is a specific application that they are configuring, and they want the flexibility and scalability that comes with the cloud but are worried about how much the application could cost them.

What could they use to control the cost yet still allow the application to burst to a certain amount?

A. Limits
B. Authentication
C. Shares
D. Multitenancy

A

A. Limits

Explanation:
Limits and reservations are both terms referring to how resources are allocated in a cloud environment. Limits restrict how much an application, server, or service can burst, which is very useful for controlling costs. It is usually possible to override the setting if an expansion of services is legitimate and needed.

Reservations refer to the minimum amount of resources that a cloud customer is guaranteed to receive. Limits are the opposite of reservations: the maximum amount of resources that a cloud customer may utilize.

Shares refer to the prioritization of systems even in high utilization periods. Multitenancy refers to multiple cloud customers sharing the same cloud environment.

Authentication is the verification of an identity; it is authorization that grants access to resources.

68
Q

Filippa has been assessing Hardware Security Modules (HSMs) for implementation in their data center. She works for a public Cloud Service Provider (CSP), and their customers need access to such products to store their cryptographic keys securely. What standard, with four levels of certification, are HSMs certified against?

A. Federal Information Processing Standard (FIPS) 140-3
B. National Institute of Standards and Technology Special Publication (NIST SP) 800-53
C. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27001
D. Payment Card Industry Data Security Standard (PCI DSS)

A

A. Federal Information Processing Standard (FIPS) 140-3

Explanation:
The Federal Information Processing Standard (FIPS) 140-3 and the older FIPS 140-2 are standards established by the National Institute of Standards and Technology (NIST) in the United States. They define the security requirements for cryptographic modules used in various information systems, including computers, servers, and telecommunications equipment.

The primary goal of FIPS 140-3 is to ensure the security and integrity of sensitive information by specifying the criteria that cryptographic modules must meet. These modules encompass both hardware and software components involved in encryption, decryption, key management, and other cryptographic operations. There are four (1-4) certification levels.

NIST SP 800-53 is a publication by the National Institute of Standards and Technology (NIST) in the United States. It provides a comprehensive set of security controls and guidelines for federal information systems and organizations.

The Payment Card Industry Data Security Standard (PCI DSS) is a set of security standards developed by the Payment Card Industry Security Standards Council (PCI SSC) to ensure the secure handling and protection of credit card data. PCI DSS applies to any organization that processes, stores, or transmits cardholder data. This includes merchants, service providers, financial institutions, and any entity involved in the payment card ecosystem. Compliance with PCI DSS is mandatory for these organizations to ensure the security of cardholder data and prevent unauthorized access or fraud.

ISO/IEC 27001 is an international standard for Information Security Management Systems (ISMS). It provides a systematic and comprehensive approach to managing and protecting sensitive information within an organization.

69
Q

Atticus, a cloud administrator, wants to ensure the security of a virtual server running within an Infrastructure as a Service (IaaS) environment. He would like to run a program on that virtual machine that would analyze all inbound and outbound traffic.

Which of the following should this administrator use?

A. Anti-malware software
B. Host-based Intrusion Detection Software (HIDS)
C. Honeypot
D. Information Rights Management (IRM)

A

B. Host-based Intrusion Detection Software (HIDS)

Explanation:
Host-based Intrusion Detection Software (HIDS) runs on a single host and analyzes all inbound and outbound traffic for that host to detect possible intrusions.

A honeypot is an isolated system used to trick a bad actor into believing it is a production system. The attacker gets stuck (in the honey), giving the security operations team a chance to stop them before they get any further into the network.

Anti-malware software is designed to detect malicious software such as viruses, worms, and keystroke loggers. It does not analyze traffic the way an IDS does; the IDS's job is to watch all incoming and outgoing traffic, looking for an intruder.

IRM is effectively the same as Digital Rights Management (DRM). IRM/DRM controls access to content. DRM tools are more familiar: they include Kindle, iTunes, Spotify, and any other software that controls access to videos, books, music, etc. IRM tools are used within businesses for corporate content.

70
Q

You see a value like XXXX XXXX XXXX 1234 in the credit card column of a database table. Which of the following data security techniques was used?

A. Masking
B. Hashing
C. Anonymization
D. Encryption

A

A. Masking

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is the US government standard for hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number (a minimal sketch follows this list).
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
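
A minimal Python sketch of the masking technique referenced in the question:

def mask_card(card_number: str) -> str:
    digits = card_number.replace(" ", "")
    masked = "X" * (len(digits) - 4) + digits[-4:]   # keep only the last four
    # Regroup into blocks of four for display, e.g., XXXX XXXX XXXX 1234
    return " ".join(masked[i:i + 4] for i in range(0, len(masked), 4))

print(mask_card("4111 1111 1111 1234"))   # XXXX XXXX XXXX 1234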
71
Q

Which law or regulation protects the personal data of natural persons within the European Union?

A. Health Information Portability and Accountability Act (HIPAA)
B. Personal Data Protection Act No. 25,326
C. General Data Protection Regulation (GDPR)
D. Personal Information Protection and Electronic Data Act (PIPEDA)

A

C. General Data Protection Regulation (GDPR)

Explanation:
The General Data Protection Regulation, or GDPR, is a regulation and law that affects all countries in the European Union (EU) and the European Economic Area. The purpose of the GDPR is to protect data on all natural persons within the EU. If the person is within the EU or EEA and their data is collected from anywhere in the world, their data must be protected according to GDPR.

PIPEDA is a Canadian law that requires the protection of personal data.

Personal Data Protection Act No. 25,326 is a similar law in Argentina.

HIPAA is a U.S. law that requires the protection of Protected Health Information (PHI).

72
Q

Which of the following types of BCP/DRP testing poses the LEAST risk of disruption to an organization’s operations?

A. Full Test
B. Simulation
C. Tabletop Exercise
D. Parallel Test

A

C. Tabletop Exercise

Explanation:
Business continuity/disaster recovery plan (BCP/DRP) testing can be performed in various ways. Some of the main types of tests include:

Tabletop Exercises: In a tabletop exercise, the participants talk through a provided scenario. They say what they would do in a situation but take no real actions.
Simulation/Dry Run: A simulation involves working and talking through a scenario like a tabletop exercise. However, the participants may take limited, non-disruptive actions, such as spinning up backup cloud resources that would be used during a real incident.
Parallel Test: In a parallel test, the full BC/DR process is carried out alongside production systems. In a parallel test, the BCP/DRP steps are actually performed.
Full Test:  In a full test, primary systems are taken down as they would be in the simulated event. This test ensures that the BCP/DRP systems and processes are capable of maintaining and restoring operations.
73
Q

Nicole is looking into different public cloud providers for her business. The corporation is looking for the best Platform as a Service (PaaS) vendor to begin moving their servers and applications to. Nicole and her team have been looking at the different audits and certifications that each provider has pursued successfully.

What standard would they look for if they are interested in the cloud provider demonstrating compliance with international recommendations for information security?

A. ISO/IEC 27018 (International Organization for Standardization/International Electrotechnical Commission)
B. NIST SP 800-37 (National Institute of Standards and Technology Special Publication)
C. AICPA SOC 2 Type II (American Institute of Certified Public Accountants Service Organization Controls)
D. ISO/IEC 27017 (International Organization for Standardization/International Electrotechnical Commission)

A

D. ISO/IEC 27017 (International Organization for Standardization/International Electrotechnical Commission)

Explanation:
ISO/IEC 27017 is an international standard that provides guidelines and best practices for information security controls specific to cloud service providers (CSPs) and cloud-based services. It focuses on addressing the unique security risks and challenges associated with cloud computing environments.

ISO/IEC 27018 is an international standard that provides guidelines and best practices for protecting personally identifiable information (PII) in public cloud environments. It specifically addresses the privacy concerns and requirements related to the processing of PII by cloud service providers (CSPs).

AICPA SOC 2 Type II is a framework for evaluating and reporting on the controls and processes of service organizations to ensure the security, availability, processing integrity, confidentiality, and privacy of their systems and data. It is an internationally recognized standard developed by the American Institute of Certified Public Accountants (AICPA) and is commonly used to assess the trustworthiness of service providers.

NIST SP 800-37, titled “Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach,” is a publication by the National Institute of Standards and Technology (NIST) that provides guidance on implementing a risk management framework for federal information systems.

74
Q

Which of the following is NOT an example of an organization that provides cloud design patterns?

A. CSA
B. ISO/IEC
C. SANS
D. Cloud Service Providers

A

B. ISO/IEC

Explanation:
Cloud design patterns offer references for designing secure cloud environments. Some widely used cloud design patterns include:

SANS Security Principles
Well-Architected Framework (developed by individual cloud providers)
Cloud Security Alliance (CSA) Enterprise Architecture

The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) has published various standards but is not known for its cloud design patterns.

75
Q

If a corporation has sensitive data with different levels of consequences should it be exposed, it must ensure that the data is not seen by the wrong parties and must handle that data carefully. For example, a healthcare company whose sensitive data includes patient records, billing information, prescription information, and so on must take care of all of it.

If the hospital has determined that they will use three classification levels, what is the purpose of the labels on the data?

A. Protect data that can be considered sensitive or classified
B. Know all the locations within an organization where data could be stored
C. Enable the appropriate level of security controls to be applied
D. Classify data based on where it’s located within the organization

A

C. Enable the appropriate level of security controls to be applied

Explanation:
Labels assist the process of applying the appropriate security controls. If the label is on the data, it makes it easier for tools such as Data Loss Prevention (DLP) to do their job. Labels can also help the user to understand what type of data it is and how they need to protect that data.

Labels do not “protect data that can be considered sensitive or classified.” The label just indicates the sensitivity level of the data, which then allows the protection of data to be done properly.

An inventory would allow you to “know all the locations within an organization where data could be stored.” When the data is found, the label will help you understand if it is in the right place, with the right controls around it.

Classifications are done based on the sensitivity of the data, not its location. The location the data is stored needs to be controlled based on the classification level.

76
Q

Thaksin is the information security manager working with the Business Continuity Management department, which has recently been added to a growing business. As they begin their work, they are trying to figure out what will be required to sustain the business in the event of a failure. They are about to determine the availability requirements of the Critical Business Functions (CBFs).

What phase would they be in to do that?

A. Qualitative Risk Analysis
B. Quantitative Risk Analysis
C. Business Impact Analysis
D. Cost-benefit Analysis

A

C. Business Impact Analysis

Explanation:
The Business Impact Analysis (BIA) is the phase of designing a BCDR plan in which quantitative and qualitative risk analysis is performed and the amount of time that the CBFs can be offline is determined. This then allows a cost-benefit analysis to be done on the options available for recovery sites or systems.

The BIA includes quantitative and qualitative risk analysis as well as determining numbers such as the Maximum Tolerable Downtime (MTD).

77
Q

You work as a system administrator for a medium-sized company that has recently migrated its data storage and backup infrastructure to the cloud. As part of your responsibilities, you are tasked with ensuring the proper backup of critical data in the cloud environment. You need to design a robust backup strategy for the company’s data stored in the cloud.

Which of the following needs to be addressed?

A. Backup frequency, retention periods, redundancy, system health data
B. Backup frequency, labeling, redundancy, disaster recovery procedures
C. Backup frequency, retention periods, maximum tolerable downtime, disaster recovery procedures
D. Backup frequency, retention periods, redundancy, disaster recovery procedures

A

D. Backup frequency, retention periods, redundancy, disaster recovery procedures

Explanation:
There are many things to consider when planning cloud backups, such as the following:

Backup frequency: Determine the appropriate backup frequency based on the criticality and volatility of the data. Consider factors such as data change rate, Recovery Point Objectives (RPO), and business requirements. For critical data, frequent backups, such as daily or even real-time backups, may be necessary.
Retention periods: Define the retention periods for backups, considering regulatory requirements, legal obligations, and business needs. Different data types may have varying retention periods. Ensure compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) or industry-specific standards.
Disaster recovery procedures: Develop a comprehensive disaster recovery plan that outlines procedures for data recovery in the event of a catastrophic failure or data loss, including steps to restore data from backups and verify the integrity of the restored data. Test the disaster recovery plan regularly to ensure its effectiveness.

Some other considerations are data encryption, geographic distribution, and monitoring.

The Maximum Tolerable Downtime (MTD) is how long a company can be without service and is usually identified for a single critical business function at a time.

Labeling would be the marking of the classification on the data. The classification may affect how data is backed up, but labeling itself is not a core element of a backup strategy.

System health data is simply more data that may itself need to be backed up; it is not a design element of the backup strategy.

78
Q

Bai is working on moving the company’s critical infrastructure to a public cloud provider. She must ensure that the company complies with the European Union’s (EU) General Data Protection Regulation (GDPR) and its country-specific implementing laws, since the cloud provider will be the data processor. At what point should she begin discussions with the cloud provider about this specific protection?

A. Establishment of Service Level Agreements (SLA)
B. Configuration of the Platform as a Service (PaaS) windows servers
C. At the moment of reversing their cloud status
D. Data Processing Agreement (DPA) negotiation

A

D. Data Processing Agreement (DPA) negotiation

Explanation:
Under the GDPR and its country-specific implementing laws, a cloud customer is required to inform the cloud provider that it will be storing personal data (a.k.a. Personally Identifiable Information, or PII) on the provider’s servers. This is stated in the DPA, which is more generically called a Privacy Level Agreement (PLA). The cloud provider is a processor because it will be storing or holding the data; the provider never needs to use that data to be considered a processor. So, of the four answer options listed, the first point for discussion with the cloud provider is the DPA negotiation.

The SLAs are part of contract negotiation, but the DPA is specific to the storage of personal data in the cloud, which is the topic of the question. The configuration of the servers and the removal of data from the cloud provider’s environment (reversibility) would involve concerns about personal data. The DPA negotiation is a better answer because the question asks at what point should Bai “begin discussions” with the cloud provider.

79
Q

Pabla has been working with their corporation to understand the impact that particular threats can have on their Infrastructure as a Service (IaaS) implementation. The information gathered through this process will be used to determine the correct solutions and procedures that will be built to ensure survival through many different incidents and disasters. To perform a quantitative assessment, they must determine their Single Loss Expectancy (SLE) for the corporation’s Structured Query Language (SQL) database in the event that the data is encrypted through the use of ransomware.

Which of the following is the BEST definition of SLE?

A. SLE is the value of the event given a certain percentage loss of the asset
B. SLE is the value of the asset given the amount of time it will be offline in a given year
C. SLE is the value of the event given the value of the asset and the time it can be down
D. SLE is the value of the cost of the event multiplied times the asset value

A

A. SLE is the value of the event given a certain percentage loss of the asset

Explanation:
SLE is calculated by multiplying the Asset Value (AV) by the Exposure Factor (EF). The exposure factor is effectively the percentage of the asset’s value that is lost in a single event.

The Annual Rate of Occurrence (ARO) is the number of times the event is expected to occur in a given year.

Multiplying the SLE by the ARO gives the Annualized Loss Expectancy (ALE).
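
To make the arithmetic concrete, here is a short worked example in Python; the dollar figures and rates are invented purely for illustration:

    # Illustrative figures only; not taken from the question.
    asset_value = 500_000      # AV: value of the SQL database
    exposure_factor = 0.40     # EF: fraction of the asset lost in one ransomware event
    aro = 0.5                  # ARO: expected events per year (one every two years)

    sle = asset_value * exposure_factor   # Single Loss Expectancy
    ale = sle * aro                       # Annualized Loss Expectancy

    print(f"SLE = ${sle:,.0f}")   # SLE = $200,000
    print(f"ALE = ${ale:,.0f}")   # ALE = $100,000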

“SLE is the value of the cost of the event multiplied times the asset value” is incorrect because SLE is the percentage loss of the asset (the exposure factor) multiplied by the asset value, not the cost of the event.

“SLE is the value of the event given the value of the asset and the time it can be down” is incorrect because the time the asset can be offline is not a factor in SLE. That would be the Maximum Tolerable Downtime (MTD).

“SLE is the value of the asset given the amount of time it will be offline in a given year” is incorrect because time offline in a given year is not part of SLE. Availability over a year is typically expressed in nines (e.g., 99.999% availability).

80
Q

Which of the following is NOT a way that Agile differs from Waterfall?

A. One-way movement through phases
B. Ability to address only some requirements
C. Shorter development cycles
D. Iterative process

A

A. One-way movement through phases

Explanation:
Software development teams can use various development methodologies. Some of the most common include:

Waterfall: The waterfall design methodology strictly enforces the steps of the SDLC. Generally, every part of each stage must be completed before moving on to the next; however, some versions allow stepping back to an earlier phase as needed or only addressing some of the software’s requirements in each go-through.
Agile: Agile development methodologies differ from Waterfall in that they are iterative. During each iteration, the team identifies requirements and works to fulfill them in a set (short) period before moving on to the next phase. Shorter development cycles enable the team to adapt to changing requirements, and Agile practices commonly embrace automation to support repeated processes and security testing (DevSecOps) to streamline the development process.
81
Q

Your organization is in the process of migrating to the cloud. Mid-migration you come across details in an agreement that may leave you non-compliant with a particular law. Who would be the BEST contact to discuss your cloud-environment compliance with legal jurisdictions?

A. Stakeholder
B. Regulator
C. Consultant
D. Partner

A

B. Regulator

Explanation:
As a CCSP, you are responsible for ensuring that your organization’s cloud environment adheres to all applicable regulatory requirements. By staying current on regulatory communications surrounding cloud computing and maintaining contact with approved advisors and, most crucially, regulators, you should be able to ensure compliance with legal jurisdictions.

A partner is a generic term that can be used to refer to many different companies. For example, an auditor can be considered a partner.

A stakeholder is someone who has an interest in, or responsibility for, some part of the business.

A consultant could assist with just about anything. It all depends on what their skills are. It is plausible that a consultant could help with legal issues. However, regulators definitely understand the laws, so that makes for the best answer.

82
Q

Which of the following types of storage stores data alongside metadata that could be used for classification labels?

A. Ephemeral
B. Volume
C. Object
D. Raw

A

C. Object

Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:

Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer’s virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling.
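
As a concrete illustration of object metadata carrying a classification label, here is a minimal sketch using the AWS boto3 SDK; the bucket name, key, and label values are placeholders:

    # Minimal sketch with boto3 (AWS SDK for Python); names are placeholders.
    import boto3

    s3 = boto3.client("s3")

    # Store an object with a classification label as user-defined metadata.
    s3.put_object(
        Bucket="example-bucket",
        Key="reports/q3-summary.pdf",
        Body=b"...file contents...",
        Metadata={"classification": "confidential", "owner": "finance"},
    )

    # The label travels with the object and can be read back later.
    head = s3.head_object(Bucket="example-bucket", Key="reports/q3-summary.pdf")
    print(head["Metadata"])   # {'classification': 'confidential', 'owner': 'finance'}
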
83
Q

Silas is working for a healthcare company as a cloud architect. As he is designing how the company will move its data and services to a public cloud provider, he assesses the use of Infrastructure as a Service (IaaS). In an IaaS deployment, who is responsible for the Operating Systems (OS) that will be deployed as the servers, routers, switches, and firewalls?

A. Cloud Service Provider (CSP)
B. It is shared between the CSC and the CSP
C. Cloud Service Customer (CSC)
D. Internet Service Provider (ISP)

A

C. Cloud Service Customer (CSC)

Explanation:
Using cloud services always results in responsibility being shared between the CSC and the CSP. Who is responsible for what depends on the service model. Since this is IaaS, the customer can build a virtual data center: the customer supplies the operating systems, which run as virtual routers, switches, servers, and firewalls. As a result, the customer is exclusively responsible for the OSs.

The physical environment is the CSP’s responsibility. There is a shared responsibility around the infrastructure.

The ISP is the cloud carrier that provides access to the cloud.

84
Q

Estella, the information security manager, is working with senior management to prepare and plan for a new data center. As a cloud provider, they know it is critical to ensure that their customers’ data is protected as needed. One of the key industries that they serve is the health care industry, and Estella understands that there are specific laws that govern the protection of the Protected Health Information (PHI), which includes x-rays, blood tests, drug prescriptions, and so on.

What is the primary physical consideration that must be determined FIRST when building a data center?

A. Redundancy
B. Budget
C. Natural disasters
D. Location

A

D. Location

Explanation:
Location is the major and primary concern when building a data center. It’s important to understand the jurisdiction where the data center will be located. This means understanding the local laws and regulations under that jurisdiction. In the USA, one of the relevant laws is HIPAA, which does have requirements for where data is stored geographically. Additionally, the physical location of the data center will also drive requirements for protecting data during threats such as natural disasters.

Natural disasters are something to consider, but location covers both natural disasters and laws.

Redundancy is important, but it will be designed and built in as the company progresses with this plan.

Budget is important, but the location, laws, regulations, and natural disasters would most likely be the first concern. All of these are covered by the answer: location.

85
Q

Roland has been working on understanding what data his company has, who owns it, and what can be done with it. What is he doing?

A. Data Inventory
B. Data processing
C. Risk assessment
D. Data ownership

A

A. Data Inventory

Explanation:
Data inventory is exactly what the question describes: understanding what data the company has, who owns it, and what can be done with it. Data ownership refers to the person responsible for a given piece or set of data; identifying owners is one attribute recorded in the inventory, not the activity Roland is performing.

Risk assessment is doing quantitative or qualitative assessment related to what could happen to data, systems, business, etc.

Data processing would be using the data in some way, such as charging a credit card for a purchase, running a payroll, or even just holding the data on a server, according to the GDPR.

86
Q

The fact that a cloud provider manages updates to the environment in a Platform as a Service (PaaS) model introduces which of the following potential risks?

A. Resource Sharing
B. Interoperability Issues
C. Persistent Backdoors
D. Virtualization

A

B. Interoperability Issues

Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:

Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e. backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
87
Q

You’re revising your organization’s data protection policy to guarantee that your cloud deployment is adequately protected. In the policy, you have specified that cryptographic erasure will be implemented at appropriate points for your Infrastructure and Platform as a Service (IaaS & PaaS). Which phase of the data lifecycle does this apply to?

A. Store
B. Archive
C. Create
D. Destruction

A

D. Destruction

Explanation:
Cryptographic erasure is a type of data destruction. If the key that encrypted the data is destroyed, it should be computationally infeasible for anyone to guess, brute-force, or break that key, so the data is effectively destroyed (or so we hope).
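
A minimal sketch of the idea using the Python cryptography package; Fernet is used purely for illustration, while real crypto-shredding operates at the storage or key-management layer:

    # Illustration of cryptographic erasure (crypto-shredding).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"patient record 42")

    # "Destroy" the only copy of the key. With the key gone, recovering the
    # plaintext requires breaking the cipher, so the data is effectively erased.
    del key

    print(ciphertext[:16])   # only unreadable ciphertext remains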

Store and archive are both storage, one for the shorter term and one for the longer term. We do not want to destroy the data there.

The create phase is the opposite of destroying. It is when the user is generating or modifying content. The Cloud Security Alliance (CSA) puts modifying content in the create phase.

88
Q

A consulting company has set up and configured their new Platform as a Service (PaaS) object-based storage. They have ensured that the storage will encrypt the data that is stored there. What else should they do to ensure the data will exist when it is needed?

A. Block public access
B. Data retention policy
C. Data Backup
D. Sharing permissions

A

C. Data Backup

Explanation:
This may look like an easy question at first, though perhaps not once you start looking at the answers. The clue in the question is that the data needs to exist. Data backups are what ensure that, and it is incredibly important that they actually happen. The backups should also be tested to ensure that they work.

The data retention policy is also critical and is probably the second-best answer. It is not the best because the focus of data retention is how long the data should be retained, including ensuring that data is destroyed at an appropriate time. The question is focused on existence, not destruction.

Blocking public access is likely needed almost all the time to make sure that the data does not end up in the wrong hands. However, public accessibility is a confidentiality problem, not an existence problem in the sense the question is asking about. One of the most likely attacks would be to steal the data, and a thief is not that likely to delete it.

Sharing permissions can be set to block or allow sharing. If sharing is blocked, it does not help ensure that the data will exist. If it is shared, it could. So it depends on the setting. Therefore, data backup is a better answer because it absolutely helps to ensure data will exist.

Note that this question addresses availability, not confidentiality or integrity. That does not mean that they are not important. They are simply not addressed here.

89
Q

Luna works for a medical laboratory. Laws keep advancing and new laws keep being introduced that affect the security controls that they must have within their business. Which of the following requires the laboratory to report a data breach if the breach includes Protected Health Information (PHI) left in the clear somewhere such as a Platform as a Service (PaaS) deployment?

A. General Data Protection Regulation (GDPR)
B. Sarbanes Oxley (SOX)
C. Health Information Portability and Accountability Act (HIPAA)
D. Health Information Technology for Economic and Clinical Health (HITECH)

A

D. Health Information Technology for Economic and Clinical Health (HITECH)

Explanation:
HITECH, enacted in 2009, is an extension to the US HIPAA regulation. It requires the disclosure of data breaches of unprotected/unencrypted personal health records.

HIPAA is a US regulation requiring the protection of PHI. It includes the privacy rule and the explanation of security controls that need to be added.

SOX is a US regulation that enhances corporate governance, financial reporting, and accountability. It aims to restore public trust in the financial markets by imposing stricter regulations and requirements on public companies and their auditors.

GDPR is a European Union (EU) law that sets out a set of core principles that organizations must follow when processing personal data. These principles include lawfulness, fairness, and transparency in data processing; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.

90
Q

Which of the following types of tests has the potential to cause a real outage?

A. Simulation
B. Parallel Test
C. Full Test
D. Tabletop Exercise

A

C. Full Test

Explanation:
Business continuity/disaster recovery plan (BCP/DRP) testing can be performed in various ways. Some of the main types of tests include:

Tabletop Exercises: In a tabletop exercise, the participants talk through a provided scenario. They say what they would do in a situation but take no real actions.
Simulation/Dry Run: A simulation involves working and talking through a scenario like a tabletop exercise. However, the participants may take limited, non-disruptive actions, such as spinning up backup cloud resources that would be used during a real incident.
Parallel Test: In a parallel test, the full BC/DR process is carried out alongside production systems. In a parallel test, the BCP/DRP steps are actually performed.
Full Test: In a full test, primary systems are taken down as they would be in the simulated event. This test ensures that the BCP/DRP systems and processes are capable of maintaining and restoring operations.
91
Q

Dawson works for a manufacturing firm that has been in business for decades. They began entering their data into computers back when computers were new to the workplace. They now have so much data that they are not sure where exactly all their sensitive data or their intellectual property is. They are in the process of data discovery. There are many tools that exist to help a company understand what data they have.

Data discovery starts with what step?

A. Data Flow Analysis
B. Data Source Identification
C. Data Categorization
D. Metadata Analysis

A

B. Data Source Identification

Explanation:
The first step in data discovery is the identification of data sources. This includes databases, file systems, cloud storage, applications, and other data repositories. It is important to have a comprehensive understanding of where data resides to effectively manage and secure it.

Then data can be categorized based on its type, sensitivity, and relevance.

The next step would be to understand the data flow. This includes understanding data flows between different systems, applications, and departments. Mapping data flows helps in identifying potential data leakage points, compliance risks, and areas for improvement in data governance.

After that, there is the metadata analysis. Metadata provides additional information about data, such as its structure, format, and attributes. Analyzing metadata helps in understanding data relationships, dependencies, and quality. It assists in identifying data ownership, data lineage, and data integration points.
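
As a toy illustration of the first two steps, here is a hypothetical Python sketch that identifies data sources by walking a directory tree and categorizes files containing US Social Security number–like patterns. The root path and pattern are illustrative; real discovery tools are far more sophisticated:

    # Toy data-discovery scan: identify sources, then categorize by sensitivity.
    import re
    from pathlib import Path

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # naive SSN-like pattern

    def scan(root: str) -> dict[str, str]:
        findings = {}
        for path in Path(root).rglob("*.txt"):            # data source identification
            text = path.read_text(errors="ignore")
            if SSN_PATTERN.search(text):                  # categorization by sensitivity
                findings[str(path)] = "sensitive: possible SSN"
        return findings

    print(scan("/data"))   # placeholder root directory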

92
Q

Your organization uses an Infrastructure as a Service (IaaS) cloud model, and you need to select a storage mechanism that allows metadata tags for easier organization of your data, voice, and video. What would be the BEST option for your organization?

A. Structured data
B. Object storage
C. Semistructured data
D. Block storage

A

B. Object storage

Explanation:
Each storage unit in object storage is an object that you can think of as a file, which has metadata to describe the item. A business can simplify the categorization and retrieval of unstructured data by utilizing object storage.

While block storage is ideal for big, organized data sets that require frequent access and updates, it does not have metadata tags.

Structured data is data that is stored in databases. It is structured because it is predictable. Every row has the same fields. All columns (fields) have a fixed size. But that is the data, not the storage of the data.

Semistructured data organizes unstructured data using a structured method, such as tags. Unstructured data includes files, tweets, videos, presentations, and so on; there is no predictable structure consistent across such items.

93
Q

A user might provide their username as part of which stage of IAM?

A. Accountability
B. Identification
C. Authentication
D. Authorization

A

B. Identification

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
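
A toy sketch of the four practices in sequence, in Python; this is purely illustrative (a real IAM system would use salted password hashing, MFA, and a policy engine):

    # Toy IAM flow: identification -> authentication -> authorization -> accountability.
    import hashlib
    import logging

    logging.basicConfig(level=logging.INFO)
    USERS = {"thandi": hashlib.sha256(b"s3cret").hexdigest()}   # identity store (illustrative)
    PERMISSIONS = {"thandi": {"read:reports"}}                  # assigned privileges

    def login(username: str, password: str) -> bool:
        if username not in USERS:                               # identification
            return False
        ok = USERS[username] == hashlib.sha256(password.encode()).hexdigest()  # authentication
        logging.info("login user=%s success=%s", username, ok)  # accountability
        return ok

    def authorize(username: str, action: str) -> bool:
        allowed = action in PERMISSIONS.get(username, set())    # authorization
        logging.info("authz user=%s action=%s allowed=%s", username, action, allowed)
        return allowed

    if login("thandi", "s3cret"):
        print(authorize("thandi", "read:reports"))   # True
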
94
Q

Teo works for a large multinational bank, and they have built a private cloud with a Managed Service Provider. As they were building it, they setup multiple accounts for different projects and different departments of the business. It was critical that they maintained the security required by banking and privacy regulations.

What is the term for the isolation required to protect each of the projects and departments?

A. Software Defined Networking
B. Multi-tenancy
C. Hybrid cloud
D. Tenants

A

B. Multi-tenancy

Explanation:
Multitenancy describes the scenario in which cloud providers have multiple customers sharing the same pool of resources or residing within the same environment. These cloud customers are kept isolated from each other for security purposes. This is true in private clouds as well; the difference is that the “customers” come from different projects or departments. The isolation is necessary to keep them separate. It would not be good for someone in customer service to be able to see the Research and Development (R&D) files, nor for the engineering department to see all the files in Human Resources (HR). Traditionally, each would have had separate physical servers in the data center. Now they have virtual servers on the same physical server, and isolation from each other must still occur. For more information, read ISO/IEC 17788, the cloud computing overview and vocabulary standard. Most ISO documents must be paid for, but this one is free, and it defines cloud terminology. There is much confusion around some of these terms if you look at different people’s interpretations, so go to the source and check it out.

This is not a hybrid cloud because there is only a mention of a private cloud in the question. Hybrid must involve either public or community, and the question does not mention them.

The projects and departments are tenants, but the term “tenants” by itself does not denote isolation. Multi-tenancy is the concept under which their isolation is discussed.

Software Defined Networking (SDN) is a newer way for switches and routers to function, but it operates low in the protocol stack and is not, by itself, the term for the isolation of tenants from each other.

95
Q

Didre is working for a large software company with one of the developer teams as an information security specialist. They are designing an application that is building micro-services that will operate within an Infrastructure as a Service (IaaS) environment. They will be using an Application Programming Interface (API) to communicate between these services.

Which API would best suit their needs?

A. Simple Object Access Protocol (SOAP)
B. Remote Procedure Call (RPC)
C. Graph Query Language (GraphQL)
D. Representational State Transfer (REST)

A

C. Graph Query Language (GraphQL)

Explanation:
Graph Query Language (GraphQL) was developed by Facebook and released publicly in 2015. It is best used with mobile APIs, complex systems, and micro-services.

REpresentational State Transfer (REST) is great for public APIs and simple resource-driven applications.

Simple Object Access Protocol (SOAP) is great for payment gateways, Customer Relationship Management (CRM) solutions, and legacy system support.

Remote Procedure Call (RPC) is good for command- and action-oriented APIs and high-performance communication in massive micro-service systems.
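
For flavor, here is a minimal sketch of one microservice querying another over GraphQL using Python’s requests library; the endpoint URL and the schema (a service type with status and version fields) are invented for illustration:

    # Hypothetical GraphQL call between microservices; URL and schema are invented.
    import requests

    query = """
    query {
      service(name: "billing") {
        status
        version
      }
    }
    """

    # GraphQL conventionally exposes a single POST endpoint taking the query as JSON.
    resp = requests.post("https://api.internal.example/graphql", json={"query": query})
    print(resp.json())   # e.g., {"data": {"service": {"status": "ok", "version": "1.4"}}}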

96
Q

A large organization has just implemented a Security Information and Event Manager (SIEM). Their information security manager, Gretel, has the challenge of ensuring they get the most they can out of this new system. The complexity of the Infrastructure as a Service (IaaS) deployment complicates their ability to get a comprehensive view of the cloud activities they need to respond to. With virtual routers, switches, servers, security appliances, and all the virtual desktops sending logs in for the Security Operations Center (SOC) to analyze, the SIEM makes it easier to connect the different events across their deployment.

Which function of a SIEM is being described here?

A. Reporting
B. Alerting
C. Aggregation
D. Forensics

A

C. Aggregation

Explanation:
Security Information and Event Management (SIEM) systems are able to take data and logs from a large number of sources and connect the events that apply to a possible attack across the network. This process is known as aggregation.
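
A toy illustration of aggregation, assuming log records have already been normalized into a common shape (the field names and events are invented):

    # Toy SIEM-style aggregation: correlate events from many devices by source IP.
    from collections import defaultdict

    events = [  # normalized records from different devices (illustrative)
        {"device": "firewall", "src_ip": "10.0.0.7", "action": "blocked"},
        {"device": "vdi-42",   "src_ip": "10.0.0.7", "action": "failed_login"},
        {"device": "router",   "src_ip": "10.0.0.9", "action": "allowed"},
        {"device": "sql-srv",  "src_ip": "10.0.0.7", "action": "priv_escalation"},
    ]

    by_source = defaultdict(list)
    for event in events:
        by_source[event["src_ip"]].append(event)   # the aggregation step

    # Activity from one source across several devices is a candidate for alerting.
    for ip, related in by_source.items():
        if len({e["device"] for e in related}) >= 3:
            print(f"possible IoC: {ip} appears in {len(related)} events across devices")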

Alerts can then be created once Indicators of Compromise (IoC) have been identified by the SIEM, usually through the process of aggregation.

Reports can be generated by the SIEM to provide information at varying degrees of detail depending on the audience.

The information from the SIEM and the logs it stores are often used in forensics to allow the investigators to understand/uncover what actually happened after an incident/attack.

97
Q

Rogelio is working with the deployment team to deploy 50 new servers as virtual machines (VMs). The servers that he will be deploying will be a combination of different Operating Systems (OS) and Databases (DB). When deploying these images, it is critical to make sure…

A. That the golden images are always used for each deployment
B. That the VMs are updated and patched as soon as they are deployed
C. That the VM images are pulled from a trusted external source
D. That the golden images are used and then patched as soon as it is deployed

A

A. That the golden images are always used for each deployment

Explanation:
The golden image is the current, up-to-date image that is ready for deployment into production. If an image needs patching, it should be patched offline, and the patched version then becomes the new golden image. Patching servers during deployment is not the best idea; patching the image offline is the advised path.

The golden image should be built within a business, not pulled from an external source, although there are exceptions. It is critical to know the source of the image (IT or security) and to make sure that it is being maintained and patched on a regular basis.
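
A minimal sketch of enforcing golden-image use at deployment time; the registry contents and deploy function are hypothetical:

    # Hypothetical guardrail: only approved golden image IDs may be deployed.
    GOLDEN_IMAGES = {            # maintained by IT/security; IDs are illustrative
        "ubuntu-22.04": "img-0aa111",
        "windows-2022": "img-0bb222",
        "postgres-15":  "img-0cc333",
    }

    def deploy_vm(name: str, image_id: str) -> None:
        if image_id not in GOLDEN_IMAGES.values():
            raise ValueError(f"{image_id} is not an approved golden image")
        print(f"deploying {name} from {image_id}")   # provider API call would go here

    deploy_vm("db-01", GOLDEN_IMAGES["postgres-15"])   # OK
    # deploy_vm("db-02", "img-unknown")                # would raise ValueError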

98
Q

Which of the following forms of access control is the SOLE responsibility of the cloud provider?

A. Privilege Access
B. User Access
C. Service Access
D. Physical Access

A

D. Physical Access

Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:

User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.

Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.

99
Q

Data discovery can be described as which of the following?

A. The practice of safeguarding encryption keys
B. The method of using masking, obfuscation, or anonymization to protect sensitive data
C. A set of controls and practices put in place to ensure that data is only accessible to those authorized to access it
D. A business intelligence operation and a user-driven process to look for patterns or specific attributes within data

A

D. A business intelligence operation and a user-driven process to look for patterns or specific attributes within data

Explanation:
Data discovery can be described as a business intelligence operation and a user-driven process to look for patterns or specific attributes within data. Businesses need to explore the data that they have to make intelligent business decisions. These decisions could be regarding when to sell products, what discounts to include, what products customers are interested in, what lines of business should be expanded, and so on.

A set of controls and practices put in place to ensure that data is only accessible to those authorized to access it is basically a description of the core security tenet of confidentiality.

Methods of protecting data include masking, obfuscation, and anonymization. Masking covers data, such as the dots shown on screen when you type your password. Obfuscation confuses: a teenager changing the font on their documents to Wingdings so that their parents can’t read them is an example. More sophisticated methods can be used by software developers (and others) to protect their code.

Key management is the practice of safeguarding encryption keys.

100
Q

Vendor lock-in is MOST associated with which of the following SaaS-specific risks?

A. Virtualization
B. Interoperability Issues
C. Web Application Security
D. Proprietary Formats

A

D. Proprietary Formats

Explanation:
A Software as a Service (SaaS) environment has all of the risks that IaaS and PaaS environments have, as well as new risks of its own. Some risks unique to SaaS include:

Proprietary Formats: With SaaS, a customer is using a vendor-provided solution. This may use proprietary formats that are incompatible with other software or create a risk of vendor lock-in if the organization’s systems are built around these formats.
Virtualization: SaaS uses even more virtualized environments than PaaS, increasing the potential for VM escapes, information bleed, and similar threats.
Web Application Security: Most SaaS offerings are web applications with a provided application programming interface (API). Both web apps and APIs have potential vulnerabilities and security risks that could exist in these solutions.
101
Q
A