Pocket Prep 12 Flashcards

1
Q

In which cloud service model does the CSP’s responsibility extend to securing operating systems, database management systems (DBMSs), and similar components made available to the cloud customer?

A. PaaS
B. IaaS
C. All service models
D. SaaS

A

A. PaaS

Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.

However, at the software level, responsibility depends on the cloud service model in use, including:

Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer’s responsibility, not the CSP’s.
2
Q

Which framework, developed by the International Data Center Authority (IDCA), covers all aspects of data center design, including cabling, location, connectivity, and security?

A. Infinity Paradigm
B. HITRUST
C. OCTAVE
D. Risk Management Framework

A

A. Infinity Paradigm

Explanation:
The International Data Center Authority (IDCA) developed the Infinity Paradigm, a framework intended to be used for operations and data center design. The Infinity Paradigm covers all aspects of data center design, including location, cabling, security, connectivity, and much more.

Risk Management Framework (RMF) is defined by NIST as “a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The risk-based approach to control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, Executive Orders, policies, standards, or regulations.”

The Health Information Trust Alliance (HITRUST) is a non-profit organization. They are best known for developing the HITRUST Common Security Framework (CSF), in collaboration with healthcare, technology, and information security organizations around the world. It aligns standards from ISO, NIST, PCI, and regulations like HIPAA.

The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a threat modeling and risk assessment technique from Carnegie Mellon University that was developed for the US Department of Defense (DoD).

3
Q

Ardal is the information security manager working for a manufacturing company that specializes in molded silicon kitchen products. They are moving their customer data and product information into a Platform as a Service (PaaS) public cloud environment. Ardal and his team have been analyzing the risks associated with this move so that they can ensure the most appropriate security controls are in place.

Which of the following is TRUE regarding the transfer of risk?

A. Transfer of risk is often the cheapest option for responding to risk
B. Risk is never truly transferred. Transference simply shares the risk with another company.
C. Risk transfer should always be the first avenue that an organization takes to respond to risk
D. Risk transfer can only be done when the organization has exhausted all other risk responses

A

B. Risk is never truly transferred. Transference simply shares the risk with another company.

Explanation:
Risk transference is better stated as risk sharing, although transfer is the common word in use. Placing data on a cloud provider’s infrastructure does not remove the risk from the customer, nor does it give the risk to the provider. The customer is always responsible for their data.

Risk transfer/share simply means that the cloud provider now also has a responsibility to care for the data. The critical word in that sentence is also. Under GDPR, the cloud provider is required to care for the data, and a Data Processing Agreement (DPA) should be created to inform the provider of their responsibilities. More generically, a DPA is a form of Privacy Level Agreement (PLA).

Risk transfer can be done at any time and is not necessarily the cheapest of the options.

Risk transfer is not necessarily the first avenue for risk management; it is one of four responses. The other three are risk reduction/mitigation, risk avoidance, and risk acceptance.

4
Q

Your organization is in the process of migrating to the cloud. Mid-migration you come across details in an agreement that may leave you non-compliant with a particular law. Who would be the BEST contact to discuss your cloud-environment compliance with legal jurisdictions?

A. Stakeholder
B. Consultant
C. Regulator
D. Partner

A

C. Regulator

Explanation:
As a CCSP, you are responsible for ensuring that your organization’s cloud environment adheres to all applicable regulatory requirements. By staying current on regulatory communications surrounding cloud computing and maintaining contact with approved advisors and, most crucially, regulators, you should be able to ensure compliance with legal jurisdictions.

A partner is a generic term that can be used to refer to many different companies. For example, an auditor can be considered a partner.

A stakeholder is someone who has an interest in, or responsibility for, a part of the business.

A consultant could assist with just about anything. It all depends on what their skills are. It is plausible that a consultant could help with legal issues. However, regulators definitely understand the laws, so that makes for the best answer.

5
Q

Rafferty just configured the server-based Platform as a Service (PaaS) that they are using for their company, a government contractor. The server will be used to perform computations related to customer actions on their e-commerce website. He is concerned that they may not have enough CPU and memory allocated to them when they need it.

What should he do?

A. Ensure the limits will not cause any problems with the service
B. Make sure that the server has available share space
C. Ensure a reservation is made at the minimum level needed
D. Set a limit to make sure that the service will work correctly

A

C. Ensure a reservation is made at the minimum level needed

Explanation:
A minimum resource that is granted to a cloud customer within a cloud environment is known as a reservation. With a reservation, the cloud customer should always have, at the minimum, the amount of resources needed to power and operate any of their services.

On the flip side, limits are the opposite of reservations. A limit is the maximum utilization of memory or processing allowed for a cloud customer. Setting a limit is a good idea to control costs, especially on a new service.

The share space is what remains available for any customer to utilize; the cloud allocates it on a first come, first served basis.
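To make the distinction concrete, here is a minimal Python sketch; the vCPU counts, names, and allocation logic are illustrative, not any provider’s actual scheduler:

```python
# Toy model of reservation, limit, and shared capacity for one tenant.
RESERVATION = 8    # minimum the provider guarantees to this tenant (illustrative)
LIMIT = 16         # maximum this tenant may ever consume (illustrative)

def grant(requested: int, in_use: int, pool_free: int) -> int:
    """Return how many vCPUs the tenant receives right now."""
    guaranteed = max(0, RESERVATION - in_use)              # always honored
    best_effort = max(0, min(LIMIT - in_use, pool_free))   # first come, first served
    return min(requested, max(guaranteed, best_effort))

print(grant(requested=12, in_use=0, pool_free=2))   # busy pool: 8 (the reservation)
print(grant(requested=12, in_use=0, pool_free=64))  # idle pool: 12 (under the limit)
```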

6
Q

An organization has decided that the best course of action to handle a specific risk is to obtain an insurance policy. The insurance policy will cover any financial costs of a successful risk exploit. Which type of risk response is this an example of?

A. Risk transference
B. Risk mitigation
C. Risk avoidance
D. Risk acceptance

A

A. Risk transference

Explanation:
When an organization obtains an insurance policy to cover the financial burden of a successful risk exploit, this is known as risk transference or risk sharing. It’s important to note that with risk transference, only the financial losses would be covered by the policy, but it would not do anything to cover the loss of reputation the organization might face.

Risk avoidance is when a decision is made to not engage in, or to stop engaging in, risky behavior.

Risk mitigation or risk reduction is when controls are put in place to reduce the chance of a threat being realized or to minimize the impact of it once it does happen.

Risk acceptance is always needed to some degree because, no matter how much of the other three options is done, risk cannot be eliminated. Who accepts the risk, though, is something that a business needs to consider carefully.

7
Q

A malicious actor created a free trial account for a cloud service using a fake identity. Once the free trial cloud environment was up and running, they used it as a launch pad for several cloud-based attacks. Because they used a fake identity to set up the free trial, it would be difficult (if not impossible) for the attacks to be traced back to them.

What type of cloud-based threat is being described here?

A. Denial-of-service
B. Shared technology issues
C. Abuse or nefarious use of cloud services
D. Advanced persistent threats

A

C. Abuse or nefarious use of cloud services

Explanation:
Abuse or nefarious use of cloud services is listed as one of the top twelve threats to cloud environments by the Cloud Security Alliance. Abuse or nefarious use of cloud services occurs when an attacker is able to launch attacks from a cloud environment either by gaining access to a poorly secured cloud or using a free trial of cloud service. Often, when using a free trial, the attacker will configure everything using a fake identity so attacks can’t be traced back to them.

A Denial-of-Service (DoS) attack is when the bad actor causes a system to max out or fill up so that users are unable to do any work.

Shared technology is the core nature of clouds, especially public clouds. If the cloud provider does not take care to ensure that each tenant is properly isolated, or does not take care of the operating systems, many problems can follow. If the hypervisors, Microsoft servers, Linux servers, or any of the other software is not patched or configured properly, data could leak between tenants or other issues could arise.

Advanced Persistent Threats (APT) are when very skilled and aggressive bad actors, probably operating on behalf of a government, create software that will slowly cause problems for another country or business. The word “advanced” speaks to the skill of the bad actors. The word “persistent” speaks to malicious software being in place over a long period of time to cause a great number of problems. If you are unfamiliar with APTs, do a little research into Stuxnet.

8
Q

If software developers and the supporting team were to ask the following four questions, what would they be doing?

What are we working on?
What could go wrong?
What are we going to do about it?
Did we do a good job?

A. Evaluating the Recovery Point Objective (RPO)
B. Determining Maximum Tolerable Downtime (MTD)
C. Performing threat modeling
D. Performing a quantitative risk assessment

A

C. Performing threat modeling

Explanation:
The four questions are the basic idea behind threat modeling. Threat modeling allows the team to identify, communicate, and understand threats and mitigations. There are several techniques, such as STRIDE, PASTA, TRIKE, and OCTAVE.

STRIDE is one of the most prominent models used for threat modeling. DREAD is another model, but it does not include tampering with data as a category the way STRIDE does. TOGAF and REST are not threat models. STRIDE includes the following six categories:

Spoofing identity
Tampering with data
Repudiation
Information disclosure
Denial of service
Elevation of privileges

A quantitative risk assessment is when the Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annualized Loss Expectancy (ALE) are calculated based on the threat to a specific asset.
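For instance, a quick worked example in Python; the figures are invented for illustration:

```python
# Quantitative risk: SLE = Asset Value x Exposure Factor,
# ALE = SLE x Annual Rate of Occurrence (ARO).
asset_value = 200_000      # value of the asset in dollars (illustrative)
exposure_factor = 0.25     # fraction of the asset lost per incident
aro = 2                    # expected incidents per year

sle = asset_value * exposure_factor   # 50,000 per incident
ale = sle * aro                       # 100,000 per year
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```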

Determining the MTD is a step in Business Continuity/Disaster Recovery (BC/DR) planning. It answers the question of how long an asset can be unavailable before it becomes a significant problem for the business.

Evaluating the RPO is also part of BC/DR planning. The RPO represents how much data can be lost before that, too, becomes a problem for the business.

9
Q

Brocky has been working with a project team analyzing the risks that could occur as this project progresses. The analysis that their team has been performing used descriptive information rather than financial numbers. Which type of assessment have they been performing?

A. Quantitative assessment
B. Fault tree analysis
C. Qualitative assessment
D. Root cause analysis

A

C. Qualitative assessment

Explanation:
There are two main assessment types that can be done for assessing risk: qualitative assessments and quantitative assessments. While quantitative assessments are data driven, focusing on items such as Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annual Loss Expectancy (ALE), qualitative assessments are descriptive in nature and not data driven.

Fault tree analysis is actually a combination of quantitative and qualitative assessment. The question is looking for an approach that is not financial (not quantitative), so fault tree analysis covers more than what the question describes.

Root cause analysis is what is done in problem management from ITIL. It analyzes why a bad event happened so that the root cause can be found and fixed and the event does not happen again.

10
Q

Hillary is working to ensure that her company receives the services it requires from its cloud service provider. They have a contract with Service Level Agreements (SLAs) for their bandwidth and uptime. What is Hillary doing?

A. Change management
B. Information Technology Service Management (ITSM)
C. Business Continuity Planning (BCP)
D. ITIL (formerly Information Technology Infrastructure Library)

A

B. Information Technology Service Management (ITSM)

Explanation:
ITSM is effectively ISO 20000-1 and is based on ITIL. Managing the services from the cloud provider matches ITSM slightly better than ITIL, but ITIL was included as an answer option for discussion purposes. ITSM is a comprehensive approach to designing, delivering, managing, and improving IT services within an organization. It focuses on aligning IT services with the needs of the business and ensuring that the IT services provided are efficient, reliable, and of high quality. ITSM involves a set of practices, processes, and policies that guide the entire service lifecycle, from service strategy and design to service transition, operation, and continual service improvement.

Key characteristics of ITSM include:

Customer-centric: ITSM emphasizes understanding and meeting the needs of customers and end-users. It aims to improve customer satisfaction and overall service experience.
Process-oriented: ITSM adopts a process-driven approach, defining workflows and procedures to ensure consistent and repeatable service delivery.
Focus on continual improvement: ITSM encourages regular evaluation and optimization of IT services and processes to increase efficiency and effectiveness.

ITIL involves managing data centers more specifically, so it matches the work of the cloud provider slightly better.

Key characteristics of ITIL include:

Service lifecycle approach: ITIL is structured around the service lifecycle, consisting of five core stages: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement.
Process framework: ITIL defines a range of processes that cover various aspects of IT service management, including incident management, problem management, change management, service level management, and more.
Widely adopted standard: ITIL has become a de facto standard for ITSM and is widely adopted by organizations globally.

Change management is a structured and organized approach to managing and implementing organizational changes. It involves planning, coordinating, communicating, and monitoring modifications to various aspects of the organization, such as processes, systems, technology, culture, or organizational structure.

BCP is about planning for when there are failures, not the basic management of a cloud vendor.

11
Q

Gherorghe is working with the cloud operations department after a variety of strange behaviors have been seen in their Infrastructure as a Service (IaaS) environment. They are now looking for a tool or toolset that can help them identify fraudulent, illegal, or other undesirable behavior within their client-server datasets.

What tool or toolset can provide assistance with this?

A. Database Activity Monitor (DAM)
B. Application Programming Interface (API) gateway
C. Web Application Firewall (WAF)
D. eXtensible Markup Language (XML) firewall

A

A. Database Activity Monitor (DAM)

Explanation:
Gartner defines DAMs as “a suite of tools that can be used to support the ability to identify and report on fraudulent, illegal or other undesirable behavior.” These tools, which include Oracle’s Enterprise Manager, evolved from monitoring user traffic in databases and serve many different purposes for understanding what is going on with traffic in and out of databases.

A WAF is a layer 7 firewall that monitors web applications, HTML, and HTTP traffic.

An API gateway is also a layer 7 device. However, this one monitors APIs, which would include SOAP and REpresentational State Transfer (REST).

XML firewalls also exist at layer 7 but monitor XML traffic only. API traffic can include both XML and JavaScript Object Notation (JSON).

12
Q

Biometrics and passwords are part of which stage of IAM?

A. Authentication
B. Accountability
C. Authorization
D. Identification

A

A. Authentication

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themselves using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an Identity as a Service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this. (A toy sketch of authentication and authorization follows this list.)
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
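Here is that toy Python sketch of the middle two practices. The user store, hashing scheme, and permission names are invented for illustration; a real system would use salted, slow password hashes and MFA:

```python
import hashlib, hmac

USERS = {"asha": hashlib.sha256(b"correct horse").hexdigest()}  # identification data
ROLES = {"asha": {"report:read"}}                               # assigned privileges

def authenticate(user: str, password: str) -> bool:
    # Authentication: prove the claimed identity with a secret factor.
    stored = USERS.get(user)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and hmac.compare_digest(stored, supplied)

def authorize(user: str, permission: str) -> bool:
    # Authorization: grant access based on privileges, after authentication.
    return permission in ROLES.get(user, set())

assert authenticate("asha", "correct horse") and authorize("asha", "report:read")
```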
13
Q

Antonia has recently been hired by a cancer treatment facility. One of the first training programs that she is required to go through at the office is related to the protection of individually identifiable health information. Which law is this related to and which country does it apply to?

A. Health Insurance Portability and Accountability Act (HIPAA), USA
B. Health Insurance Portability and Accountability Act (HIPAA), Canada
C. Gramm-Leach-Bliley Act (GLBA), USA
D. General Data Protection Regulation (GDPR), Germany

A

A. Health Insurance Portability and Accountability Act (HIPAA), USA

Explanation:
The Health Insurance Portability and Accountability Act (HIPAA) is concerned with the security controls and confidentiality of Protected Health Information (PHI). It’s vital that anyone working in any healthcare facility be aware of HIPAA regulations.

The Gramm-Leach-Bliley Act, officially named the Financial Modernization Act of 1999, focuses on PII as it pertains to financial institutions, such as banks.

GDPR is an EU regulation that encompasses organizations across all industries.

The Privacy Act of 1988 is an Australian law that requires the protection of personal data.

14
Q

Oya and her risk assessment team are working on preparing to perform their annual assessment of the risks that their cloud data center could experience. What is the correct order of risk management steps?

A. Prepare, categorize, select, implement, assess, authorize, monitor
B. Authorize, prepare, assess, categorize, select, implement, monitor
C. Assess, authorize, prepare, categorize, select, implement, monitor
D. Prepare, assess, categorize, select, implement, authorize, monitor

A

A. Prepare, categorize, select, implement, assess, authorize, monitor

Explanation:
The National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) lists the correct order of the risk management steps as the following: prepare, categorize, select, implement, assess, authorize, and monitor. The prepare phase is where Oya and her team are. They are getting into the process of analyzing the risks for the cloud data center. Then they will categorize the risks and threats based on the impact they could have on the organization. The select phase is when controls are selected to reduce the likelihood or impact of the threats.

If there are new controls, or simply new settings that need to be configured, this is done in the implement phase. In the assess phase, the team checks whether the controls are in place and working properly. The authorize phase is when senior management is informed of everything that has been found, determined, chosen, and analyzed, and authorizes the business to configure its production environments in this new way. Lastly, ongoing monitoring is performed.

15
Q

The Business Continuity/Disaster Recovery (BC/DR) team has been working for months to update their corporate DR plan. The PRIMARY goal of a DR test is to ensure which of the following?

A. All production systems are brought back online
B. Management is satisfied with the BC/DR plan
C. Recovery Time Objective (RTO) goals are met
D. Each administrator knows all the steps of the plan

A

C. Recovery Time Objective (RTO) goals are met

Explanation:
With any Business Continuity and Disaster Recovery (BCDR) test, the main goal and purpose is to ensure that Recovery Time Objective (RTO) and Recovery Point Objective (RPO) goals are met. When planning the test, staff should consider how to properly follow the objectives and decisions made as part of RPO and RTO analysis.

It is unlikely that all production systems will be brought back online in the event of a disaster. If the plan is simply switching the cloud setup from one region to another, all systems could be brought online. However, nothing in the question says that all production systems are in the cloud or what type of disaster this is. In traditional BC/DR planning, it is not expected that all production systems will be brought back online in the alternate configuration.

Management does need to be satisfied with the plans that are built, but the question asks about the goal of the test. The test needs to show that the plan will work, which should make management happy. The immediate goal, though, is to meet the RTO.

Administrators do not need to know every step of the plan, only the steps relevant to them, which would likely not be all of them.

16
Q

Which of the Trust Services principles must be included in a Service Organization Controls (SOC) 2 audit?

A. Privacy
B. Security
C. Availability
D. Confidentiality

A

B. Security

Explanation:
The Trust Services Criteria from the American Institute of Certified Public Accountants (AICPA) for the Service Organization Controls (SOC) 2 audit report are made up of five key principles: Availability, Confidentiality, Processing integrity, Privacy, and Security. Security is always required as part of a SOC 2 audit; the other four principles are optional.

17
Q

A bad actor working for an enemy state has created malware that has the purpose of stealing data from the other country regarding their military and its products and capabilities. The bad actor has planted malware on the enemy’s systems and has left it, undetected, for eight months. What is the name of this type of attack?

A. Insecure Application Programming Interface (API)
B. Human error
C. Advanced persistent threat (APT)
D. Malicious insider

A

C. Advanced persistent threat (APT)

Explanation:
Many types of malware and malicious programs are loud and aim to disrupt a system or network. Advanced Persistent Threats (APTs) are the opposite. APTs are attacks that attempt to steal data and stay hidden in the system or network for as long as possible. The longer the APT can stay in the system, the more data it is able to collect. The advanced part of APT is in reference to the skill level of the bad actor.

A malicious insider performs bad actions from within a business, without the business’s knowledge. The bad actor here is an outsider, probably operating with the knowledge of their own government.

Human error is a problem for a business, but it is an accident. Creating malware is not accidental; it is intentional and malicious.

An insecure API is not an attack. It is a vulnerability. There is some weakness in the coding or implementation that leaves it vulnerable.

18
Q

Cloud security is a difficult task, made all the more difficult by laws and regulations imposing restrictions on cross-border data transfers. The actual hardware in the cloud can be located anywhere, so it is critical to understand where your data resides. Which of the following statements is true regarding who is responsible for the data?

A. The cloud service provider (CSP) retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
B. The cloud administrator retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
C. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
D. Both the cloud service provider (CSP) and the cloud service customer (CSC) retain responsibility for the data’s security regardless of whether cloud or non-cloud services are employed

A

C. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed

Explanation:
Regardless of whether cloud or non-cloud services are utilized, the data controller [the Cloud Service Customer (CSC)] is ultimately responsible for the data’s security. Cloud security encompasses more than data protection; it also encompasses applications and infrastructure.

Under the European Union (EU) General Data Protection Regulation (GDPR), the cloud provider is responsible for the data in its care. The answer that says “both” is not correct because the correct answer contains the word ultimate. Ultimately, the cloud customer is always responsible for their data.

This question also does not mention GDPR, so it is difficult to determine whether there is a legal responsibility for the data while it is in the cloud provider’s care, as we do not actually know which part of the world the question refers to.

19
Q

It is necessary within a business to control data at all stages of the lifecycle. Erika is working at a corporation to set up, deploy, and monitor a Data Loss Prevention (DLP) solution. Which component of DLP is involved in applying corporate policy regarding the storage of data?

A. Enforcement
B. Identification
C. Discovery
D. Monitoring

A

A. Enforcement

Explanation:
DLP is made up of three major components. They include discovery, monitoring, and enforcement. Enforcement is the final stage of DLP implementation. It is the enforcement component that applies policies and then takes actions, such as deleting data.
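As a rough illustration of the enforcement idea, a DLP engine matches content against policy patterns and then takes an action. The pattern and the redaction action below are invented for illustration:

```python
import re

# Hypothetical policy: redact anything that looks like a US Social Security number.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def enforce(document: str) -> str:
    # Enforcement: apply the policy action (here, redaction) when content matches.
    return SSN.sub("[REDACTED]", document)

print(enforce("Customer 123-45-6789 renewed their plan."))
# -> Customer [REDACTED] renewed their plan.
```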

Identification is the first piece of IAAA and is the statement of who you claim to be, such as a user ID.

The CSA SecaaS Category 2 document is a good read on the topic of DLP and the cloud and is highly recommended.

20
Q

Blythe has been working for a Fortune 500 healthcare company for many years now. They are beginning to transition from their on-prem data center to a cloud-based solution. She and her team are working to put together information to present to the Board of Directors (BoD) regarding what they can expect from a move to the cloud.

Which of the following statements is most likely true when moving from an on-prem data center to Infrastructure as a Service (IaaS)?

A. Moving to the cloud will have a predictable OpEx. However, the security in the cloud is higher.
B. A traditional data center will have lower costs on the Operational Expenditures (OpEx) side and higher Capital Expenditures (CapEx)
C. A traditional data center has a more secure operating environment than a cloud environment
D. The pricing for cloud computing will be less predictable than that of a traditional data center

A

D. The pricing for cloud computing will be less predictable than that of a traditional data center

Explanation:
A traditional on-prem data center has a higher CapEx, but OpEx is not lower. It is likely the same or higher. The operating environment could be more secure in either environment. The security of the cloud-based IaaS depends on two factors: the security of the cloud provider’s data center and the configurations within the IaaS. It could be more secure in the cloud. The OpEx in the cloud may eventually be predictable, but especially when moving to the cloud, it is not as predictable as some may prefer.

So, with each of those thoughts, that leaves the best answer as “the pricing in the cloud is less predictable than an on-prem data center.”

21
Q

Bai is working on moving the company’s critical infrastructure to a public cloud provider. Knowing that she has to ensure that the company complies with the European Union’s (EU) General Data Protection Regulation (GDPR) and its country-specific laws since the cloud provider is the data processor, at what point should she begin discussions with the cloud provider about this specific protection?

A. Configuration of the Platform as a Service (PaaS) windows servers
B. Establishment of Service Level Agreements (SLA)
C. At the moment of reversing their cloud status
D. Data Processing Agreement (DPA) negotiation

A

D. Data Processing Agreement (DPA) negotiation

Explanation:
Under the EU’s GDPR requirements for each country, a cloud customer must inform the cloud provider that they will be storing personal data (a.k.a. Personally Identifiable Information, or PII) on the provider’s servers. This is stated in the DPA, which is more generically called a Privacy Level Agreement (PLA). The cloud provider is a processor because it will be storing or holding the data; the provider never needs to use that data to be considered a processor. So, of the four answer options listed, the first point for discussion with the cloud provider is the DPA negotiation.

The SLAs are part of contract negotiation, but the DPA is specific to the storage of personal data in the cloud, which is the topic of the question. The configuration of the servers and the removal of data from the cloud provider’s environment (reversibility) would involve concerns about personal data. The DPA negotiation is a better answer because the question asks at what point should Bai “begin discussions” with the cloud provider.

22
Q

Haile is a cloud operator who has been reviewing the Indicators of Compromise (IoC) from the company’s Security Information and Event Manager (SIEM). The SIEM reviews the log outputs to find these possible compromises. Where should detailed logging be in place within the cloud?

A. Only access to the hypervisor and the management plane
B. Wherever the client accesses the management plane only
C. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
D. Only specific levels of the virtualization structure

A

C. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane

Explanation:
Logging is imperative for a cloud environment. Role-based access should be implemented, and logging should be done at each and every level of the virtualization infrastructure as well as wherever the client accesses the management plane (such as a web portal).

The SIEM cannot analyze the logs to find the possible compromise points unless logging is enabled, and the logs are delivered to that central point. This is necessary in case there is a compromise, which could happen anywhere within the cloud.

23
Q

Which of the following is MOST relevant to an organization’s network of applications and APIs in the cloud?

A. Service Access
B. User Access
C. Privilege Access
D. Physical Access

A

A. Service Access

Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:

User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.

Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.

24
Q

Winta is using a program to create a spreadsheet after having collected information regarding the sales cycle that the business has just completed. What phase of the cloud data lifecycle is occurring?

A. Archive
B. Store
C. Create
D. Share

A

C. Create

Explanation:
Generating a new spreadsheet is the create phase of the data lifecycle. Create is the generation of new data/voice/video in any manner. The Cloud Security Alliance (CSA) also indicates that the create phase is when data is modified. Not everyone agrees with that last sentence, but this is an exam that is a joint venture between the CSA and (ISC)2, so it is good to know that is what they say in the guidance 4.0 document.

As soon as the data is created, it needs to be moved to persistent storage (hard disk drive or solid state drive).

If that spreadsheet is moved into long-term storage for future reference, then, if needed, that would be the archive phase.

Sending the spreadsheet to the boss for their review (or to anyone else) would be the share phase.

25
Q

Which of the following types of SOC reports provides high-level information about an organization’s controls intended for public dissemination?

A. SOC 2 Type II
B. SOC 1
C. SOC 3
D. SOC 2 Type I

A

C. SOC 3

Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:

SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization’s financial stability.
SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, and Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.

SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.

26
Q

Rufus is working for a growing manufacturing business. Over the years, they have been upgrading their manufacturing equipment to versions that include internet connectivity for maintenance and management information, which has increased the volume of logs that need to be filtered. The volume of log data generated by these systems makes it a challenge for his organization to perform log reviews efficiently and effectively.

What can his organization implement to help solve this issue?

A. Secure Shell (SSH)
B. System Logging protocol (syslog) server
C. Security Information and Event Manager (SIEM)
D. Data Loss Prevention (DLP)

A

C. Security Information and Event Manager (SIEM)

Explanation:
An organization’s logs are valuable only if the organization makes use of them to identify activity that is unauthorized or compromising. Due to the volume of log data generated by systems, the organization can implement a Security Information and Event Management (SIEM) system to overcome these challenges. The SIEM system provides the following:

Log centralization and aggregation
Data integrity
Normalization
Automated or continuous monitoring
Alerting
Investigative monitoring

A syslog server is a centralized logging system that collects, stores, and manages log messages generated by various devices and applications within a network. It provides a way to consolidate and analyze logs from different sources, allowing administrators to monitor system activity, troubleshoot issues, and maintain security. However, it does not help to correlate the logs as the SIEM does.
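A small Python sketch of what normalization and correlation look like inside a SIEM; the log format, field names, and alert threshold are illustrative:

```python
import re

# Hypothetical sshd failure pattern in classic syslog format.
FAILED = re.compile(r"(?P<host>\S+) sshd\[\d+\]: Failed password for (?P<user>\S+)")

def normalize(line):
    # Normalization: map a raw log line onto a common event schema.
    m = FAILED.search(line)
    return {"host": m["host"], "user": m["user"], "event": "auth_failure"} if m else None

raw = [
    "Mar  3 10:15:01 web01 sshd[311]: Failed password for root from 203.0.113.9",
    "Mar  3 10:15:03 web01 sshd[311]: Failed password for root from 203.0.113.9",
]
events = [e for e in map(normalize, raw) if e]
# Correlation: repeated failures for the same user in a short window -> alert.
if len(events) >= 2:
    print("ALERT: repeated login failures for", events[0]["user"], "on", events[0]["host"])
```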

SSH is a networking protocol that encrypts transmissions. It works at layer 5 of the OSI model. It is commonly used to transmit logs to the syslog server. It is not helpful when analyzing logs. It only secures the transmission of the logs.

DLP tools are used to monitor and manage the transmission or storage of data to ensure that it is done properly. With DLP, the concern is that there will be a data breach/leak unintentionally by the users.

27
Q

When developing a business continuity and disaster recovery (BC/DR) plan, what step should be completed after the scope has been defined?

A. Test the plan
B. Recovery strategies
C. Embed in the user community
D. Business Impact Assessment (BIA)

A

D. Business Impact Assessment (BIA)

Explanation:
After defining the scope, the next step of developing a BC/DR plan is to perform a business impact assessment. This stage determines what should be included in the plan and looks at items such as the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). It will be necessary during this stage to identify critical systems within the environment.

Based on the knowledge gained during the BIA, it is then necessary to develop the recovery strategy: for example, when a failure occurs, will the business fail over to a different region within the same cloud provider, or to a different cloud provider?

The solution must be tested to ensure that it will work when needed.

Embedding the plan in the user community comes at the end of the BC/DR planning process, ensuring that everyone who needs to know about the plan is aware of it.

28
Q

Freeya has been assisting cloud data architects with planning how they will securely store data in their Platform as a Service implementation. They know that leaving a key with the encrypted data is not advised. If someone has the key, they can read the data. They are exploring options in the cloud to protect those keys without costing too much money.

What is the most efficient and cost effective way of storing a key for data that is not exceedingly sensitive?

A. Utilize client-side encryption and decryption with the key stored in the virtual machine
B. Utilize server-side encryption and decryption with the key stored in the virtual machine
C. Utilize a cloud Key Management Service (KMS) to encrypt the data encryption key
D. Utilize a cloud Hardware Security Module (HSM) to encrypt and decrypt the data

A

C. Utilize a cloud Key Management Service (KMS) to encrypt the data encryption key

Explanation:
It is essential that we understand the options for how and where to encrypt data in the cloud. There are many solutions that vary per cloud service provider.

Using a KMS is a great solution today for most companies, and the cost is fairly low, if not free. With a KMS, the customer generates a Data Encryption Key (DEK) that is used to encrypt the actual data. The DEK is then encrypted with a Customer Master Key (CMK), and the CMK is stored in the KMS. The plaintext data is only available on the server or the customer side.
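A minimal sketch of that envelope pattern using Python’s cryptography package. Here a local Fernet key stands in for the KMS-held CMK; a real KMS would keep the master key and perform the wrapping on its side:

```python
from cryptography.fernet import Fernet

# 1. Generate a Data Encryption Key (DEK) and encrypt the data with it.
dek = Fernet.generate_key()
ciphertext = Fernet(dek).encrypt(b"customer record")

# 2. Wrap (encrypt) the DEK under the Customer Master Key (CMK).
cmk = Fernet.generate_key()          # stand-in for the key held inside the KMS
wrapped_dek = Fernet(cmk).encrypt(dek)

# Store ciphertext and wrapped_dek together; without the CMK, the DEK
# (and therefore the data) stays unreadable.
recovered_dek = Fernet(cmk).decrypt(wrapped_dek)
assert Fernet(recovered_dek).decrypt(ciphertext) == b"customer record"
```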

An HSM is a much more expensive option. The encryption and decryption of the data actually occurs within the HSM.

Client-side encryption and decryption is not so bad, but the key should never be stored in the VM. When the image is decrypted to be spun up, the key will be accessible to anyone who can see that image.

The same can be said about server-side encryption and decryption. It is not entirely wrong, but the key should never be stored within the image.
29
Q

Your company is looking for a way to ensure that their most critical servers are online when needed. They are exploring the options that their Platform as a Service (PaaS) cloud provider can offer them. The one that they are most interested in has the highest level of availability possible. After a cost-benefit analysis based on their threat assessment, they think that this will be the best option. The cloud provider describes the option as a grouping of resources with a coordinating software agent that facilitates communication, resource sharing, and routing of tasks.

What term matches this option?

A. Server cluster
B. Server redundancy
C. Storage controller
D. Security group

A

A. Server cluster

Explanation:
Server clusters are a collection of resources linked together by a software agent that enables communication, resource sharing, and task routing. Server clusters are considered active-active since they include at least two servers (and any other needed resources) that are both active at the same time.

Server redundancy is usually considered active-passive. Only one server is active at a time. The second waits for a failure to occur; then, it will take over.

Storage controllers are used for storage area networks. It is possible that the servers in the question are storage servers, but more likely they contain the applications that the users and/or the customers require. Therefore, server clustering is the correct answer.

Security groups act as virtual firewalls, filtering traffic to and from groups of cloud resources.

30
Q

Which of the following cloud service models has the FEWEST potential external risks and threats that the customer must consider?

A. Software as a Service
B. Platform as a Service
C. Function as a Service
D. Infrastructure as a Service

A

D. Infrastructure as a Service

Explanation:
In an Infrastructure as a Service (IaaS) environment, the customer has the greatest control over its infrastructure stack. This means that it needs to rely less on the service provider than in other service models and, therefore, has fewer potential external security risks and threats.

31
Q

Bao is able to connect to his home’s thermostat using the internet on his phone and adjust the temperature remotely. This is an example of which type of technology?

A. Blockchain
B. Internet of Things (IoT)
C. Machine learning (ML)
D. Artificial Intelligence (AI)

A

B. Internet of Things (IoT)

Explanation:
The Internet of Things (IoT) refers to non-traditional computing devices (such as lamps, thermostats, and other home appliances) accessing the internet. Although some do consider laptops, smartphones, and computers to be part of the IoT, these items are unlikely to be considered part of the IoT for the exam.

Machine Learning (ML) is the ability of computers to process data to determine answers. The answers could confirm (or refute) a hypothesis, or ML could come to a conclusion without any preconceived hypothesis. ML is a component of AI.

AI is having computers process data the same way a human brain could. Arguably, we are not at AI yet. Some say we are at narrow AI. It is still an evolving technology for sure.

Blockchain creates a ledger of transactions that are permanent and verifiable. A common use today is cryptocurrency.

32
Q

Which of the following characteristics of cloud computing enables a cloud provider to operate cost-effectively by distributing costs across multiple cloud customers?

A. On-Demand Self-Service
B. Rapid Elasticity and Scalability
C. Metered Service
D. Resource Pooling

A

D. Resource Pooling

Explanation:
The six common characteristics of cloud computing include:

Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
33
Q

Which of the following terms is MOST related to the chain of custody?

A. Confidentiality
B. Non-repudiation
C. Availability
D. Integrity

A

B. Non-repudiation

Explanation:
Non-repudiation refers to a person’s inability to deny that they took a particular action. Chain of custody helps to enforce non-repudiation because it demonstrates that the evidence has not been tampered with in a way that could enable someone to deny their actions.

Confidentiality, integrity, and availability are the “CIA triad” that describes the main goals of security.

34
Q

The organization has deployed a federated single sign-on (SSO) system that is configured to generate tokens for users and send them to the service provider. Which BEST describes this organization’s role?

A. Domain Registrar
B. Certificate Authority (CA)
C. Identity Provider (IdP)
D. Service Provider (SP)

A

C. Identity Provider (IdP)

Explanation:
The organization would act as the identity provider, while the relying party would act as the service provider. The identity provider is the organization that generates tokens for users because it has the ability to authenticate the users. In this scenario, the organization is authenticating their own employees.
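A toy, stdlib-only Python sketch of an IdP issuing a signed token that the SP verifies. This stands in for real federation protocols such as SAML or OpenID Connect; the shared key and claim values are invented, and real federations typically use PKI rather than a shared secret:

```python
import base64, hashlib, hmac, json, time

IDP_KEY = b"secret shared by IdP and SP"  # illustrative only

def issue_token(user):
    # The IdP authenticates the user, then signs an assertion about them.
    claims = json.dumps({"sub": user, "iss": "idp.example.com", "exp": time.time() + 300})
    body = base64.urlsafe_b64encode(claims.encode()).decode()
    sig = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def sp_accepts(token):
    # The SP trusts the token if the IdP's signature checks out.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

assert sp_accepts(issue_token("asha"))
```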

The SP is the organization that provides the service that the users will use, for example, Salesforce.

The CA is used to verify the X.509 certificates. Encryption should be used within the SSO system, but the question doesn’t mention anything encryption related.

A domain registrar is the business where corporations go to register a domain name, for example, PocketPrep.com.

35
Q

Which regulation would be used to build a risk-based policy for cost-effective security for government agencies?

A. Gramm-Leach-Bliley Act (GLBA)
B. Health Insurance Portability and Accountability Act (HIPAA)
C. Federal Information Security Management Act (FISMA)
D. Protected Health Information (PHI)

A

C. Federal Information Security Management Act (FISMA)

Explanation:
US government agencies must build risk-based policies for cost-effective security. Government agencies are not immune to bad actors attacking them. In the past, the security within government agencies was not very good, so this regulation demands that they do better.

GLBA demands that personal data be protected along with financial data at financial institutions. HIPAA requires that Protected Health Information (PHI) be protected; PHI itself is a category of data, not a regulation.

36
Q

Who should have access to the management plane in a cloud environment?

A. A highly vetted and limited set of administrators
B. Security Operation Center (SOC) personnel
C. A single, highly vetted administrator
D. Software developers deploying virtual machines

A

A. A highly vetted and limited set of administrators

Explanation:
If compromised, the management plane would provide full control of the cloud environment to an attacker. Due to this, only a highly vetted and limited set of administrators should have access to the management plane. However, you will want more than a single administrator. If the single administrator leaves or is no longer able to perform management duties, the ability of the business to manage their cloud environment would be compromised.

Software developers deploying virtual machines may need access, but they would be in the highly vetted group of administrators if that is the case. The same would be true for SOC personnel. They need to be vetted and trusted.

37
Q

Complete the following sentence with the MOST accurate statement: Cloud environments . . .
A. consist of far fewer systems and servers
B. are generally operated out of one physical location
C. are built of components that are completely different from those used in a traditional environment
D. take the level of concern away from the cloud customer and place it onto the cloud provider

A

D. take the level of concern away from the cloud customer and place it onto the cloud provider

Explanation:
While it may seem that a cloud infrastructure is completely different from that of a traditional data center, all the components that exist in a traditional data center are still needed in the cloud. The main difference is that within a cloud environment, the responsibility and level of concern are moved away from the cloud customer to the cloud provider. This does not remove all concerns, but this answer is the best of the four statements. The cloud provider is responsible for the physical data center and its security and, depending on the level of service the customer buys, possibly also for the virtual servers and applications.

One way to look at the CCSP exam is that it is a data center exam whose language has been updated to the newer cloud terminology.

38
Q

Ben is part of an incident response (IR) team that has found that a bad actor compromised a database full of personal information about their customers. The team must now perform a thorough forensic investigation to figure out exactly what was compromised, how, and hopefully by whom.

Which of the following can provide information regarding the runtime state of a running virtual machine?

A. Technical readiness
B. Digital forensics
C. Virtual Machine Introspection (VMI)
D. Hashing and digital signatures

A

C. Virtual Machine Introspection (VMI)

Explanation:
VMI is a tool that allows information regarding the runtime state of a running virtual machine to be monitored. It tracks events, such as interrupts and memory writes, which allows the contents of memory to be collected from a running virtual machine.

Hashing and digital signatures can be used to provide evidence that the digital evidence has not been changed or modified.
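For example, a minimal sketch of hashing acquired evidence with Python’s hashlib; the file name is hypothetical:

```python
import hashlib

def sha256_of(path):
    # Hash the evidence (e.g., a disk image) at acquisition time; re-hashing
    # later and getting the same digest demonstrates it was not altered.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# acquisition_hash = sha256_of("evidence.img")  # record this in the chain of custody
```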

Digital forensics is the process of collecting digital forensic evidence and examining it. Digital forensic science includes the analysis of media, software, and networks.

Technical readiness would be getting ready to perform evidence collection and analysis when needed in the future.

39
Q

Georgi is the data architect for a real estate corporation. He has been designing the data structure that they will use when they move into the cloud using Platform as a Service (PaaS). His team has come to the conclusion that they will use a relational database in the cloud. What is this type of data called?

A. Structured data
B. Semistructured data
C. Unstructured data
D. Unmapped data

A

A. Structured data

Explanation:
Structured data is data that has a known format and content type. One example of structured data is the data that is housed in relational databases. This data is housed in specific fields that have a known structure and potential values of data. Having the data organized in these fields makes it easy to search and analyze.
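For instance, using Python’s built-in sqlite3 module (the schema and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The fixed schema (named fields with known types) is what makes the data structured.
conn.execute("CREATE TABLE listings (id INTEGER PRIMARY KEY, city TEXT, price REAL)")
conn.execute("INSERT INTO listings (city, price) VALUES (?, ?)", ("Springfield", 450000.0))

# Structure makes searching and analysis straightforward:
for city, price in conn.execute("SELECT city, price FROM listings WHERE price < 500000"):
    print(city, price)
```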

Data lakes and big data are considered unstructured data. Unstructured data is unpredictable: it includes many different types of data (e.g., documents, spreadsheets, images, videos, etc.). Since each file is not predictable in size or format, it is considered unstructured.

Semistructured data is a relational database that has a field that allows for unstructured data to be entered and stored in it.

Unmapped data is not really a term, but it could be considered data that is not classified.

40
Q

In cloud computing, the security of Domain Name System (DNS) is very important to prevent a bad actor from hijacking DNS and redirecting network traffic. To prevent misinformation from being passed throughout the DNS environment, DNS Security (DNSSec) protects the recursive resolver information using what?

A. Symmetric encryption
B. Advanced Encryption Standard
C. Digital signatures
D. Hashing algorithms

A

C. Digital signatures

Explanation:
DNSSec is a protocol that works as a security addition to the standard DNS protocol. DNSSEC works by ensuring all Fully Qualified Domain Name (FQDN) responses are validated. The recursive resolver uses a private key to digitally sign information that is sent in DNS updates. The signature allows the receiving DNS server to validate any DNS information that it receives, thus preventing a bad actor from redirecting network traffic.

Creation of digital signatures is done with asymmetric algorithms, not symmetric. Symmetric algorithms do not provide a way to prove authenticity because they function with a shared key. The Advanced Encryption Standard (AES) is a symmetric algorithm.

Hashing algorithms are not used to validate the source of information. They can be used to verify integrity, but a hash alone cannot prove that an FQDN maps to a specific Internet Protocol (IP) address.
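
As an illustration of the sign-and-verify pattern DNSSEC relies on, the following sketch signs a hypothetical DNS record with an RSA private key and verifies it with the matching public key. It uses the third-party cryptography package (not a DNS library), and the record contents are made up; real DNSSEC uses RRSIG and DNSKEY records rather than raw byte strings.

```python
# A minimal sketch of the sign-then-verify pattern DNSSEC relies on,
# using the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

record = b"www.example.com. 300 IN A 203.0.113.10"  # hypothetical A record

# The zone side signs the record with its private key (like an RRSIG).
signature = private_key.sign(record, padding.PKCS1v15(), hashes.SHA256())

# The resolver verifies with the published public key (like a DNSKEY).
try:
    public_key.verify(signature, record, padding.PKCS1v15(), hashes.SHA256())
    print("Record is authentic and unmodified")
except InvalidSignature:
    print("Record failed validation - possible spoofing")
```

If even one byte of the record were altered in transit, the verify call would raise InvalidSignature, which is exactly how a validating resolver detects tampering.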

39
Q

Carin is working at a real estate company as the information security manager. She was recently hired to begin to build a solid information security program. Up until now, the company has only had a few policies and procedures in place as well as desktop firewalls and a network Intrusion Detection System (IDS). She knows there is a lot of work to do to build a secure environment for the users, especially since they handle a lot of sensitive customer personal information. Today she is looking at how a data leak could occur within this business.

If they determine that the data is most likely to be leaked through their website when a bad actor compromises a stored link so that it redirects the user to the bad actor's site, where the user enters their credentials and unknowingly shares them with the bad actor, what phase of the data lifecycle would this be?

A. Store
B. Use
C. Archive
D. Destroy

A

B. Use

Explanation:
Since the user is logging in through the bad actor’s site, this would be the use phase. The user is logging in to view the data. It is not being modified, nor is it being shared with someone else.

The data is stored on the website, or behind the website, but that is not what the user is doing. The user is accessing it now, so that is use.

Archival is when the data is intentionally moved into a long-term storage location. The data is not being moved in this question, only viewed.

Similarly, the data is not being destroyed. The bad actor may destroy it when they log in with the stolen credentials, but that is not the concern at the moment; that lies in the future.

40
Q

Researchers and scientists who want to work together to share research data can do so through the cloud. When scientists at a research university want to access this data, they need to be authenticated. This can be done locally by their own university, as it acts as the Identity Provider (IdP).

Which of the following authentication mechanisms are they using?

A. Federated Identity Management (FIM)
B. eXtensible Markup Language (XML)
C. Multifactor authentication
D. Kerberos

A

A. Federated Identity Management (FIM)

Explanation:
Federated Identity Management (FIM) enables exactly the scenario described in the question: the university that knows the scientists performs the authentication by acting as the identity provider. Each university can authenticate the people it knows, such as its own students and employees, and in doing so becomes an identity provider in the FIM setup.

Multifactor authentication is always worth considering, but the question asks about an environment in which each university acts as the IdP, and that is a FIM environment.

Kerberos is a single sign-on protocol that has been in wide use since it was developed in the late 1980s. In some ways, it is similar to technologies that enable FIM, such as OAuth and SAML.

XML is used by SAML and other technologies, but it is not itself an IdP or FIM technology. It may be used within a FIM setup, depending on the implementation.

41
Q

Which of the following tools might be used to generate an SBOM?

A. SCA
B. IAST
C. DAST
D. SAST

A

A. SCA

Explanation:
Some common tools for application security testing include:

Static Application Security Testing (SAST): SAST tools inspect the source code of an application for vulnerable code patterns. SAST can be performed early in the software development lifecycle but can't catch some vulnerabilities, such as those visible only at runtime.
Dynamic Application Security Testing (DAST): DAST bombards a running application with anomalous inputs or attempted exploits for known vulnerabilities. It has no knowledge of the application’s internals, so it can miss vulnerabilities. However, it is capable of detecting runtime vulnerabilities and configuration errors (unlike SAST).
Interactive Application Security Testing (IAST): IAST places an agent inside an application and monitors its internal state while it is running. This enables it to identify unknown vulnerabilities based on their effects on the application.
Software Composition Analysis (SCA): SCA is used to identify the third-party dependencies included in an application and may generate a software bill of materials (SBOM), as sketched below. This enables the developer to identify vulnerabilities that exist in this third-party code.
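
As a toy illustration of the SCA idea (not a real SCA tool), the sketch below enumerates the packages installed in the current Python environment and emits a minimal SBOM-like inventory. Real tools such as Syft or OWASP Dependency-Check also resolve transitive dependencies and match components against vulnerability databases.

```python
# A toy SBOM-like inventory of the current Python environment.
import json
from importlib.metadata import distributions

sbom = {
    "components": sorted(
        (
            {"name": dist.metadata["Name"], "version": dist.version}
            for dist in distributions()
        ),
        key=lambda component: (component["name"] or "").lower(),
    )
}
print(json.dumps(sbom, indent=2))
```
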
42
Q

Bao is able to connect to his home’s thermostat using the internet on his phone and adjust the temperature remotely. This is an example of which type of technology?

A. Blockchain
B. Internet of Things (IoT)
C. Machine learning (ML)
D. Artificial Intelligence (AI)

A

B. Internet of Things (IoT)

Explanation:
The Internet of Things (IoT) refers to non-traditional computing devices (such as lamps, thermostats, and other home appliances) that access the internet. Although some do consider laptops, smartphones, and computers to be part of the IoT, for the exam these items are unlikely to be considered part of the IoT.

Machine Learning (ML) is the ability of computers to process data and derive answers. The answers can confirm (or refute) a hypothesis, or ML can reach a conclusion without any preconceived hypothesis. ML is a component of AI.

AI is having computers process data the way a human brain would. Arguably, we have not reached true AI yet; some say we are at narrow AI. It is still an evolving technology.

Blockchain creates a ledger of transactions that are permanent and verifiable. A common use today is cryptocurrency.

43
Q

Which of the following main goals of IRM is MOST concerned with HOW a user accesses a resource?

A. Access Models
B. Enforcement
C. Data Rights
D. Provisioning

A

A. Access Models

Explanation:
Information rights management (IRM) involves controlling access to data, including implementing access controls and managing what users can do with the data. The three main objectives of IRM are:

Data Rights: Data rights define what users are permitted to do with data (read, write, execute, forward, etc.). It also deals with how those rights are defined, applied, changed, and revoked.
Provisioning: Provisioning is when users are onboarded to a system and rights are assigned to them. Often, this uses roles and groups to improve the consistency and scalability of rights management, as rights can be defined granularly for a particular role or group and then applied to everyone that fits in that group.
Access Models: Access models take the means by which data is accessed into account when defining rights. For example, data presented via a web application has different potential rights (read, copy-paste, etc.) than data provided in files (read, write, execute, delete, etc.).

Enforcement is not a main objective of IRM.

44
Q

Odette is working with the cloud architects to plan a variety of information security elements for a new cloud deployment. They are using Infrastructure as a Service (IaaS) to extend their current data center capabilities. They will be migrating almost all data storage and processing power to the cloud over the next 18 months. They need first to ensure that they have a secure connection between their data center and the cloud IaaS.

What technology would be a good idea to use here?

A. Transport Layer Security
B. Encapsulating Security Payload
C. Secure Shell
D. Internet Protocol Security

A

D. Internet Protocol Security

Explanation:
Internet Protocol Security (IPSec) can be used to encrypt and authenticate packets during transmission between two systems. Examples of this include between two servers, between two network devices, and between network devices and servers. It works very nicely from the edge router in the data center across the internet service provider to the edge router in their IaaS deployment.

Encapsulating Security Payload (ESP) is the option within IPSec that encrypts the payloads of the IP packets. This is definitely a feature to turn on within IPSec but not the only feature needed. Authentication Header (AH) and tunnel mode would be two more great options to use, which makes IPSec the more complete answer.

Transport Layer Security (TLS) is good for ensuring the confidentiality of web-based sessions as well as authentication of the edge appliances, and it can be used in other scenarios. However, it is not the best protocol for router-to-router connections: TLS operates at layer 4, whereas IPSec operates at layer 3.

Secure Shell (SSH) is a layer 5 protocol that is good for securing the connections from the cloud administrators to their routers, switches, and servers for configuration purposes.

45
Q

What role works with the Information Technology (IT) department to ensure that cloud systems comply with contractual and regulatory obligations?

A. Fourth party auditor
B. Cloud auditor
C. Internal auditor
D. External auditor

A

C. Internal auditor

Explanation:
Internal auditors serve as an organization's trusted counsel. Internal auditors collaborate with Information Technology (IT) to provide a proactive strategy that balances advisory and assurance services. Internal auditors may be affiliated with the organization, or they may be an independent entity that is unaffiliated with it. In most cases, an internal auditor will conduct audits in the conventional sense, ensuring that cloud systems comply with contractual and regulatory obligations. The key to the question is "works with": an external auditor is hired to perform an audit, but they would not work with IT.

Cloud auditors audit the cloud provider. They are usually referred to as the third party. The first party is the cloud customer, the second party is the Cloud Service Provider (CSP), the third party is the external auditor that audits the provider (a cloud auditor), and the fourth party is the contractors the third party hires.

46
Q

Sigrid works for a medium-sized business as their information security manager. The corporation handles a lot of personal details about their customers, and they know they must protect that data according to the laws. Sigrid and her team have made the decision to use International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27001.

What are they working on designing at this point?

A. Information Security Management System
B. eDiscovery management plan
C. Data handling procedures
D. Audit plan

A

A. Information Security Management System

Explanation:
ISO/IEC 27001 provides guidelines for creating and managing an Information Security Management System (ISMS).

The Statement on Standards for Attestation Engagements (SSAE) can be used to design an audit plan.

Data handling procedures are among the things that can be planned when building an ISMS.

An eDiscovery management plan is another item that can be designed using ISO/IEC 27001, and ISO/IEC 27050 is specific to eDiscovery.

47
Q

The move to utilize cloud resources partnered with an increasingly regulated and dispersed supply chain elevates the priority of stakeholder coordination. Which of the following stakeholder groups is the LEAST likely to have contracts or formal agreements with a cloud provider?

A. Customers
B. Vendors
C. Regulators
D. Partners

A

C. Regulators

Explanation:
CSPs are likely to have contracts or some form of agreement with vendors, partners, and customers, but rarely (if ever) with a regulator. Cloud providers purchase servers, routers, firewalls, switches, etc., so they will have contracts with vendors.

They would have contracts with the auditors that come in and assess their environments, possibly for SOC 2 or ISO 27001 audits. There would be a contract between the CSP and the audit company. Auditors are considered partners according to ISO 17788.

The CSP would definitely have contracts with their customers. This is probably the first contract people think of when talking about clouds.

The organization/tenant/customer is responsible for ensuring their cloud environment is in compliance with all regulatory obligations applicable to their organization. However, this is not done through a contract with the regulators.

48
Q

Freeya has been assisting cloud data architects with planning how they will securely store data in their Platform as a Service implementation. They know that leaving a key with the encrypted data is not advised. If someone has the key, they can read the data. They are exploring options in the cloud to protect those keys without costing too much money.

What is the most efficient and cost-effective way of storing a key for data that is not exceedingly sensitive?

A. Utilize client-side encryption and decryption with the key stored in the virtual machine
B. Utilize a cloud Key Management Service (KMS) to encrypt the data encryption key
C. Utilize server-side encryption and decryption with the key stored in the virtual machine
D. Utilize a cloud Hardware Security Module (HSM) to encrypt and decrypt the data

A

B. Utilize a cloud Key Management Service (KMS) to encrypt the data encryption key

Explanation:
It is essential that we understand the options for how and where to encrypt data in the cloud. There are many solutions that vary per cloud service provider.

Using a KMS is a great solution today for most companies. The cost of a KMS is fairly low, if not free. With a KMS, the customer generates a Data Encryption Key (DEK) that is used to encrypt the actual data. The DEK is then encrypted with a Customer Master Key (CMK), and the CMK is stored in the KMS. The plaintext data is only available on the server or the customer side.

An HSM is a much more expensive option. The encryption and decryption of the data actually occurs within the HSM.

Client-side encryption and decryption is not so bad, but the key should never be stored in the VM. When the image is decrypted to be spun up, the key will be accessible to anyone who can see that image.

The same can be said about server-side encryption and decryption. It is not entirely wrong, but the key should never be stored within the image.
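
The following sketch shows the envelope-encryption pattern the explanation describes: a data encryption key protects the data, and a master key protects the DEK. It assumes the third-party cryptography package, and the local Fernet key standing in for the CMK is purely illustrative; in a real cloud KMS, the CMK never leaves the service.

```python
# A minimal sketch of envelope encryption (pip install cryptography).
from cryptography.fernet import Fernet

# KMS side: the customer master key (CMK) would stay inside the KMS.
cmk = Fernet(Fernet.generate_key())

# Customer side: generate a data encryption key (DEK), encrypt the data.
dek_bytes = Fernet.generate_key()
ciphertext = Fernet(dek_bytes).encrypt(b"not exceedingly sensitive data")

# Wrap the DEK with the CMK; store only the wrapped DEK next to the data.
wrapped_dek = cmk.encrypt(dek_bytes)
del dek_bytes  # the plaintext DEK is discarded after use

# To decrypt: have the KMS unwrap the DEK, then decrypt locally.
plaintext = Fernet(cmk.decrypt(wrapped_dek)).decrypt(ciphertext)
print(plaintext)
```

The point of the pattern is that the key stored alongside the data is itself encrypted, so possession of the ciphertext and the wrapped DEK alone is not enough to read the data.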

49
Q

A malicious actor created a free trial account for a cloud service using a fake identity. Once the free trial cloud environment was up and running, they used it as a launch pad for several cloud-based attacks. Because they used a fake identity to set up the free trial, it would be difficult (if not impossible) for the attacks to be traced back to them.

What type of cloud-based threat is being described here?

A. Shared technology issues
B. Advanced persistent threats
C. Abuse or nefarious use of cloud services
D. Denial-of-service

A

C. Abuse or nefarious use of cloud services

Explanation:
Abuse or nefarious use of cloud services is listed as one of the top twelve threats to cloud environments by the Cloud Security Alliance. Abuse or nefarious use of cloud services occurs when an attacker is able to launch attacks from a cloud environment either by gaining access to a poorly secured cloud or using a free trial of cloud service. Often, when using a free trial, the attacker will configure everything using a fake identity so attacks can’t be traced back to them.

A Denial-of-Service (DoS) attack is when the bad actor causes a system to max out or fill up so that a user is not able to do any work.

Shared technology is core to the nature of clouds, especially public clouds. If the cloud provider does not ensure that each tenant is properly isolated, or does not maintain the operating systems, many problems can result. If the hypervisors, Microsoft servers, Linux servers, or any of the other software is not patched or configured properly, data could leak between tenants or other issues could arise.

Advanced Persistent Threats (APTs) involve highly skilled and aggressive bad actors, often operating on behalf of a government, who create software that slowly causes problems for another country or business. The word "advanced" speaks to the skill of the bad actors; the word "persistent" speaks to the malicious software remaining in place over a long period of time to cause a great number of problems. If you are unfamiliar with APTs, do a little research into Stuxnet.

50
Q

Which of the following cloud roles and responsibilities involves maintaining cloud infrastructure AND meeting SLAs?

A. Regulatory
B. Cloud Service Broker
C. Cloud Service Provider
D. Cloud Service Partner

A

C. Cloud Service Provider

Explanation:
Some of the important roles and responsibilities in cloud computing include:

Cloud Service Provider: The cloud service provider offers cloud services to a third party. They are responsible for operating their infrastructure and meeting service level agreements (SLAs).
Cloud Customer: The cloud customer uses cloud services. They are responsible for the portion of the cloud infrastructure stack under their control.
Cloud Service Partners: Cloud service partners are distinct from the cloud service provider but offer a related service. For example, a cloud service partner may offer add-on security services to secure an organization’s cloud infrastructure.
Cloud Service Brokers: A cloud service broker may combine services from several different cloud providers and customize them into packages that meet a customer’s needs and integrate with their environment.
Regulators: Regulators ensure that organizations — and their cloud infrastructures — are compliant with applicable laws and regulations. The global nature of the cloud can make regulatory and jurisdictional issues more complex.
51
Q

Ben is part of an incident response (IR) team that has found that a bad actor has compromised a database full of personal information about their customers. They must now conduct a thorough forensic investigation to determine exactly what was compromised, how, and, ideally, by whom.

Which of the following can provide information regarding the runtime state of a running virtual machine?

A. Hashing and digital signatures
B. Virtual Machine Introspection (VMI)
C. Digital forensics
D. Technical readiness

A

B. Virtual Machine Introspection (VMI)

Explanation:
VMI is a technique for monitoring the runtime state of a running virtual machine from outside the VM. It tracks events such as interrupts and memory writes, which allows the memory contents of a running virtual machine to be collected.

Hashing and digital signatures can be used to demonstrate that digital evidence has not been changed or modified.

Digital forensics is the process of collecting digital forensic evidence and examining it. Digital forensic science includes the analysis of media, software, and networks.

Technical readiness means preparing to perform evidence collection and analysis when needed in the future.

52
Q

Upon review, an information security professional noticed that one of their cloud applications included a SELECT statement. The engineer has asked the developers of the application to modify the code so that user-supplied input must be validated and attackers are unable to send malicious Structured Query Language (SQL) statements through the application.

What type of attack is this engineer trying to prevent?

A. Cross-Site Scripting (XSS)
B. Cross-Site Request Forgery (CSRF)
C. Injection
D. eXtensible Markup Language (XML) External Entity

A

C. Injection

Explanation:
A SQL injection attack occurs when an attacker sends malicious SQL statements to the application via data input fields. To prevent these types of attacks, developers can use techniques such as permit list input validation, using prepared statements, and escaping all user-supplied input.

Cross-Site Scripting (XSS) is actually a type of injection attack as well. XSS is a type of security vulnerability that occurs when a web application does not properly sanitize user-supplied input and allows malicious code to be injected into web pages viewed by other users. It is a common attack vector that can have severe consequences if exploited.

Cross-Site Request Forgery (CSRF) is a type of security vulnerability that occurs when a malicious actor tricks a victim into unknowingly executing unintended actions on a web application. It takes advantage of the trust a website has in a user’s browser and can result in unauthorized actions being performed on the victim’s behalf.

eXtensible Markup Language (XML) External Entity or XML External Entity (XEE) refers to a security vulnerability that exists in applications processing XML inputs. It occurs when an application allows the inclusion of external entities within an XML document, which can lead to the disclosure of sensitive information, denial of service attacks, or even remote code execution.
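
The following minimal sketch contrasts the vulnerable pattern with the prepared-statement defense the explanation recommends, using Python's built-in sqlite3 module; the table and the malicious input are hypothetical.

```python
# Prepared statements vs. string concatenation, using built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: concatenation would let the input rewrite the query.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe: the ? placeholder binds the input as data, never as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the injected text matches no user name
```
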

53
Q

Deco is the information security professional for an organization that specializes in market research for an athletic supply company. A great deal of information needs to be processed to determine which products are of most interest in which gyms around the world. The corporation has developed a data lake that is stored in the cloud in a Platform as a Service, server-based system. It is necessary to ensure that the data is protected from changes, deletions, or leaks. Deco is creating the processes needed to protect that data and is working to determine who will be responsible for it, from deciding its classification to choosing which security controls need to be in place around that data.

This individual would be known as which of the following?

A. Data processor
B. Data controller
C. Data custodian
D. Data owner

A

D. Data owner

Explanation:
A data owner is the party that maintains full responsibility and ownership of data. Data owners determine the appropriate controls that are necessary to protect that data, including its classification.

The data controller determines if and how personal data can be collected and how long it can be stored, basically at a policy level.

The data custodian is simply whoever is in possession of the data, which includes IT, end users, senior staff, etc.

The data processor handles data (processes and stores it) within its systems. The data processor is not an employee of the data controller. Think of a payroll company that handles payroll for a small business: the separate payroll company is the data processor. Processing includes holding or storing data, so cloud providers are data processors if personal data is stored there. This is per the GDPR in Europe.

54
Q

For which of the following is data discovery the EASIEST?

A. Structured data
B. Mostly structured data
C. Unstructured data
D. Semi-structured data

A

A. Structured data

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.

Mostly structured is not a common classification for data.
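
The following toy sketch illustrates why discovery is easiest with structured data: with labeled columns the tool can key off the field name, while in free text it must fall back to pattern matching. The sample data and regex are illustrative only.

```python
# Toy contrast: data discovery in structured vs. unstructured data.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

# Structured: the column label tells the tool where the sensitive data is.
rows = [{"name": "Ana", "email": "ana@example.com"}]
structured_hits = [row["email"] for row in rows]

# Unstructured: the tool must scan all the text for patterns.
memo = "Please forward the contract to ana@example.com by Friday."
unstructured_hits = EMAIL.findall(memo)

print(structured_hits, unstructured_hits)
```
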

55
Q

DAST is classified as which of the following types of testing?

A. Black-box
B. Gray-box
C. White-box
D. Clear-box

A

A. Black-box

Explanation:
Software testing can be classified as one of a few different types, including:

White-box: In white-box or clear-box testing, the tester has full access to the software and its source code and documentation. Static application security testing (SAST) is an example of this technique.
Gray-box: The tester has partial knowledge of and access to the software. For example, they may have access to user documentation and high-level architectural information.
Black-box: In this test, the attacker has no specialized knowledge or access. Dynamic application security testing (DAST) is an example of this form of testing.
56
Q

Your organization is considering using a Data Rights Management (DRM) solution that incorporates dynamic policy controls. Which of the following is the MOST accurate description of this functionality?

A. Expiration dates and time-limitations can be applied
B. Data is secure no matter where it is stored
C. The illicit or unauthorized copying of data is prohibited
D. Permissions can be modified after a document has been shared

A

D. Permissions can be modified after a document has been shared

Explanation:
Dynamic policy controls allow data owners to modify the permissions for their protected data even after it has been shared with others. The key word is "dynamic": things can be changed after the fact. The other options describe real DRM features, but as stated they are static.

All other options are descriptions of functionalities provided by other features of data rights management solutions.

One of the core features of DRM, which is also known as Information Rights Management (IRM), is that copying of data can be controlled, and the control is persistent no matter where the data is.

It is possible to control expiration dates and time limits. For example, Amazon video allows the rental of a video. You can hold onto the rental for a couple of months, but once you push play you have 24 or 48 hours to complete the video before it is removed from view.

57
Q

A hospital wants to create a cloud network, a community cloud, that will allow researchers to share data. This hospital is working on curing certain types of cancer, and it is necessary for the researchers to have access to the medical data of patients who have those types of cancer and have had different tests and treatments.

What must be done with the patients’ data for this to be acceptable?

A. Tokenization
B. Obfuscation
C. Anonymization
D. Masking

A

C. Anonymization

Explanation:
Anonymization is a data privacy technique used to protect the privacy of individuals by removing or altering Personally Identifiable Information (PII) from datasets. The goal of anonymization is to transform data in such a way that it becomes practically impossible to identify specific individuals from the anonymized data.

Masking is a data protection technique used to obfuscate or hide sensitive data or PII within a dataset. It involves replacing certain characters or values with non-sensitive or fictional data, often asterisks, while preserving the overall structure and format of the original data.

Obfuscation is a technique used to deliberately obscure or make code, data, or information difficult to understand or analyze. It is commonly employed in software development and cybersecurity to protect intellectual property, prevent reverse engineering, or hinder malicious activities.

Tokenization is a data protection technique that involves replacing sensitive data elements, such as credit card numbers or PII, with unique identifiers called tokens. The original data is securely stored in a separate location called a token vault or tokenization system.
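
The sketch below illustrates the difference between masking and tokenization using a made-up card number; a real tokenization system would use a hardened token vault rather than an in-memory dictionary.

```python
# Toy illustrations of masking and tokenization.
import secrets

def mask(card_number: str) -> str:
    """Replace all but the last four digits with asterisks."""
    return "*" * (len(card_number) - 4) + card_number[-4:]

token_vault: dict[str, str] = {}  # stands in for a secured token vault

def tokenize(card_number: str) -> str:
    """Swap the real value for a random token; keep the mapping in the vault."""
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = card_number
    return token

print(mask("4111111111111111"))      # ************1111
print(tokenize("4111111111111111"))  # e.g. tok_9f8a3c... (random each run)
```

Note that masking is one-way, while tokenization is reversible for anyone with access to the vault, which is why the vault must be protected as strongly as the original data.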

58
Q

An organization is in the process of fighting a civil legal battle with a previous employee. The organization has requested that one of their engineers search for and collect electronic data (such as emails and stored files) regarding the case so that it can be used in court proceedings.

What task has this engineer been asked to complete?

A. Digital examination
B. Digital discovery
C. eDiscovery
D. eForensics

A

C. eDiscovery

Explanation:
eDiscovery is the process of searching for and collecting electronic data of any kind (emails, digital images, documents, etc.) so that the data can be used in either civil legal proceedings or criminal legal proceedings. ISO/IEC 27050 is a guide for eDiscovery.

The other options are not terms in common use today.

59
Q

Which of the following is an example of a logical or software-level consideration when designing a data center?

A. Location
B. HVAC
C. Multivendor Pathway Connectivity
D. Tenant Partitioning

A

D. Tenant Partitioning

Explanation:
Tenant partitioning is an example of a logical consideration for data center design. HVAC and multivendor pathway connectivity are environmental considerations, and location is a physical consideration.

60
Q

Abeeku has responded to an Indicator of Compromise (IoC) that the Security Information and Event Management (SIEM) system has reported. It is discovered that a bad actor has gained access to a critical server containing data that must remain confidential per the country's law. While responding, the team led by Abeeku has found information within the server and networking devices regarding how the bad actor gained access and what they have been able to see.

What does the team need to do?

A. Perform incident management to include tracking the bad actor
B. Perform evidence management, including maintaining chain of custody
C. Perform capacity management to make sure the server remains available
D. Perform continuity management to ensure server availability

A

B. Perform evidence management, including maintaining chain of custody

Explanation:
Evidence management is concerned with maintaining the chain of custody in a forensics investigation.

Capacity management is focused on maintaining the required resources needed to meet SLAs.

Incident management is focused on limiting the impact of incidents on an organization.

Continuity management is concerned with developing a business continuity and disaster recovery plan.

61
Q

Shawna is working with her team to secure the Application Programming Interface (API) that will be used in a cloud deployment. They will be using a RESTful (REpresentational State Transfer) API. To protect the request/response transmissions, she should ensure which of the following?

A. Regular security testing is done
B. Both the requests and responses are encrypted
C. The requests are always authenticated
D. The API keys are not hard coded

A

B. Both the requests and responses are encrypted

Explanation:
Protecting the transmissions by encrypting them is the best answer. Encrypting data in transmission protects the data from prying eyes. This is true whether it is an API that is transmitting information or a standard client-server application.

Authenticating the requests is always a good thing to do and is typically done with API keys. They should not be hard coded into the API. The question, though, is about protecting the transmissions.

Regular testing is also a good thing to do with any software that includes APIs. Again, the question is asking for the transmission to be protected, and encrypting it is the wise thing to do.
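
A minimal sketch of the practices discussed here: sending the request over HTTPS so that TLS encrypts both the request and the response, and reading the API key from the environment instead of hard coding it. The URL, header, and environment variable are hypothetical, and the third-party requests package is assumed.

```python
# Calling a REST API over TLS with a key loaded from the environment.
import os
import requests

api_key = os.environ["EXAMPLE_API_KEY"]  # never hard code the key

response = requests.get(
    "https://api.example.com/v1/orders",  # https:// means TLS encryption
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```
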

62
Q

Sky has been working with the sales department managing a database and its security. They are currently looking at the data that they have collected about sales trends, and they are working to determine which of their products needs to be moved from the warehouse to the stores and which stores they need to move them to.

Which state is the data in at the moment?

A. Data in transition
B. Data in use
C. Data at rest
D. Data in motion

A

B. Data in use

Explanation:
The three data states are data in use, data in motion (also called data in transit), and data at rest. Data in use refers to data being actively used, and in this question that is what is happening as the team examines the data.

Data in motion, or data in transit, refers to data in active transmission across the network. The data will have moved from the database to their screens, but the focus is salespeople looking at and analyzing the data. That is data in use.

Data at rest refers to data being stored in an idle state.

Data in transition is not one of these three data states.

63
Q

Through Common Criteria, what does an EAL4 score tell us about an evaluated product's security design and testing?

A. It has been methodically designed, tested, and reviewed
B. It has been semi-formally designed and tested
C. It has been structurally tested
D. It has been functionally tested

A

A. It has been methodically designed, tested, and reviewed

Explanation:
The possible Evaluation Assurance Level (EAL) scores are as follows:

EAL1 - Functionally tested
EAL2 - Structurally tested
EAL3 - Methodically tested and checked
EAL4 - Methodically designed, tested, and reviewed
EAL5 - Semi-formally designed and tested
EAL6 - Semi-formally verified design and tested
EAL7 - Formally verified design and tested

This is a simple question, but it is here because you might need this information for the test.

64
Q

What is the name for a dummy system designed to attract an attacker’s attention and provide early warning of an attack?

A. Sandbox
B. Container
C. Bastion Host
D. Honeypot

A

D. Honeypot

Explanation:
A honeypot is a dummy system designed to attract an attacker’s attention and waste their time while allowing defenders to detect and observe the attack.

Bastion hosts are systems designed to provide access to a private network from a less secure one.

Sandboxing involves running software in an isolated environment where it can’t cause damage to production systems.

Containers wrap an application and its dependencies in a package to improve portability.
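
As a toy illustration of the early-warning idea (not a production honeypot), the sketch below listens on an otherwise unused port and logs every connection attempt; any connection is suspicious because no legitimate service runs there. The port number is arbitrary.

```python
# A toy honeypot: log every connection attempt to an unused port.
import socket
from datetime import datetime, timezone

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
    listener.bind(("0.0.0.0", 2222))  # a port nothing legitimate uses
    listener.listen()
    print("Honeypot listening on port 2222...")
    while True:
        conn, (addr, port) = listener.accept()
        with conn:
            # Any connection here is suspicious: nothing should talk to us.
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} probe from {addr}:{port}")
```
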

65
Q

In traditional data centers, physical separation and segregation are used to secure data. However, these concepts are not possible in, nor applicable to, cloud environments. With concepts like multitenancy and resource pooling at the forefront of cloud technologies, which of the following is used to keep data private?

A. Security Assertion Markup Language (SAML)
B. Antimalware
C. Encryption
D. Object storage

A

C. Encryption

Explanation:
To keep data private anywhere today, encryption is required. There are different types of encryption for data at rest, data in use, and data in transit. For confidentiality, we use symmetric encryption today.

Antimalware will not keep data private. It is designed to find any type of malicious software it can. The malicious software could endanger the privacy of data, but we need encryption to actually protect the data.

Object storage might help protect the confidentiality of data, but relying on that alone to protect confidentiality is a mistake. We need to render the data unreadable, which is the point of encryption.

SAML is an XML-based markup standard that creates a token for a user to submit as proof of their identity.

66
Q

The ability to deploy VMs and use block data storage in the cloud is a feature of which cloud service model?

A. FaaS
B. IaaS
C. SaaS
D. PaaS

A

B. IaaS

Explanation:
Cloud services are typically provided under three main service models:

Software as a Service (SaaS): Under the SaaS model, the cloud provider offers the customer access to a complete application developed by the cloud provider. Webmail services like Google Workspace and Microsoft 365 are examples of SaaS offerings.
Platform as a Service (PaaS): In a PaaS model, the cloud provider offers the customer a managed environment where they can build and deploy applications. The cloud provider manages compute, data storage, and other services for the application.
Infrastructure as a Service (IaaS): In IaaS, the cloud provider offers an environment where the customer has access to various infrastructure building blocks. AWS, which allows customers to deploy virtual machines (VMs) or use block data storage in the cloud, is an example of an IaaS platform.

Function as a Service (FaaS) is a form of PaaS in which the customer creates individual functions that can run in the cloud. Examples include AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions.

67
Q

An engineer just purchased an application suite for her organization. The application is hosted by a cloud provider and that cloud provider maintains and manages the application itself as well the entire infrastructure and platform. The application is accessed over the internet and is not installed locally on any employee’s machine.

What type of cloud service is being described here?

A. Infrastructure as a Service (IaaS)
B. Communication as a Service (CaaS)
C. Software as a Service (SaaS)
D. Platform as a Service (PaaS)

A

C. Software as a Service (SaaS)

Explanation:
SaaS is a cloud service in which the cloud provider manages and maintains everything from the application/software itself to the servers they run on and the platform they were built on. The cloud client is not responsible for anything to do with managing the program; they can simply access it over the internet.

IaaS allows a customer to bring their servers, databases, routers, switches, firewalls, Intrusion Detection Systems (IDS), applications, and so on to the cloud provider. Effectively, it allows the customer to build a virtual Data Center (DC). The question says that the cloud provider manages all of this, so it cannot be IaaS.

PaaS allows a customer to lease a server-based or serverless environment. The customer brings or builds their software with PaaS; therefore, it does not match the question.

Communication as a Service is mentioned in the ISO/IEC 17788 document, although most do not consider it one of the main options. However, this would involve some kind of communication software. The question does not mention anything specific about the application, so Software as a Service is a better option as an answer.

68
Q

Which of the following standards seeks to provide internationally accepted guidelines for eDiscovery processes and best practices?

A. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53
B. Federal Information Security Management Act (FISMA)
C. Payment Card Industry Data Security Standard (PCI DSS)
D. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27050

A

D. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27050

Explanation:
The ISO/IEC 27050 standard provides guidelines for eDiscovery processes and best practices. ISO/IEC 27050 covers all steps of eDiscovery processes, including identification, preservation, collection, processing, review, analysis, and the final production of the requested data archive.

FISMA is a U.S. act that requires U.S. government agencies to implement processes and controls to protect the confidentiality, integrity, and availability of information systems.

NIST SP 800-53 is a catalog of security and privacy controls for information security systems and assets.

PCI DSS mandates 12 controls that must be implemented if a company is handling or processing credit card or bank card numbers and associated data.

69
Q

Your cloud environment has changed significantly during the last year, and several of these adjustments resulted in service interruptions. You want to develop a mechanism to track these modifications with enough information to roll back to the former setup if necessary. What tool can be used to assist you?

A. Release management
B. Capacity management
C. Configuration management
D. Change management

A

C. Configuration management

Explanation:
Configuration management is the tool required here. Configuration management technologies aid cloud deployment management by centrally storing and archiving cloud configurations. They enable the tracking of configuration changes and the identification of the individuals who made the changes, which helps you guarantee that your cloud conforms with applicable regulations. In ITIL, this is defined as the parameter settings for configuration items.

Change management is defined by ITIL as additions, modifications, and removal of anything that can affect services. If configuration management were not an answer option, this could have been the correct one. The question points specifically to tracking configuration changes, so configuration management is the better answer.

Release management is used for adding new services and features or changing them.

Capacity management is ensuring that there is a plan to meet the demand for resources or services when needed.

70
Q

Carrie is working as the information security team lead with the data architects. They are working to ensure that once data is entered into the database it will retain that exact value and will not be changed or corrupted. What mechanism could Carrie use to verify the integrity of the data over time?

A. Message Digest (MD) 5
B. Key Management Solution (KMS)
C. Public Key Cryptography Standard (PKCS)
D. Advanced Encryption Standard (AES)

A

A. Message Digest (MD) 5

Explanation:
Hashing is a process that can be used to verify the integrity of data, and MD5 is a hashing algorithm. Running the same hashing algorithm on the same data time and time again generates the same hash value. If the data is changed, the hash value will be different, revealing that the integrity of the data is no longer intact.

AES is a symmetric encryption algorithm. While encryption can protect the data in the database, it does not prove the integrity of the data.

KMS is a tool that is used to store and protect cryptographic keys.

PKCS is a family of standards for public-key cryptography, including key management. Managing crypto keys is not related to proving integrity; that kind of cryptography is usually used to protect confidentiality, and sometimes authenticity.
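
A minimal sketch of hash-based integrity verification with Python's built-in hashlib module. MD5 is shown to match the card's answer, though it is worth noting that MD5 is no longer collision resistant and SHA-256 is preferred in new designs.

```python
# Verifying data integrity by comparing hash values over time.
import hashlib

record = b"quantity=42;price=19.99"
baseline = hashlib.md5(record).hexdigest()  # stored when the row is written

# Later: re-hash the stored data and compare against the baseline.
current = hashlib.md5(record).hexdigest()
print("intact" if current == baseline else "data has been altered")

tampered = b"quantity=4200;price=19.99"
check = hashlib.md5(tampered).hexdigest()
print("intact" if check == baseline else "data has been altered")
```
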

71
Q

The potential for malicious functionality to be included in dependencies is a concern MOST related to which of the following?

A. Open Source Software
B. Third Party Software
C. API Security
D. Supply Chain Security

A

A. Open Source Software

Explanation:
Some important considerations for secure software development in the cloud include:

API Security: In the cloud, the use of microservices and APIs is common. API security best practices include identifying all APIs, performing regular vulnerability scanning, and implementing access controls to manage access to the APIs.
Supply Chain Security: An attacker may be able to access an organization’s systems via access provided to a partner or vendor, or a failure of a provider’s systems may place an organization’s security at risk. Companies should assess their vendors’ security and ability to provide services via SOC2 and ISO 27001 certifications.
Third-Party Software: Third-party software may contain vulnerabilities or malicious functionality introduced by an attacker. Also, the use of third-party software is often managed via licensing, with whose terms an organization must comply. Visibility into the use of third-party software is essential for security and legal compliance.
Open Source Software: Most software uses third-party and open-source libraries and components, which can include malicious functionality or vulnerabilities. Developers should use software composition analysis (SCA) tools to build a software bill of materials (SBOM) to identify any potential vulnerabilities in components used by their applications.
72
Q

Which of the following is a standard that defines the requirements for cryptographic modules?

A. FIPS 140-2
B. G-Cloud
C. FedRAMP
D. Common Criteria

A

A. FIPS 140-2

Explanation:
Cloud providers’ systems may be subject to certification against standards that address a specific component, such as a cryptographic module. Examples of these system/subsystem product certifications include:

Common Criteria: Common Criteria (CC) are guidelines for comparing various security systems. A protection profile describes the security requirements of systems being compared, and the evaluation assurance level (EAL) describes the level of testing performed on the system, ranging from 1 (lowest) to 7 (highest).
FIPS 140-2: Federal Information Processing Standard (FIPS) 140-2 is a US government standard for cryptographic modules. FIPS compliance is necessary for organizations that want to work with the US government and mandates the use of secure cryptographic algorithms like AES.

FedRAMP and G-Cloud are standards used by the US and UK governments, respectively.

73
Q

Ha-yoon is the data architect for her corporation. She has been designing the way that they are going to move the Structured Query Language (SQL) databases into the Platform as a Service (PaaS) deployment. What type of data is this?

A. Unstructured data
B. Semistructured data
C. Structured data
D. Content-based discovery

A

C. Structured data

Explanation:
Structured data describes data that is in a known format and content type. The most common example of structured data is found within relational databases (e.g., SQL).

Unstructured data is not in a predictable format. Big data and data lakes are examples.

Semi-structured data is unstructured data placed in a structured database.

Content-based discovery is a type of data discovery.

74
Q

When testing software, it is essential to ensure that it is not vulnerable to attacks that are both known and unknown. Known hardware and software weakness types are which of the following?
A. Are identified by Common Weakness Enumeration (CWE) scores based on the Common Weakness Scoring System (CWSS), which is a community developed project
B. Are identified by Common Vulnerabilities Enumeration (CVE) scores based on the Common Weakness Scoring System (CWSS), which is a community developed project
C. Are identified by Common Weakness Enumeration (CWE) scores based on the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF), which is a community developed project
D. Are identified by National Vulnerability Database (NVD) scores based on the Common Weakness Scoring System (CWSS), which is a community developed project

A

A. Are identified by Common Weakness Enumeration (CWE) scores based on the Common Weakness Scoring System (CWSS), which is a community developed project

Explanation:
The CWE list identifies known hardware and software weakness types such as XML Injection, whereas the CVE list identifies unique vulnerabilities, such as “local privilege escalation due to improper soft link handling. The following products are affected: Acronis Cyber Protect Home Office (Windows) before build 40107.”

The CWSS is a scoring system designed by the community to prioritize “software weaknesses in a consistent, flexible, open manner.”

The NIST RMF is a risk management process to give the government and business a way to perform “security, privacy, and cyber supply chain risk management activities into the system development life cycle.”

The NVD is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security-related software flaws, misconfigurations, product names, and impact metrics. NIST uses the CVEs.

75
Q

Hillary is working to ensure that her company receives the services it requires from its cloud service provider. They have a contract with Service Level Agreements (SLAs) for their bandwidth and uptime. What is Hillary doing?

A. Change management
B. Business Continuity Planning (BCP)
C. Information Technology Service Management (ITSM)
D. ITIL (formerly Information Technology Infrastructure Library)

A

C. Information Technology Service Management (ITSM)

Explanation:
ITSM is effectively ISO 20000-1 and is based on ITIL. Managing the services from the cloud provider matches ITSM slightly better than ITIL, but ITIL was included as an answer option for discussion purposes. ITSM is a comprehensive approach to designing, delivering, managing, and improving IT services within an organization. It focuses on aligning IT services with the needs of the business and ensuring that the IT services provided are efficient, reliable, and of high quality. ITSM involves a set of practices, processes, and policies that guide the entire service lifecycle, from service strategy and design to service transition, operation, and continual service improvement.

Key characteristics of ITSM include:

Customer-centric: ITSM emphasizes understanding and meeting the needs of customers and end-users. It aims to improve customer satisfaction and overall service experience.
Process-oriented: ITSM adopts a process-driven approach, defining workflows and procedures to ensure consistent and repeatable service delivery.
Focus on continual improvement: ITSM encourages regular evaluation and optimization of IT services and processes to increase efficiency and effectiveness.

ITIL involves managing data centers more specifically, so it matches the work of the cloud provider slightly better.

Key characteristics of ITIL include:

Service lifecycle approach: ITIL is structured around the service lifecycle, consisting of five core stages: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement.
Process framework: ITIL defines a range of processes that cover various aspects of IT service management, including incident management, problem management, change management, service level management, and more.
Widely adopted standard: ITIL has become a de facto standard for ITSM and is widely adopted by organizations globally.

Change management is a structured and organized approach to managing and implementing organizational changes. It involves planning, coordinating, communicating, and monitoring modifications to various aspects of the organization, such as processes, systems, technology, culture, or organizational structure.

BCP is about planning for when there are failures, not the basic management of a cloud vendor.

76
Q

Darriel is working for a Cloud Service Provider (CSP). He is the information security manager on the project to build their new data center as they expand their operations to the west. They have been working to secure the lot and building physically with fencing, cameras, secured doors, and so on. As they move into securing the logical side to the data center, he is concerned with the most critical element that needs to be secured for everyone’s protection.

What element is that?

A. Management plane
B. Hypervisor
C. Physical network equipment
D. Remote Desktop Protocol (RDP)

A

A. Management plane

Explanation:
In virtual environments, the management plane has access to all the hypervisors and hosted systems. While this creates ease of use for administrators, it can also lead to security risks. If an attacker were able to compromise the management plane, they would be able to compromise all the hypervisors and hosted systems in the environment.

If the hypervisor were compromised, that would be a serious problem, but the management plane is the access path to the hypervisor. Accessing the hypervisor from outside the cloud provider's network is probably harder than gaining access through a compromised management plane. So, the management plane is the more important element to secure here.

RDP is a common protocol used by administrators to access servers. If this is used and it was compromised it would be a problem but not as severe, probably, as a management plane compromise. That is because of the wider reach of the management plane in the cloud over control of the virtual environments.

If someone could compromise the physical network equipment, that would also be a problem. To compromise the hardware, the path would probably be through compromising the vendor or physically gaining access to the data center. Both are less likely than a management plane compromise. And again, the level of damage that can be caused is probably lower with compromising the physical network rather than the management plane.

77
Q

Which of the following centralizes identity management for an organization?

A. MFA
B. CASB
C. IdP
D. Federation

A

C. IdP

Explanation:
Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.

78
Q

Identifying ownership, limitations on distribution, and similar information is part of which of the following?

A. Data mapping
B. Data dispersion
C. Data labeling
D. Data flow diagram

A

C. Data labeling

Explanation:
Data dispersion is when data is distributed across multiple locations to improve resiliency. Overlapping coverage makes it possible to reconstruct data if a portion of it is lost.

A data flow diagram (DFD) maps how data flows between an organization’s various locations and applications. This helps to maintain data visibility and implement effective access controls and regulatory compliance.

Data mapping identifies data requiring protection within an organization. This helps to ensure that the data is properly protected wherever it is used.

Data labeling contains metadata describing important features of the data. For example, data labels could include information about ownership, classification, limitations on use or distribution, and when the data was created and should be disposed of.

79
Q

Your organization is transitioning from one cloud service provider to another. Within the current cloud provider's network, there is a Structured Query Language (SQL) database, and it is encrypted using client-side encryption. Your business should take action to ensure that the data is not retrievable after its destruction has been requested.

Which data disposal method is the BEST for ensuring data recovery is impossible?

A. Shredding
B. Acid bath
C. Degaussing
D. Crypto-shredding

A

D. Crypto-shredding

Explanation:
The optimal solution would be cryptographic shredding. Crypto-shredding destroys the encryption key, rendering the data unrecoverable. Because the key is in the customer’s possession, this is very possible. The normal description of crypto-shredding is that first you encrypt and then destroy the key. Here it is already encrypted. If, in the process of destroying data, it has to be encrypted, then delete the key, but that does not ensure that the data that is on the cloud’s network is overwritten. When you newly encrypt, you are creating a new copy of the data. Do be careful to ensure that the key the data is encrypted with is destroyed.

Shredding, acid bath, and degaussing are all physical destruction methods, which require having the drive in your possession. There are two problems with that: 1) If you are using a public cloud provider, you are highly unlikely to have physical possession of the drives. 2) Data is chunked or sharded and dispersed across many drives that are normally shared with other tenants, making physical destruction even less feasible.

Shredding is putting the drive through a machine that chops the drive into small pieces.

An acid bath is where acid is actually put on the drive.

Degaussing alters the magnetic state of magnetic media. Basically, a large magnet is brought into contact with a magnetic drive. This does not work on solid state drives (SSDs), which need to be shredded instead.

80
Q

Which of the following is an XML-based standard used to exchange information in the authorization and authentication process, which was put out by the OASIS consortium and its Security Services Technical Committee?

A. Security Assertion Markup Language (SAML)
B. Open Identification (OpenID)
C. Open Authorization (OAuth)
D. Web Service Federation (WS-Federation)

A

A. Security Assertion Markup Language (SAML)

Explanation:
SAML is an XML-based standard used to exchange information in the authorization and authentication process. SAML 2.0, which was adopted in 2005, is the latest standard put out by the nonprofit OASIS consortium and its Security Services Technical Committee.
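For context, a skeletal, non-normative SAML 2.0 assertion is shown below as a Python string; the issuer and subject values are hypothetical, though the namespace URN is the real one defined by OASIS.

    # Skeletal SAML 2.0 assertion (illustrative only).
    assertion = """<saml:Assertion
        xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="_abc123" Version="2.0" IssueInstant="2005-01-01T00:00:00Z">
      <saml:Issuer>https://idp.example.com</saml:Issuer>
      <saml:Subject>
        <saml:NameID>user@example.com</saml:NameID>
      </saml:Subject>
    </saml:Assertion>"""
    print(assertion)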

WS-Federation can use token technologies such as SAML. It was originally designed by Microsoft, Verisign, and IBM.

OAuth began with work done by Blaine Cook. The Internet Engineering Task Force (IETF) published OAuth 2.0. Open Authorization can be used with Open Identification.

OpenID is used to identify and authenticate; this is from the Open ID Foundation.

81
Q

A regional train company has been upgrading their trains with the latest technology, which allows them to better communicate with the systems onboard each train. They are using this for passenger safety, predictive maintenance, and condition monitoring of the engine, wheels, etc. They are concerned that their servers cannot always communicate with a train while it is moving.

What technology do they need to move to next to enhance these Internet of Things (IoT) capabilities?

A. Internet of Things
B. Edge computing
C. Fog computing
D. Cloud computing

A

C. Fog computing

Explanation:
Fog computing is a term that Cisco created that is gaining traction. Fog computing moves the processing of data to a local fog node or IoT gateway. This would be onboard the train itself. Then, when it reconnects to the internet, it can upload information for the company to manage.

Edge computing is the idea of moving the processing to the logical edge of the network as close to the user and their systems as possible.

Cloud computing is what this entire course and certification is about. In domain 1, the focus is on understanding that the cloud, especially a public cloud, uses servers and services located in a data center somewhere, ideally not too far away, although it may be farther away for redundancy purposes.

The Internet of Things is considered by some to be just the connecting of things like this train to the internet. Others consider it to be anything connected to the internet. Neither definition is definitively right, and it is good to know both to be able to sort out exam questions. Either way, Internet of Things is not the answer because the question asks how to enhance IoT.

82
Q

As an information security expert working for a large pharmaceutical company, Michael knows the importance of protecting their intellectual property. They have built a virtual data center (vDC) in a public cloud Infrastructure as a Service (IaaS). Before moving into the IaaS, they planned properly for removing their data from the cloud provider’s servers when needed. When data is old and needs to be removed and moved into a different environment that they use as an archive, they have to ensure it is removed properly.

What could they do?

A. Overwriting
B. Crypto shredding
C. Shredding
D. Degaussing

A

B. Crypto shredding

Explanation:
Since this is an IaaS on a public cloud, the options available in this scenario are limited. If they planned properly ahead of time, they can re-encrypt their data and then destroy that new key. Once the key is destroyed, an attacker would need to brute-force the key to recover the data, leaving it fairly well protected from bad actors. Combine that with the fact that public cloud providers do not store a single piece of data on a single server; it is divided and dispersed across multiple servers. That makes it even more difficult for a bad actor to even find the data to perform a brute-force attack successfully.

Overwriting is the process of writing a pattern of ones and zeros over the data. For especially sensitive data, it may be best to overwrite the data more than once. However, the way the cloud stores data might not make this an option: overwriting works if data is stored on a single drive. For the cloud provider this is definitely an option, but it is not likely to be available to a customer.

Shredding and degaussing are physical destruction of a drive. This is only possible for the cloud provider in a public cloud. Degaussing only works on magnetic drives. It alters the magnetic state in such a way that it renders the drive useless. That is great if you combine it with physical shredding of a drive.

83
Q

Pablo is responsible for ensuring the protection of information within the Infrastructure as a Service (IaaS) that the cloud architecture team is working on designing. To ensure that data can be destroyed properly when necessary, he has been working with the team to determine the appropriate method.

What is the best option as an IaaS customer?

A. Degaussing
B. Shredding
C. Overwriting
D. Cryptographic erasure

A

D. Cryptographic erasure

Explanation:
Cryptographic erasure is a method of data sanitization that uses encryption and the destruction of the encryption key to destroy data. It is the only standard option available to an IaaS customer. A side note: this is not possible for the customer to do if the customer is using Software as a Service (SaaS).

Overwriting, degaussing, and shredding all require physical access to the drive. An IaaS customer would normally not have that. The customer could put in their contract (if the provider agrees) that they want the provider to take those actions when drives are taken out of service.

Overwriting is literally overwriting the sectors on a drive many times (7x or 11x or 20x, etc.) in a particular pattern to render the data unrecoverable. It is not a perfect solution, but it does help.

Degaussing takes the destruction of data further by altering the magnetic state of magnetic drives. This, too, is not perfect, but it is much better than overwriting.

Shredding, or physical destruction, of the drive is probably the best solution, although it depends on the size of the remaining pieces after shredding. Some good German standards specify that shredding should reduce drives to pieces of roughly 2 mm x 4 mm, or something similar, depending on the sensitivity of the data.

84
Q

Which of the following refers to running code in an isolated environment to prevent potential ill effects on production systems?

A. Virtualization
B. Containerization
C. Microservices
D. Sandboxing

A

D. Sandboxing

Explanation:
Sandboxing is when applications are run in an isolated environment, often without access to the Internet or other external systems. Sandboxing can be used for testing application code without placing the rest of the environment at risk or evaluating whether a piece of software contains malicious functionality.

Application virtualization creates a virtual interface between an application and the underlying operating system, making it possible to run the same app in various environments. One way to accomplish this is containerization, which combines an application and all of its dependencies into a container that can be run on an OS running the containerization software (Docker, etc.). Microservices and containerized applications commonly require orchestration solutions such as Kubernetes to manage resources and ensure that updates are properly applied.

85
Q

A cloud security professional has been asked to ensure that an organization’s systems have been hardened against known attacks and weaknesses and then provide a report outlining those weaknesses. What is the BEST course of action for this cloud security professional?

A. Perform dynamic application security testing
B. Perform static application security testing
C. Perform a vulnerability scan
D. Perform a penetration test

A

C. Perform a vulnerability scan

Explanation:
Vulnerability scans are typically performed by an organization against their own systems. Vulnerability scans are relatively simple to perform and use known tests and signatures and can quickly output a report displaying the weaknesses. The vulnerability scan is the best choice out of the options listed to provide the requested outcome.

A penetration test goes beyond the needs of the question by verifying the weaknesses. It is also dangerous and should not be performed without clear, written permission.

Both static and dynamic application testing can reveal known weaknesses, but they address a single application at a time and are usually performed before release to production. The question says “systems,” which implies more of a production environment.

86
Q

Jaana is the cloud information security professional working at a start-up company that is looking for the best way to manage their new invention. They are looking for a way to minimize the distance that data has to travel when they have hundreds to thousands of sensors deployed at a corporation. The data that these sensors are handling is also sensitive in nature, so they are looking for a technology that would enable them to keep the data as confidential as possible.

What technology could aid in these goals?

A. Software as a Service (SaaS)
B. Edge computing
C. Quantum computing
D. Fog computing

A

B. Edge computing

Explanation:
Edge computing is where data processing happens at or near the edge of the network rather than in a centralized location. The edge could be servers deployed close to the users or the devices themselves; this is possible with smartphones, tablets, and IoT devices. It can also improve security and privacy because the data is kept and processed locally.

Fog computing extends the principles of edge computing to create a more hierarchical approach. Data processing and storage happen at the endpoints and also at intermediate network nodes such as routers, switches, and gateways.

Software as a Service (SaaS) processes data at the servers in the cloud. This does not aid the company in the question nearly as well as edge computing.

Quantum computing relies on qubits or quantum bits to store information as opposed to electrical bits in our current technology.

87
Q

Which of the following main objectives of IRM likely uses roles and groups to implement granular permission management at scale?

A. Data Rights
B. Enforcement
C. Provisioning
D. Access Models

A

C. Provisioning

Explanation:
Information rights management (IRM) involves controlling access to data, including implementing access controls and managing what users can do with the data. The three main objectives of IRM are:

Data Rights: Data rights define what users are permitted to do with data (read, write, execute, forward, etc.). It also deals with how those rights are defined, applied, changed, and revoked.
Provisioning: Provisioning is when users are onboarded to a system and rights are assigned to them. Often, this uses roles and groups to improve the consistency and scalability of rights management, as rights can be defined granularly for a particular role or group and then applied to everyone that fits in that group (see the sketch after this explanation).
Access Models: Access models take the means by which data is accessed into account when defining rights. For example, data presented via a web application has different potential rights (read, copy-paste, etc.) than data provided in files (read, write, execute, delete, etc.).

Enforcement is not a main objective of IRM.
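To illustrate role-based provisioning, here is a minimal sketch; the role names, rights, and helper functions are all hypothetical.

    # Rights are defined once per role, then applied to every member.
    ROLE_RIGHTS = {
        "editor": {"read", "write"},
        "viewer": {"read"},
    }

    user_roles = {}

    def provision(user, role):
        """Onboard a user by assigning a role instead of individual rights."""
        user_roles.setdefault(user, set()).add(role)

    def rights_for(user):
        """A user's effective rights are the union of their roles' rights."""
        rights = set()
        for role in user_roles.get(user, ()):
            rights |= ROLE_RIGHTS.get(role, set())
        return rights

    provision("amira", "editor")
    print(rights_for("amira"))   # {'read', 'write'}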

88
Q

You are working in Germany for a health care company. It is necessary for the company to ensure that they protect the personal data of the patients. One of the things that the corporation must do is ensure that they have a record of who accessed what record at any given time. How would they have confirmation of which user accessed a record?

A. Authorization
B. Identification
C. Authentication
D. Federation

A

C. Authentication

Explanation:
Authentication is the process of confirming an identity through the use of one or more of the factors of authentication. Authentication can be done with something you know, have, or are.

Identification is the first piece of that puzzle, which should allow a user to uniquely state their identity to a system through something like a user id or email address. This is not the correct answer because of the word confirmation in the question. Identification is just a statement of the user’s name, effectively. It does not prevent a user from lying or misstating an identity.

Authorization is the process of granting access to resources. Common methods are Access Control Lists (ACL) and Role-Based Access Control (RBAC). Permissions, such as read or write, are granted at this point.

Federation is the process of implementing standard processes and technologies across various organizations so that they can join their identity management systems together.

89
Q

Sabin is a cloud administrator and has now provisioned a Virtual Private Network (VPN) connection from a server in their Platform as a Service (PaaS) environment to a user’s VPN software. The connection is going to use the Transport Layer Security (TLS) protocol. What are the two pieces that make up TLS?

A. TLS establishment protocol and TLS connection protocol
B. TLS establishment protocol and TLS record protocol
C. TLS handshake protocol and TLS record protocol
D. TLS handshake protocol and TLS connection protocol

A

C. TLS handshake protocol and TLS record protocol

Explanation:
Transport Layer Security (TLS) replaced Secure Sockets Layer (SSL) as the standard method for encrypting traffic across a network. TLS is made up of two main layers. The first layer is the TLS handshake protocol, which negotiates and establishes the actual TLS connection. The second layer is the TLS record protocol, which is the secure communication method that actually transfers the data.
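The two layers are visible in practice when opening a TLS connection with Python’s standard ssl module; this is a minimal sketch, with example.com standing in for any TLS server.

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        # wrap_socket runs the TLS handshake protocol: version, cipher
        # suite, certificates, and keys are negotiated here.
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.cipher())
            # From here on, application data is carried by the
            # TLS record protocol.
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
            print(tls.recv(100))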

90
Q

Identity and access management (IAM) and misconfigurations are common security issues associated with which of the following?

A. Hypervisor Security
B. Ephemeral Computing
C. Serverless Technology
D. Container Security

A

D. Container Security

Explanation:
Some important security considerations related to virtualization include:

Hypervisor Security: The primary virtualization security concern is isolation, or ensuring that different VMs can’t affect each other or read each other’s data. VM escape attacks occur when a malicious VM exploits a vulnerability in the hypervisor or virtualization platform to break this isolation.
Container Security: Containers are self-contained packages that include an application and all of the dependencies that it needs to run. Containers improve portability but have security concerns around poor access control and container misconfigurations.
Ephemeral Computing: Ephemeral computing is a major benefit of virtualization, where resources can be spun up and destroyed as needed. This enables greater agility and reduces the risk that sensitive data or resources are left lying around abandoned.
Serverless Technology: Serverless applications are deployed in environments managed by the cloud service provider. Outsourcing server management can make serverless systems more secure, but it also means that organizations can’t deploy traditional security solutions that require an underlying OS to operate.

91
Q

For which of the cloud service models does the cloud customer commonly have responsibility for Operating System (OS) patch management?

A. Infrastructure as a Service (IaaS)
B. Communication as a Service (CaaS)
C. Software as a Service (SaaS)
D. Platform as a Service (PaaS)

A

A. Infrastructure as a Service (IaaS)

Explanation:
The CSP is fully responsible for patch management of the underlying physical infrastructure, but IaaS and PaaS customers commonly have patch management responsibilities. In PaaS, it is debatable whether the customer will patch the operating system: in a server-based PaaS they might, but in a serverless PaaS the customer does not see the OS and therefore cannot patch it. In IaaS, the customer owns the OSs, so they must patch them. Since IaaS and PaaS are both possible answers, the better one to select is the definite one, so IaaS is the better answer.

In SaaS and CaaS environments, the customer has no responsibility for patching.

92
Q

A cloud data architect is interested in grouping data elements of similar types together. This would allow her to quickly locate similar data in the future and add or verify the security controls. What could the cloud data architect use to accomplish this?

A. Labeling
B. Metadata
C. Hashing
D. Classification

A

A. Labeling

Explanation:
Labeling is the process of adding “labels” to data elements. These labels must be configured with consistency throughout the entire organization. Labels are used to group data elements together and provide information about them. The label would contain the classification level for that piece of data. The label is what is used to view what level of sensitivity (classification) a piece of data is so that the security levels can be verified.

The metadata can include the classification level as well, but it is the word label that reflects the classification. The metadata would include the data creator or owner, the date of creation, and other similar details.

Hashing is not involved here, but it could be a control used to verify the integrity of the data.
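As a brief aside on that point, an integrity check is a one-line affair with Python’s standard hashlib module; the data shown is, of course, illustrative.

    import hashlib

    # Any change to the data changes the digest, revealing tampering.
    data = b"pre-sales document contents"
    print(hashlib.sha256(data).hexdigest())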

93
Q

Which of the following identifies vulnerabilities based on internal visibility into a running application?

A. SAST
B. IAST
C. DAST
D. SCA

A

B. IAST

Explanation:
Some common tools for application security testing include:

Static Application Security Testing (SAST): SAST tools inspect the source code of an application for vulnerable code patterns (an example pattern is sketched after this list). SAST can be performed early in the software development lifecycle but can’t catch some vulnerabilities, such as those visible only at runtime.
Dynamic Application Security Testing (DAST): DAST bombards a running application with anomalous inputs or attempted exploits for known vulnerabilities. It has no knowledge of the application’s internals, so it can miss vulnerabilities. However, it is capable of detecting runtime vulnerabilities and configuration errors (unlike SAST).
Interactive Application Security Testing (IAST): IAST places an agent inside an application and monitors its internal state while it is running. This enables it to identify unknown vulnerabilities based on their effects on the application.
Software Composition Analysis (SCA): SCA is used to identify the third-party dependencies included in an application and may generate a software bill of materials (SBOM). This enables the developer to identify vulnerabilities that exist in this third-party code.
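As an example of the kind of pattern SAST tools flag, here is a minimal sketch using Python’s built-in sqlite3 module; the table and input are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    user_input = "alice"  # imagine this arrives from an untrusted request

    # Flagged by SAST: SQL built by string concatenation, an injection
    # risk (consider user_input = "alice' OR '1'='1").
    conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # The parameterized form that the same check accepts.
    conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))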

94
Q

In which of the following cloud environments is the company most likely responsible for the physical infrastructure?

A. Private Cloud
B. Multi-Cloud
C. Community Cloud
D. Public Cloud

A

A. Private Cloud

Explanation:
The physical environment where cloud resources are hosted depends on the cloud model in use:

Public Cloud: Public cloud infrastructure will be hosted by the CSP within their own data centers.
Private Cloud: Private clouds are usually hosted by an organization within its own data center. However, third-party CSPs can also offer virtual private cloud (VPC) services.
Community Cloud: In a community cloud, one member of the community hosts the cloud infrastructure in their data center. Third-party CSPs can also host community clouds in an isolated part of their environment.

Hybrid and multi-cloud environments will likely have infrastructure hosted by different organizations. A hybrid cloud combines public and private cloud environments, and a multi-cloud infrastructure uses multiple cloud providers’ services.

95
Q

Which of the following system and communications protection steps is designed to prevent potential issues caused by employee errors?

A. Separation of System and User Functionality
B. Cryptographic Key Establishment and Management
C. Boundary Protection
D. Denial-of-Service Prevention

A

A. Separation of System and User Functionality

Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations defines 51 security controls for systems and communication protection. Among these are:

Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.

96
Q

As the information security manager working with the DevOps team, Aria has helped them through the threat modeling process that they performed. The team was able to uncover the concerns that needed to be addressed through the planning, coding, and testing of their new product, and they successfully assessed and addressed hundreds of specific concerns. The next phase of the Secure Software Development Lifecycle (SSDLC) would be which of the following?

A. Requirements phase
B. Maintenance phase
C. Deployment phase
D. Testing phase

A

C. Deployment phase

Explanation:
There are many different names that are used for the phases of the SSDLC. It is critical to be flexible. The (ISC)2 Common Body of Knowledge book lists the phases as:

Requirements
Design
Development
Testing
Deployment
Operations and Maintenance

The question included planning, coding, and testing. This would correlate to the phases of requirements/design, development, and testing. So the next phase is deployment.

Deployment is moving the software to the live production environment.

The operations and maintenance phase includes pushing out continual updates, bug fixes, security patches, and anything else needed to keep the software running securely and operating as it should.

97
Q

Which of the following is NOT a threat for which the CSP bears some responsibility?

A. Denial of Service
B. Improper Disposal
C. Theft or Media Loss
D. Unauthorized Provisioning

A

D. Unauthorized Provisioning

Explanation:
Data storage in the cloud faces various potential threats, including:

Unauthorized Access: Cloud customers should implement access controls to prevent unauthorized users from accessing data. Also, a cloud service provider (CSP) should implement controls to prevent data leakage in multitenant environments.
Unauthorized Provisioning: The ease of setting up cloud data storage may lead to shadow IT, where cloud resources are provisioned outside of the oversight of the IT department. This can incur additional costs to the organization and creates security and compliance challenges since the security team can’t secure data that they don’t know exists.
Regulatory Non-Compliance: Various regulations mandate security controls and other requirements for certain types of data. A failure to comply with these requirements — by failing to protect data or allowing it to flow outside of jurisdictional boundaries — could result in fines, legal action, or a suspension of the business’s ability to operate.
Jurisdictional Issues: Different jurisdictions have different laws and regulations regarding data security, usage, and transfer. Many CSPs have locations around the world, which can violate these laws if data is improperly protected or stored in an unauthorized location.
Denial of Service: Cloud environments are publicly accessible and largely accessible via the Internet. This creates the risk of Denial of Service attacks if the CSP does not have adequate protections in place.
Data Corruption or Destruction: Data stored in the cloud can be corrupted or destroyed by accident, malicious intent, or natural disasters.
Theft or Media Loss: CSPs are responsible for the physical security of their data centers. If these security controls fail, an attacker may be able to steal the physical media storing an organization’s data.
Malware: Ransomware and other malware increasingly target cloud environments as well as local storage. Access controls, secure backups, and anti-malware solutions are essential to protecting cloud data against theft or corruption.
Improper Disposal: The CSP is responsible for ensuring that physical media is disposed of correctly at the end of life. Cloud customers can also protect their data by using encryption to make the data stored on a drive unreadable.
98
Q

Amelia works for a medium-sized company as their lead information security manager. She has been working with the development and operations teams on their new application that they are building. They are building an application that will interact with their customers through the use of an Application Programming Interface (API). Due to the nature of the application, it has been decided that they will use SOAP.

That means that the data must be formatted using which of the following?

A. YAML (YAML Ain’t Markup Language)
B. eXtensible Markup Language (XML)
C. Java Script Object Notation (JSON)
D. Coffee Script Object Notation (CSON)

A

B. eXtensible Markup Language (XML)

Explanation:
SOAP only permits the use of XML-formatted data, while REpresentational State Transfer (REST) allows for a variety of data formats, including both XML and JSON. SOAP is most commonly used when the use of REST is not possible.
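For reference, a skeletal SOAP request is shown below as a Python string; the operation name and its namespace are hypothetical, though the envelope namespace is the real SOAP 1.1 one.

    # Skeletal SOAP request (illustrative only): the message must be XML.
    soap_envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetCustomer xmlns="http://example.com/api">
          <Id>42</Id>
        </GetCustomer>
      </soap:Body>
    </soap:Envelope>"""
    print(soap_envelope)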

XML, JSON, YAML, and CSON are all data formats.

99
Q

Antonia has recently been hired by a cancer treatment facility. One of the first training programs that she is required to go through at the office is related to the protection of individually identifiable health information. Which law is this related to and which country does it apply to?

A. Health Insurance Portability and Accountability Act (HIPAA), USA
B. Gramm-Leach-Bliley Act (GLBA), USA
C. Health Insurance Portability and Accountability Act (HIPAA), Canada
D. General Data Protection Regulation (GDPR), Germany

A

A. Health Insurance Portability and Accountability Act (HIPAA), USA

Explanation:
The Health Insurance Portability and Accountability Act (HIPAA) is concerned with the security controls and confidentiality of Protected Health Information (PHI). It’s vital that anyone working in any healthcare facility be aware of HIPAA regulations.

The Gramm-Leach-Bliley Act, officially named the Financial Modernization Act of 1999, focuses on PII as it pertains to financial institutions, such as banks.

GDPR is an EU-specific regulation that encompasses organizations across all industries.

The Privacy Act 1988 is an Australian law that requires the protection of personal data.

100
Q

You work for a real estate company that is defining the protection mechanisms it is going to use for the Platform as a Service (PaaS) deployment that will store data in the cloud. The company will be using block storage technology to hold their pre-sales documents. In what access model is the owner responsible for defining the restrictions on a per-document basis?

A. Discretionary Access Control (DAC)
B. Non-discretionary Access Control (NDAC)
C. Role-based Access Control (RBAC)
D. Mandatory Access Control (MAC)

A

A. Discretionary Access Control (DAC)

Explanation:
The owner of a document is responsible for defining the limits on a per-document basis under a Discretionary Access Control (DAC) model. This entails the owner manually configuring sharing settings for each document; it is up to the owner’s discretion to grant access to someone or not.
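A minimal sketch of DAC, with hypothetical document names and users: each document carries its own access list, and only its owner can change it.

    # Each document's owner controls its ACL at their own discretion.
    documents = {
        "presales.docx": {"owner": "dana", "acl": {"dana": {"read", "write"}}},
    }

    def grant(doc, requester, user, right):
        """Only the document's owner may extend its ACL."""
        if requester != documents[doc]["owner"]:
            raise PermissionError("only the owner can grant access")
        documents[doc]["acl"].setdefault(user, set()).add(right)

    grant("presales.docx", "dana", "erik", "read")  # allowed: dana owns it
    print(documents["presales.docx"]["acl"])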

MAC is a very secure environment that is commonly used around a country’s sensitive documents (e.g., military top secret files). Access is controlled based on the classification of the data, the user’s clearance level, and their need to know.

NDAC is defined by the U.S. government as another name for MAC.

RBAC is an access control model that defines access based on the user’s role within the organization.

101
Q
A