Pocket Prep 11 Flashcards

1
Q

Pabla has been working with their corporation to understand the impact that particular threats can have on their Infrastructure as a Service (IaaS) implementation. The information gathered through this process will be used to determine the correct solutions and procedures that will be built to ensure survival through many different incidents and disasters. To perform a quantitative assessment, they must determine their Single Loss Expectancy (SLE) for the corporation’s Structured Query Language (SQL) database in the event that the data is encrypted through the use of ransomware.

Which of the following is the BEST definition of SLE?

A. SLE is the value of the event given a certain percentage loss of the asset
B. SLE is the value of the asset given the amount of time it will be offline in a given year
C. SLE is the value of the event given the value of the asset and the time it can be down
D. SLE is the value of the cost of the event multiplied times the asset value

A

A. SLE is the value of the event given a certain percentage loss of the asset

Explanation:
Correct answer: SLE is the value of the event given a certain percentage loss of the asset

SLE is calculated by multiplying the Asset Value (AV) by the Exposure Factor (EF), where the exposure factor is the percentage of the asset lost in a single event.

The Annual Rate of Occurrence (ARO) is the number of times that event is expected within a given year.

Multiplying the SLE by the ARO gives the Annualized Loss Expectancy (ALE).
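As a quick illustration of these formulas, the calculation can be sketched in Python (all figures below are hypothetical):

```python
# Quantitative risk formulas:
#   SLE = Asset Value (AV) x Exposure Factor (EF)
#   ALE = SLE x Annual Rate of Occurrence (ARO)

asset_value = 500_000      # hypothetical value of the SQL database
exposure_factor = 0.5      # 50% of the asset lost in one ransomware event
aro = 0.25                 # event expected once every four years

sle = asset_value * exposure_factor   # Single Loss Expectancy: 250000.0
ale = sle * aro                       # Annualized Loss Expectancy: 62500.0

print(sle, ale)
```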

"SLE is the value of the cost of the event multiplied times the asset value" is an incorrect answer because SLE is the percentage loss of the asset multiplied by the asset value, not the cost of the event.

"SLE is the value of the event given the value of the asset and the time it can be down" is an incorrect answer because the time a system can be offline is not a factor in SLE. That would be the Maximum Tolerable Downtime (MTD).

"SLE is the value of the asset given the amount of time it will be offline in a given year" is an incorrect answer because SLE does not involve the amount of time a system is offline in a given year. Availability is typically represented by nines (e.g., 99.999% uptime).

2
Q

Which of the following NIST controls for system and communication protection is MOST closely related to management of tasks such as encryption and logging configurations?

A. Cryptographic Key Establishment and Management
B. Boundary Protection
C. Security Function Isolation
D. Separation of System and User Functionality

A

C. Security Function Isolation

Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, defines 51 security controls for system and communications protection. Among these are:

Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Protection: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks, as well as sufficient bandwidth and compute resources allocated for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
3
Q

Donie is working for a corporation that is about to undergo an audit. The auditor knows that the corporation is subject to the Federal Information Security Management Act (FISMA). Which type of corporation is Donie employed by?

A. Retail
B. Government agency
C. Healthcare
D. Banking

A

B. Government agency

Explanation:
The organization is a government agency. Government agencies are affected by the Federal Information Security Management Act (FISMA). The law defines a comprehensive framework to protect government information, operations, and assets against natural or human-made threats. It requires that government agencies conduct vulnerability scans.

None of the other organizations are affected by FISMA.

Healthcare in the U.S. must be in compliance with the Health Insurance Portability and Accountability Act (HIPAA).

Banking is subject to regulations such as Basel III internationally.

Retail must comply with the Payment Card Industry Data Security Standard (PCI DSS).

4
Q

Rebekah has been working with software developers on mechanisms that they can implement to protect data at different times. There is a need to use data from a customer database in another piece of software. However, it is necessary to ensure that all personally identifiable elements are removed first.

The process of removing all identifiable characteristics from data is known as which of the following?

A. Anonymization
B. Obfuscation
C. Masking
D. De-identification

A

A. Anonymization

Explanation:
Anonymization is the removal of all personally identifiable pieces of information, both direct and indirect.

Data de-identification is the removal of all direct identifiers. It leaves the indirect ones in place.

Masking is to cover or hide information. This is commonly seen when a user types in their password, yet all that is seen on the screen are stars or dots.

Obfuscation is to confuse. Encryption is one form of obfuscation, but it can be done with other techniques. For example, instead of transmitting data in base 16 (hexadecimal), it could be sent in base 64.
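A minimal sketch of encoding-based obfuscation using Python's standard library (the sample payload is made up). Note that neither encoding is encryption; anyone who recognizes the format can reverse it:

```python
import base64

data = b"account=12345"  # hypothetical payload

hex_form = data.hex()                       # base 16 (hexadecimal) encoding
b64_form = base64.b64encode(data).decode()  # base 64 encoding

# Both forms hide the plaintext from casual inspection but are fully reversible.
assert bytes.fromhex(hex_form) == data
assert base64.b64decode(b64_form) == data
```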

5
Q

Containerization is an example of which of the following?

A. Serverless
B. Microservices
C. Application virtualization
D. Sandboxing

A

C. Application virtualization

Explanation:
Application virtualization creates a virtual interface between an application and the underlying operating system, making it possible to run the same app in various environments. One way to accomplish this is containerization, which combines an application and all of its dependencies into a container that can be run on an OS running the containerization software (Docker, etc.). Microservices and containerized applications commonly require orchestration solutions such as Kubernetes to manage resources and ensure that updates are properly applied.

Sandboxing is when applications are run in an isolated environment, often without access to the Internet or other external systems. Sandboxing can be used for testing application code without placing the rest of the environment at risk or evaluating whether a piece of software contains malicious functionality.

Serverless applications are hosted in a Platform as a Service (PaaS) cloud environment, where management of the underlying servers and infrastructure is the responsibility of the cloud provider, not the cloud customer.

6
Q

Defining clear, measurable, and usable metrics is a core component of which of the following operational controls and standards?

A. Continual Service Improvement Management
B. Change Management
C. Continuity Management
D. Information Security Management

A

A. Continual Service Improvement Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
7
Q

Your organization is in the process of migrating to the cloud. Mid-migration, you come across details in an agreement that may leave you non-compliant with a particular law. Who would be the BEST contact to discuss your cloud environment's compliance with legal jurisdictions?

A. Regulator
B. Stakeholder
C. Partner
D. Consultant

A

A. Regulator

Explanation:
As a CCSP, you are responsible for ensuring that your organization’s cloud environment adheres to all applicable regulatory requirements. By staying current on regulatory communications surrounding cloud computing and maintaining contact with approved advisors and, most crucially, regulators, you should be able to assure compliance with legal jurisdictions.

A partner is a generic term that can be used to refer to many different companies. For example, an auditor can be considered a partner.

A stakeholder is someone with an interest in, or responsibility for, a part of the business.

A consultant could assist with almost anything, depending on their skills, and one could plausibly help with legal issues. However, regulators are the authorities on the laws they enforce, which makes regulator the best answer.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 253.

8
Q

Load and stress testing are examples of which type of testing?

A. Usability Testing
B. Functional Testing
C. Unit Testing
D. Non-Functional Testing

A

D. Non-Functional Testing

Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:

Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.

Non-functional testing evaluates the quality of the software and verifies that it provides necessary functionality not explicitly listed in the requirements. Load and stress testing, or verifying that sensitive data is properly secured and encrypted, are examples of non-functional testing.

9
Q

Which of the following cloud roles and responsibilities involves maintaining cloud infrastructure AND meeting SLAs?

A. Regulatory
B. Cloud Service Broker
C. Cloud Service Provider
D. Cloud Service Partner

A

C. Cloud Service Provider

Explanation:
Some of the important roles and responsibilities in cloud computing include:

Cloud Service Provider: The cloud service provider offers cloud services to a third party. They are responsible for operating their infrastructure and meeting service level agreements (SLAs).
Cloud Customer: The cloud customer uses cloud services. They are responsible for the portion of the cloud infrastructure stack under their control.
Cloud Service Partners: Cloud service partners are distinct from the cloud service provider but offer a related service. For example, a cloud service partner may offer add-on security services to secure an organization’s cloud infrastructure.
Cloud Service Brokers: A cloud service broker may combine services from several different cloud providers and customize them into packages that meet a customer’s needs and integrate with their environment.
Regulators: Regulators ensure that organizations — and their cloud infrastructures — are compliant with applicable laws and regulations. The global nature of the cloud can make regulatory and jurisdictional issues more complex.
10
Q

Tristan is the cloud information security manager working for a pharmaceutical company. They have connected to the community cloud that was built by the government health agency to advance science, diagnosis, and patient care. They also have stored their own data with a public cloud provider in the format of both databases and data lakes.

What have they built?

A. Public cloud
B. Hybrid cloud
C. Storage area network
D. Private cloud

A

B. Hybrid cloud

Explanation:
A hybrid cloud deployment model is a combination of two of the three other models: public, private, and community. It could be public and private, private and community, or, as in the question, public and community. An example of a public cloud is Amazon Web Services (AWS). A private cloud is built for a single company; fundamentally, it means that all the tenants on a single server are from the same company. A community example is the National Institutes of Health (NIH), which built a community cloud to advance science, diagnosis, and patient care.

A Storage Area Network (SAN) is a dedicated network, physical and virtual, that provides block-level access to data at rest. SAN protocols include Fibre Channel and iSCSI.

11
Q

Which of the following risks associated with PaaS environments includes hypervisor attacks and VM escapes?

A. Virtualization
B. Persistent Backdoors
C. Interoperability Issues
D. Resource Sharing

A

A. Virtualization

Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:

Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e. backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
12
Q

Organizations like yours are looking for guidance on how to meet business objectives while also managing and minimizing the risks that come with implementing cloud computing solutions. Which of the following would be the most helpful?

A. Cloud Security Alliance (CSA)
B. Open Web Application Security Project (OWASP)
C. Internet Assigned Numbers Authority (IANA)
D. Institute of Electrical and Electronics Engineers (IEEE)

A

A. Cloud Security Alliance (CSA)

Explanation:
The Cloud Security Alliance (CSA) is an organization that offers guidance to organizations deploying a cloud environment. They provide support to cloud providers and customers to enable trust in the cloud. This includes the Cloud Controls Matrix (CCM) and the Enterprise Architecture [formerly the Trusted Cloud Initiative (TCI)].

OWASP is a group working to improve the security of applications.

IANA is a global organization that is responsible for the assignment of Internet Protocol (IP) addresses.

IEEE is an association for electrical and electronics engineers.

13
Q

Client care representatives in your firm are now permitted to access and see customer accounts. For added protection, you’d like to build a feature that obscures a portion of the data when a customer support representative reviews a customer’s account. What type of data protection is your firm attempting to implement?

A. Obfuscation
B. Tokenization
C. Masking
D. Encryption

A

C. Masking

Explanation:
The organization is trying to deploy masking. Masking obscures data by displaying, for example, only the last four or five digits of a Social Security or credit card number. As a result, the data is incomplete without the blocked or removed content. The rest of the information may appear to be present, but the user sees only asterisks or dots.
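A masking routine along these lines can be sketched in a few lines of Python (the function name and sample number are illustrative, not from any particular product):

```python
def mask_card_number(number: str, visible: int = 4) -> str:
    """Replace all but the last `visible` characters with asterisks."""
    return "*" * (len(number) - visible) + number[-visible:]

# A 16-digit test card number: only the last four digits remain readable.
print(mask_card_number("4111111111111111"))  # ************1111
```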

Tokenization is the process of removing data and placing a token in its place. The question is about part of the data being available, so that does not work.

Encryption takes the data and makes it unreadable. It’s unusual to encrypt, for example, the first part of a credit card number. So, this does not work, either.

Obfuscation means to confuse: an attacker looking at obfuscated data is left unable to read it directly. Encryption is one way to obscure data, but there are others. Obfuscation does not fit here because showing the user asterisks or dots in place of characters is masking, not obfuscation.

14
Q

For which of the following is data discovery the EASIEST?

A. Semi-structured data
B. Structured data
C. Mostly structured data
D. Unstructured data

A

B. Structured data

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.

Mostly structured is not a common classification for data.
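The difference can be sketched with a toy Python example: in structured data, the column label locates the sensitive field, while in unstructured text a discovery tool must recognize the pattern on its own (the SSN shown is fabricated):

```python
import re

# Pattern for a U.S. Social Security number (ddd-dd-dddd).
ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Structured: the column label alone tells the tool where sensitive data lives.
structured_row = {"name": "Ana", "ssn": "123-45-6789", "city": "Lima"}
found_structured = "ssn" in structured_row

# Unstructured: the tool must scan free text and match the pattern itself.
unstructured_text = "Per our call, the applicant's number is 123-45-6789."
found_unstructured = bool(ssn_pattern.search(unstructured_text))

assert found_structured and found_unstructured
```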

15
Q

Which of the following cloud audit mechanisms is designed to identify anomalies or trends that could point to events of interest?

A. Correlation
B. Log Collection
C. Packet Capture
D. Access Control

A

A. Correlation

Explanation:
Three essential audit mechanisms in cloud environments include:

Log Collection: Log files contain useful information about events that can be used for auditing and threat detection. In cloud environments, it is important to identify useful log files and collect this information for analysis. However, data overload is a common issue with log management, so it is important to collect only what is necessary and useful.
Correlation: Individual log files provide a partial picture of what is going on in a system. Correlation looks at relationships between multiple log files and events to identify potential trends or anomalies that could point to a security incident.
Packet Capture: Packet capture tools collect the traffic flowing over a network. This is often only possible in the cloud in an IaaS environment or using a vendor-provided network mirroring capability.

Access controls are important but not one of the three core audit mechanisms in cloud environments.
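A toy sketch of correlation, assuming hypothetical log events from two sources; individually each event looks routine, but together they suggest an anomaly:

```python
from collections import Counter

# Hypothetical events collected from two separate log sources.
events = [
    {"source": "vpn", "user": "kim", "action": "login_failed"},
    {"source": "vpn", "user": "kim", "action": "login_failed"},
    {"source": "db",  "user": "kim", "action": "bulk_export"},
]

failures = Counter(e["user"] for e in events if e["action"] == "login_failed")
exporters = {e["user"] for e in events if e["action"] == "bulk_export"}

# Repeated login failures followed by a bulk export is worth investigating.
suspicious = [user for user, n in failures.items() if n >= 2 and user in exporters]
print(suspicious)  # ['kim']
```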

16
Q

A cloud provider has assembled all of its cloud resources together, from routers to servers and switches, as well as the Central Processing Unit (CPU), Random Access Memory (RAM), and storage within the servers, and made them available for allocation to its customers. Which term BEST describes this process?

A. Reversibility
B. Data portability
C. Resource pooling
D. On-demand self-service

A

C. Resource pooling

Explanation:
Cloud providers may choose to do resource pooling, which is the process of aggregating all the cloud resources together and allocating them to their cloud customers. Physical equipment is pooled into the data center, and within each server there is a pool of resources allocated to running virtual machines: the Central Processing Unit (CPU), the Random Access Memory (RAM), and the available network bandwidth.

Reversibility is the ability to get all the company’s artifacts out of the cloud provider’s equipment, and what is on the provider’s equipment is appropriately deleted.

Portability is the ability to move data from one provider to another without having to reenter the data.

On-demand self-service is the ability for the customer/tenant to use a portal to purchase and provision cloud resources without having much, if any, interaction with the cloud provider.

17
Q

Integrity protections like hash functions are important to demonstrate which necessary attribute of evidence?

A. Accurate
B. Complete
C. Admissible
D. Authentic

A

A. Accurate

Explanation:
Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:

Authentic: The evidence must be real and relevant to the incident being investigated.
Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).
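A minimal sketch of how a hash demonstrates accuracy (integrity), using Python's hashlib; the evidence bytes are placeholders:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

evidence = b"disk image contents"          # placeholder for collected evidence
original_hash = sha256_digest(evidence)    # recorded at collection time

# Later, re-hashing the evidence and getting a matching digest
# demonstrates it has not been tampered with; any change breaks the match.
assert sha256_digest(evidence) == original_hash
assert sha256_digest(b"disk image contents!") != original_hash
```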
18
Q

Amelia works for a medium-sized company as their lead information security manager. She has been working with the development and operations teams on their new application that they are building. They are building an application that will interact with their customers through the use of an Application Programming Interface (API). Due to the nature of the application, it has been decided that they will use SOAP.

That means that the data must be formatted using which of the following?

A. eXtensible Markup Language (XML)
B. JavaScript Object Notation (JSON)
C. CoffeeScript Object Notation (CSON)
D. YAML (YAML Ain’t Markup Language)

A

A. eXtensible Markup Language (XML)

Explanation:
SOAP (Simple Object Access Protocol) only permits the use of XML-formatted data, while REpresentational State Transfer (REST) allows for a variety of data formats, including both XML and JSON. SOAP is most commonly used when the use of REST is not possible.

XML, JSON, YAML, and CSON are all data-serialization formats.
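A small sketch of the same record serialized both ways, using only Python's standard library (the field names are illustrative):

```python
import json
import xml.etree.ElementTree as ET

# The same record as JSON (usable with REST) ...
record_json = json.dumps({"customer": {"id": "42", "name": "Amelia"}})

# ... and as XML, the only format SOAP message bodies accept.
root = ET.Element("customer")
ET.SubElement(root, "id").text = "42"
ET.SubElement(root, "name").text = "Amelia"
record_xml = ET.tostring(root, encoding="unicode")

assert json.loads(record_json)["customer"]["name"] == "Amelia"
assert ET.fromstring(record_xml).find("name").text == "Amelia"
```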

19
Q

Which stage of the IAM process relies heavily on logging and similar processes?

A. Authentication
B. Identification
C. Accountability
D. Authorization

A

C. Accountability

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an Identity as a Service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
20
Q

A financial organization has purchased an Infrastructure as a Service (IaaS) cloud service from their cloud provider. They are consolidating and migrating their on-prem data centers (DCs) into the cloud. Once they are set up in the cloud, they will have their servers, routers, and switches configured as needed, with all of the network-based security appliances such as firewalls and Intrusion Detection Systems (IDS).

What type of billing model should this organization expect to see?

A. Locked-in monthly payment that never changes
B. Metered usage that changes based upon resource utilization
C. One up-front cost to purchase cloud equipment
D. Up-front equipment purchase, then a locked-in monthly fee afterward

A

B. Metered usage that changes based upon resource utilization

Explanation:
In an IaaS environment (and Platform as a Service (PaaS) as well as Software as a Service (SaaS)), the customer can expect to only pay for the resources that they are using. This is far more cost effective and allows for greater scalability. However, this type of billing does mean that the price is not locked-in, and it could change as the need for resources either increases or decreases from month to month.

There is no equipment to purchase with cloud services (IaaS, PaaS, or SaaS). You could purchase equipment if you want to build a private cloud, but there is no mention of that in the question. The standard cloud definition excludes a "locked-in monthly payment"; a company could offer one, but it falls outside the cloud as defined in NIST SP 800-145 and ISO/IEC 17788.
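A toy sketch of how metered billing adds up (the rates and usage figures below are invented, not any provider's actual pricing):

```python
# Hypothetical per-unit rates and one month's metered usage.
rates = {"vm_hours": 0.12, "gb_storage": 0.02, "gb_egress": 0.09}
usage = {"vm_hours": 720, "gb_storage": 500, "gb_egress": 100}

# The bill is simply the sum of (rate x consumption) per metered resource,
# so it rises and falls with actual utilization from month to month.
monthly_bill = sum(rates[k] * usage[k] for k in rates)
print(round(monthly_bill, 2))  # 105.4
```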

21
Q

Yun is working with the application developers as they move through development and into operations with their new application. They are looking to add something to the application that can allow the application to protect itself.

Which of the following is a security mechanism that allows an application to protect itself by responding and reacting to ongoing events and threats?

A. Vulnerability scanning
B. Dynamic Application Security Testing (DAST)
C. Runtime Application Self-Protection (RASP)
D. Static Application Security Testing (SAST)

A

C. Runtime Application Self-Protection (RASP)

Explanation:
Runtime Application Self-Protection (RASP) is a security mechanism that runs on the server and starts when the application starts. RASP allows an application to protect itself by responding and reacting to ongoing events and threats in real time. RASP can monitor the application, continuously looking at its own behavior. This allows the application to detect malicious input or behavior and respond accordingly.

Dynamic Application Security Testing (DAST) is a type of security test that looks at the application in a dynamic or running state. This means that the tester can only use the application. They do not have the source code for the application. This can be used to test if the application behaves as needed or if it can be used maliciously by a bad actor.

Static Application Security Testing (SAST) is a type of test where the application is static or still. That means the application is not in a running state, so what the test has knowledge of and access to is the source code.

Vulnerability scanning is a test run on systems to ensure that the systems are properly hardened and that there are no known vulnerabilities present.

22
Q

The software development team is working with the information security team through the Software Development Lifecycle (SDLC). The information security manager is concerned that the team is rushing through the phase of the lifecycle where the most technical mistakes could be made. Which phase is that?

A. Testing
B. Requirements
C. Development
D. Planning

A

C. Development

Explanation:
During the development or coding phase of the SDLC, the plans and requirements are turned into an executable programming language. As this is the phase where coding takes place, it is most likely the place where technical mistakes would be made.

Technical mistakes could be made in the planning or requirements phases, although the problems introduced there are more likely to be architectural.

Testing is technical and mistakes can be made during testing, but it is more likely that the testing is not as complete as needed.

23
Q

A cloud service provider is building a new data center to provide options for companies that are looking for private cloud services. They are working to determine the size of data center that they want to build. The Uptime Institute created the Data Center Site Infrastructure Tier Standard: Topology, which defines four tiers of data centers. The cloud provider has a goal of reaching Tier III.

How is that characterized in general?

A. Basic Capacity
B. Redundant Capacity Components
C. Concurrently Maintainable
D. Fault Tolerance

A

C. Concurrently Maintainable

Explanation:
The Uptime Institute publishes one of the most widely used standards on data center tiers and topologies. The standard is based on four tiers, which include:

Tier I: Basic Capacity
Tier II: Redundant Capacity Components
Tier III: Concurrently Maintainable
Tier IV: Fault Tolerance
24
Q

Which of the following involves tracking known issues and having documented solutions or workarounds?

A. Problem Management
B. Continuity Management
C. Service Level Management
D. Availability Management

A

A. Problem Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
25
Q

Which of the following is NOT a common way for organizations to gather information about a cloud service provider’s (CSP’s) operations for their risk assessment?

A. On-Site Audits
B. Service Level Agreements (SLAs)
C. Audit Reports
D. Policy Review

A

A. On-Site Audits

Explanation:
A risk assessment of a CSP’s services should be based on the organization’s policies, SLAs, audit reports, and similar information. This will provide insight into the potential risks that the vendor has addressed and the controls that they have in place for doing so.

Few CSPs will allow customers to perform their own audits of the provider’s facilities.

26
Q

The “CIA triad” is MOST closely related to which of the following cloud considerations?

A. Privacy
B. Regulatory Oversight
C. Security
D. Governance

A

C. Security

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.
27
Q

Which of the following areas is always entirely the CSP’s responsibility regardless of the cloud service model used?

A. Infrastructure
B. Virtual networking
C. Databases
D. Storage

A

A. Infrastructure

Explanation:
The Cloud Service Provider (CSP) is always responsible for managing the infrastructure. The infrastructure includes the servers, routers, switches, firewalls, and so on that make a data center.

The consumer is always responsible for their Governance, Risk management, and Compliance (GRC) and their data.

The operating systems, virtual networking, and storage responsibilities change according to the cloud service model.

28
Q

Organizations like yours are looking for guidance on how to meet business objectives while also managing and minimizing the risks that come with implementing cloud computing solutions. Which of the following would be the most helpful?

A. Internet Assigned Numbers Authority (IANA)
B. Open Web Application Security Project (OWASP)
C. Cloud Security Alliance (CSA)
D. Institute of Electrical and Electronics Engineers (IEEE)

A

C. Cloud Security Alliance (CSA)

Explanation:
The Cloud Security Alliance (CSA) is an organization that offers guidance to organizations deploying a cloud environment. They provide support to cloud providers and customers to enable trust in the cloud. This includes the Cloud Controls Matrix (CCM) and the Enterprise Architecture [formerly the Trusted Cloud Initiative (TCI)].

OWASP is a group working to improve the security of applications.

IANA is a global organization that is responsible for the assignment of Internet Protocol (IP) addresses.

IEEE is an association for electrical and electronics engineers.

29
Q

In which of the following cloud deployment models is the cloud provider responsible for the operating systems and hosting environment, while the customer is responsible for deploying their applications within the provided platform infrastructure?

A. Infrastructure as a Service (IaaS)
B. Communication as a Service (CaaS)
C. Platform as a Service (PaaS)
D. Software as a Service (SaaS)

A

C. Platform as a Service (PaaS)

Explanation:
In a PaaS cloud deployment model, the cloud provider manages and maintains the operating system and hosting environment, while the customer is only responsible for deploying their applications within the given platform.

In SaaS, the customer is responsible for their data, but the provider is responsible for the software/application, the Operating System (OS), and everything below that.

In IaaS, the customer is responsible for the OSs that they bring to the cloud (their servers, virtual desktops, databases, virtual routers, firewalls, switches, etc.) and everything above the OS. The provider is responsible for the hypervisor and everything below it.

CaaS is effectively a SaaS offering. So, the customer is responsible for their data (calls, chats, recordings, etc.), and the provider is responsible for the application and everything below it.

30
Q

The Uptime Institute publishes one of the most widely used standards on data center tiers and topologies. At which tier is the data center required to have equipment that is concurrently maintainable?

A. Three
B. One
C. Two
D. Four

A

A. Three

Explanation:
The Uptime Institute publishes one of the most widely used standards on data center tiers and topologies. The standard is based on four tiers, which include:

Tier I: Basic Capacity
Tier II: Redundant Capacity Components
Tier III: Concurrently Maintainable. This means that it is unnecessary to shut down equipment to replace hardware elements. It is possible to swap out line cards, for example, without taking the server offline. The term for this is hot-swappable.
Tier IV: Fault Tolerance
31
Q

Donie is working for a corporation that is about to undergo an audit. The auditor knows that the corporation is subject to the Federal Information Security Management Act (FISMA). Which type of corporation is Donie employed by?

A. Healthcare
B. Government agency
C. Retail
D. Banking

A

B. Government agency

Explanation:
The organization is a government agency. Government agencies are affected by the Federal Information Security Management Act (FISMA). The law defines a comprehensive framework to protect government information, operations, and assets against natural or human-made threats. It requires that government agencies conduct vulnerability scans.

None of the other organizations are affected by FISMA.

Healthcare in the U.S. must be in compliance with the Health Insurance Portability and Accountability Act (HIPAA).

Banking is subject to Basel III in Europe.

Retail must comply with the Payment Card Industry Data Security Standard (PCI DSS).

32
Q

Which of the following data classification labels might be used to determine which regulations and laws apply to the data?

A. Sensitivity
B. Ownership
C. Type
D. Criticality

A

C. Type

Explanation:
Data owners are responsible for data classification, and data is classified based on organizational policies. Some of the criteria commonly used for data classification include:

Type: Specifies the type of data, including whether it has personally identifiable information (PII), intellectual property (IP), or other sensitive data protected by corporate policy or various laws.
Sensitivity: Sensitivity refers to the potential results if data is disclosed to an unauthorized party. The Unclassified, Confidential, Secret, and Top Secret labels used by the U.S. government are an example of sensitivity-based classifications.
Ownership: Identifies who owns the data if the data is shared across multiple organizations, departments, etc.
Jurisdiction: The location where data is collected, processed, or stored may impact which regulations apply to it. For example, GDPR protects the data of EU citizens.
Criticality: Criticality refers to how important data is to an organization’s operations.
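The classification criteria above can be pictured as labels attached to a record, which downstream controls then use to decide which regulations apply. The sketch below is hypothetical: the label values and the type-to-regulation mapping are illustrative, not drawn from any standard.

```python
from dataclasses import dataclass

# Illustrative mapping from data type to applicable regulations.
REGULATIONS_BY_TYPE = {
    "PII": ["GDPR"],
    "cardholder": ["PCI DSS"],
    "health": ["HIPAA"],
}

@dataclass
class DataLabel:
    data_type: str       # e.g. "PII", "cardholder", "health"
    sensitivity: str     # e.g. "Confidential", "Secret"
    owner: str           # owning department or organization
    jurisdiction: str    # where the data is collected/processed/stored
    criticality: str     # importance to operations

    def applicable_regulations(self):
        """Derive regulations from the 'type' label, as the card describes."""
        return REGULATIONS_BY_TYPE.get(self.data_type, [])

label = DataLabel("PII", "Confidential", "HR", "EU", "High")
```

Here `label.applicable_regulations()` returns `["GDPR"]`, showing how the type label (rather than sensitivity or criticality) is what drives regulatory applicability.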
33
Q

Which of the following techniques replaces sensitive data with non-sensitive characters in certain contexts?

A. Tokenization
B. Hashing
C. Masking
D. Encryption

A

C. Masking

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is the US government standard for secure hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
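The contrast between masking, hashing, and tokenization can be shown with a few lines of standard-library Python. This is a minimal sketch on a sample card number; the function names and the in-memory "vault" are illustrative, and a real tokenization vault would be a hardened, access-controlled service.

```python
import hashlib
import secrets

def mask_card(number: str) -> str:
    """Masking: replace all but the last four digits with asterisks."""
    return "*" * (len(number) - 4) + number[-4:]

def hash_value(value: str) -> str:
    """Hashing: one-way; the same input always yields the same digest."""
    return hashlib.sha256(value.encode()).hexdigest()

_token_vault = {}  # token -> original value; kept in a secure location

def tokenize(value: str) -> str:
    """Tokenization: hand out a random token; keep the mapping in a vault."""
    token = secrets.token_hex(8)
    _token_vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Look the original value back up from the secure mapping."""
    return _token_vault[token]
```

For example, `mask_card("4111111111111111")` yields `"************1111"`, the hash of a value is stable but irreversible, and a token is random and meaningless without the vault, which is why tokens are safe to place on untrusted systems.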
34
Q

Odelia is working for a manufacturing business that has implemented a lot of Internet of Things (IoT) technology in their manufacturing plant. They are now considering a move to the cloud for all their server, application, and database technology. While doing a risk assessment, they looked at conditions such as the regulatory changes and their compliance requirements combined with their industry sector.

With that information, what type of risk are they looking at?

A. Residual risk
B. Risk appetite
C. Risk capacity
D. Inherent risk

A

D. Inherent risk

Explanation:
Inherent risk refers to the level of risk that exists within an organization’s operations or activities without considering the mitigating effects of controls or risk management measures. It represents the potential for loss or negative impact on the organization’s objectives in the absence of any controls or risk mitigation efforts.

Residual risk refers to the level of risk that remains after an organization has implemented controls or risk mitigation measures to address the inherent risk. It represents the risk that still exists despite the implementation of risk management strategies and control mechanisms.

Risk appetite refers to the level of risk that an entity is willing to accept or tolerate in pursuit of its objectives. It represents the organization’s willingness to take risks to achieve its strategic goals and objectives. The entity is sometimes defined as senior management, and sometimes it is defined as the corporation.

Risk capacity refers to the maximum level of risk that an organization is willing and able to take or absorb without significantly jeopardizing its ability to achieve its objectives. It represents the organization’s tolerance for risk and its ability to withstand potential negative impacts.

35
Q

Which of the following is NOT a common way for organizations to gather information about a cloud service provider’s (CSP’s) operations for their risk assessment?

A. Policy Review
B. Service Level Agreements (SLAs)
C. On-Site Audits
D. Audit Reports

A

C. On-Site Audits

36
Q

An information security manager, Asali, is working for a manufacturing company as their manager for the Disaster Recovery team. Given the supply chain issues the company has experienced in the last five years, they are working to figure out how to prevent the same problems from happening in the future. Asali and her team are assessing the primary business line to determine the worst potential problems in supply. She is calculating how much the failure in one supply chain could cost the company to be able to balance how much money they need to spend on a solution.

How should she calculate the total cost of this particular failure per year?

A. Annual Rate of Occurrence x Single Loss Expectancy (ARO x SLE)
B. Asset Value x Exposure Factor (AV x EF)
C. Mean Time to Repair x Single Loss Expectancy (MTR x SLE)
D. Annual Rate of Occurrence x Service Delivery Objective (ARO x SDO)

A

A. Annual Rate of Occurrence x Single Loss Expectancy (ARO x SLE)

Explanation:
To find annual loss expectancy, you must first know the values for Annual Rate of Occurrence (ARO) and Single Loss Expectancy (SLE). The equation used to find Annual Loss Expectancy (ALE) is SLE x ARO = ALE.

To calculate the SLE, you take the AV x EF.

The SDO is the percentage of service functionality that must be available at the alternate site that a company fails over to in the event of a disaster. It is not used in this equation.

The MTR or MTTR is the average (mean) time required to repair something that is broken. It, too, is not part of the ALE calculation.

It is possible that you need to know these equations for the test, but it is unlikely that you will have to actually do any math. It is also good to know terms like MTTR and SDO.
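The two equations fit together in a few lines. The numbers below are purely illustrative (they are not from the scenario) and just show the arithmetic: SLE = AV x EF, then ALE = SLE x ARO.

```python
# Quantitative risk sketch with made-up example numbers.
asset_value = 500_000        # AV: value of the supply chain line ($)
exposure_factor = 0.40       # EF: fraction of the asset lost per event
annual_rate = 2              # ARO: expected events per year

sle = asset_value * exposure_factor   # Single Loss Expectancy
ale = sle * annual_rate               # Annualized Loss Expectancy

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $200,000, ALE = $400,000
```

With these inputs, a single supply chain failure costs $200,000, and at two expected failures per year the annualized exposure is $400,000, which is the figure Asali would weigh against the cost of a solution.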

37
Q

A cloud architect is helping to design and build a new data center. She knows that there are many institutions that create standards that govern the physical design of data centers. Of the following, which is NOT an institution that creates standards governing the physical design of data centers?

A. International Data Center Authority (IDCA)
B. Uptime Institute
C. National Fire Protection Association (NFPA)
D. ITIL

A

D. ITIL

Explanation:
ITIL, formerly an acronym for Information Technology Infrastructure Library, provides detailed practices for IT service management. These practices focus on aligning IT services with the needs of the business rather than on data center design and building standards.

The Uptime Institute, the National Fire Protection Association (NFPA), and the International Data Center Authority (IDCA) are all institutions that create standards used to govern the design and building of data centers.
Reference:

(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 229-230.

The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 213, 219, 221.

38
Q

A cloud service provider offers high resiliency guarantees by performing data backup across its sites. Which of the following security threats does this practice introduce?

A. Unauthorized Access
B. Regulatory Non-Compliance
C. Improper Disposal
D. Unauthorized Provisioning

A

B. Regulatory Non-Compliance

Explanation:
Data storage in the cloud faces various potential threats, including:

Unauthorized Access: Cloud customers should implement access controls to prevent unauthorized users from accessing data. Also, a cloud service provider (CSP) should implement controls to prevent data leakage in multitenant environments.
Unauthorized Provisioning: The ease of setting up cloud data storage may lead to shadow IT, where cloud resources are provisioned outside of the oversight of the IT department. This can incur additional costs to the organization and creates security and compliance challenges since the security team can’t secure data that they don’t know exists.
Regulatory Non-Compliance: Various regulations mandate security controls and other requirements for certain types of data. A failure to comply with these requirements — by failing to protect data or allowing it to flow outside of jurisdictional boundaries — could result in fines, legal action, or a suspension of the business’s ability to operate. Backup sites can cause regulatory compliance challenges if these backups store customer data outside of an approved jurisdiction or cause additional laws to apply.
Jurisdictional Issues: Different jurisdictions have different laws and regulations regarding data security, usage, and transfer. Many CSPs have locations around the world, which can violate these laws if data is improperly protected or stored in an unauthorized location.
Denial of Service: Cloud environments are publicly accessible and largely accessible via the Internet. This creates the risk of Denial of Service attacks if the CSP does not have adequate protections in place.
Data Corruption or Destruction: Data stored in the cloud can be corrupted or destroyed by accident, malicious intent, or natural disasters.
Theft or Media Loss: CSPs are responsible for the physical security of their data centers. If these security controls fail, an attacker may be able to steal the physical media storing an organization’s data.
Malware: Ransomware and other malware increasingly target cloud environments as well as local storage. Access controls, secure backups, and anti-malware solutions are essential to protecting cloud data against theft or corruption.
Improper Disposal: The CSP is responsible for ensuring that physical media is disposed of correctly at the end of life. Cloud customers can also protect their data by using encryption to make the data stored on a drive unreadable.
39
Q

Which of the following roles maintains, administers, and secures data based on policies and instructions provided to them?

A. Data Steward
B. Data Custodian
C. Data Owner
D. Data Processor

A

B. Data Custodian

Explanation:
There are several roles and responsibilities related to data ownership, including:

Data Owner: The data owner creates or collects the data and is responsible for it.
Data Custodian: A data custodian is responsible for maintaining or administrating the data. This includes securing the data based on instructions from the data owner.
Data Steward: The data steward ensures that the data’s context and meaning are understood and that it is used properly.
Data Processor: A data processor uses the data, including manipulating, storing, or moving it. Cloud providers are data processors.
40
Q

Adriaan works for a large pharmaceutical company as their information security officer. He is working with the cloud data architect to plan the move of a critical server and its associated data. The data will be stored as big data at one of the large public cloud providers. The step that they are currently working on is the movement of the data to the cloud. Since the data set is roughly one petabyte, it takes a great deal of effort to get the data to the cloud servers. They have chosen to use the physical device transfer option.

How should they protect the data in transit from the on-premises data center to the cloud data center?

A. Encrypt the data in transit with Internet Protocol Security (IPSec)
B. Encrypt the data at rest with the Advanced Encryption Standard (AES)
C. Encrypt the data in transit with Transport Layer Security (TLS)
D. Encrypt the data in use with Fully Homomorphic Encryption (FHE)

A

B. Encrypt the data at rest with the Advanced Encryption Standard (AES)

Explanation:
This is a physical drive being transported by vehicle from the on-premises data center to the cloud data center. While on the drive, the data is at rest, so encrypting it with AES is a great answer.

Data in transit means that it is moving across a network over a wireless, wired, or fiber connection. If that were the case here, either TLS or IPSec would be a good choice.

Data in use means that it is being processed. If the data were in use and an FHE scheme worked for that application, that would be optimal.

41
Q

Communication is critical and necessary between parties, even more so when it comes to IT cloud services. What role supports selecting, deploying, and managing cloud services to simplify cloud service adoption?

A. Cloud regulator
B. Cloud service broker
C. Cloud product vendor
D. Cloud service provider

A

B. Cloud service broker

Explanation:
Cloud service brokers play a crucial role in simplifying cloud service adoption, enhancing security and compliance, providing value-added services, and enabling organizations to effectively manage and leverage the benefits of cloud computing.

Cloud regulators are governmental or regulatory bodies that oversee and enforce policies, regulations, and standards related to cloud computing and Cloud Service Providers (CSPs). They play a crucial role in ensuring the protection of user data, promoting fair competition, and maintaining the integrity and trustworthiness of cloud services.

Cloud product vendors refer to companies or organizations that develop, market, and provide cloud-based products and services to customers. These vendors offer a wide range of solutions, platforms, and infrastructure to support cloud computing and enable businesses and individuals to leverage the benefits of the cloud.

A Cloud Service Provider (CSP) is a company or organization that offers cloud computing services and resources to individuals, businesses, and other organizations. These providers typically operate large-scale data centers and infrastructure to deliver a range of cloud-based services, such as computing power, storage, networking, databases, and applications.

42
Q

Which of the following is NOT the name of a monitoring service of a major CSP?

A. GCP Operations Suite
B. CloudLog
C. CloudWatch
D. Azure Monitor

A

B. CloudLog

Explanation:
Cloud service providers often offer their own monitoring services. Some of the major ones include:

AWS: CloudWatch
Azure: Azure Monitor
GCP: GCP Operations Suite
43
Q

Which of the following best practices is MOST related to preventing abuse of management functionality?

A. Redundancy
B. Scheduled Downtime and Maintenance
C. Isolated Network and Robust Access Controls
D. Configuration Management and Change Management

A

C. Isolated Network and Robust Access Controls

Explanation:
Some best practices for designing, configuring, and securing cloud environments include:

Redundancy: A cloud environment should not include single points of failure (SPOFs) where the outage of a single component brings down a service. High availability and duplicate systems are important to redundancy and resiliency.
Scheduled Downtime and Maintenance: Cloud systems should have scheduled maintenance windows to allow patching and other maintenance to be performed. This may require a rotating maintenance window to avoid downtime.
Isolated Network and Robust Access Controls: Access to the management plane should be isolated using access controls and other solutions. Ideally, this will involve the use of VPNs, encryption, and least privilege access controls.
Configuration Management and Change Management: Systems should have defined, hardened default configurations, ideally using infrastructure as code (IaC). Changes should only be made via a formal change management process.
Logging and Monitoring: Cloud environments should have continuous logging and monitoring, and vulnerability scans should be performed regularly.
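The "isolated network and robust access controls" practice can be sketched as two gates in front of the management plane: a network-isolation check and a least-privilege role check. This is a toy sketch; the roles, actions, and VPN flag are all hypothetical stand-ins for real network segmentation and IAM policy.

```python
# Illustrative least-privilege policy for management-plane actions.
ROLE_PERMISSIONS = {
    "admin": {"create_vm", "delete_vm", "read_logs"},
    "auditor": {"read_logs"},
}

def authorize(role: str, action: str, via_vpn: bool) -> bool:
    """Allow a management-plane call only if it arrives over the isolated
    network (here modeled as a VPN flag) AND the role grants the action."""
    if not via_vpn:                       # network isolation gate
        return False
    return action in ROLE_PERMISSIONS.get(role, set())
```

An auditor over the VPN can read logs but not delete VMs, and even an admin is refused off the isolated network, which is the layered effect the best practice is after.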
44
Q

Walker is working for a consulting firm that provides cloud consulting services. He is on a contract where the customer is asking for assistance with a particular topic. The customer/Cloud Customer (CC) needs to ensure high availability of their systems to be compliant with a particular regulation. What do they need to build into their environment?

A. Redundancy
B. Multitenancy
C. Load balancing
D. High availability

A

A. Redundancy

Explanation:
Redundancy in the cloud refers to the practice of creating duplicate or backup resources, services, or infrastructure components to ensure high availability and fault tolerance. It is a critical aspect of cloud architecture that aims to minimize the impact of failures and disruptions by providing backup systems or components that can seamlessly take over in case of an outage. Redundancy ensures that critical services and resources remain accessible and operational even in the event of hardware failures, software glitches, or other unforeseen incidents; by distributing workloads across redundant resources, organizations can maintain service continuity and minimize downtime.

Load balancing is a technique used in computer networking and distributed systems to efficiently distribute incoming network traffic across multiple servers or resources. The primary goal of load balancing is to optimize resource utilization, enhance performance, and ensure high availability and scalability. Load balancing helps to evenly distribute the workload among servers, preventing any individual server from becoming overloaded or overwhelmed. It is part of what is needed to provide high availability, but it is not the complete picture, so the more general answer, redundancy, is the better choice here.
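The round-robin distribution described above can be sketched in a few lines of Python (the server names are invented, and a production balancer would also perform health checks):

```python
from itertools import cycle


class RoundRobinBalancer:
    """Distribute incoming requests evenly across a pool of servers."""

    def __init__(self, servers):
        self._pool = cycle(servers)  # endless round-robin iterator

    def next_server(self):
        return next(self._pool)


# Hypothetical server names for illustration only.
lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assignments = [lb.next_server() for _ in range(6)]
# Six requests are spread evenly: each server handles exactly two.
```

Because no single server receives all the traffic, the pool as a whole can absorb the failure of one member, which is the redundancy property the question is driving at.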

High availability refers to a system’s or infrastructure’s ability to remain operational and accessible for an extended period with minimal downtime or disruptions. It involves implementing measures to ensure continuous availability of services, applications, and resources, even in the face of hardware failures, software glitches, network issues, or other unforeseen incidents. High availability is what the question says they are trying to achieve, not the answer.

Cloud multitenancy is an architectural approach in which a cloud computing environment enables multiple tenants or customers to share the same cloud infrastructure, services, and resources while maintaining logical isolation and security. In a cloud multitenant model, the cloud service provider hosts and manages a single instance of the software or application, which is then accessed and utilized by multiple tenants. However, this does not provide high availability.

45
Q

A financial organization is going to hire another company to do some testing. They are not going to give the company any special knowledge of their cloud Infrastructure as a Service (IaaS) environment for the testing. Instead, the testers will use the same techniques, toolsets, and methodologies that an actual bad actor would use, actively attempting to attack and compromise the IaaS.

What type of test is being described here and what conditions should be met before testing?

A. Static Application Security Testing (SAST) with approval from the cloud provider
B. Vulnerability scan with permission and assistance from the cloud provider
C. Penetration test with permission and approval from the cloud provider
D. Penetration test with permission and assistance from the cloud provider

A

C. Penetration test with permission and approval from the cloud provider

Explanation:
During a penetration test, the tester is trying to actively break into live systems. This is meant to simulate a real-life scenario, so the tester uses the same techniques, methodologies, and toolsets that an actual attacker would use to compromise a system. As this is an IaaS environment, permission and approval from the cloud provider are necessary. Their assistance is not needed.

During Static Application Security Testing (SAST), the tester has knowledge of and access to the source code, and all testing is done in an offline manner.

Vulnerability scans are usually done by an organization to ensure that their systems are hardened against known vulnerabilities. It assesses the environment, looking for unpatched systems, open ports, or any other vulnerabilities based on the systems that are in place.

46
Q

A large software development company knows that the advent of quantum cryptography will challenge our current cryptographic tools, algorithms, implementations, and software. This corporation is looking for a source of information to help them secure their software and the customers’ data into this new future.

Where could they turn to for information?
A. National Institute of Standards and Technology (NIST)
B. Software Assurance Forum for Excellence in Code (SAFECode)
C. Open Web Application Security Project (OWASP)
D. International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) 27034

A

B. Software Assurance Forum for Excellence in Code (SAFECode)

Explanation:
SAFECode is a global nonprofit organization that brings industry leaders and technical experts together to promote effective secure software development programs. One of the concepts it addresses is crypto agility: the ability of a system or organization to adapt and transition to different cryptographic algorithms or protocols as needed. It encompasses the capability to efficiently and securely switch from one cryptographic algorithm or key management system to another, ensuring the continued confidentiality, integrity, and availability of data and communications.

OWASP is a nonprofit organization dedicated to improving the security of software applications and the web. OWASP provides resources, tools, and knowledge to help individuals and organizations understand, identify, and mitigate security risks associated with web applications.

ISO/IEC 27034 is an international standard that provides guidelines and best practices for implementing and managing Application Security. It focuses specifically on the protection of applications throughout their lifecycle, from the design and development stages to deployment, operation, maintenance, and disposal.

The National Institute of Standards and Technology (NIST) is a federal agency within the United States Department of Commerce. NIST’s mission is to promote innovation and industrial competitiveness by advancing science, standards, and technology in various fields, including cybersecurity, measurement, and information technology.

47
Q

Which of the following cloud audit mechanisms is designed to identify anomalies or trends that could point to events of interest?

A. Access Control
B. Correlation
C. Packet Capture
D. Log Collection

A

B. Correlation

Explanation:
Three essential audit mechanisms in cloud environments include:

Log Collection: Log files contain useful information about events that can be used for auditing and threat detection. In cloud environments, it is important to identify useful log files and collect this information for analysis. However, data overload is a common issue with log management, so it is important to collect only what is necessary and useful.
Correlation: Individual log files provide a partial picture of what is going on in a system. Correlation looks at relationships between multiple log files and events to identify potential trends or anomalies that could point to a security incident.
Packet Capture: Packet capture tools collect the traffic flowing over a network. This is often only possible in the cloud in an IaaS environment or using a vendor-provided network mirroring capability.

Access controls are important but not one of the three core audit mechanisms in cloud environments.
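The correlation step described above can be sketched in Python. The log entries and field names below are invented for illustration; real correlation engines (such as a SIEM) work the same way at much larger scale:

```python
from collections import Counter

# Simplified entries from two separate log sources (illustrative data).
auth_failures = [
    {"ip": "10.0.0.5", "event": "failed_login"},
    {"ip": "10.0.0.5", "event": "failed_login"},
    {"ip": "10.0.0.9", "event": "failed_login"},
]
firewall_denies = [
    {"ip": "10.0.0.5", "event": "deny"},
    {"ip": "10.0.0.7", "event": "deny"},
]


def correlate(*log_sources):
    """Return IPs that appear in more than one log source.

    Each source alone shows only part of the picture; an address that
    shows up across sources is a potential anomaly worth investigating.
    """
    seen = Counter()
    for source in log_sources:
        for ip in {entry["ip"] for entry in source}:
            seen[ip] += 1
    return {ip for ip, count in seen.items() if count > 1}


suspicious = correlate(auth_failures, firewall_denies)
```

Here `10.0.0.5` appears in both the authentication failures and the firewall denies, so correlation flags it even though neither log alone would stand out.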

48
Q

Which of the following risks is an Infrastructure as a Service offering exposed to the LEAST compared to other service models?

A. Lack of Required Expertise
B. External Threats
C. Virtualization
D. Personnel Threats

A

C. Virtualization

Explanation:
In the Infrastructure as a Service (IaaS) model, the cloud customer controls most of their infrastructure stack. However, some potential risks in an IaaS model include:

Personnel Threats: The provider’s employees have access to the physical infrastructure hosting customers’ environments. Negligent or malicious employees could cause harm to the customer.
External Threats: Malware, denial of service (DoS), and other attacks can impact an organization’s systems regardless of the cloud model.
Lack of Required Expertise: With IaaS, an organization is remotely managing systems in an environment defined by the provider. Without sufficient expertise in system management or knowledge of the provider’s environment and relevant security settings, the organization may have misconfigurations or other issues that place it at risk.

Virtualization applies less to IaaS than other models since less of the infrastructure stack is virtualized.

49
Q

A cloud administrator needs to make use of the cloud component that can create, stop, and start virtual machines as well as provision them with the needed resources such as memory, storage, and CPU. What cloud component can be used to do all the above items?

A. Management plane
B. Transport Layer Security (TLS)
C. Secure Shell (SSH)
D. Remote Desktop Protocol (RDP)

A

A. Management plane

Explanation:
The management plane in a cloud environment can be used to create, stop, and start virtual machines as well as provision the virtual machines with the needed resources. Because the management plane has access to all the virtual machines from a high level, it’s very important that security measures are taken to prevent unauthorized access to the management plane.

SSH and RDP are two protocols used by administrators to connect to servers, routers, switches, and so on for configuration purposes, so they are plausible answers. However, the question asks about a cloud component, so management plane is the better answer. SSH and RDP have been in use by administrators far longer than modern cloud computing has existed.

TLS is a protocol that could be used to secure the management plane, but it provides only encryption, and the question asks for something that can start and stop virtual machines. TLS cannot do that, nor can SSH or RDP.

50
Q

A publicly traded marketing organization is alerted that a regulatory agency is initiating an investigation against it. The investigation was initiated due to suspicion and accusations of mishandling of corporate finances. The marketing organization has resisted providing all the data that the regulatory agency is asking for. As a result, the regulatory agency needs a mechanism to compel the organization to protect the requested data from destruction until the issue is resolved in the courts.

What tool do they need to use to ensure the data is protected?

A. Attestation
B. Data retention
C. Digital forensics
D. Legal hold

A

D. Legal hold

Explanation:
When an organization is told that a regulatory body is commencing an inquiry against it, a legal hold should be immediately imposed. The organization must pause all data deletion actions relevant to the investigation until the matter is resolved. A legal hold has significant ramifications for data retention.

Data retention should involve a policy with requirements for a business to store and protect data for a specific period of time, either a minimum or a maximum amount of time. This is a normal process for the business. A legal hold is a legal requirement to store data until resolved in the courts.

Digital forensics would be the process of collecting and analyzing data to uncover the facts of a possible mishandling of corporate finances.

Attestation means swearing to the truth of something. Someone testifying in court is attesting to the facts as they know them. That could happen as a result of this investigation.

51
Q

Which of the following is a benefit of using a private cloud over a hybrid, community, or public cloud deployment?

A. Security
B. Most scalable
C. Easier setup
D. Less expensive

A

A. Security

Explanation:
The private cloud deployment model is the most secure cloud deployment model. However, private clouds do not offer an easier setup, less expense, or more scalability than the other cloud deployment methods.

52
Q

Your organization is considering using a data rights management solution that provides replication restrictions. Which of the following is the MOST accurate description of this functionality?

A. Data is secure no matter where it is stored
B. The illicit or unauthorized copying of data is prohibited
C. Dates and time-limitations can be applied
D. Permissions can be modified after a document has been shared

A

B. The illicit or unauthorized copying of data is prohibited

Explanation:
Replication restrictions ensure that no unauthorized or unlawful copying of protected data occurs.

Date and time limitations are exactly that: they allow the company to control when and for how long someone can access a particular file.

The company that controls the content can modify the level of access someone has, even after the document has been shared.

The security mechanisms persist with the document no matter where the data is stored.

53
Q

HTML, XML, and JSON are examples of formats used to organize which of the following types of data?

A. Structured
B. Unstructured
C. Mostly structured
D. Semi-structured

A

D. Semi-structured

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.
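A data discovery tool's handling of semi-structured data can be sketched as a recursive walk over JSON, keyed off tag names. The sensitive-key list and the sample record below are invented for illustration:

```python
import json

SENSITIVE_TAGS = frozenset({"ssn", "credit_card", "password"})


def find_sensitive_keys(obj, sensitive=SENSITIVE_TAGS, path=""):
    """Recursively walk semi-structured data, reporting paths to sensitive tags.

    Because the tags label each value, discovery only needs to match key
    names; it does not have to infer meaning from raw content as it would
    with unstructured data.
    """
    hits = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            child = f"{path}.{key}" if path else key
            if key.lower() in sensitive:
                hits.append(child)
            hits.extend(find_sensitive_keys(value, sensitive, child))
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            hits.extend(find_sensitive_keys(item, sensitive, f"{path}[{i}]"))
    return hits


record = json.loads('{"name": "Ana", "payment": {"credit_card": "4111..."}}')
print(find_sensitive_keys(record))  # ['payment.credit_card']
```

The same approach applies to XML and HTML; only the parsing step changes.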
54
Q

Which of the following cloud characteristics is a benefit of leasing virtualized cloud infrastructure rather than physical devices in an on-prem data center?

A. On-Demand Self-Service
B. Rapid Elasticity and Scalability
C. Metered Service
D. Broad Network Access

A

B. Rapid Elasticity and Scalability

Explanation:
The six common characteristics of cloud computing include:

Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
55
Q

At which stage of the IAM process does the system determine whether a user should be granted access to a particular resource?

A. Accountability
B. Identification
C. Authorization
D. Authentication

A

C. Authorization

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
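The four practices can be seen in sequence in a toy sketch. All usernames, passwords, roles, and permissions below are invented; real systems hash credentials and log to a centralized audit service rather than an in-memory list:

```python
# Hypothetical user store and permission model for illustration only.
USERS = {"walker": {"password": "s3cret", "roles": {"reader"}}}
PERMISSIONS = {"reader": {"read_report"}, "admin": {"read_report", "delete_report"}}
AUDIT_LOG = []


def access_resource(username, password, action):
    # Identification: the username uniquely identifies the subject.
    user = USERS.get(username)
    # Authentication: the subject proves the claimed identity.
    if user is None or user["password"] != password:
        return "denied: authentication failed"
    # Authorization: check assigned roles against the requested action.
    allowed = any(action in PERMISSIONS[role] for role in user["roles"])
    # Accountability: record the decision for later auditing.
    AUDIT_LOG.append((username, action, allowed))
    return "granted" if allowed else "denied: not authorized"
```

Note that authorization and accountability only occur after authentication succeeds, which mirrors the ordering of the four practices above.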
57
Q

Dafina is configuring the components needed for the Platform as a Service (PaaS) implementation that will allow her corporation to securely store its sensitive and critical data in the cloud. As the corporation is very large, it needs a way to centrally store and protect the encryption keys.

What should she configure to allow for centralized storage and secure sharing of the cryptographic keys?

A. Key Management Interoperability Protocol (KMIP)
B. Advanced Encryption Standard (AES)
C. Client-side key management
D. Server-side key management

A

A. Key Management Interoperability Protocol (KMIP)

Explanation:
Key Management Interoperability Protocol (KMIP) is a widely adopted industry standard for managing cryptographic keys and related objects across different cryptographic systems and platforms. It provides a standardized protocol for key lifecycle management, including key generation, storage, distribution, and deletion. The word "sharing" in the question is the critical detail that makes this protocol the best answer.

Client-side key management refers to the practice of managing encryption keys on the client side of a system or application. It involves generating, storing, and securely managing cryptographic keys within the client’s environment to ensure the confidentiality, integrity, and accessibility of sensitive data.

Server-side key management refers to the practice of managing encryption keys on the server side of a system or application. In this approach, cryptographic keys used for encryption and decryption operations are generated, stored, and managed within the server environment. Server-side key management ensures the security and integrity of sensitive data stored on the server.

Advanced Encryption Standard (AES) is a symmetric encryption algorithm widely used for securing sensitive data. It is a symmetric key algorithm, which means the same key is used for both encryption and decryption processes. This is the algorithm that will use the keys, but it is not involved in transmission of the keys.
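AES itself requires a cryptographic library, but the symmetric property described above (the same key both encrypts and decrypts) can be illustrated with a standard-library-only stand-in. This XOR routine is a teaching toy only and must never be used for real encryption:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric routine: applying it twice with the same key round-trips.

    Real symmetric ciphers such as AES share this same-key property but are
    built to resist cryptanalysis; XOR with a repeating key is trivially broken.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


key = b"shared-secret"
ciphertext = xor_cipher(b"sensitive data", key)
plaintext = xor_cipher(ciphertext, key)  # same key recovers the plaintext
```

The need to distribute that one shared key securely to every party is exactly the problem KMIP-based centralized key management addresses.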

58
Q

Which of the following is a major difference between public and private cloud environments?

A. On-Demand Self-Service
B. Multitenancy
C. Broad Network Access
D. Resource Pooling

A

B. Multitenancy

Explanation:
The six common characteristics of cloud computing include:

Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure. Private cloud environments are single-tenant environments used by a single organization.
59
Q

Which cloud computing role delivers value by aggregating services from many vendors, integrating them with an organization’s current infrastructure, and customizing services that a Cloud Service Provider (CSP) cannot provide?

A. Cloud service partner
B. Regulator
C. Cloud service auditor
D. Cloud service broker

A

D. Cloud service broker

Explanation:
A cloud service broker, also known as a cloud broker or cloud services intermediary, is an entity or organization that acts as an intermediary between cloud service providers and cloud service consumers. The primary role of a cloud service broker is to assist customers in selecting, integrating, and managing cloud services from multiple providers to meet their specific needs.

A cloud service partner, also known as a Cloud Service Provider (CSP) partner, is a company or organization that collaborates with a cloud service provider to deliver cloud-based solutions and services to customers. The partnership is established to leverage each other's expertise, resources, and capabilities to provide comprehensive cloud solutions that meet the specific needs of customers. Partner is a more generic term; it can include brokers and auditors as well as other roles.

A cloud auditor, also known as a cloud service auditor or cloud security auditor, is a professional or organization that performs independent assessments and evaluations of cloud service providers to ensure compliance, security, and operational integrity. The primary role of a cloud auditor is to evaluate and validate the effectiveness of a cloud service provider’s controls, processes, and policies to provide assurance to customers or stakeholders.

A regulator is a governmental or regulatory body responsible for overseeing and enforcing laws, regulations, and standards pertaining to the use, security, and privacy of cloud computing services. These regulators play a crucial role in ensuring compliance, data protection, and fair practices in the cloud computing industry.

60
Q

Sadie needs to protect the information within her business. She has been tasked with protecting data that is traversing the network from the server to the client. What could she use?

A. Domain Name System (DNS)
B. Fibre Channel
C. Obfuscation
D. Data Leak Prevention (DLP)

A

C. Obfuscation

Explanation:
Obfuscation means to obscure and conceal. Methods of obfuscation include encryption, tokenization, and masking; for example, encrypting traffic with Transport Layer Security (TLS) protects data as it travels from the server to the client.
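Masking, one of the obfuscation methods mentioned above, can be sketched as follows (the card-number format is illustrative):

```python
def mask_card_number(pan: str, visible: int = 4) -> str:
    """Replace all but the trailing digits with '*' (masking-style obfuscation).

    The masked value can still be displayed or logged without exposing
    the full number, which is the point of obfuscation.
    """
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - visible) + digits[-visible:]


masked = mask_card_number("4111 1111 1111 1234")  # ************1234
```

Unlike encryption, masking is one-way here: the original digits are not recoverable from the masked output.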

DLP is a tool to control the transmission of data so that it is not used inappropriately. It can also analyze data at rest on a server to verify that the data belongs there. It does not quite meet the need of protecting the data in transit from the server to the client, making it the next-best answer.

DNS is a name to an IP address lookup. It does not protect any transmission of data across a network.

Fibre Channel is a Storage Area Network (SAN) technology. It does not provide security by itself; encryption on the SAN can be added through the Fibre Channel Security Protocol (FC-SP-2) using the Encapsulating Security Payload (ESP).

61
Q
A