Pocket Prep 17 Flashcards

1
Q

A cloud administrator needs to ensure that data has been completely removed from cloud servers after data migration. The company has an Infrastructure as a Service (IaaS) NoSQL database server on a public cloud provider. Which data sanitization technique can be used successfully in this cloud environment?

A. A contract that has a clause that the Cloud Service Provider (CSP) physically degausses all drives that held customer data combined with the customer performing an overwrite of their data
B. A contract that has a clause that the Cloud Service Provider (CSP) physically destroys all drives that held customer data combined with the customer performing an overwrite of their data
C. A contract that has a clause that the Cloud Service Provider (CSP) physically shreds all drives that held customer data combined with cryptographic erasure of the database encryption key
D. A contract that has a clause that the Cloud Service Provider (CSP) performs 11 overwrites on all drives that held customer data combined with proper erasure of the database encryption key

A

C. A contract that has a clause that the Cloud Service Provider (CSP) physically shreds all drives that held customer data combined with cryptographic erasure of the database encryption key

Explanation:
In a cloud environment, where the customer cannot access or control the physical hardware, sanitization methods such as incineration, destruction, and degaussing are not an option. As a result, it is critical to ensure that the CSP handles drives securely. This should be discussed and addressed through the use of the contract.

If given a choice, physical destruction, preferably shredding, is better than 11 overwrites of the data. Combining the two is also acceptable.

Degaussing can only be done on magnetic Hard Disk Drives (HDD). There is no guarantee that the drives the customer’s data is on would be HDD versus Solid State Drives (SSD). So shredding is the better choice.

Even with an IaaS implementation, it is not possible for the customer to perform a true overwrite of their data; the way the cloud stores data makes that impossible. If the data were overwritten from the customer’s view, the original data would simply be deleted and the new data written elsewhere, not to the same sectors on the drives. A deletion also only removes the pointers to the data; it does not actually remove the data from the drives.

2
Q

Taking out an insurance policy is an example of which of the following risk treatment strategies?

A. Transference
B. Avoidance
C. Mitigation
D. Acceptance

A

A. Transference

Explanation:
Risk treatment refers to the ways that an organization manages potential risks. There are a few different risk treatment strategies, including:

Avoidance: The organization chooses not to engage in risky activity. This creates potential opportunity costs for the organization.
Mitigation: The organization places controls in place that reduce or eliminate the likelihood or impact of the risk. Any risk that is left over after the security controls are in place is called residual risk.
Transference: The organization transfers the risk to a third party. Insurance is a prime example of risk transference.
Acceptance: The organization takes no action to manage the risk. Risk acceptance depends on the organization’s risk appetite or the amount of risk that it is willing to accept.
3
Q

Which of the following data security techniques is commonly used to ensure data integrity?

A. Encryption
B. Tokenization
C. Masking
D. Hashing

A

D. Hashing

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4, the Secure Hash Standard, is the US government standard for hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
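
To make hashing’s determinism and one-way nature concrete, here is a minimal sketch using Python’s standard hashlib module (the sample input is invented):

    import hashlib

    data = b"quarterly-report.pdf contents"  # hypothetical file contents

    # Deterministic: hashing the same input always yields the same digest.
    assert hashlib.sha256(data).hexdigest() == hashlib.sha256(data).hexdigest()

    # Tamper-evident: any change to the input produces a completely different
    # digest, which is how file integrity monitoring detects modification.
    tampered = hashlib.sha256(data + b"x").hexdigest()
    assert tampered != hashlib.sha256(data).hexdigest()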
4
Q

Which of the following operational controls and standards defines how the organization will respond to a business-disrupting event?

A. Information Security Management
B. Availability Management
C. Service Level Management
D. Continuity Management

A

D. Continuity Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
5
Q

Jonas is working to design their new web application’s encryption requirements. The current decision he and his team are making is which encryption protocol to use. Which protocol would you recommend?

A. Transport Layer Security (TLS)
B. Secure Shell (SSH)
C. Advanced Encryption Standard (AES)
D. Internet Protocol Security (IPSec)

A

A. Transport Layer Security (TLS)

Explanation:
Transport Layer Security (TLS) is a protocol that was designed for web applications. It is most commonly used to encrypt Hypertext Transfer Protocol (HTTP) traffic.

Secure Shell (SSH) is most commonly used to encrypt administrator and operator traffic as they configure and manage network-attached devices.

AES is an encryption algorithm that is commonly used in TLS, SSH, and IPSec. The key word that makes this the wrong answer is algorithm: the question asks for a protocol.

IPSec is commonly used to connect sites to each other, for example, a router on-prem to a router in the cloud.

TLS, SSH, and IPSec can be used in other places. However, the only clue in the question is web application.
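
For illustration, here is a minimal sketch that opens a TLS-protected connection with Python’s standard ssl module (the host name is just an example):

    import socket
    import ssl

    context = ssl.create_default_context()  # validates server certificates by default
    with socket.create_connection(("example.com", 443)) as sock:
        # Wrap the plain TCP socket in TLS; HTTP sent over it becomes HTTPS.
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'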

6
Q

Dion is working with the operations team to deploy security tools within the cloud. They are looking for something that could detect, identify, isolate, and analyze an attack by distracting the attacker. What would you recommend?

A. Honeypot
B. Network Security Group (NSG)
C. Firewall
D. Intrusion Detection System (IDS)

A

A. Honeypot

Explanation:
A honeypot consists of a computer, data, or a network site that appears to be part of a network but is actually isolated and monitored and that seems to contain information or a resource of value to attackers.

What makes honeypot a better answer than IDS is the final part of the question: “by distracting the attacker.” An IDS could detect and identify the attack, but the bad actor would not know it was there and so would not be distracted by it. It is a tool that only monitors traffic.

A firewall might distract the bad actor but not in the same sense as the question indicates. The bad actor might take some time to explore the firewall, but a firewall is a real device. It is not advisable to add a firewall with the intention of distracting the bad actor, unless it was part of a honeypot.

An NSG is effectively a firewalled group in the cloud. So the statement above about firewalls applies the same to the NSG.

7
Q

Which of the following data and media sanitization methods is MOST effective in the cloud?

A. Physical destruction
B. Cryptographic erasure
C. Overwriting
D. Degaussing

A

B. Cryptographic erasure

Explanation:
When disposing of potentially sensitive data, organizations can use a few different data and media sanitization techniques, including:

Overwriting: Overwriting involves writing random data, or all 0’s or all 1’s, over sensitive data. This is less effective in the cloud because the customer cannot guarantee access to the specific regions of storage on the underlying server.
Cryptographic Erasure: Cryptographic erasure involves destroying the encryption keys used to protect sensitive data. This can easily be accomplished in the cloud by deleting keys from the key management system (KMS).

Physical destruction of media and degaussing are not options in the cloud because the customer lacks access to and control over the physical media used to store data.
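
To make cryptographic erasure concrete, here is a minimal sketch using the third-party cryptography package (the data is invented, and key handling is simplified; in practice the key would live in a KMS):

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()
    cipher = Fernet(key)
    stored = cipher.encrypt(b"sensitive customer record")

    # Normal operation: anyone holding the key can decrypt.
    assert cipher.decrypt(stored) == b"sensitive customer record"

    # Cryptographic erasure: destroy every copy of the key. The ciphertext
    # may still sit on the provider's drives, but without the key it is
    # computationally infeasible to recover the plaintext.
    del key, cipher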

8
Q

Willa is working for an insurance company as an information security manager. They have recently started using a Platform as a Service (PaaS) implementation for their database that contains their current customers’ personal data. She has created a new privacy policy that will appear on their website to explain their practices to their customers.

What principle of the Privacy Management Framework (PMF) [formerly the Generally Accepted Privacy Principles (GAPP)] is she addressing?

A. Agreement, notice, and communication
B. Disclosure to third parties
C. Monitoring and enforcement
D. Use, retention, and disposal

A

A. Agreement, notice, and communication

Explanation:
The PMF was developed by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA). It includes nine key privacy principles, as listed below:

Management
Agreement, notice, and communication
Collection and creation
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Data integrity and quality
Monitoring and enforcement

Agreement, notice, and communication is the principle Willa is addressing by creating the privacy policy and adding it to the website for customers to view. The privacy policy should state the company’s practices regarding use, retention, disposal, disclosure to third parties, monitoring, and enforcement.

The use, retention, and disposal elements should be clear as to what the business will be using the data for, how long they will be storing that data, and when and how they dispose of that data.

Disclosure to third parties involves selling the data or making it available to business partners.

Monitoring involves logging and reviewing the logs of people who access the data and what it is used for. Enforcement includes the proper removal of the data when the retention period is reached.

9
Q

Ellie has been working with her team of information security professionals at a major financial institution on their data retention plan. They are required to retain customer data for seven years after an account has been closed. What phase of the data lifecycle are they addressing?

A. Use phase
B. Share phase
C. Archive phase
D. Destroy phase

A

C. Archive phase

Explanation:
The archive phase is when data is removed from the system and moved to long-term storage. In many cases, archived data is stored offsite for disaster recovery purposes.

The destroy phase would occur at the end of the seven years. Data should be properly destroyed at that time.

If data is exchanged with someone else, that would be the share phase.

If a user is looking at the data at any point, that is considered the use phase.

10
Q

From a legal perspective, what is the MAIN factor that differentiates a cloud environment from a traditional data center?

A. Multitenancy
B. Self-service
C. Repudiation
D. Rapid elasticity

A

A. Multitenancy

Explanation:
Multitenancy is the main factor, from a legal perspective, which differentiates a cloud environment from a traditional data center. Multitenancy is a concept in cloud computing in which multiple cloud customers may be housed in the same cloud environment and share the same resources. Because of this, the cloud provider has legal obligations to all cloud customers housed on its hardware. If a server is ever seized from a cloud provider by law enforcement, it will likely contain assets from many different customers.

Rapid elasticity allows the system to utilize more resources as needed. That includes CPU, memory, network speed, etc.

Repudiation is the ability for a user to deny that they did something.

Self-service refers to the web portal that customers can use to set up and manage virtual machines of many different types.

11
Q

The information security manager is working with the cloud deployment team as they prepare to move their data center to the cloud. An important part of their plan is how they are going to get out of the cloud. They would like to reduce the risk of vendor lock-in. What cloud shared consideration should the administrator be looking for?

A. Portability
B. Availability
C. Reversibility
D. Interoperability

A

C. Reversibility

Explanation:
Reversibility is defined in ISO/IEC 17788 as the “process for cloud service customers to retrieve their cloud service customer data and application artifacts and for the cloud service provider to delete all cloud service customer data as well as contractually specified cloud service derived data after an agreed period.” Based on this definition, reversibility is the best fit for this scenario.

Interoperability is defined in ISO/IEC 17788 as the “ability of two or more systems or applications to exchange information and to mutually use the information that has been exchanged.” That is not the correct answer because they are planning on how to get out.

Portability is defined in ISO/IEC 17788 as the “ability to easily transfer data from one system to another without being required to re-enter the data.” This is not the correct answer because they are planning on how to get out of the cloud.

Availability is defined in ISO/IEC 17788 as the “property of being accessible and usable upon demand by an authorized entity.”

12
Q

Which of the following emerging technologies REDUCES the amount of computation performed on cloud servers?

A. Blockchain
B. Internet of Things
C. Edge Computing
D. Artificial Intelligence

A

C. Edge Computing

Explanation:
Cloud computing is closely related to many emerging technologies. Some examples include:

Machine Learning and Artificial Intelligence (ML/AI): Machine learning is a subset of AI and includes algorithms that are designed to learn from data and build models to identify trends, perform classifications, and other tasks. Cloud computing is linked to the rise of ML/AI because it provides the computing power needed to train the models used by ML/AI and operate these technologies at scale.
Blockchain: Blockchain technology creates an immutable digital ledger in a decentralized fashion. It is used to support cryptocurrencies, track ownership of assets, and implement various other functions without relying on a centralized authority or single point of failure. Cloud computing is related to blockchain because many of the nodes used to maintain and operate blockchain networks run on cloud computing platforms.
Internet of Things (IoT): IoT systems include smart devices that can perform data collection or interact with their environments. These devices often have poor security and rely on cloud-based servers to process collected data and issue commands back to the IoT systems (which have limited computational power, etc.).
Edge and Fog Computing: Edge and fog computing move computations from centralized servers to devices at the network edge, enabling faster responses and less usage of bandwidth and computational power by cloud servers. Edge computing performs computing on IoT devices, while fog computing uses gateways at the edge to collect data from these devices and perform computation there.
13
Q

Which of the following types of testing verifies that a module fits properly into the system as a whole?

A. Usability Testing
B. Integration Testing
C. Regression Testing
D. Unit Testing

A

B. Integration Testing

Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:

Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.

Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.
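
As a small illustration of the unit/integration distinction, here is a Python sketch (the functions and values are hypothetical):

    def parse_price(text: str) -> float:
        """Parse a price string like '$19.99' into a float."""
        return float(text.lstrip("$"))

    def apply_tax(price: float, rate: float = 0.08) -> float:
        return round(price * (1 + rate), 2)

    # Unit test: verifies one component in isolation.
    def test_parse_price_unit():
        assert parse_price("$19.99") == 19.99

    # Integration test: verifies the components fit together through their interfaces.
    def test_checkout_integration():
        assert apply_tax(parse_price("$100.00")) == 108.00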

14
Q

A corporation is working with their lawyers to ensure that they have accounted for the laws that they must be in compliance with. The corporation is located in Brazil, but a lot of their customers are in the European Union. When collecting and storing personal data regarding their customers, which law must they be in compliance with?

A. Health Insurance Portability and Accountability Act (HIPAA)
B. General Data Protection Regulation (GDPR)
C. Payment Card Industry Data Security Standard (PCI-DSS)
D. Asia Pacific Economic Cooperation (APEC)

A

B. General Data Protection Regulation (GDPR)

Explanation:
The General Data Protection Regulation (GDPR) is the European Union’s (EU) law demanding that the personal data of natural persons within the EU be protected when collected and stored by corporations. That applies to any corporation anywhere in the world.

HIPAA is a United States of America (USA) law that requires that Protected Health Information (PHI) is protected.

APEC is an agreement among 21 member economies around the Pacific Ocean that is designed to promote free trade.

PCI-DSS is the standard from payment card providers that requires a certain level of information security present around systems that hold and process credit card data.

15
Q

Dana is working for a cloud provider responsible for architecting solutions for their customers’ Platform as a Service (PaaS) server-based options. As she and her team develop new server deployment options, they want to ensure that the virtual machine files are configured with the best possible options to satisfy their customers’ needs.

These configurations should be documented in a:

A. Baseline procedures
B. Security procedures
C. Security baseline
D. Security policy

A

C. Security baseline

Explanation:
Security baselines contain the configuration information for systems, both physical and virtual. They should reflect the best practices that best fit each specific system.

A security policy could be either a corporate level policy or the configuration within a firewall, which is commonly referred to as policies. Either way, it does not match the needs of the question, which is the configuration of a server.

Security procedures are the step-by-step documented procedure to complete a process or a configuration or something else of that sort.

Baseline procedures is not the best answer here. A baseline is the configuration itself; a procedure is how you do something. The term “baseline procedure” is not common security language.

16
Q

Kody has been working with technicians in the Security Operations Center (SOC) to resolve an issue that has just occurred. Three of the cloud provider’s biggest customers have experienced a Denial of Service (DoS). It appears that there is an issue with the configuration of the Dynamic Optimization (DO) functionality. Kody is worried that this could occur again in the future, so she wants to uncover the root cause in their investigation.

What would this process be called?

A. Change management
B. Incident management
C. Problem management
D. Release and deployment management

A

C. Problem management

Explanation:
The focus of problem management is to identify and analyze potential issues in an organization to determine the root cause of that issue. Problem management is responsible for implementing processes to prevent these issues from occurring in the future.

Incident management is the initial response to the denial of service. This is what Kody was initially working on with the technicians. This is not the right answer because the question moves on to uncovering the root cause.

Release and deployment management is the process of releasing new hardware, software, or functions into the production environment.

Change management is the process of controlling any changes to the production environment. That is the ITIL description, and it covers many different scenarios, but it does not fit here because the question centers on uncovering a root cause.

17
Q

Which of the following types of data is the HARDEST to perform data discovery on?

A. Unstructured
B. Structured
C. Loosely structured
D. Semi-structured

A

A. Unstructured

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.

Loosely structured is not a common classification for data.
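
To illustrate why structure changes the difficulty of discovery, here is a small Python sketch contrasting the two extremes (the sample records and the SSN-style pattern are purely illustrative):

    import csv, io, re

    # Structured: the column label tells the discovery tool exactly where to look.
    structured = io.StringIO("name,ssn\nAlice,123-45-6789\n")
    for row in csv.DictReader(structured):
        print("found via column label:", row["ssn"])

    # Unstructured: the tool must pattern-match through free text on its own.
    unstructured = "Per our call, Alice's number is 123-45-6789. Thanks!"
    for match in re.findall(r"\b\d{3}-\d{2}-\d{4}\b", unstructured):
        print("found via pattern match:", match)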

18
Q

Certain data will require more advanced security controls in addition to traditional security controls. This may include extra access control lists placed on the data or additional permission requirements to access the data, especially when that data is the Intellectual Property (IP) of the organization.

This extension of normal data protection is known as which of the following?

A. Data rights management (DRM)
B. Threat and vulnerability management
C. Identity and Access Management (IAM)
D. Infrastructure and access management

A

A. Data rights management (DRM)

Explanation:
Data Rights Management (DRM), also known as Information Rights Management (IRM), is an extension of normal data protection. Normal controls would include encryption, hashes, and access rights. In DRM, advanced security controls, such as extra ACLs and constraints like enabling or disabling print capability, are placed onto the data.

Infrastructure and access management would be access to the data center. Infrastructure is the actual physical routers, switches, servers, and so on.

IAM involves controlling access by all the users to all the servers, applications, and so on. IAM also includes access, for example, by contractors and vendors. It is a very big topic in comparison to the question, which is looking for protection of the data.

Threat and vulnerability management includes risk assessment, analysis, and mitigations. It is not about controlling access, although that would be a topic of concern at some point in the discussion on threat management.

19
Q

Load balancing and redundancy are solutions designed to address which of the following?

A. Interoperability
B. Resiliency
C. Reversibility
D. Availability

A

B. Resiliency

Explanation:
Some important cloud considerations have to do with its effects on operations. These include:

Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure.
Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
Reversibility: Reversibility refers to the ability to recover from a change that went wrong, for example, how difficult it is to restore original operations after a transition to an outsourced service.
Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization’s cloud environment relies too much on a provider’s unique implementation or the provider doesn’t offer easy export, the company may be stuck with that provider due to vendor lock-in.
Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers’ environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
Outsourcing: Using cloud environments requires handing over control of a portion of an organization’s infrastructure to a third party, which introduces operational and security concerns.

20
Q

Which of the following regulations deals with law enforcement’s access to data that may be located in data centers in other jurisdictions?

A. GLBA
B. US CLOUD Act
C. SCA
D. SOX

A

B. US CLOUD Act

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. The US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield was a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US; it was invalidated by the EU Court of Justice’s 2020 Schrems II decision. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens’ data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes Oxley (SOX): SOX is a US regulation that applies to publicly-traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
21
Q

Which of the following threat models focuses on identifying intersections between an organization’s attack surface and an attacker’s capabilities?

A. ATASM
B. STRIDE
C. DREAD
D. PASTA

A

A. ATASM

Explanation:
Several different threat models can be used in the cloud. Common examples include:

STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization’s attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that looks at infrastructure and applications from an attacker’s perspective.
22
Q

Which of the following measures the LONGEST amount of time that a system can be down before causing significant harm to the organization?

A. RSL
B. MTD
C. RPO
D. RTO

A

B. MTD

Explanation:
A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:

Recovery Time Objective (RTO): The RTO is the amount of time that an organization is willing to have a particular system be down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations.
Recovery Service Level (RSL): The RSL measures the percentage of compute resources needed to keep production environments running while shutting down development, testing, etc.
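
A quick numeric sketch of how these metrics relate (the hour values are invented for illustration):

    mtd_hours = 24             # business tolerates at most 24 hours of downtime
    rto_hours = 8              # target: restore service within 8 hours
    backup_interval_hours = 4  # backups every 4 hours bound the RPO

    # The RTO must fit inside the MTD, or the plan cannot meet the business need.
    assert rto_hours <= mtd_hours

    # Worst case, the newest backup is one full interval old at failure time.
    rpo_hours = backup_interval_hours
    print(f"Up to {rpo_hours} hours of data could be lost in the worst case.")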

23
Q

Rise is working for a corporation as their cloud architect. He is designing how the Platform as a Service (PaaS) deployment will be used to store sensitive data for one particular application, and he is designing a trust zone inside which that data will be handled. Which of the following BEST defines a trust zone?

A. Sets of rules that define which employees have access to which resources
B. The ability to share pooled resources among different cloud customers
C. Virtual tunnels that connect resources at different locations
D. Physical, logical, or virtual boundaries around network resources

A

D. Physical, logical, or virtual boundaries around network resources

Explanation:
A cloud-based trust zone is a secure environment created within a cloud infrastructure where only authorized users or systems are allowed to access resources and data. This trust zone is typically created by configuring security measures such as firewalls, access controls, and encryption methods to ensure that only trusted sources can gain access to the data and applications within the zone. The goal of a cloud-based trust zone is to create a secure and reliable environment for sensitive data or critical applications by isolating them from potential threats and unauthorized access. This helps to ensure the confidentiality, integrity, and availability of the resources and data within the trust zone.

A virtual tunnel connecting to another location may be something that needs to be added, but it is not part of describing the zone itself.

Rules that define which employees have access to which resources are something that is needed by a business. This is Identity and Access Management (IAM). It should include information about the resources in a trust zone, but it does not define the actual zone.

The ability to share pooled resources is part of the definition of cloud. It is the opposite of a trust zone. Because resources are shared, many companies are very worried about using the cloud.

24
Q

A cloud information security manager is building the policies and associated documents for handling cloud assets. She is currently detailing how assets will be identified and labeled so that access can be controlled, alerts can be created, and billing can be tracked. What tool allows for this?

A. Value
B. Key
C. Identifier
D. Tags

A

D. Tags

Explanation:
Tags are pervasive in cloud deployments. It is crucial that the corporation builds a plan for how to tag assets; if tagging is not done consistently, it is not helpful. A tag is made up of two pieces: a key (or name) and a value. “Key” here is not a cryptographic key used for encryption and decryption; it is simply the name portion of the tag.

You can think of the tag as a type of identifier, but the tool needed to manage assets is called a tag.
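
For illustration only, here is a hypothetical tagging scheme sketched in Python (the keys and values are invented; real schemes vary by provider and organization):

    # Each tag is a key (name) plus a value; consistency is what makes tags useful.
    vm_tags = {
        "CostCenter": "CC-1042",        # billing can be tracked per cost center
        "Environment": "production",    # alerts can target production assets only
        "Owner": "data-platform-team",  # access reviews know whom to ask
    }

    for key, value in vm_tags.items():
        print(f"{key} = {value}")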

25
Q

As part of the risk management process, an information security professional has been asked to perform an assessment of hard values. The values that the corporation is looking for involve understanding how much money a specific threat could cost the business. To understand that, they need to be clear about how many times each specific threat that they analyze could happen each year.

Which type of assessment has this engineer been asked to perform?

A. Bow tie analysis
B. Quantitative assessment
C. Cost-benefit analysis
D. Qualitative assessment

A

B. Quantitative assessment

Explanation:
The two main types of assessment used in the risk management process are quantitative assessments and qualitative assessments. Quantitative assessments use values such as Single Loss Expectancy (SLE), Annual Loss Expectancy (ALE), and Annual Rate of Occurrence (ARO) for a numeric approach. The ARO is how many times they believe a specific threat could occur in a year. The SLE calculates how much a single incident would cost.
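
A worked sketch of the standard quantitative formulas, SLE = asset value × exposure factor and ALE = SLE × ARO (the dollar figures are invented):

    asset_value = 100_000        # value of the asset at risk, in dollars
    exposure_factor = 0.25       # fraction of the asset lost per incident

    sle = asset_value * exposure_factor  # Single Loss Expectancy = $25,000
    aro = 2                              # threat expected twice per year
    ale = sle * aro                      # Annual Loss Expectancy = $50,000

    print(f"SLE=${sle:,.0f}, ALE=${ale:,.0f}")
    # A control costing less than $50,000 per year that eliminates this risk
    # would pass a simple cost-benefit test.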

Qualitative assessments are nonnumerical assessments.

From that information, it is possible to do a cost-benefit analysis comparing the cost of the controls and the benefits gained by reducing the likelihood or impact of the threats.

A bow tie analysis creates a visual display of proactive and reactive controls.
26
Q

As the information security manager working in a neurological doctor’s office, you are accountable for the security of medical records. Which types of data are you safeguarding?

A. Personal data
B. Credit and debit card data
C. Personally Identifiable Information (PII)
D. Protected Health Information (PHI)

A

D. Protected Health Information (PHI)

Explanation:
You are safeguarding Protected Health Information (PHI) that may be contained within the medical records you are accountable for. These can be in the form of lab reports, visit summaries, or other types of medical records.

Personally Identifiable Information (PII) is unique to an individual, such as a social security number or phone number. Personal data is effectively the same. It is the term used within the European Union’s General Data Protection Regulation (EU GDPR).

Credit and debit card data is just what it says—it is the data relevant to the cards in our wallets (the names, addresses, card numbers, expiration dates, and so on).

27
Q

Traditional encryption methods may become obsolete as the cloud’s computing power grows and innovative technologies tackle optimization problems. What kind of advanced technology is potentially capable of defeating today’s encryption methods?

A. Artificial intelligence
B. Machine learning
C. Blockchain
D. Quantum computing

A

D. Quantum computing

Explanation:
Quantum computing is capable of solving problems that traditional computers are incapable of solving. When quantum computing becomes widely accessible to the general public, it will almost certainly be via the cloud due to the substantial processing resources necessary to do quantum calculations.

A side note: The encryption we have today will likely be broken, especially algorithms such as RSA and Diffie-Hellman. NIST began a competition in 2016 to get ahead of this and design encryption algorithms that can be used safely in the age of quantum computers. For more information, refer to NIST’s Computer Security Resource Center (CSRC) website and look for post-quantum cryptography and post-quantum cryptography standardization.

Machine learning is the ability for computers to process large amounts of data and provide us with information. It may aid us in verifying a hypothesis, or it may surface the ideas that we need to address or can address.

Machine learning is arguably a subset of Artificial Intelligence (AI). We keep making advances in technology that bring us closer to true AI. We have robots that can navigate terrain on their own, and we have ChatGPT, which can answer questions as if it were thinking on its own rather than just citing or quoting a source.

Blockchains give us the ability to track something, such as cryptocurrency, with an immutable or unchangeable record.

28
Q

We have been working with different Artificial Intelligence (AI) methods for years. While we may not be at a true AI just yet, there have been several great advances in this technology. Which method has the software working to understand, interpret, and generate text that seems to be from a live human?

A. Natural Language Processing (NLP)
B. Bayesian Networks
C. Machine Learning (ML)
D. Deep Learning (DL)

A

A. Natural Language Processing (NLP)

Explanation:
Natural Language Processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. NLP methods involve techniques such as text classification, sentiment analysis, named entity recognition, machine translation, and question-answering systems.

AI methods, such as the one above (NLP) and the three below (the other answer options) refer to the various techniques and approaches used in the field of Artificial Intelligence to solve problems, make predictions, and perform tasks that typically require human intelligence. These methods encompass a broad range of algorithms, models, and methodologies that enable machines to learn, reason, and make decisions autonomously.

Machine Learning (ML) is a subset of AI that focuses on designing algorithms and models that allow computers to learn from data without being explicitly programmed. ML methods include supervised learning, unsupervised learning, and reinforcement learning, where algorithms learn patterns and make predictions based on training examples or feedback.

Deep Learning (DL) is a subfield of ML that involves training artificial neural networks with multiple layers to recognize patterns and extract complex representations from data. DL methods, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), have been particularly successful in image recognition, natural language processing, and other complex tasks.

Bayesian networks are probabilistic graphical models that represent relationships among variables using a directed acyclic graph. They apply Bayesian inference to update beliefs and make predictions based on observed evidence and prior knowledge.

29
Q

An organization has built and deployed an Infrastructure as a Service (IaaS) cloud environment. The company must comply with several laws and regulations, one of which is the European Union’s (EU) General Data Protection Regulation (GDPR). To be able to respond as quickly as possible, they have hired a group of information security administrators and operators to focus solely on dealing with security issues that arise within the organization. The group will be centralized at the headquarters location. The group is responsible for monitoring the logs in the Security Information and Event Management (SIEM) system, responding to incidents, and analyzing threats that arise.

What would this group be called?

A. Cloud provider
B. Regulator
C. Network Operations Center (NOC)
D. Security Operations Center (SOC)

A

D. Security Operations Center (SOC)

Explanation:
A Security Operations Center (SOC) is a group of individuals who focus solely on the monitoring, reporting, and handling of any security issues for an organization. SOC operators and administrators will typically be responsible for monitoring the logs within a SIEM if there is one in place. SOCs are usually staffed 24/7 to ensure that someone is available in the event of a security incident.

A Network Operations Center (NOC) is responsible for managing the network. This includes temperature, humidity, hardware, operating systems, and software. They would take care of equipment that has broken, network connections that have dropped, users who cannot connect due to network issues, etc. Some of what they find might be passed to the Security Operations Center (SOC).

Regulators are not going to monitor a company’s network, logs, or SIEM. They might need to be contacted, though, depending on the actual incidents that the SOC is dealing with.

Cloud providers have their own NOC and SOC. They would not normally manage an organization’s environment. If the cloud is a private or community cloud, it would be possible that the cloud provider is monitoring and managing all that the NOC and SOC manage. However, the question does not specify either of those conditions, so the assumption to be made is that this is a customer of a public cloud provider.

30
Q

Sa’id is working on configuring the cloud environment for his company. He works for a multinational bank that has offices primarily in the USA, India, and Europe. They have been working within their own data centers and are now migrating to a public cloud provider. As the number of attacks continues to rise and the number of laws they must comply with increases, he is looking for a security tool to add to the cloud environment. They are building an Infrastructure as a Service (IaaS) environment and have already added their first Network Security Group (NSG). Now he is looking for the next tool to add, one that would give them information regarding any suspicious activity involving a particular cluster of servers.

Which tool would work the best for that?

A. Honeypot
B. Network Intrusion Prevention System (NIPS)
C. Network Intrusion Detection System (NIDS)
D. Host Intrusion Detection System (HIDS)

A

C. Network Intrusion Detection System (NIDS)

Explanation:
A Network Intrusion Detection System (NIDS) analyzes all the traffic on the network and detects possible intrusions. It can send an alert out to administrators to investigate.

A Host Intrusion Detection System (HIDS) runs on a single host and analyzes all inbound and outbound traffic for that host to detect possible intrusions. Since the question specifies a cluster of servers, the NIDS is the better choice. It is possible to add a HIDS to every server in the cluster; it is just not what the question is driving at.

A Network Intrusion Prevention System (NIPS) works in the same manner as a NIDS, but it also has the capability to prevent attacks rather than just detect them. This is not the best answer because the question asks only for information about suspicious activity, not for prevention.

A honeypot is an isolated system used to trick a bad actor into believing that it is a production system. This should distract them long enough for the Security Operations Center (SOC) to detect the bad actor’s presence and take action to remove them from the systems and network.
31
Q

A corporation is looking for a way to improve their software development. They use Agile Application Security as their main methodology. They know that they need to improve the testing and analysis of their products. They also know that they need to improve the planning of the application’s security as too many things have been overlooked and only caught well into the development process.

What tool can they use for this?
A. Source code review
B. International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) 27034
C. Application Security Verification Standard (ASVS)
D. Closed box testing

A

C. Application Security Verification Standard (ASVS)

Explanation:
The Application Security Verification Standard (ASVS) is from OWASP. It can be used in many different ways, one of which is as a driver for Agile Application Security. It can also replace off-the-shelf secure coding checklists, support secure development training, and guide automated unit and integration tests, in addition to its primary use as a list of application security requirements or tests that can be used by anyone building, designing, developing, or buying secure applications.

ISO/IEC 27034 provides guidelines and best practices for implementing and managing Application Security. It focuses specifically on the protection of applications throughout their lifecycle, from the design and development stages to deployment, operation, maintenance, and disposal.

Closed box testing is a software testing approach where the tester evaluates the functionality of a software application without having access to its internal structure, code, or implementation details. The tester focuses solely on the inputs and outputs of the software, treating it as a closed box whose internal workings are not known or visible. The older term “black box” is falling out of favor as insensitive; it is mentioned here only because it may still appear in the (ISC)2 question database.

Source code review, also known as code review or static code analysis, is a process of systematically examining the source code of a software application to identify and address potential issues, vulnerabilities, and areas for improvement. It involves manual or automated analysis of the codebase to ensure its quality, maintainability, security, and adherence to coding standards.

32
Q

FedRAMP-compliant cloud environments offered by a cloud services provider are MOST likely to be an example of which of the following?

A. Multi-Cloud
B. Community Cloud
C. Hybrid Cloud
D. Public Cloud

A

B. Community Cloud

Explanation:
Cloud services are available under a few different deployment models, including:

Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
33
Q

When configuring a new hypervisor, the cloud administrator forgot to change the default administrative credentials. Which type of vulnerability listed on the Open Web Application Security Project (OWASP) Top 10 is this an example of?

A. XML external entities
B. Security misconfiguration
C. Cross-site scripting
D. Insecure deserialization

A

B. Security misconfiguration

Explanation:
A security misconfiguration occurs whenever systems or applications are configured in a way that makes them insecure. Systems regularly come preconfigured with default administrative credentials. These default credentials are generally easy to find online, so the failure to change them makes it possible for an attacker to gain access. This is an example of a security misconfiguration.
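
As a toy illustration of catching this class of misconfiguration, here is a Python sketch that flags known default credentials (the credential list and configuration are hypothetical):

    # Hypothetical vendor-default credential pairs, for illustration only.
    KNOWN_DEFAULTS = {("admin", "admin"), ("root", "password"), ("admin", "changeme")}

    def uses_default_credentials(username: str, password: str) -> bool:
        """Return True if the pair matches a known vendor default."""
        return (username, password) in KNOWN_DEFAULTS

    hypervisor_config = {"username": "admin", "password": "admin"}
    if uses_default_credentials(hypervisor_config["username"],
                                hypervisor_config["password"]):
        print("Security misconfiguration: default administrative credentials in use")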

Insecure deserialization is a security vulnerability that arises when an application blindly trusts and processes data received in a serialized format without properly validating or sanitizing it. It can lead to serious consequences, including remote code execution, data tampering, or denial of service attacks.

Cross-Site Scripting (XSS) is a type of security vulnerability that occurs when a web application does not properly validate and sanitize user-supplied input, allowing malicious code to be injected into web pages viewed by other users. It is a prevalent attack vector that can have severe consequences if exploited.

XML External Entity (XXE) refers to a security vulnerability that occurs when an application parses XML input without proper validation, allowing an attacker to include external entities and potentially exploit the system. It can lead to sensitive information disclosure, denial of service attacks, or even remote code execution.

34
Q

Which of the following can be used on the network to stop attacks automatically when a pattern of packets that is indicative of an attack has been detected?

A. eXtensible Markup Language (XML) firewall
B. Intrusion Prevention System (IPS)
C. Intrusion Detection System (IDS)
D. Database Integrity Monitor (DIM)

A

B. Intrusion Prevention System (IPS)

Explanation:
An Intrusion Prevention System (IPS) is placed at the network level. It analyzes all traffic on the network in the same way as an IDS. However, rather than simply alerting administrators when an intrusion is detected, it can actually stop and block the malicious traffic and prevent an attack from occurring automatically.

A firewall is used to allow wanted traffic and block everything else; a firewall should block by default. This would stop some attacks, but the question asks for a device that can recognize that the traffic is specifically an attack. That is an IPS.

A DIM is used to monitor user activity within a DataBase (DB) but from outside. This would make it much harder for the bad actor to know that the device is there. A bad actor would know that there are logs within the DB and try to delete or alter them. The DIM logs would not be seen by the bad actor.

35
Q

Alexis, the information security manager for a retail company, is looking to find the best cloud solutions to meet their needs. When assessing the different cloud providers, Alexis and her team request the auditor’s reports from the cloud providers. They request the Service Organization Controls (SOC) 1 and 2 reports. These reports are generated after an audit company has completed their third-party audit.

What standard did the cloud providers likely follow when performing the audits?

A. Generally Accepted Privacy Principles (GAPP)
B. Statement on Standards for Attestation Engagements (SSAE)
C. Statement of Auditing Standards (SAS)
D. International Organization for Standardization (ISO) 27050

A

B. Statement on Standards for Attestation Engagements (SSAE)

Explanation:
The Statement on Standards for Attestation Engagements (SSAE) is a set of standards defined by the American Institute of Certified Public Accountants (AICPA) to be used when creating SOC reports.

Statement on Auditing Standards (SAS) is the predecessor to SSAE; SAS reports have been replaced by SSAE reports.

Generally Accepted Privacy Principles (GAPP) is not a reporting standard. It has been replaced by the Privacy Management Framework (PMF). The PMF states the things a company should do to protect personal data. If you are familiar with the EU’s General Data Protection Regulation (GDPR) requirements, you will recognize those ideas in the PMF.

ISO 27050 is a standard for eDiscovery. eDiscovery is critical once there has been an incident, and forensics must be performed.

36
Q

Amelia works for a medium-sized company as their lead information security manager. She has been working with the development and operations teams on their new application that they are building. They are building an application that will interact with their customers through the use of an Application Programming Interface (API). Due to the nature of the application, it has been decided that they will use SOAP.

That means that the data must be formatted using which of the following?

A. JavaScript Object Notation (JSON)
B. CoffeeScript Object Notation (CSON)
C. YAML (YAML Ain’t Markup Language)
D. eXtensible Markup Language (XML)

A

D. eXtensible Markup Language (XML)

Explanation:
SOAP only permits the use of XML-formatted data, while REpresentational State Transfer (REST) allows for a variety of data formats, including both XML and JSON. SOAP is most commonly used when the use of REST is not possible.
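
For illustration, here is a minimal Python sketch (not a production SOAP client; the GetOrder operation is a hypothetical name) showing the same payload as a SOAP/XML envelope and as the JSON a REST API might accept:

    # Build a minimal SOAP envelope with the standard library, then show
    # the JSON equivalent a REST API might accept instead.
    import json
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

    envelope = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(envelope, ET.QName(SOAP_NS, "Body"))
    request = ET.SubElement(body, "GetOrder")       # hypothetical operation
    ET.SubElement(request, "orderId").text = "42"

    print(ET.tostring(envelope, encoding="unicode"))  # SOAP requires XML
    print(json.dumps({"orderId": "42"}))              # REST could accept JSON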

XML, JSON, YAML, and CSON are all data formats.

37
Q

An attack on confidentiality would fall under which letter of the STRIDE acronym for cybersecurity threat modeling?

A. S
B. T
C. D
D. I

A

D. I

Explanation:
Microsoft’s STRIDE threat model defines threats based on their effects, including:

Spoofing: The attacker pretends to be someone else
Tampering: The attacker damages data integrity
Repudiation: The attacker can deny that they took some action that they did take
Information Disclosure: The attacker gains unauthorized access to sensitive data, harming confidentiality
Denial of Service: The attacker can harm the availability of a service
Elevation of Privilege: The attacker can access resources that they shouldn’t be able to access
38
Q

Julez has been tasked with updating the data governance policy for his corporation, a bank. He is currently addressing the requirements that need to be defined for how long data should be stored into the future. What stage of the cloud data lifecycle is he addressing?

A. Use
B. Archive
C. Store
D. Share

A

B. Archive

Explanation:
The archive phase fits the best because the question says “stored into the future,” which implies archival.

Store would be the second best answer and is something that needs to be done as soon as the data is created. That initial storage location could actually be where the data is when the bank is looking for it somewhere in the future, but again, into the future implies archival.

The use phase is when a user is utilizing the data—just as you are right now reading this.

Share is when it is passed from one user to another, inside or outside the business.

39
Q

Which of the following SaaS risks is MOST related to how SaaS offerings are made available to customers?

A. Persistent Backdoors
B. Web Application Security
C. Virtualization
D. Proprietary Formats

A

B. Web Application Security

Explanation:
A Software as a Service (SaaS) environment has all of the risks that IaaS and PaaS environments have, as well as new risks of its own. Some risks unique to SaaS include:

Proprietary Formats: With SaaS, a customer is using a vendor-provided solution. This may use proprietary formats that are incompatible with other software or create a risk of vendor lock-in if the organization’s systems are built around these formats.
Virtualization: SaaS uses even more virtualized environments than PaaS, increasing the potential for VM escapes, information bleed, and similar threats.
Web Application Security: Most SaaS offerings are web applications with a provided application programming interface (API). Both web apps and APIs have potential vulnerabilities and security risks that could exist in these solutions.
40
Q

A software development corporation has built an Infrastructure as a Service (IaaS) environment for their software developers to use when building their products. When a virtual machine is running, the software developer will use that platform to build and test their code. The running machines require a type of storage that the operating system can use to store temp files and as swap space.

What type of storage is used for that?

A. Structured
B. Object
C. Volume
D. Ephemeral

A

D. Ephemeral

Explanation:
Cloud storage comes in many shapes and flavors. The storage used by virtual machines to temporarily store files and to use for swap files is called ephemeral. Ephemeral means temporary or fleeting. It will disappear when the virtual machine shuts down. It is not for persistent storage.

Persistent storage includes structured, object, volume, unstructured, block, etc.

Each cloud service model uses a different method of storage as shown below:

Software as a Service (SaaS): content and file storage, information storage and management
Platform as a Service (PaaS): structured, unstructured, or block and blob
Infrastructure as a Service (IaaS): volume, object

Structured is confusing because it is used to describe a type of data (databases) and a type of data storage. They are not the same thing. Structured storage is a specific allocation of storage space. Block storage is a type of structured storage. It allocates space in fixed-size units (e.g., 16 KB).

A volume is analogous to a C: drive on a computer. It is often allocated using block storage.

Objects are files. Object storage would be a flat file system. Objects could have metadata attached to them.

Objects are stored in volumes or blocks.

41
Q

A cloud information security manager needs to ensure their organization is aware of all the key principles of the Privacy Management Framework (PMF). Which of the following is the principle that is addressed through input validation and hashing?

A. Collection and creation
B. Data integrity and quality
C. Management
D. Security for privacy

A

B. Data integrity and quality

Explanation:
The PMF replaced GAPP (which dates to 2009) in 2020. The ISC2 outline lists the GAPP document, but it would be good to know either the current one or both. Both of these documents are from the American Institute of Certified Public Accountants (AICPA).

Input validation, sanitization, hashing, integrity checks, syntactic and semantic checks, and more address this concern.
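
A minimal Python sketch of the first two of those controls (the eight-digit account rule is an invented example policy):

    # Syntactic input check plus a SHA-256 digest used to detect tampering.
    import hashlib
    import re

    def validate_account_id(value: str) -> str:
        # Syntactic check: accept only 8-digit account IDs (example rule).
        if not re.fullmatch(r"\d{8}", value):
            raise ValueError("invalid account id")
        return value

    record = validate_account_id("12345678").encode()
    digest = hashlib.sha256(record).hexdigest()  # stored alongside the record
    # Later: recompute and compare; a mismatch means integrity was lost.
    assert hashlib.sha256(record).hexdigest() == digest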

The PMF has nine components:

Management
Agreement, notice, and communication
Collection and creation
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Data integrity and quality
Monitoring and enforcement

Generally Accepted Privacy Principles (GAPP) include 10 key privacy principles:

Management
Notice
Choice and consent
Collection
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Quality
Monitoring and enforcement
42
Q

Amber is building a new spreadsheet in a Software as a Service (SaaS) environment. As she is working from her computer, what security control can be implemented within the create phase?

A. Intrusion Detection System (IDS)
B. Data Loss Prevention (DLP)
C. Encryption
D. Firewall

A

C. Encryption

Explanation:
The create phase is an ideal time to implement technologies such as Transport Layer Security (TLS) when the data is inputted or imported. The client-server connection should be protected through encryption.
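
As a sketch of that client-side protection (the endpoint URL is hypothetical, and it assumes the third-party requests package):

    # Upload newly created data over an encrypted client-server connection.
    # requests verifies the server certificate by default (verify=True).
    import requests

    resp = requests.post(
        "https://saas.example.com/api/spreadsheets",  # hypothetical endpoint
        json={"name": "Q3 budget", "cells": []},
        timeout=10,
        verify=True,  # reject invalid or untrusted certificates
    )
    resp.raise_for_status()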

DLP is a tool that is good if the user is sending data to an inappropriate place, such as sending a credit card number through email or storing information on an inappropriate server. It is not a great match when a user is creating a spreadsheet in SaaS.

Firewalls are used to either block or allow traffic. That decision could be based on a layer 7 command, such as GET or PUT in File Transfer Protocol (FTP), or on a lower-layer address or port number, for example.

IDS devices are on the watch for intruders, but this is a user doing their work. An IDS would not detect and log events such as creating a spreadsheet.

43
Q

Which of the following statements about type 1 hypervisors is TRUE?
A. Due to it being software based, it’s more vulnerable to an attack from someone using software exploits than with a type 2 hypervisor
B. Due to it being tied to the physical hardware of the machine, it’s more vulnerable to an attack from someone using software exploits than with a type 2 hypervisor
C. Due to it being software based, it’s less vulnerable to an attack from someone injecting malicious code than with a type 2 hypervisor
D. Due to being tied to the physical hardware of the machine, there are fewer lines of code for an attacker to inject malicious code into than with a type 2 hypervisor

A

D. Due to being tied to the physical hardware of the machine, there are fewer lines of code for an attacker to inject malicious code into than with a type 2 hypervisor

Explanation:
Type 1 hypervisors are known as bare-metal hypervisors because they run directly on the physical hardware of the machine, and they are not software based like type 2 hypervisors. Because they are tied to the hardware, they are considered the operating system of the server and should be designed as thinly as possible. That means fewer lines of code. With fewer lines, it is harder for a bad actor to inject malicious code.

A type 2 hypervisor sits on top of a full operating system, so there are many more opportunities to inject malicious code.

44
Q

A cloud architect is looking for a way to ensure that data is protected when it is shared within the Research and Development (R&D) department: the engineers testing the new product send information to the research engineers, who use it to improve the product and achieve the design goals.

Which technology can be utilized to accomplish this?

A. Dynamic Application Security Testing (DAST)
B. Runtime Application Security Protection (RASP)
C. Transport Layer Security (TLS)
D. Secure Shell

A

C. Transport Layer Security (TLS)

Explanation:
Data is at risk during the share phase of the data lifecycle. There are many tools that can be used in this phase, such as Data Leak Prevention (DLP), Secure Shell (SSH), and Transport Layer Security (TLS). The question does not tell us how they are sharing the data within the department, so from the list of possible answers, TLS is the best because it can be used in many different scenarios.

SSH is most likely used by cloud administrators and operators to manage virtual machines or other activities.

DAST is an application testing methodology. The question is about the share phase of the data lifecycle; there is no connection between those two topics.

RASP is something that can be added to software to help it protect itself. In the question, we are trying to share data, not protect the application from attacks.

45
Q

A cloud data center is being built by a new Cloud Service Provider (CSP). The CSP wants to build a data center with a level of resilience that will classify it as a Tier III. At which tier are generators first expected to back up the power supply?

A. Tier II
B. Tier IV
C. Tier I
D. Tier III

A

C. Tier I

Explanation:
Generators are part of the requirements starting at the lowest level, Tier I.

Tier II and above also require generators. Tiers I and II also require Uninterruptible Power Supply (UPS) units.

Tier III requires a redundant distribution path for the data.

Tier IV requires several independent and physically isolated power supplies.

46
Q

Maxwell is developing a Data Loss Prevention (DLP) strategy. He is working with a pharmaceutical company that needs to control their sensitive content, in particular the formulas their chemists have created in Research and Development (R&D). In which component of a DLP solution will a great deal of work have to be done to begin to protect that content?

A. Monitoring
B. Enforcement
C. Encryption
D. Discovery

A

D. Discovery

Explanation:
The discovery component or phase of DLP involves the categorization and classification of data so that the DLP tool will be able to identify the R&D formulas that are in storage or in transit. Without careful work in this phase, the DLP tool will not be able to monitor successfully.
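
A toy discovery pass might look like this Python sketch (the marker pattern and share path are invented; real DLP discovery combines many classifiers):

    # Walk a file share and flag documents carrying a pattern that, by
    # policy, marks R&D formula content. Illustrative only.
    import re
    from pathlib import Path

    FORMULA_MARKER = re.compile(r"RND-FORMULA-\d{4}")  # hypothetical label

    def discover(root: str) -> list[Path]:
        hits = []
        for path in Path(root).rglob("*.txt"):
            if FORMULA_MARKER.search(path.read_text(errors="ignore")):
                hits.append(path)
        return hits

    print(discover("/srv/shares"))  # hypothetical share location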

Monitoring involves the analysis of data or content to discover when that sensitive data is someplace it should not be (in storage or transit).

Enforcement would then involve the tool taking the action of deleting the content, encrypting it, or another action as defined.

Encryption is not a component or phase of DLP. It is the scrambling of data so that it cannot be read. The question is asking where the work is done to begin to protect content.

The CSA SecaaS Category 2 document is a good read on DLP.

47
Q

The OWASP Top 10 lists XML external entities (XXE) on their current list of security vulnerabilities. Which of the following is an example of XXE?

A. A malicious actor is able to send untrusted data to a user’s browser without going through any validation
B. A parser is poorly configured, which allows a bad actor to gain access to sensitive data
C. A website is not using proper input validation on their data fields of their application
D. An application is not performing any validation on the browser tokens used to access the application

A

B. A parser is poorly configured, which allows a bad actor to gain access to sensitive data

Explanation:
XML external entities attacks occur when the application parses XML input. If the parser is weakly configured, it is possible that the bad actor can cause trouble if the XML input contains a reference to an external entity. An entity is a storage unit of some kind according to the standard. With this attack, they could do all kinds of different things, such as gaining access to the /etc/shadow file that contains the users’ password hashes, which is sensitive data.
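
A sketch of the parser-side mitigation, assuming the third-party defusedxml package, which refuses entity declarations outright:

    # Parse untrusted XML with a hardened parser that forbids entities.
    from defusedxml import EntitiesForbidden
    import defusedxml.ElementTree as ET

    payload = """<?xml version="1.0"?>
    <!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/shadow">]>
    <r>&x;</r>"""

    try:
        ET.fromstring(payload)
    except EntitiesForbidden:
        print("XXE attempt blocked by the hardened parser")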

A website not using proper input validation could be an SQL injection, command injection, or cross-site scripting.

If the application is not performing validation on tokens, there is a broken access control problem.

A malicious actor being able to send untrusted data to the user’s browser could also be a cross-site scripting attack.

48
Q

Abdon and the information security team have been performing a risk analysis on their planned Platform as a Service (PaaS) deployment. They remember the zero-day exploit that occurred when a specific string of characters was logged that allowed a Remote Code Execution (RCE). Where is this vulnerability most applicable?

A. Metastructure
B. Infrastructure
C. Infostructure
D. Applistructure

A

D. Applistructure

Explanation:
The applistructure is the software in the cloud. That is where insecure software development is a particular concern. So the Log4j flaw called Log4Shell, which resulted in RCE simply through logging a certain string, was a software development issue. The application is that layer.

The infostructure is data storage (info = information = data). It is the storage area network, block storage, file storage, object storage, and any other storage types.

The metastructure is the virtualization. It is the virtual machines, servers, routers, switches, firewalls, etc. It is the virtualized data center.

The infrastructure is the physical environment—the servers, routers, switches, firewalls, etc., that make up the physical data center.

There is not much in the official books about these. Okay, none. Please read the security guidance 4.0 document from the Cloud Security Alliance (4.0 as of May 2023).

49
Q

Angela works for a cloud provider. Her job is to create virtual machines and save the images for the customers to use in their Platform as a Service (PaaS) offering. When Angela is accessing the server and creating the virtual machines, she is using which of the following?

A. Secure Shell
B. Hypervisor
C. Data plane
D. Management plane

A

D. Management plane

Explanation:
The management plane allows for cloud providers to manage all the hosts from a centralized location instead of needing to log into each individual server when needing to perform tasks. The management plane is sometimes called the control plane. Or you could say the control plane is sometimes called the management plane.

Confusion comes from two sources: 1) the use of these terms in traditional networking/data centers and 2) the use of control or controller plane in Software Defined Networking (SDN).

A plane is effectively a way to distinguish what the bits that are flowing on the network interface are used for. The data plane is the bits/frames/packets that are user traffic. The management plane is the bits/frames/packets that are the traffic from the administrator to the GUI in the cloud that allows for the administrators to perform their tasks. These tasks include building virtual machines, configuring the firewalls, establishing or configuring Identity and Access Management (IAM) accounts, and so on.

Virtual machines are constructed on or through a hypervisor. It is the software that enables the creation of the VMs. This is not the best answer because the question is about accessing.

Secure Shell is an OSI model layer 5 protocol that encrypts traffic and is most commonly used by administrators for remote access to switches, routers, servers, and so on. This could be how Angela’s connection is secured. However, the question is about more than accessing: she is also creating virtual machines, which clarifies that we are talking about the cloud (it is a cloud exam, after all). The management plane connects to the management console and the GUI (or command line) for configuring cloud services of all kinds.

50
Q

Which phase of the audit process involves the following:

Walkthrough and risk assessments
Interviewing staff on procedures
Audit test work with physical and architectural assessments
Consistent criteria with SLA and contracts

A. Field work
B. Reporting
C. Follow-up
D. Planning

A

A. Field work

Explanation:
Generally, an audit process consists of four stages. Doing walkthroughs and risk assessments, interviewing personnel about procedures, conducting audit test work, and checking that criteria are compatible with SLAs and contracts are all part of the audit process’ field work phase. Field work is essentially the actual audit work.

The planning phase involves the objective and scope definition.

The reporting phase occurs after field work. This is the creation of the audit findings document that will be given to those who hired the auditors.

Follow-up would occur after the reporting phase. Conversations about the findings and possible reassessment occur in follow-up.

51
Q

Which type of blockchain provides a distributed and secure data management solution that leverages the cloud while maintaining data privacy and control?

A. Hybrid
B. Private
C. Public
D. Consortium

A

B. Private

Explanation:
There are four types of blockchain: private, public, consortium, and hybrid.

Private blockchains are restricted to a specific group of participants who are granted access and permission to the network. They are typically used within organizations or consortia where participants trust each other and require more control over the network. Private blockchains offer higher transaction speeds and privacy but sacrifice decentralization compared to public blockchains.

Public blockchains, such as Bitcoin and Ethereum, are open to anyone and allow anyone to participate in the network, verify transactions, and create new blocks. They are decentralized and provide a high level of transparency and security. Public blockchains use consensus mechanisms, such as Proof of Work (PoW) or Proof of Stake (PoS), to validate transactions and secure the network.

Consortium blockchains are a hybrid of public and private blockchains. They are operated by a consortium or a group of organizations that have a shared interest in a particular industry or use case. Consortium blockchains provide a controlled and permissioned environment, while still allowing multiple entities to participate in the consensus and decision-making process.

Permissioned blockchains require users to have permission to join and participate in the network. They are typically used in enterprise settings where access control and governance are critical. Permissioned blockchains offer faster transaction speeds and are more scalable than public blockchains, but they sacrifice some decentralization and censorship resistance.

The hybrid blockchain approach allows organizations to leverage the benefits of decentralization, transparency, and immutability from public blockchains while maintaining control, privacy, and scalability through private components. It offers a flexible solution that can cater to specific business requirements and regulatory considerations.

52
Q

Cloud providers that are at tier 3 must have multiple and independent power feeds to ensure redundancy. What else is needed in case of a power failure on one of the power feeds?

A. Generator and second power feed
B. Generator and Uninterruptible Power Supply (UPS)
C. Third power feed and a generator
D. Second power feed and Uninterruptible Power Supply (UPS)

A

B. Generator and Uninterruptible Power Supply (UPS)

Explanation:
Cloud providers will need to have multiple independent power feeds in case a power feed goes down. In addition, they will also typically have a generator or battery backup (UPS) to serve in the meantime when a power feed goes out.

The answers that contain “second power feed” are not correct because that already exists in the question with the word “multiple.” It is not necessary to have a third power feed. It may not be a bad idea, but it is not required.

53
Q

What format is similar to HTML but is stricter in its formatting requirements and is commonly used for data exchange?

A. JavaScript Object Notation (JSON)
B. eXtensible Markup Language (XML)
C. Binary JSON (BSON)
D. REpresentational State Transfer (REST) API

A

B. eXtensible Markup Language (XML)

Explanation:
eXtensible Markup Language (XML) is a standard information exchange format, similar to HyperText Markup Language (HTML), that employs tags to define data.
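
A quick Python sketch of that strictness: unlike lenient HTML parsing, XML must be well-formed, so an unclosed tag is a hard error:

    # XML must be well-formed; an unclosed tag is a hard parse error.
    import xml.etree.ElementTree as ET

    ET.fromstring("<order><id>42</id></order>")  # parses fine

    try:
        ET.fromstring("<order><id>42</order>")   # <id> is never closed
    except ET.ParseError as err:
        print("rejected:", err)                  # mismatched tag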

JSON is a lightweight data exchange format that is commonly used today, but it is not similar in structure to HTML. BSON is a binary form of JSON. REST APIs typically use JSON, although they can use XML to request information and receive responses.

54
Q

An organization combines offerings from multiple cloud providers into a package customized to a customer’s needs. Which of the following roles BEST describes this company?

A. Cloud Service Broker
B. Cloud Service Provider
C. Cloud Service Partner
D. Cloud Customer

A

A. Cloud Service Broker

Explanation:
Some of the important roles and responsibilities in cloud computing include:

Cloud Service Provider: The cloud service provider offers cloud services to a third party. They are responsible for operating their infrastructure and meeting service level agreements (SLAs).
Cloud Customer: The cloud customer uses cloud services. They are responsible for the portion of the cloud infrastructure stack under their control.
Cloud Service Partners: Cloud service partners are distinct from the cloud service provider but offer a related service. For example, a cloud service partner may offer add-on security services to secure an organization’s cloud infrastructure.
Cloud Service Brokers: A cloud service broker may combine services from several different cloud providers and customize them into packages that meet a customer’s needs and integrate with their environment.
Regulators: Regulators ensure that organizations — and their cloud infrastructures — are compliant with applicable laws and regulations. The global nature of the cloud can make regulatory and jurisdictional issues more complex.
55
Q

A cloud information security manager is working on developing an audit scope and must define restrictions for that audit scope. They work at a large business that must be in compliance with a variety of laws and regulations from around the world. The corporation has a variety of cloud solutions that they use from a couple of different large cloud providers.

What is a critical element that they need to complete before moving into the cloud?

A. Understand who the third party is for the provider’s audits
B. Ensure auditability is in the contract
C. View the provider’s latest audit report
D. Understand the provider’s views on audits

A

B. Ensure auditability is in the contract

Explanation:
Before moving into cloud services, the contract should be created and analyzed. One of the critical elements within the contract is the auditability of the provider.

ISO/IEC 17788 defines auditability as the capability of collecting and making available necessary evidential information related to the operation and use of a cloud service for the purpose of conducting an audit.

Understanding the provider’s views on audits can be helpful in this process, but it is more important to have something in the contract.

Seeing the latest audit reports is good, but that is from the past. Before moving into the cloud, something needs to be in the contract for the future.

Knowing who the third-party auditor is can be useful, but just because an auditor has been used in the past does not define who will be the auditor in the future.

56
Q

Paricia works for a manufacturing company as their primary information security manager. They are now planning their move into the cloud to take advantage of the new technologies that are easy to implement in a virtual data center. One of the most important elements for them is the change in the responsibility model: compared to building their own data center, many responsibilities now shift to the cloud provider.

What is the breakdown of who is responsible for what?

A. The customer is responsible for configuring the virtual routers and switches, the cloud provider is responsible for the physical routers and switches
B. The customer is responsible for the virtual routers and physical switches, the cloud provider is responsible for the physical routers and virtual switches
C. The customer is responsible for the virtual switches and servers, the cloud provider is responsible for the physical storage and virtual routers
D. The customer is responsible for the virtual servers and databases, the cloud provider is responsible for physical and virtual network devices

A

A. The customer is responsible for configuring the virtual routers and switches, the cloud provider is responsible for the physical routers and switches

Explanation:
The shared responsibility model for the IaaS environment allows the customer to build a virtual data center, which means that the customer brings the operating systems with them that create the virtual routers, switches, servers, and all the security appliances, firewalls, intrusion detection systems, and so on. The cloud provider is responsible for the physical network, including the routers, switches, security appliances, and the servers with the hypervisors.

With IaaS, the customer determines their data storage systems or Storage Area Networks (SAN) as well as the data structures of databases or data lakes and so on. Everything virtual is the customer’s responsibility and the physical is the provider’s responsibility.

57
Q

Oya and her risk assessment team are working on preparing to perform their annual assessment of the risks that their cloud data center could experience. What is the correct order of risk management steps?

A. Prepare, assess, categorize, select, implement, authorize, monitor
B. Prepare, categorize, select, implement, assess, authorize, monitor
C. Assess, authorize, prepare, categorize, select, implement, monitor
D. Authorize, prepare, assess, categorize, select, implement, monitor

A

B. Prepare, categorize, select, implement, assess, authorize, monitor

Explanation:
The National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) lists the correct order of the risk management steps as the following: prepare, categorize, select, implement, assess, authorize, and monitor. The prepare phase is where Oya and her team are. They are getting into the process of analyzing the risks for the cloud data center. Then they will categorize the risks and threats based on the impact they could have on the organization. The select phase is when controls are selected to reduce the likelihood or impact of the threats.

If there are new controls or simply new settings that need to be configured, this is done in the implement phase. When the assess phase is active, the team is looking to see if the controls are in place and working properly. The authorize phase is when senior management is informed of all that can be found, determined, chosen, and analyzed, and they authorize their business to have their production environments configured in this new way. Lastly, there is ongoing monitoring that is performed.

58
Q

Rafferty just configured the server-based Platform as a Service (PaaS) that they are using for their company, a government contractor. The server will be used to perform computations related to customer actions on their e-commerce website. He is concerned that they may not have enough CPU and memory allocated to them when they need it.

What should he do?

A. Set a limit to make sure that the service will work correctly
B. Make sure that the server has available share space
C. Ensure a reservation is made at the minimum level needed
D. Ensure the limits will not cause any problems with the service

A

C. Ensure a reservation is made at the minimum level needed

Explanation:
A minimum resource that is granted to a cloud customer within a cloud environment is known as a reservation. With a reservation, the cloud customer should always have, at the minimum, the amount of resources needed to power and operate any of their services.

On the flip side, limits are the opposite of reservations. A limit is the maximum utilization of memory or processing allowed for a cloud customer. It is a good idea to set limits to control costs, especially on a new service.

The share space is what is available for any customer to utilize. The cloud works on a first-come, first-served approach.

59
Q

Brocky has been working with a project team analyzing the risks that could occur as this project progresses. The analysis that their team has been performing used descriptive information rather than financial numbers. Which type of assessment have they been performing?

A. Qualitative assessment
B. Fault tree analysis
C. Root cause analysis
D. Quantitative assessment

A

A. Qualitative assessment

Explanation:
There are two main assessment types that can be done for assessing risk: qualitative assessments and quantitative assessments. While quantitative assessments are data driven, focusing on items such as Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annual Loss Expectancy (ALE), qualitative assessments are descriptive in nature and not data driven.
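
A worked example of the quantitative terms (all numbers invented for illustration):

    # Quantitative risk math: SLE = asset value x exposure factor,
    # ALE = SLE x ARO.
    asset_value = 200_000    # value of the asset at risk, in dollars
    exposure_factor = 0.25   # fraction of the value lost per incident
    aro = 2                  # expected incidents per year

    sle = asset_value * exposure_factor  # Single Loss Expectancy: $50,000
    ale = sle * aro                      # Annual Loss Expectancy: $100,000
    print(sle, ale)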

Fault tree analysis is actually a combination of quantitative and qualitative assessments. The question is looking for the assessment that is not financial; the financial one is the quantitative assessment. So fault tree analysis is more than what the question is about.

Root cause analysis is what is done in problem management from ITIL. Root cause analysis analyzes why some bad event has happened so that the root cause can be found and fixed so that it does not happen again.

60
Q

Albas has been working with a team that is performing Dynamic Application Security Testing (DAST) against a specific web application. They have been able to successfully alter the information being parsed by the application and have gained access to the shadow password file. What attack have they performed against this application?

A. XML external entities attack
B. Injection attack
C. Broken access control attack
D. Cross-site scripting attack

A

A. XML external entities attack

Explanation:
XML external entities attacks occur when the application parses XML input. The question does not say XML because it cannot, according to the rules ISC2 must follow, since XML is in the actual name of the attack. If the parser is weakly configured, it is possible that the bad actor can cause trouble if the XML input contains a reference to an external entity. An entity is a storage unit of some kind according to the standard.

Cross-site scripting occurs when a bad actor is able to inject a malicious script into a trusted website. It is usually a browser side script. This is a type of injection attack.

Broken access control can be exploited in many ways. It could be a failure in setting up the account with least privilege. Or it could be a flaw that allows the access control mechanism to be bypassed.

Injection attacks occur when the bad actor is able to send a command of some kind that goes unchecked by the application. SQL and operating system commands are two of the most common.

61
Q

Which of the following is MOST relevant to an organization’s network of applications and APIs in the cloud?

A. User Access
B. Service Access
C. Physical Access
D. Privilege Access

A

B. Service Access

Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:

User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.

Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.

62
Q

Which of the following agreements manages a SPECIFIC project covered under an overarching agreement?

A. MSA
B. NDA
C. SOW
D. SLA

A

C. SOW

Explanation:
Two organizations working together may have various agreements and contracts in place to manage their risks. Some of the common types include:

Master Service Agreement (MSA): An MSA is an over-arching contract for all the work performed between the two organizations.
Statement of Work (SOW): Each new project under the MSA is defined using an SOW.
Service Level Agreement (SLA): An SLA defines the conditions of service that the vendor guarantees to the customer. If the vendor fails to meet these terms, they may be forced to pay some penalty.
Non-Disclosure Agreement (NDA): An NDA is designed to protect confidential information that one or both parties share with the other.
63
Q

An information security manager is weighing their options for protecting the organization’s external-facing applications from SQL injection, cross-site scripting, and cross-site request forgery attacks. What type of solution should the manager select to protect the external-facing applications?

A. eXtensible Markup Language (XML) gateway
B. Web Application Firewall (WAF)
C. Application Programming Interface (API) gateway
D. Intrusion Prevention System (IPS)

A

B. Web Application Firewall (WAF)

Explanation:
A Web Application Firewall (WAF) specifically addresses attacks on applications and external services. A WAF can assist in defending against SQL injection, cross-site scripting (XSS), and Cross-Site Request Forgery (CSRF) attacks.

API gateways analyze and monitor SOAP and REST traffic. This includes XML and JavaScript Object Notation (JSON).

XML gateways focus on XML traffic.

An IPS watches traffic for intrusions. It would not see XSS or CSRF attacks within the web traffic.

64
Q

Which of the following risk treatment strategies requires the LARGEST risk appetite?

A. Transference
B. Avoidance
C. Mitigation
D. Acceptance

A

D. Acceptance

Explanation:
Risk treatment refers to the ways that an organization manages potential risks. There are a few different risk treatment strategies, including:

Avoidance: The organization chooses not to engage in risky activity. This creates potential opportunity costs for the organization.
Mitigation: The organization places controls in place that reduce or eliminate the likelihood or impact of the risk. Any risk that is left over after the security controls are in place is called residual risk.
Transference: The organization transfers the risk to a third party. Insurance is a prime example of risk transference.
Acceptance: The organization takes no action to manage the risk. Risk acceptance depends on the organization’s risk appetite or the amount of risk that it is willing to accept.
65
Q

Which of the following involves identifying sensitive data in an organization to ensure that it is properly protected?

A. Data labeling
B. Data dispersion
C. Data flow diagram
D. Data mapping

A

D. Data mapping

Explanation:
Data dispersion is when data is distributed across multiple locations to improve resiliency. Overlapping coverage makes it possible to reconstruct data if a portion of it is lost.

A data flow diagram (DFD) maps how data flows between an organization’s various locations and applications. This helps to maintain data visibility and implement effective access controls and regulatory compliance.

Data mapping identifies data requiring protection within an organization. This helps to ensure that the data is properly protected wherever it is used.

Data labeling attaches metadata describing important features of the data. For example, data labels could include information about ownership, classification, limitations on use or distribution, and when the data was created and should be disposed of.

66
Q

An information security professional, Harley, has been working with the software developers on a new Software as a Service (SaaS) offering. The system will process credit cards on a repeated basis for their customers. Compliance with the Payment Card Industry Data Security Standard (PCI DSS) is a requirement. They want to minimize their potential exposure to a data breach.

Which of the following can help to minimize their exposure?

A. Anonymization
B. Tokenization
C. Obfuscation
D. Encryption

A

B. Tokenization

Explanation:
Tokenization is a method used to protect data without needing to go through the process of encryption. In tokenization, an application is used to replace confidential data with an arbitrary value (known as a token). If the company stores only the token and the bank has the database to be able to convert the token back to the actual card number, it minimizes their exposure. If you do not have or store the credit card, it cannot be breached.
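
A minimal Python sketch of the idea (the in-memory dict stands in for the issuer’s token vault; a real vault is a separate, hardened service):

    # The merchant stores only the token; the token-to-card mapping lives
    # in a separate, tightly controlled vault.
    import secrets

    vault: dict[str, str] = {}  # token -> card number (the "bank" side)

    def tokenize(pan: str) -> str:
        token = secrets.token_urlsafe(16)
        vault[token] = pan
        return token

    token = tokenize("4111111111111111")  # well-known test card number
    print(token)  # safe to store with the order record
    # A breach of the merchant database exposes tokens, not card numbers.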

Obfuscation includes a variety of techniques to hide/obscure/confuse the data. It is useful to protect source code from reverse engineering. If you are protecting data, encryption is probably a better choice.

Encryption involves the ability to decrypt. So, if the credit cards are stored in an encrypted fashion and they are stolen, the bad actor can work on discovering the key and obtaining the data. This would be a good thing to do if the card numbers are being held or transmitted. Tokenization removes the card number so it minimizes exposure more.

Anonymization would not work here. Anonymization permanently removes the personal data, but that data is needed to process credit card charges for a customer. Without this information, it is very difficult to track customer orders.

67
Q

The software development team is working with the information security team through the Software Development Lifecycle (SDLC). The information security manager is concerned that the team is rushing through the phase of the lifecycle where the most technical mistakes could be made. Which phase is that?

A. Development
B. Testing
C. Requirements
D. Planning

A

A. Development

Explanation:
During the development or coding phase of the SDLC, the plans and requirements are turned into an executable programming language. As this is the phase where coding takes place, it is most likely the place where technical mistakes would be made.

Technical mistakes could be made in the planning or requirements phases, although problems there are more likely to be architectural.

Testing is technical and mistakes can be made during testing, but it is more likely that the testing is not as complete as needed.

68
Q

Grace has been setting up a Data Loss Prevention (DLP) tool within her business to protect their corporate data further. What phases of the cloud data lifecycle does DLP protect?

A. Share, Store, and Destroy
B. Share, Store, and Archive
C. Share, Use, and Archive
D. Use, Store, and Archive

A

B. Share, Store, and Archive

Explanation:
DLP tools traditionally protected data in transit, which would be the share phase. However, today they can also be used to protect data at rest, which covers the store and archive phases. They can scan servers to look for data that should not be there as well as analyze the data in transit out of the corporation.

DLP tools are notoriously difficult to set up because it is hard to tell the tool what data is sensitive and should not be on a particular server or in a particular data stream. So, the first phase of DLP is discovery.

69
Q

Dezso and his team are planning on moving to the cloud in a Platform as a Service (PaaS) implementation. As they are evaluating the cloud vendors that they have to choose from, they are concerned about vendor lock-in. What would cause vendor lock-in?

A. Overly expensive hardware
B. Proprietary requirements
C. Undocumented software
D. Poorly written Service Level Agreements (SLA)

A

B. Proprietary requirements

Explanation:
Vendor lock-in occurs when an organization is unable to leave the vendor. The most common reason for vendor lock-in would be proprietary formats for how data is stored. It is possible that some consider contracts that prevent a customer from leaving to be vendor lock-in as well. The proprietary requirements make it very expensive, difficult, and burdensome to move to a new provider.

Undocumented software occurs all the time. The biggest problem with that is that it is hard to understand how it works.

Poorly written SLAs would not cause lock-in. They are a problem. The SLAs specify the level of service that the customer can and should expect to receive from the cloud provider. If they are not well defined, the customer may not get the service they need, such as enough bandwidth.

Overly expensive hardware does not cause lock-in. It might lock money into the wrong products, but that is not vendor lock-in. That’s poor financial management.

70
Q

Which of the following risks common to all cloud service models is MOST related to natural disasters?

A. General Technology Risks
B. Downtime
C. Compliance
D. Data Center Location

A

D. Data Center Location

Explanation:
Cloud computing risks can depend on the cloud service model used. Some risks common to all cloud services include:

Data Center Location: The location of a CSP’s data center may impact its exposure to natural disasters or the risk of regulatory issues. Cloud customers should verify that a CSP’s locations are resilient against applicable natural disasters and consider potential regulatory issues.
Downtime: If a CSP’s network provider is down, then its services are unavailable to its customers. CSPs should use multivendor network connectivity to improve network resiliency.
Compliance: Certain types of data are protected by law and may have mandatory security controls or jurisdictional limitations. These restrictions may affect the choice of a cloud service model or CSP.
General Technology Risks: CSPs are a big target for attackers, who might exploit vulnerabilities or design flaws to attack CSPs and their customers.
71
Q

Resource pooling in cloud environments has the MOST significant impact on which of the following?

A. Problem Management
B. Continuity Management
C. Capacity Management
D. Availability Management

A

C. Capacity Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
72
Q

A cloud architect needs to ensure a seamless transition to the Disaster Recovery (DR) site given a disaster. What must the architect have in place to accomplish this?

A. Redundant internet providers
B. Numerous hypervisors
C. Web Application Programming Interface (API)
D. Failover mechanism

A

D. Failover mechanism

Explanation:
A failover mechanism must be in place for there to be a seamless transition between the primary site and the DR site if there’s a disaster.

It may be necessary to have multiple internet providers to be able to access the public cloud, which is a good thing to include in the design of a corporation’s network and cloud architecture. The question, though, points to a seamless transition, which is a closer match to a failover mechanism.

Numerous hypervisors will not ensure a smooth transition to the DR site. It may be necessary to have different types of hypervisors in a cloud data center but that does not ensure the transition between sites.

A web API is a piece of software that enables functionality to be shared between websites; it does not ensure a transition between sites.

73
Q

U-Jin has been tasked with figuring out how the company should protect the personal information that they have collected about their customers. He knows that they have to be compliant with a couple of different laws from around the world due to the location of their customers.

Under the Payment Card Industry Data Security Standard (PCI DSS), in which of the following states must data be encrypted?

A. Data in use
B. Data in transit
C. Data in storage
D. Data at rest

A

B. Data in transit

Explanation:
The PCI DSS requires that data be encrypted when it is in transit across public networks.

Data must be protected when it is being stored, but PCI DSS does not say within its 12 requirements that stored data must be encrypted. When data is at rest or in storage, it would be good to encrypt it as well as control access to it through Identity and Access Management (IAM). Encrypting data in use is just emerging as an option but is certainly not a requirement, as it is not yet available in most situations.

74
Q

Which of the following BEST describes the types of applications that create risk in a cloud environment?

A. Every piece of software in the environment
B. Small utility scripts
C. Software with administrator privileges
D. Full application suites

A

A. Every piece of software in the environment

Explanation:
Any piece of software, from major software suites to small utility scripts, can have possible vulnerabilities. This means that every program and every piece of software in the environment carries an inherent amount of risk with it. Any software that is installed in a cloud environment should be properly vetted and regularly audited.

75
Q

Which framework, developed by the International Data Center Authority (IDCA), covers all aspects of data center design, including cabling, location, connectivity, and security?

A. HITRUST
B. Infinity Paradigm
C. OCTAVE
D. Risk Management Framework

A

B. Infinity Paradigm

Explanation:
The International Data Center Authority (IDCA) is responsible for developing the Infinity Paradigm, which is a framework intended to be used for operations and data center design. The Infinity Paradigm covers aspects of data center design that include location, cabling, security, connectivity, and much more.

Risk Management Framework (RMF) is defined by NIST as “a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The risk-based approach to control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, Executive Orders, policies, standards, or regulations.”

The Health Information Trust Alliance (HITRUST) is a non-profit organization. They are best known for developing the HITRUST Common Security Framework (CSF), in collaboration with healthcare, technology, and information security organizations around the world. It aligns standards from ISO, NIST, PCI, and regulations like HIPAA.

The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a software threat modeling technique by Carnegie Mellon University that was developed for the US Department of Defense (DoD).

76
Q

Of the following, which comes EARLIEST in the cloud data lifecycle?

A. Use
B. Archive
C. Store
D. Share

A

C. Store

Explanation:
The cloud data lifecycle has six phases, including:

Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data.
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure/crypto shredding.


77
Q

Winta is using a program to create a spreadsheet after having collected information regarding the sales cycle that the business has just completed. What phase of the cloud data lifecycle is occurring?

A. Store
B. Archive
C. Share
D. Create

A

D. Create

Explanation:
Generating a new spreadsheet is the create phase of the data lifecycle. Create is the generation of new data/voice/video in any manner. The Cloud Security Alliance (CSA) also indicates that the create phase includes the modification of existing data. Not everyone agrees with that last point, but this exam is a joint venture between the CSA and (ISC)2, so it is good to know that this is what the CSA Guidance 4.0 document says.

As soon as the data is created, it needs to be moved to persistent storage (hard disk drive or solid state drive).

If that spreadsheet is moved into long-term storage for future reference, then, if needed, that would be the archive phase.

Sending the spreadsheet to the boss for their review (or to anyone else) would be the share phase.

78
Q

A bad actor coerces an application to send a crafted request to an unexpected destination. The result is that it fetches a remote resource for that bad actor. How can this be prevented?

A. Enable HTTP redirection
B. Enforce “deny by default” firewall policies
C. Identity and Access Management
D. Send raw responses to clients

A

B. Enforce “deny by default” firewall policies

Explanation:
This is a description of Server-Side Request Forgery (SSRF). There are a few things that can be done to prevent it, and enforcing "deny by default" firewall policies or network access control rules is one of them. Segmenting remote resources and validating client-supplied input are two others.

Enabling HTTP redirection is wrong because it needs to be disabled.

Sending raw responses to clients is also backward. You do not want to send raw responses to clients.

Identity and Access Management (IAM) does not help. In this attack, the bad actor coerces the application itself; SSRF occurs once the bad actor already has access to the application, so controlling who can log in does not prevent it.

The Open Web Application Security Project (OWASP) Top 10 is a regularly updated report of the top 10 vulnerabilities that affect web applications. It is good to be familiar with both the 2017 and 2021 lists: a new list does not mean that all old questions based on the 2017 list have been removed from the (ISC)2 question database. Learn to recognize each threat in a scenario and know the fixes or prevention techniques for it. Researching the OWASP Top 10 and digging into each item is recommended preparation for the test.
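
As a rough illustration of validating client-supplied input, here is a minimal Python sketch of deny-by-default URL validation before a server-side fetch. The allowlist contents, host names, and function name are illustrative assumptions, not part of any standard.

```python
# Hypothetical sketch: deny-by-default validation of a client-supplied URL
# before the server fetches it, one mitigation for SSRF.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}  # explicit allowlist

def is_safe_url(raw_url: str) -> bool:
    """Return True only if the URL matches the allowlist; deny by default."""
    parsed = urlparse(raw_url)
    return parsed.scheme in ALLOWED_SCHEMES and parsed.hostname in ALLOWED_HOSTS

print(is_safe_url("https://images.example.com/logo.png"))       # True
print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
```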

79
Q

Aditya has been working with software developers on building a new piece of software that will be used within their business to store data. While the data is at rest, it must be protected. Of the following, what could be used for this purpose?

A. Anonymization
B. Tokenization
C. Obfuscation
D. De-identification

A

C. Obfuscation

Explanation:
The best answer to this question would be encryption, but that is not an option here. Encryption can itself be considered a method of obfuscation. There are other ways to obfuscate data, such as encoding it for storage or transmission, for example in base64 rather than base16 (hexadecimal); see the sketch after this explanation. Obfuscation can also be applied to an application's code to make it harder to reverse engineer.

Tokenization is when data is replaced with a different value. The data can be returned to the original value. This is useful for storing a credit card number in your phone or watch but not so useful for storing data in a business.

Anonymization is removing direct and indirect identifiers. De-identification is removing only the direct identifiers. There is no mention of Personally Identifiable Information (PII) in the question, so neither of these options is useful here.
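
Here is the sketch referenced above: base64 encoding as a simple, reversible form of obfuscation in Python. Anyone can decode it, so it hides data from casual viewing only and is no substitute for encryption.

```python
# Minimal sketch: base64 as obfuscation, not encryption.
import base64

secret = b"quarterly sales figures"
obfuscated = base64.b64encode(secret)  # store or transmit this form
print(obfuscated)                      # b'cXVhcnRlcmx5IHNhbGVzIGZpZ3VyZXM='
print(base64.b64decode(obfuscated))    # original bytes recovered by anyone
```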

80
Q

Through International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 15408-1:2009, what does an EAL2 score tell us about the organization's security practices and results?

A. It has been methodically tested and checked
B. It has been functionally tested
C. It has a formally verified design and has been tested
D. It has been structurally tested

A

D. It has been structurally tested

Explanation:
ISO 15408 is known as the Common Criteria. It is a set of evaluation criteria for security products that ensures fair and consistent testing when performed by different labs in different countries on similar products.

The possible Evaluation Assurance Level (EAL) scores are as follows:

EAL1 - Functionally tested
EAL2 - Structurally tested
EAL3 - Methodically tested and checked
EAL4 - Methodically designed, tested, and reviewed
EAL5 - Semi-formally designed and tested
EAL6 - Semi-formally verified design and tested
EAL7 - Formally verified design and tested

Although this is a very simple question, this information could be useful to know for the test.

81
Q

A medical corporation is going to use lab results, test results, and other data to determine the effectiveness of one of their vaccines. Since the US Health Information Portability and Accountability Act (HIPAA) demands that medical data be protected, the corporation will remove all direct identifiers from the records to protect the patients. Because some of the information may be relevant, considered indirect identifiers, they are going to leave that in place.

Which of the following is this called?

A. Tokenization
B. Anonymization
C. Encryption
D. De-identification

A

D. De-identification

Explanation:
Data de-identification is the process of removing direct identifiers. Bill 64 in Quebec, Canada, defines this quite clearly.

Anonymization is removing the direct and indirect identifiers.

Tokenization replaces the data with another value and is commonly used by services like Apple Pay, Google Pay, and PayPal. The token can be exchanged back for the original value, unlike de-identification and anonymization, which are permanent removal methods.

Encryption effectively obscures or obfuscates the data. This can also be undone with decryption.
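
To make the distinction concrete, here is a small Python sketch; the record fields and the split between direct and indirect identifiers are illustrative assumptions.

```python
# De-identification removes only direct identifiers; anonymization removes
# indirect identifiers as well. Tokenization (not shown) would instead swap
# values for reversible tokens.
record = {
    "name": "A. Patient",      # direct identifier
    "ssn": "123-45-6789",      # direct identifier
    "zip_code": "90210",       # indirect identifier
    "birth_year": 1980,        # indirect identifier
    "lab_result": "positive",  # the data being studied
}
DIRECT = {"name", "ssn"}
INDIRECT = {"zip_code", "birth_year"}

def de_identify(rec: dict) -> dict:
    return {k: v for k, v in rec.items() if k not in DIRECT}

def anonymize(rec: dict) -> dict:
    return {k: v for k, v in rec.items() if k not in DIRECT | INDIRECT}

print(de_identify(record))  # keeps zip_code and birth_year
print(anonymize(record))    # keeps only lab_result
```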

82
Q

Which of the following describes how an organization plans to restore normal operations after a disruptive incident?

A. DRP
B. BIA
C. COOP
D. BCP

A

A. DRP

Explanation:
A business continuity plan (BCP) sustains operations during a disruptive event, such as a natural disaster or network outage. It can also be called a continuity of operations plan (COOP).

A disaster recovery plan (DRP) works to restore the organization to normal operations after such an event has occurred.

The decision of what needs to be included in a business continuity plan is determined by a business impact assessment (BIA), which determines what is necessary for the business to function vs. “nice to have.”

83
Q

Leodis has been working on the setup of a new application. They have been trying to decide how to determine who the users are and what permissions they should be given, if any. There are several protocols available to make this happen in a cloud environment. Which protocol allows the communication of the users’ permissions?

A. OAuth (Open Authorization)
B. Kerberos
C. Open Identification (OpenID)
D. Web Services Federation (WS-Federation)

A

A. OAuth (Open Authorization)

Explanation:
Open Authorization (OAuth) is an open standard protocol that allows secure authorization and delegation of user permissions between different applications or services. It provides a framework for users to grant limited access to their resources on one website or application to another website or application without sharing their login credentials.

OpenID is an open standard and decentralized authentication protocol that allows users to authenticate themselves on multiple websites or applications using a single set of credentials. It provides a convenient and secure way for users to log in to various websites without the need to create and remember separate usernames and passwords for each site.

Kerberos is a network authentication protocol that provides secure authentication for client-server communication over an insecure network. It was developed by MIT and has become an industry-standard protocol for authentication in many systems and applications. This is used, or has been used, for LAN environments, not the cloud.

Web Services Federation (WS-Federation) is an industry-standard protocol that provides a framework for identity federation and Single Sign-On (SSO) across different web services and security domains. It is based on XML and relies on other web service standards, such as Simple Object Access Protocol (SOAP), to enable secure communication and identity exchange between participating entities.
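
For OAuth specifically, a minimal sketch of the authorization-code token exchange looks like the following. The endpoint URL, client credentials, and code value are hypothetical placeholders; the parameter names come from the OAuth 2.0 specification (RFC 6749).

```python
# Exchanging an authorization code for an access token whose "scope"
# communicates the permissions the user delegated.
import requests

resp = requests.post(
    "https://auth.example.com/oauth/token",    # hypothetical token endpoint
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_REDIRECT",     # placeholder value
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    },
    timeout=10,
)
token = resp.json()
print(token.get("access_token"), token.get("scope"))
```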

84
Q

Sadi is an information security professional working with a cloud architect. They are working through the server deployment choices for her corporation into the public cloud environment. The server that they are trying to deploy is a Structured Query Language (SQL) database. The critical configuration control that Sadi is interested in is control over the server’s configurations. There are many benefits she expects to get by moving the SQL database from their data center into the cloud, such as better physical security. Sadi is not worried about having continuous uptime, but the bigger concern is flexible growth options.

Which of the following host types matches Sadi’s needs the closest?

A. Bastion host
B. Clustered hosts
C. Redundant host
D. Standalone host

A

D. Standalone host

Explanation:
A standalone host is essentially a virtual machine within the cloud provider's environment. It would not be connected to other hosts, as both clustered and redundant hosts are.

In a cluster, all servers work continuously. If one fails, the others are already processing data; the customer's connection can be transferred over, and the customer would never know there was a failure. Clusters are considered active-active.

A redundant host has a backup. However, the backup does not process data actively until the primary host fails. They are considered active-passive.

Since the question says that Sadi is not that worried about continuous uptime, both the clustered and the redundant hosts are more than is necessary.

A bastion host is hardened: it has been properly set up and configured to limit any exposure to a bad actor. "Bastion" comes from French, referring to a fortified stronghold. Hardening is a very good idea if the host is going to be accessible from the Internet, such as a web server in the DeMilitarized Zone (DMZ).

85
Q

A cloud architect is looking to add an additional layer of security to their cloud network in the event a hacker gains a foothold into their network. The cloud architect wants to add an additional control that filters traffic just before it reaches the virtual systems by filtering port numbers. What is this control known as?

A. Virtualization
B. Security Groups (SG)
C. Software Defined Network (SDN)
D. Micro-segmentation

A

B. Security Groups (SG)

Explanation:
Security groups are effectively port-based firewalls. They are added as an extra layer of security in front of virtual systems. It is good to add Operating System (OS)-based firewalls, but more layers are critical to protect systems and data.

Micro-segmentation is a method of creating very small virtualized environments. This does not match the question because the question does not describe a small, isolated environment for a single system.

An SDN is a method of improving switch technology by adding a controller that makes the forwarding decisions, which can be configured to make policy-based decisions as well.

Virtualization is the basic technology that enables the creation of virtual machines.
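
Conceptually, a security group behaves like the deny-by-default, port-based filter sketched below. This is a toy Python model for illustration, not a real cloud provider API.

```python
# A security group modeled as a set of allow rules evaluated before traffic
# reaches the virtual system; anything unmatched is dropped.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    protocol: str  # e.g. "tcp"
    port: int      # destination port to allow

class SecurityGroup:
    def __init__(self, rules):
        self.rules = set(rules)

    def allows(self, protocol: str, port: int) -> bool:
        # Deny by default: traffic passes only if a rule matches.
        return Rule(protocol, port) in self.rules

web_sg = SecurityGroup([Rule("tcp", 443), Rule("tcp", 22)])
print(web_sg.allows("tcp", 443))   # True  - HTTPS reaches the VM
print(web_sg.allows("tcp", 3389))  # False - RDP is filtered out
```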

86
Q

A breach occurred at a doctor’s office in which information about a patient’s medical history and treatment were stolen. What type of data has been stolen?

A. Protected Health Information (PHI)
B. Payment card data
C. Personally Identifiable Information (PII)
D. Personal data

A

A. Protected Health Information (PHI)

Explanation:
PHI, which stands for Protected Health Information, includes a wide spectrum of data about an individual and their health. Medical history, treatment, lab results, demographic information, and health insurance information are all considered to be PHI.

Personal data is information about a person and is fairly interchangeable with PII. Examples of PII are name, address, and phone number.

Payment card data is the card number, expiration date, name, address, limit, etc.

87
Q

Which of the following is NOT one of the main risks that needs to be assessed during the Business Impact Assessment (BIA) phase of developing a Disaster Recovery (DR) plan?

A. Budgetary constraints applied by management
B. Load capacity at the disaster recovery site
C. Legal and contractual issues from failures
D. Migration of services to the alternate site

A

A. Budgetary constraints applied by management

Explanation:
As with any new system or plan being implemented, it’s important to assess the risks of the changes. Budgetary constraints are not a main risk when developing a DR plan.

The main risks to assess when developing a Business Continuity and Disaster Recovery (BCDR) plan include the load capacity at the recovery site, migration of services to the alternate site, and legal or contractual issues arising from failures.

88
Q

An organization has decided that the best course of action to handle a specific risk is to obtain an insurance policy. The insurance policy will cover any financial costs of a successful risk exploit. Which type of risk response is this an example of?

A. Risk avoidance
B. Risk mitigation
C. Risk acceptance
D. Risk transference

A

D. Risk transference

Explanation:
When an organization obtains an insurance policy to cover the financial burden of a successful risk exploit, this is known as risk transference or risk sharing. It’s important to note that with risk transference, only the financial losses would be covered by the policy, but it would not do anything to cover the loss of reputation the organization might face.

Risk avoidance is when a decision is made to not engage in, or to stop engaging in, risky behavior.

Risk mitigation or risk reduction is when controls are put in place to reduce the chance of a threat being realized or to minimize the impact of it once it does happen.

Risk acceptance always needs to be done because no matter how much of the other three options are done, risk cannot be eliminated. Who accepts the risk, though, is something that a business needs to carefully consider.

89
Q

In which of the following types of testing does the tester have NO knowledge of the target?

A. White-box
B. Gray-box
C. Black-box
D. Clear-box

A

C. Black-box

Explanation:
Software testing can be classified as one of a few different types, including:

White-box: In white-box or clear-box testing, the tester has full access to the software and its source code and documentation. Static application security testing (SAST) is an example of this technique.
Gray-box: The tester has partial knowledge of and access to the software. For example, they may have access to user documentation and high-level architectural information.
Black-box: In this test, the attacker has no specialized knowledge or access. Dynamic application security testing (DAST) is an example of this form of testing.

90
Q

Careful design and filtering is important to avoid information overload for which of the following cloud audit mechanisms?

A. Log Collection
B. Correlation
C. Packet Capture
D. Access Controls

A

A. Log Collection

Explanation:
Three essential audit mechanisms in cloud environments include:

Log Collection: Log files contain useful information about events that can be used for auditing and threat detection. In cloud environments, it is important to identify useful log files and collect this information for analysis. However, data overload is a common issue with log management, so it is important to collect only what is necessary and useful.
Correlation: Individual log files provide a partial picture of what is going on in a system. Correlation looks at relationships between multiple log files and events to identify potential trends or anomalies that could point to a security incident.
Packet Capture: Packet capture tools collect the traffic flowing over a network. This is often only possible in the cloud in an IaaS environment or using a vendor-provided network mirroring capability.

Access controls are important but not one of the three core audit mechanisms in cloud environments.
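
As a toy illustration of correlation, the Python sketch below counts failed logins per source IP across collected log events to surface a possible brute-force attempt. The log format and alert threshold are assumptions for illustration.

```python
# Correlating events from multiple collected logs by source IP.
from collections import Counter

collected_logs = [
    {"source_ip": "203.0.113.7", "event": "login_failed"},
    {"source_ip": "203.0.113.7", "event": "login_failed"},
    {"source_ip": "198.51.100.2", "event": "login_ok"},
    {"source_ip": "203.0.113.7", "event": "login_failed"},
]

THRESHOLD = 3
failures = Counter(
    e["source_ip"] for e in collected_logs if e["event"] == "login_failed"
)
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible brute force from {ip}: {count} failures")
```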

91
Q

Which of the following is a privacy framework built around five key principles?

A. PIA
B. ISO 27018
C. PCI DSS
D. GAPP

A

B. ISO 27018

Explanation:
ISO 27018 is part of the ISO 27000 suite of standards. It provides privacy protection built around five key principles:

Consent: A cloud provider can only use a customer’s data for marketing with their explicit consent, and this consent can’t be required to use the service.
Control: The cloud customer has full control over how their data is used by the cloud provider.
Transparency: The cloud provider must inform cloud customers of where their data is stored and any subcontractors that have access to it.
Communication: The cloud provider should perform auditing and report any potential incidents to their customers.
Audit: Cloud providers should undergo annual external audits.

92
Q

Orlando has determined that his organization is experiencing a lot of shadow IT. However, he is unsure which tool could be used to determine where the users are connecting. What tool is designed to assist with this process?

A. Data Leak Prevention (DLP)
B. Cloud broker
C. Cloud Access Security Broker (CASB)
D. Cloud Posture Manager (CPM)

A

C. Cloud Access Security Broker (CASB)

Explanation:
CASBs were originally designed to discover where users were connecting and what shadow IT they were using. Shadow IT is technology that users have signed up for (in the cloud) without going through the regular acquisition procedures. Today, CASBs can do additional things, such as DLP.

DLP is designed to determine if a user has just sent (or is trying to send) data someplace it should not go or in a format it should not be in (e.g., not encrypted). It is not designed to determine what web addresses the users are accessing.

Cloud brokers are people/companies that help cloud customers and cloud providers in their negotiations.

CPM tools are even newer. Sometimes called Cloud Security Posture Manager (CSPM) tools, they are designed to determine all the paths a user can take to gain access to particular resources. In the cloud, it is normal for there to be multiple paths to a piece of data. It is also normal to temporarily assume the role of a device or application, which could give a user more access than they should have.

93
Q

Leelo works for a corporation that assists both cloud service providers (CSP) and cloud service customers (CSC). They assist in the negotiation of services as well as the management of those services. They also have some of their own software to help with this management.

What term is used to describe an individual or organization that serves as an intermediary between cloud customers and a cloud service provider?

A. Cloud service broker
B. Cloud service partner
C. Cloud service auditor
D. Cloud service user

A

A. Cloud service broker

Explanation:
A cloud service broker is an individual or organization that serves as the go-between or intermediary between cloud customers and cloud service providers. Brokers can negotiate and manage the services between the customer and the provider, and they often have some of their own software to help with this management.

Cloud service auditors are the auditors who go into the cloud service provider’s datacenter as the third party to verify their controls.

Cloud service users are the customers of the cloud provider. This would include the end user as well as the corporations that they work for.

A cloud service partner is a company that helps either the customer or the provider. It is the more generic term that can include brokers and auditors.

94
Q

Which of the following common contractual terms might include requiring the service provider to provide an annual SOC 2 report?

A. Access to Cloud/Data
B. Metrics
C. Compliance
D. Assurance

A

D. Assurance

Explanation:
A contract between a customer and a vendor can have various terms. Some of the most common include:

Right to Audit: CSPs rarely allow customers to perform their own audits, but contracts commonly include acceptance of a third-party audit in the form of a SOC 2 or ISO 27001 certification.
Metrics: The contract may define metrics used to measure the service provided and assess compliance with service level agreements (SLAs).
Definitions: Contracts will define various relevant terms (security, privacy, breach notification requirements, etc.) to ensure a common understanding between the two parties.
Termination: The contract will define the terms by which it may be ended, including failure to provide service, failure to pay, a set duration, or with a certain amount of notice.
Litigation: Contracts may include litigation terms such as requiring arbitration rather than a trial in court.
Assurance: Assurance requirements set expectations for both parties. For example, the provider may be required to provide an annual SOC 2 audit report to demonstrate the effectiveness of its controls.
Compliance: Cloud providers will need to have controls in place and undergo audits to ensure that their systems meet the compliance requirements of regulations and standards that apply to their customers.
Access to Cloud/Data: Contracts may ensure access to services and data to protect a customer against vendor lock-in.

95
Q

Jamarcus is looking for a security control that can be used to protect a database within their Platform as a Service (PaaS). What they are concerned about within this business is that the data must be protected. It cannot be viewed by anyone that is not approved, and it cannot be sent anywhere it should not be.

What tool can accomplish this?

A. Identity and Access Management (IAM)
B. Data Loss Prevention (DLP)
C. Transport Layer Security (TLS)
D. Federated identification

A

B. Data Loss Prevention (DLP)

Explanation:
Data loss prevention refers to a set of controls and practices put in place to ensure that data is only accessible to those authorized to access it. DLP also protects data from being lost or improperly used.

IAM is used to control what someone has access to and with what permissions. It is not used to control where data is sent.

TLS is used to encrypt data in transit so that it is not visible to someone who should not be able to see it. It does not control where data can be sent.

Federated identification is another way to control who has access to something. It is not used to control where data is sent.

So, the only tool here that does everything needed by the question is DLP.
Reference:

96
Q

A Platform as a Service (PaaS) provider knows that their potential customers need to have a level of confidence in their security. A cloud auditor has done a thorough audit of the provider's environment using the Statement on Standards for Attestation Engagements (SSAE) 18 methodology.

Which audit report can they now provide to the general public?

A. Service Organization Control (SOC) 2 Type II
B. Service Organization Control (SOC) 2 Type I
C. Service Organization Control (SOC) 1 Type II
D. Service Organization Control (SOC) 3

A

D. Service Organization Control (SOC) 3

Explanation:
SOC 3 reports are meant to be consumed and reviewed by the general public. SOC 3 allows for a much wider audience than the other reports listed. SOC 3 reports are meant to instill confidence that the organization’s systems are secure.

A SOC 3 report is the result of first doing a SOC 2 audit within the business. The SOC 3 is the public-level document indicating that the cloud auditor performed the assessment and is providing a seal of approval.

A SOC 2 Type II would be the preferred report to have as a customer, but the service provider does not have to release that. That is the point of the SOC 3.

A SOC 2 Type I report indicates the status of the controls and the suitability of their design at a point in time. A SOC 2 Type II report shows that the controls have actually been in operation over a period of time (for which there is no set minimum). A SOC 2 audit looks at security controls. A SOC 1 audit looks at controls that can have a financial impact on the customer.

97
Q

VM escape attacks are MOST closely related to which of the following virtualization security considerations?

A. Serverless Technology
B. Hypervisor Security
C. Ephemeral Security
D. Container Security

A

B. Hypervisor Security

Explanation:
Some important security considerations related to virtualization include:

Hypervisor Security: The primary virtualization security concern is isolation, or ensuring that different VMs can't affect each other or read each other's data. VM escape attacks occur when a malicious VM exploits a vulnerability in the hypervisor or virtualization platform to break this isolation.
Container Security: Containers are self-contained packages that include an application and all of the dependencies that it needs to run. Containers improve portability but have security concerns around poor access control and container misconfigurations.
Ephemeral Computing: Ephemeral computing is a major benefit of virtualization, where resources can be spun up and destroyed at need. This enables greater agility and reduces the risk that sensitive data or resources will be lying around abandoned.
Serverless Technology: Serverless applications are deployed in environments managed by the cloud service provider. Outsourcing server management can make serverless systems more secure, but it also means that organizations can’t deploy traditional security solutions that require an underlying OS to operate.

98
Q

An engineer is using DREAD for threat modeling. Which is the correct algorithm when using DREAD to determine the quantitative value for risk and threats?

A. RISK_DREAD = (Damage - Reproducibility + End users affected - Awareness + Discoverability) / 10
B. RISK_DREAD = (Damage + Restoration Time + Exploitability + Affected Users + Discoverability) / 5
C. RISK_DREAD = (Damage - Recoverability + Exploitability + Affected Users + Discoverability) / 10
D. RISK_DREAD = (Damage + Reproducibility + Exploitability + Affected Users + Discoverability) / 5

A

D. RISK_DREAD = (Damage + Reproducibility + Exploitability + Affected Users + Discoverability) / 5

Explanation:
DREAD looks at the categories of damage potential, reproducibility, exploitability, affected users, and discoverability. Risk is given a value of 0 to 10 in each category, with 10 being the highest risk value. The algorithm used in DREAD is RISK_DREAD = (Damage + Reproducibility + Exploitability + Affected Users + Discoverability) / 5
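
The formula translates directly into code. The sketch below is a straightforward Python rendering of the DREAD calculation; the function name and the 0-10 range check are additions for illustration.

```python
# DREAD: average of five category scores, each rated 0-10.
def risk_dread(damage, reproducibility, exploitability,
               affected_users, discoverability):
    scores = (damage, reproducibility, exploitability,
              affected_users, discoverability)
    if not all(0 <= s <= 10 for s in scores):
        raise ValueError("each DREAD category is scored from 0 to 10")
    return sum(scores) / 5

print(risk_dread(8, 5, 6, 9, 7))  # 7.0 - the averaged risk value
```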

99
Q

Which of the following relates to an organization’s efforts to operate its cloud infrastructure in a way that complies with applicable laws and regulations?

A. Auditability
B. Security
C. Governance
D. Privacy

A

C. Governance

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.

100
Q

You work for a real estate company that is defining the protection mechanisms it will use for a Platform as a Service (PaaS) deployment that will store data in the cloud. The company will be using block storage technology to hold its pre-sales documents. In what access model is the owner responsible for defining the restrictions on a per-document basis?

A. Role-based Access Control (RBAC)
B. Discretionary Access Control (DAC)
C. Mandatory Access Control (MAC)
D. Non-discretionary Access Control (NDAC)

A

B. Discretionary Access Control (DAC)

Explanation:
The owner of a document is responsible for defining the limits on a per-document basis under a Discretionary Access Control (DAC) model. In practice, this means the owner manually configures sharing and permissions for each document; it is up to the owner's discretion whether to grant access to someone.

MAC is a very strict model commonly used to protect a country's sensitive documents (e.g., military top secret files). Access is controlled based on the classification of the data, the user's clearance level, and their need to know.

NDAC is defined by the U.S. government as another name for MAC.

RBAC is an access control model that defines access based on the user’s role within the organization.
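
A toy Python sketch of the DAC idea follows: each document carries an owner-managed access list, and only the owner may grant access. The class design and user names are illustrative assumptions.

```python
# Discretionary Access Control: the document owner manages access directly.
class Document:
    def __init__(self, owner: str):
        self.owner = owner
        self.allowed = {owner}  # the owner always has access

    def grant(self, granter: str, user: str) -> None:
        # Only the owner may change this document's access list.
        if granter != self.owner:
            raise PermissionError("only the owner can grant access")
        self.allowed.add(user)

    def can_read(self, user: str) -> bool:
        return user in self.allowed

doc = Document(owner="alice")
doc.grant("alice", "bob")       # the owner shares at her discretion
print(doc.can_read("bob"))      # True
print(doc.can_read("mallory"))  # False
```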

101
Q

A cloud architect is designing a cloud solution that will ensure that incidents are managed as effectively as possible. Logs will be generated by everything from virtual machines to firewalls. They are going to create a sink to send those logs to a centralized location. What tool can then be used to analyze those logs?

A. Security Information and Event Manager (SIEM)
B. A Service Organization Controls (SOC) 2 audit
C. File Integrity Monitor (FIM)
D. Vulnerability assessment

A

A. Security Information and Event Manager (SIEM)

Explanation:
A Security Information and Event Manager (SIEM) system collects and correlates logs from various sources on the network (servers, firewalls, etc.). SIEM systems also provide a great way for administrators to troubleshoot incidents.

A vulnerability assessment is used to assess deployed systems to determine if there are potential vulnerabilities that need to be patched, among other uses.

A SOC 2 audit is usually performed by external auditors, in particular for cloud providers, so that cloud customers understand the level of security they can expect from the provider.

A File Integrity Monitor (FIM) watches critical files on servers for unexpected changes and typically runs external to the monitored server so that it is more difficult for bad actors to tamper with its records.

102
Q

An audit must have parameters to ensure the efforts are focused on relevant areas that can be effectively audited. Setting these parameters for an audit is commonly known as which of the following?

A. Audit scope restrictions
B. Audit remediation
C. Audit objectives
D. Audit policy

A

A. Audit scope restrictions

Explanation:
Audit scope restrictions refer to the process of defining parameters for an audit. The rationale for audit scope restrictions is that audits are costly and often require the involvement of highly skilled content experts. Additionally, system auditing can impair system performance and, in some situations, necessitate the shutdown of production systems. Carefully crafted scope constraints help ensure that production systems are not harmed.

Audit objectives cover the reason for the audit and what the organization wants to learn as a result of it.

Audit remediation could be the recommendations that the auditor provides after the audit assessment is complete. This would be based on any of the findings that the auditor has. A finding is something that the auditor finds that does not match the requirements based on the objectives of the audit.

The policy would contain management’s goals and objectives on the topic of audits.

103
Q

Which of the following terms is LEAST related to the others?

A. Resiliency
B. IaC
C. Clustering
D. HA

A

B. IaC

Explanation:
Clustering is commonly used as part of high availability (HA) schemes for resiliency and redundancy. Infrastructure as Code (IaC), by contrast, is a configuration management practice, making it the least related term.

104
Q

Which of the following is a US regulation designed to protect investors by requiring publicly-traded companies to make annual financial disclosures?

A. SCA
B. GLBA
C. PCI DSS
D. SOX

A

D. SOX

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests made to cloud providers. US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens’ data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes Oxley (SOX): SOX is a US regulation that applies to publicly-traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.

105
Q

Licensing is a concern that is MOST related to which of the following?

A. Open Source Software
B. API Security
C. Supply Chain Security
D. Third-Party Software

A

D. Third-Party Software

Explanation:
Some important considerations for secure software development in the cloud include:

API Security: In the cloud, the use of microservices and APIs is common. API security best practices include identifying all APIs, performing regular vulnerability scanning, and implementing access controls to manage access to the APIs.
Supply Chain Security: An attacker may be able to access an organization’s systems via access provided to a partner or vendor, or a failure of a provider’s systems may place an organization’s security at risk. Companies should assess their vendors’ security and ability to provide services via SOC2 and ISO 27001 certifications.
Third-Party Software: Third-party software may contain vulnerabilities or malicious functionality introduced by an attacker. Also, the use of third-party software is often managed via licensing, with whose terms an organization must comply. Visibility into the use of third-party software is essential for security and legal compliance.
Open Source Software: Most software uses third-party and open-source libraries and components, which can include malicious functionality or vulnerabilities. Developers should use software composition analysis (SCA) tools to build a software bill of materials (SBOM) to identify any potential vulnerabilities in components used by their applications.

106
Q

Jada is currently vetting the tokenization process of her organization’s cloud provider. They are using this tokenization process to protect payment card data that will be tied to their own internally created application. What is one risk that Jada should ensure is limited during the tokenization process?

A. Price changes
B. Vendor lock-in
C. File type changes
D. Service Level Agreement (SLA) modifications

A

B. Vendor lock-in

Explanation:
Vendor lock-in is a scenario in which a cloud customer is tied to and dependent on one cloud provider without the ability to move to another. Cloud customers should ensure that anything done with the cloud provider will not cause this type of lock-in. If anything in how the tokenization is performed locks them into that format after they adapt their internal application, it could prevent them from moving easily to a different vendor in the future.

Price changes are annoying, but they are a financial risk, not a security risk. The focus here is information security.

SLA modifications can be annoying or helpful, depending on what is being modified, why, and how. So it is not as critical a risk as vendor lock-in.

File type changes could be a problem somewhere, but not here. The potential lock-in problem is not a change of the data file type; it is how the data is converted to a token and then back again, as the simplified sketch below shows.
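
Here is the simplified sketch mentioned above: a toy Python token vault. Real tokenization services are far more involved, but this shows why the token format and the token-to-value mapping matter for portability between providers.

```python
# Tokenization: swap a sensitive value for a random token; the vault alone
# can reverse the mapping, unlike de-identification or anonymization.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)  # random; carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
print(token)                    # safe to store in the application
print(vault.detokenize(token))  # original card number restored
```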