Pocket Prep 15 Flashcards

1
Q

Wyatt has just discovered a problem that needs to be fixed. A server will successfully accept a Uniform Resource Identifier (URI) of “file:///etc/passwd” entered on one of their websites. What is this an example of?

A. Injection
B. Insecure design
C. Cryptographic failure
D. Security Misconfiguration

A

D. Security Misconfiguration

Explanation:
This is an example of CWE-611: Improper Restriction of XML External Entity Reference. The OWASP Top 10 merged XML External Entities (XXE) into Security Misconfiguration in 2021.

Injection would be something like adding a Structured Query Language (SQL) command to the URL. This URI requests a specific folder and file on the server, which is not the same.

Insecure design could be an underlying cause here, but it is a step removed from the actual flaw. That category is about performing threat modeling and following secure design patterns and principles.

Cryptographic failure was known on the previous OWASP list as Sensitive Data Exposure. The exposure is the symptom; the cause is a failure to encrypt the data, or to encrypt it properly.
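
As an illustration (not part of the exam content), a minimal Python sketch of the usual fix: allow-list the URI schemes the server will accept so that a file:// URI like the one above is rejected. The function name and allowed schemes here are illustrative assumptions.

```python
from urllib.parse import urlparse

# Hypothetical allow-list; a real server would tailor this to its needs
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_uri(uri: str) -> bool:
    """Reject any URI whose scheme could reach local resources."""
    return urlparse(uri).scheme.lower() in ALLOWED_SCHEMES

print(is_safe_uri("file:///etc/passwd"))       # False
print(is_safe_uri("https://example.com/page")) # True
```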

2
Q

Antonia has recently been hired by a cancer treatment facility. One of the first training programs that she is required to go through at the office is related to the protection of individually identifiable health information. Which law is this related to and which country does it apply to?

A. Health Insurance Portability and Accountability Act (HIPAA), Canada
B. Gramm-Leach-Bliley Act (GLBA), USA
C. Health Insurance Portability and Accountability Act (HIPAA), USA
D. General Data Protection Regulation (GDPR), Germany

A

C. Health Insurance Portability and Accountability Act (HIPAA), USA

Explanation:
The Health Insurance Portability and Accountability Act (HIPAA) is concerned with the security controls and confidentiality of Protected Health Information (PHI). It’s vital that anyone working in any healthcare facility be aware of HIPAA regulations.

The Gramm-Leach-Bliley Act, officially named the Financial Modernization Act of 1999, focuses on PII as it pertains to financial institutions, such as banks.

GDPR is an EU-wide regulation that encompasses organizations across all industries.

The Privacy Act of 1988 is an Australian law that requires the protection of personal data.

3
Q

Defining clear, measurable, and usable metrics is a core component of which of the following operational controls and standards?

A. Continuity Management
B. Change Management
C. Information Security Management
D. Continual Service Improvement Management

A

D. Continual Service Improvement Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is defining clear, measurable, and usable metrics that accurately reflect the current state of services and their potential for improvement.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
4
Q

Holand is working with the US government and is in charge of securing some of the information systems within her agency. What regulation requires her agency to protect the government systems and the data that they hold?

A. Federal Information Security Management Act (FISMA)
B. Health Insurance Portability and Accountability Act (HIPAA)
C. Sarbanes-Oxley Act (SOX)
D. Privacy Act of 1988

A

A. Federal Information Security Management Act (FISMA)

Explanation:
FISMA was enacted to provide a comprehensive framework for securing federal government information and systems. Its primary objectives are to enhance the security of federal information systems, promote a risk-based approach to information security management, and establish a consistent level of security across federal agencies.

The Sarbanes-Oxley Act (SOX) is a US federal law enacted in 2002 in response to accounting scandals at several major corporations. It aims to protect investors and improve the accuracy and reliability of corporate financial disclosures.

HIPAA is a United States federal law enacted in 1996 that establishes privacy and security standards for protecting individuals’ health information. HIPAA applies to various entities, including healthcare providers, health plans, and healthcare clearinghouses as well as their business associates.

The Australian Privacy Act of 1988 is a federal law in Australia that governs the handling of personal information by Australian government agencies and certain private sector organizations. The Act aims to protect the privacy of individuals by regulating the collection, use, disclosure, and storage of their personal information.

5
Q

Eila works for a large government contractor. As their lead information security professional working on the business case for their potential move to the cloud, she knows that it is critical to define and defend her reasons for moving to the cloud. Of the following statements, which is the MOST accurate?

A. Cloud platforms offer increased scalability and performance
B. Traditional data centers and cloud environments have the exact same risks
C. There are no security risks associated with moving to a cloud environment
D. Cloud platforms are always less expensive than on-prem solutions

A

A. Cloud platforms offer increased scalability and performance

Explanation:
Cloud environments are attractive to organizations because they offer increased scalability and performance.

While it’s possible that moving to the cloud can be less expensive than traditional data centers, that is not always the case. Sometimes cloud platforms can come with hidden costs that weren’t initially expected. Cloud platforms come with their own set of security risks and, while some are the same as the risks you’d see in a traditional data center, some are different as well.

6
Q

Which of the following characteristics of cloud computing enables a cloud provider to operate cost-effectively by distributing costs across multiple cloud customers?

A. On-Demand Self-Service
B. Metered Service
C. Resource Pooling
D. Rapid Elasticity and Scalability

A

C. Resource Pooling

Explanation:
The six common characteristics of cloud computing include:

Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
7
Q

The government’s Unclassified/Confidential/Secret/Top Secret system classifies data based on which of the following?

A. Ownership
B. Type
C. Sensitivity
D. Criticality

A

C. Sensitivity

Explanation:
Data owners are responsible for data classification, and data is classified based on organizational policies. Some of the criteria commonly used for data classification include:

Type: Specifies the type of data, including whether it has personally identifiable information (PII), intellectual property (IP), or other sensitive data protected by corporate policy or various laws.
Sensitivity: Sensitivity refers to the potential results if data is disclosed to an unauthorized party. The Unclassified, Confidential, Secret, and Top Secret labels used by the U.S. government are an example of sensitivity-based classifications.
Ownership: Identifies who owns the data if the data is shared across multiple organizations, departments, etc.
Jurisdiction: The location where data is collected, processed, or stored may impact which regulations apply to it. For example, GDPR protects the data of EU citizens.
Criticality: Criticality refers to how important data is to an organization’s operations.
8
Q

Which of the following roles is defined as the role that authorizes the processing of personal data according to the European Union (EU) General Data Protection Regulation (GDPR)?

A. Data owner
B. Data processor
C. Data custodian
D. Data controller

A

D. Data controller

Explanation:
The data controller is “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data,” according to the GDPR. In other words, the controller is the role that authorizes the processing.

The data processor is “a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller,” according to the GDPR.

The data owner is defined by the Cloud Security Alliance in their Guidance 4.0 as someone responsible for a piece or set of data, including classifying it. This is also how the term is used within governments, and has been for decades.

The data custodian is in possession of the data and needs to follow the corporate policies regarding its handling.

9
Q

Yair and his team are building a piece of software that will be deployed into their cloud environment. They use a variety of virtual machines throughout the business, ranging from virtual servers to virtual desktops. The software to be deployed needs to run on multiple different operating systems, so they need something that allows for portability of the application.

Which of the following technologies could they use?

A. Hypervisors
B. Virtual machines
C. Application virtualization
D. Orchestration

A

C. Application virtualization

Explanation:
Application virtualization is a technology that allows applications to run in isolated environments, separate from the underlying operating system and hardware. It enables the delivery of applications to end users without the need for traditional installation or compatibility issues. Instead of installing applications directly on individual machines, they are encapsulated into virtualized packages that can be executed on demand. Examples are Microsoft App-V, VMware ThinApp, and Citrix Virtual Apps.

Hypervisors are software or firmware components that enable the virtualization of physical computer hardware. They allow multiple Virtual Machines (VMs) to run on a single physical machine, effectively abstracting and managing the underlying hardware resources. However, a VM is built for a specific hypervisor and only works with that hypervisor, which makes hypervisors a poor fit when the question asks for portability across many operating systems.

Cloud orchestration is the automated management and coordination of multiple cloud resources and services to ensure efficient and optimized delivery of cloud-based applications and workflows. It involves the automation of various tasks, such as provisioning, configuration, deployment, scaling, and monitoring of cloud resources.

10
Q

Padma is the information security manager working with the DevOps teams. Their goal is to create an environment that allows the teams to control the roll-out of infrastructure to production. They are looking for an approach that integrates software development techniques such as version control and continuous integration.

What are they looking for?

A. Database as a Service (DBaaS)
B. Identity as a Service (IDaaS)
C. Immutable infrastructure
D. Infrastructure as Code (IaC)

A

D. Infrastructure as Code (IaC)

Explanation:
The use of Infrastructure as Code allows the DevOps team to control the deployment of infrastructure, such as virtual servers, to production. Deployment to production needs to be very carefully controlled; that is a lesson that has been understood for quite some time. If the infrastructure is defined and stored as files, it is possible to deploy defined and updated systems without worry. IaC integrates version control and continuous integration techniques.
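
To make the idea concrete, here is a hedged Python sketch (not any specific tool's API) of the core IaC loop: the desired infrastructure lives in a versionable data structure, and a plan step diffs it against the current state, much like `terraform plan`. All resource names here are hypothetical.

```python
# Desired state, as it might be committed to version control
DESIRED = {
    "web-1": {"image": "ubuntu-22.04", "size": "small"},
    "web-2": {"image": "ubuntu-22.04", "size": "small"},
}

def plan(current: dict, desired: dict) -> dict:
    """Diff the live environment against the declared state."""
    return {
        "create":  [n for n in desired if n not in current],
        "destroy": [n for n in current if n not in desired],
        "update":  [n for n in desired
                    if n in current and current[n] != desired[n]],
    }

# Live environment is missing web-2 and has a stray db-1
current = {
    "web-1": {"image": "ubuntu-22.04", "size": "small"},
    "db-1": {"image": "debian-12", "size": "large"},
}
print(plan(current, DESIRED))
# {'create': ['web-2'], 'destroy': ['db-1'], 'update': []}
```

Because the desired state is plain data in files, it can be code-reviewed, versioned, and applied repeatedly with the same result.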

Immutable infrastructure is the idea that infrastructure is never upgraded or changed in place. Once a VM is deployed, it stays as it is configured. If an upgrade is necessary, a new VM is built and deployed; once it is functional, traffic is redirected from the old VM to the new one. Mutable infrastructure is when the deployed VM can be changed, often with configuration management tools such as Chef or Puppet.

DBaaS is a Platform as a Service (PaaS) offering from a cloud provider. It could be a Structured Query Language (SQL) database, a NoSQL database, etc.

IDaaS is a service to identify and authenticate users. Facebook (Meta), Google, and others provide this service using Security Assertion Markup Language (SAML) or other options.

11
Q

Which of the Trust Services principles must be included in a Service Organization Controls (SOC) 2 audit?

A. Availability
B. Privacy
C. Security
D. Confidentiality

A

C. Security

Explanation:
The Trust Services Criteria from the American Institute of Certified Public Accountants (AICPA) for the Service Organization Controls (SOC) 2 audit report are made up of five key principles: Security, Availability, Confidentiality, Processing Integrity, and Privacy. Security is always required as part of a SOC 2 audit. The other four principles are optional.

12
Q

Royce works for a Cloud Service Provider (CSP). She has been involved with the setup and configuration of the servers in the data center. The hypervisors they have installed allow for the virtualization of servers and desktops for the customer to purchase and use.

If the CSP is selling Infrastructure as a Service (IaaS), what is the breakdown of responsibility under the cloud shared security model?

A. The Cloud Service Provider (CSP) is responsible for the hypervisor and the Cloud Service Customer (CSC) is responsible for the Virtual Machines (VMs)
B. The Cloud Service Customer (CSC) is responsible for the hypervisor and the Cloud Service Provider (CSP) is responsible for the Virtual Machines (VMs)
C. The Cloud Service Customer (CSC) is responsible for the hypervisor and the Cloud Service Customer (CSC) is responsible for the Virtual Machines (VMs)
D. The Cloud Service Provider (CSP) is responsible for the hypervisor and the Cloud Service Provider (CSP) is responsible for the Virtual Machines (VMs)

A

A. The Cloud Service Provider (CSP) is responsible for the hypervisor and the Cloud Service Customer (CSC) is responsible for the Virtual Machines (VMs)

Explanation:
The shared security model does differ with each cloud provider. However, we can make the assumption that the CSP is responsible for the hypervisor. It is effectively the Operating System (OS) for the servers that Royce is installing.

In IaaS, the customer then buys and brings their OSs with them for the virtual machines. Since the OSs belong to the customer, it is their responsibility to care for them.

13
Q

Which of the following tools looks for vulnerabilities in the source code of an application?

A. DAST
B. IAST
C. SCA
D. SAST

A

D. SAST

Explanation:
Static Application Security Testing (SAST): SAST tools inspect the source code of an application for vulnerable code patterns. It can be performed early in the software development lifecycle but can’t catch some vulnerabilities, such as those visible only at runtime.
Dynamic Application Security Testing (DAST): DAST bombards a running application with anomalous inputs or attempted exploits for known vulnerabilities. It has no knowledge of the application’s internals, so it can miss vulnerabilities. However, it is capable of detecting runtime vulnerabilities and configuration errors (unlike SAST).
Interactive Application Security Testing (IAST): IAST places an agent inside an application and monitors its internal state while it is running. This enables it to identify unknown vulnerabilities based on their effects on the application.
Software Composition Analysis (SCA): SCA is used to identify the third-party dependencies included in an application and may generate a software bill of materials (SBOM). This enables the developer to identify vulnerabilities that exist in this third-party code.
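
For example, the classic pattern a SAST tool flags is untrusted input concatenated into a SQL query. This hypothetical Python/SQLite snippet shows the flagged pattern and the parameterized fix side by side:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # String concatenation: the pattern SAST tools flag as SQL injection
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the fix SAST tools recommend
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload slips through the unsafe version only
payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # [(1,)] -- injection succeeds
print(find_user_safe(conn, payload))    # []     -- treated as a literal name
```

Note that a SAST tool finds this by pattern-matching the source alone; it never needs to run the code, which is why it can be used so early in the development lifecycle.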

14
Q

Properly scaling network and computing resources is an important part of which of the following system and communication protections?

A. Cryptographic Key Establishment and Management
B. Security Function Isolation
C. Denial-of-Service Prevention
D. Boundary Protection

A

C. Denial-of-Service Prevention

Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations defines 51 security controls for systems and communication protection. Among these are:

Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
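
As a hedged illustration of DoS mitigation in practice, a token-bucket rate limiter is one common building block for protecting a service from request floods. The class below is a simplified sketch, not a production implementation:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling `rate` tokens per second."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=0.0, capacity=2)  # rate 0 so the demo is deterministic
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
```
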
15
Q

Which of the following is NOT an example of a “something you know” factor for MFA?

A. Password
B. PIN
C. Security Question
D. OTP

A

D. OTP

Explanation:
Multi-factor authentication requires a user to provide multiple authentication factors to gain access to their account. These factors must come from two or more of the following categories:

Something You Know: Passwords, security questions, and PINs are examples of knowledge-based factors.
Something You Have: These factors include hardware tokens, smart cards, or smartphones that can receive or generate a one-time password (OTP).
Something You Are: Biometric factors include fingerprints, facial recognition, and similar technologies.

While these are the most common types of MFA factors, others can be used as well. For example, a “somewhere you are” factor could use an IP address or geolocation to determine the likelihood that a request is authentic.
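
As an aside, the OTPs generated by authenticator apps are typically time-based (TOTP, RFC 6238): an HMAC over the current 30-second counter, truncated to six digits. A minimal Python sketch using only the standard library:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test secret "12345678901234567890" (base32-encoded), at time = 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
```

Because the code is derived from a shared secret held on the device, it counts as "something you have," not "something you know."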

16
Q

Padma works for a financial trading company and is in charge of revising its data retention policy. She knows that it is essential to control how long data is maintained. Several laws demand that data not remain in the company's control longer than it should. Which phase of the data lifecycle requires the most attention over time?

A. Share
B. Use
C. Archive
D. Destroy

A

C. Archive

Explanation:
The data retention policy will have an effect on the data lifecycle’s archive phase. It is necessary to review the data that is in the archive phase to make sure that data is pulled out and destroyed when necessary.

The other phases do demand that data is protected appropriately. The share phase occurs when data is being shared with someone else. The use phase is when a user is utilizing data that was created previously.

The destroy phase occurs when data is pulled out of storage or archival and removed from existence within the business. This is almost the right answer, but it is the archival data that must be reviewed, and reviewing that data properly can take a great deal of time and energy.
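
The kind of archive review described above can be automated in part. A hedged Python sketch (the categories and retention periods here are made up for illustration):

```python
from datetime import date, timedelta

# Hypothetical retention periods set by the data retention policy
RETENTION = {
    "trade_records": timedelta(days=7 * 365),
    "marketing": timedelta(days=365),
}

def overdue_for_destruction(category: str, archived_on: date, today: date) -> bool:
    """Flag archived data whose retention period has expired."""
    return today - archived_on > RETENTION[category]

print(overdue_for_destruction("marketing", date(2020, 1, 1), date(2022, 1, 1)))  # True
print(overdue_for_destruction("marketing", date(2023, 6, 1), date(2024, 1, 1)))  # False
```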

17
Q

Which of the following is a system/subsystem product certification?

A. Common Criteria
B. G-Cloud
C. PCI DSS
D. FedRAMP

A

A. Common Criteria

Explanation:
The Common Criteria are system/subsystem product certifications that show the level of testing that a particular system or subsystem has undergone. Cloud service providers may have their environment as a whole verified against certain standards, including:

ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud.
PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources.
18
Q

Which of the following schemes relies on a lookup table stored in a secure environment?

A. Tokenization
B. Encryption
C. Hashing
D. Masking

A

A. Tokenization

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4, the Secure Hash Standard, is the US government standard for hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
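
The lookup-table idea behind tokenization can be sketched in a few lines of Python. This is only an illustration of the concept; a real token vault adds access control, auditing, and durable storage:

```python
import secrets

class TokenVault:
    """Secure-side lookup table mapping random tokens to the real values."""
    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # no mathematical link to the value
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")   # untrusted systems see only this
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Unlike encryption, the token cannot be reversed by any key; the only way back to the data is through the vault, which is why the table must live in a secure environment.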
19
Q

Which of the following common characteristics of cloud computing enables cloud customers to access cloud resources on an as-needed basis?

A. On-Demand Self-Service
B. Multitenancy
C. Metered Service
D. Broad Network Access

A

A. On-Demand Self-Service

Explanation:
The six common characteristics of cloud computing include:

Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
20
Q

During which phase of the TLS process is the connection between the two parties negotiated and established?

A. TLS Functional Protocol
B. TLS Negotiation Protocol
C. TLS Handshake Protocol
D. TLS Record Protocol

A

C. TLS Handshake Protocol

Explanation:
Transport Layer Security (TLS) is broken up into two main phases: TLS Handshake Protocol and TLS Record Protocol. During the TLS Handshake Protocol, the TLS connection between the two parties is negotiated and established.

During the TLS Record Protocol, the actual secure communications method for transmitting data occurs.

It is not called the TLS negotiation protocol; it is the handshake protocol.

TLS functional protocol is not a real phase.

This may be a bit more detail regarding TLS than is needed, but a little extra technical knowledge is useful for this test.

21
Q

Which of the following organizations publishes security standards applicable to any systems used by the federal government and its contractors?

A. International Standards Organization (ISO)
B. Service Organization Controls (SOC)
C. National Institute of Standards and Technology (NIST)
D. Information Systems Audit and Control Association (ISACA)

A

C. National Institute of Standards and Technology (NIST)

Explanation:
The National Institute of Standards and Technology (NIST) is part of the United States government and is responsible for publishing security standards applicable to any systems used by the federal government and its contractors, although the standards are freely available for anyone to use.

SOC is the type of audit report that results from SSAE 16/18 or ISAE 3402 audits. ISACA is the organization behind the CISM and CISA certifications; it is fundamentally an association of IT auditors, although it has expanded greatly over the years. ISO is the international body that creates standards for the world to use.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

As you are drafting your organization’s cloud data destruction policy, which of the following is NOT a consideration that may affect the policy?

A. Data discovery
B. Compliance and governance
C. Retention requirements
D. Business processes

A

A. Data discovery

Explanation:
You should not consider data discovery when determining an organization's data destruction policy. While you may discover data during other stages of the data lifecycle, it is irrelevant at the time of destruction: data discovery should have been completed much earlier in the lifecycle, not during the destruction phase. Remember, this exam is about the theory of what we should be doing within a business, not what often happens in practice.

It is necessary to consider the laws that the corporation must be in compliance with when writing the data destruction policy. For example, GDPR says that you can retain the personal data collected by the business for a reasonable period of time. After that point, the data should be destroyed properly.

GDPR saying you can only retain data for a reasonable period of time is addressing data retention requirements.

Business processes should be considered while developing a data destruction policy for any data that is not regulated by law.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

In which of the following cloud models will an organization’s infrastructure DEFINITELY NOT be hosted in its own data center?

A. Community Cloud
B. Hybrid Cloud
C. Private Cloud
D. Public Cloud

A

D. Public Cloud

Explanation:
The physical environment where cloud resources are hosted depends on the cloud model in use:

Public Cloud: Public cloud infrastructure will be hosted by the CSP within their own data centers.
Private Cloud: Private clouds are usually hosted by an organization within its own data center. However, third-party CSPs can also offer virtual private cloud (VPC) services.
Community Cloud: In a community cloud, one member of the community hosts the cloud infrastructure in their data center. Third-party CSPs can also host community clouds in an isolated part of their environment.

Hybrid and multi-cloud environments will likely have infrastructure hosted by different organizations. A hybrid cloud combines public and private cloud environments, and a multi-cloud infrastructure uses multiple cloud providers’ services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

Having the ability to move data to another cloud provider without having to re-enter it is known as:

A. Interoperability
B. Reversibility
C. Portability
D. Multitenancy

A

C. Portability

Explanation:
The ability to move data between multiple cloud providers is known as cloud data portability, while cloud application portability refers, instead, to the ability to move an application between cloud providers.

Multitenancy is the term used to describe a cloud provider housing multiple customers and/or applications within one server.

Interoperability is the ability of two different systems to exchange and use a piece of data, such as one user creating a Microsoft Word document on a Mac and another user opening and using that Word document on a Microsoft Windows machine.

Reversibility is the ability to retrieve data from a cloud provider upon termination of the contract as well as having the data be removed securely from the cloud provider’s systems.

These terms are defined in ISO/IEC 17788, the ISO counterpart to NIST SP 800-145. Both are freely available, and it would be a great idea to look at them.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

Alexei works for a Russian bank and knows as an information security professional that they must be careful with the personal information about their customers that they collect and maintain. Which Russian law states that any collecting, processing, or storing of data on Russian citizens must be done from systems that are physically located in the Russian Federation?

A. General Data Protection Regulation (GDPR)
B. Act on the Protection of Personal Information
C. Gramm-Leach-Bliley Act (GLBA)
D. Federal Law 526-FZ

A

D. Federal Law 526-FZ

Explanation:
Russian law 526-FZ was enacted in September of 2015. The law states that any collecting, processing, or storing of personal or private data that pertains to Russian citizens must be done from systems and databases that are physically located within the Russian Federation.

GDPR is a European Union regulation that governs the collection, processing, protection, and storage of personal data and applies directly in all EU member states.

GLBA is a U.S. Act that requires financial holding companies to protect the personal data that they have in their possession.

Act on the Protection of Personal Information is a law in Japan that is similar to the requirements of GDPR.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

HIPAA protects which of the following types of private data?

A. Payment Data
B. Personally Identifiable Information
C. Protected Health Information
D. Contractual Private Data

A

C. Protected Health Information

Explanation:
Private data can be classified into a few different categories, including:

Personally Identifiable Information (PII): PII is data that can be used to uniquely identify an individual. Many laws, such as the GDPR and CCPA/CPRA, provide protection for PII.
Protected Health Information (PHI): PHI includes sensitive medical data collected regarding patients by healthcare providers. In the United States, HIPAA regulates the collection, use, and protection of PHI.
Payment Data: Payment data includes sensitive information used to make payments, including credit and debit card numbers, bank account numbers, etc. This information is protected under the Payment Card Industry Data Security Standard (PCI DSS).
Contractual Private Data: Contractual private data is sensitive data that is protected under a contract rather than a law or regulation. For example, intellectual property (IP) covered under a non-disclosure agreement (NDA) is contractual private data.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

A Fortune 500 company has just performed a Disaster Recovery (DR) test with a group of people representing a wide variety of roles and responsibilities. The gathered individuals talked through the steps in order while verifying that the DR document contained the needed steps. The team members described how they would carry out their responsibilities in a given BC/DR scenario.

Which type of disaster recovery plan testing are they conducting?

A. Full
B. Parallel
C. Tabletop
D. Simulation

A

C. Tabletop

Explanation:
In a tabletop exercise, participants are provided with scenarios and asked to describe how they will carry out their assigned activities in a certain business continuity/disaster recovery scenario. This enables members to comprehend their roles amid a disaster.

A good example of a simulation today is a fire drill: a fire is not actually started to practice exiting the building; it is simulated. Simulations were more common in the past, especially in on-premises data centers. The exercise would involve walking through the data center looking for the spare server or drive in the closet, locating the CD with the operating system on it, and so on. This is not typical for cloud environments.

A parallel test involves starting the alternate location’s servers and services to see that they function. In a parallel test, the primary business servers and services should not be disturbed.

A full test does shut down the servers and services that the company is actively using so that a failover to the alternate servers and services can be done.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

Which of the following solutions helps to reduce the number of passwords that a user needs to maintain?

A. Single Sign-On
B. Multi-Factor Authentication
C. Secrets Management
D. Federated Identity

A

A. Single Sign-On

Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
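As a small illustration of the secrets-management point above (a sketch, not any product's API), the first step is generating secrets from a cryptographically secure source rather than an ordinary random number generator. Python's stdlib `secrets` module is designed for exactly this; the function names below are hypothetical helpers.

```python
import secrets
import string

def generate_api_key(nbytes: int = 32) -> str:
    # token_urlsafe draws from the OS CSPRNG, suitable for API keys.
    return secrets.token_urlsafe(nbytes)

def generate_password(length: int = 16) -> str:
    # secrets.choice, unlike random.choice, is safe for secret material.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

key = generate_api_key()
pwd = generate_password()
print(len(pwd))  # 16
```

Secure storage (a vault, never source code) is the other half of secrets management; generation is only the start.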
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

An organization is concerned about running afoul of GDPR regulations regarding jurisdictional boundaries. Which phase of the cloud data lifecycle are they MOST likely to be at?

A. Archive
B. Create
C. Destroy
D. Share

A

D. Share

Explanation:
The cloud data lifecycle has six phases, including:

Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data.
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure/crypto shredding.
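The crypto-shredding idea in the Destroy phase can be sketched as follows. This is a toy for demonstration only: real systems use a vetted cipher such as AES-GCM, not this HMAC-based keystream, but the principle is the same: once the data-encryption key is destroyed, the ciphertext is permanently unrecoverable.

```python
import hmac, hashlib, secrets

def keystream(key: bytes, n: int) -> bytes:
    # Derive a pseudorandom keystream from the key (illustration only).
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)            # data-encryption key
ciphertext = xor(b"patient record", key)

# Decryption works only while the key exists...
assert xor(ciphertext, key) == b"patient record"
key = None  # ...destroying the key "shreds" the data without touching the storage.
```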
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

After seeing “Broken Authentication” listed as one of the top vulnerabilities on the OWASP Top 10, a cloud application architect has started looking into options to protect against it. Which of the following could the engineer implement to help protect against broken authentication?

A. Data Leak Prevention (DLP)
B. Multi-Factor Authentication (MFA)
C. Input validation
D. Proper logging

A

B. Multi-Factor Authentication (MFA)

Explanation:
Multi-Factor Authentication (MFA) is an authentication method in which a user is required to provide two or more types of factors proving they are who they claim to be. For example, a user would need both a password and a randomly generated code sent to their smartphone to access an application. MFA factors are broken up into categories such as something you know (passwords, PINs), something you are (biometrics), something you have (key card, smartphone), and something you do (behavioral characteristics).

Input validation is a critical control that can and should be added in many places in many applications. A good rule to follow is to never trust any input, especially input from a user. Many problems, including Cross-Site Scripting (XSS) and injection attacks, can be prevented or stopped by validating input.

Logging is necessary to understand what has been happening throughout the technical environment. It is often only through the logs that we learn there has been a compromise, which starts the Incident Response (IR) process.

DLP tools are beneficial to add to networks, the cloud, and end systems. They are used to detect traffic heading in the wrong direction or in the wrong format (not encrypted). They can also be used to detect data on servers that should not be there.
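The one-time codes mentioned above are typically TOTP values per RFC 6238. As a rough sketch (illustrative, not production code), a TOTP generator fits in a few lines of stdlib Python; the output below is checked against the RFC's published test vector.

```python
import hmac, hashlib, struct, time, base64

def totp(secret_b32, t=None, step=30, digits=6):
    # HOTP counter = floor(unix_time / step), per RFC 6238.
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset from the MAC.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 yields 287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59))  # 287082
```

An authenticator app and the server both run this computation from a shared secret, so the code proves possession of the enrolled device.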

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

At what phase of the SSDLC does the coding of software components and integration occur?

A. Development
B. Design
C. Operations & Maintenance
D. Deployment

A

A. Development

Explanation:
The development phase entails the coding of software components as well as the integration and construction of the overall solution.

The design phase is the planning of what this software will do, who will use it, what needs to be built into the software, what components are needed, and so on.

The deployment phase is when it is put into use. This is when it is moved into production.

Operations and maintenance will see the software being used on a regular basis. It will be necessary to patch the software as new fixes come out.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

Which of the following is only a concern if an organization chooses to BUILD a data center rather than rent cloud services?

A. Tenant partitioning
B. Access control
C. Multivendor pathway connectivity
D. Location

A

C. Multivendor pathway connectivity

Explanation:
Multivendor pathway connectivity refers to the use of multiple ISPs with cables routed over different paths to reduce the risk of an outage. This is more of a concern for a data center that an organization owns than for one it is using through the cloud.

Tenant partitioning is only applicable in multitenant environments like public clouds. Location and access control are important in both customer-owned and provider-owned data centers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

Bruis has been working with the developers on a new cloud-based application that will operate within their Platform as a Service (PaaS) environment. He has brought the focus of information security to the effort since he is an information security manager. He has been working to ensure that they plan, develop, and assess the application as well as they can, as appropriate to the application and the corporation’s needs.

What fundamental cloud application idea does this work represent?

A. Developing collective responsibility
B. Security as a business objective
C. Shared security responsibility
D. Security by design

A

D. Security by design

Explanation:
The Cloud Security Alliance (CSA) and Software Assurance Forum for Excellence in Code (SAFECode) present the idea that there is a collective responsibility to secure applications, as they are developed for use within corporations and the cloud. That responsibility can be broken down into three parts:

Security by design refers to the inclusion of security at every stage of the development process rather than after an application has been released or in reaction to a security exploit or vulnerability. From application feasibility to retirement, security is an integral element of the process. Bruis is the representation of that consistent effort in this question.
Shared security responsibility means that everyone within the corporation and/or the project has a responsibility to pay attention to security as they are doing their work.
Security as a business objective is the idea that security should be treated as a core business goal in its own right rather than merely a compliance-driven checkbox.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

Deploying redundant and resilient systems such as load balancers is MOST relevant to an organization’s efforts in which of the following areas?

A. Availability Management
B. Problem Management
C. Service Level Management
D. Capacity Management

A

A. Availability Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and progress.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
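The availability-management bullet above mentions redundancy and resiliency. A minimal sketch (hypothetical names, not a real load balancer's API) of how a load balancer supports this: requests rotate across redundant backends, and unhealthy instances are skipped so the service stays up when one fails.

```python
class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)  # all instances start healthy
        self.i = 0

    def mark_down(self, backend):
        # A failed health check removes the instance from rotation.
        self.healthy.discard(backend)

    def next_backend(self):
        # Check each backend at most once per call, in round-robin order.
        for _ in range(len(self.backends)):
            b = self.backends[self.i % len(self.backends)]
            self.i += 1
            if b in self.healthy:
                return b
        raise RuntimeError("no healthy backends: availability lost")

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
lb.mark_down("app-2")  # simulate an instance failure
print([lb.next_backend() for _ in range(4)])  # ['app-1', 'app-3', 'app-1', 'app-3']
```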
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

Simulations and tabletop exercises are part of which stage of developing a BCP?

A. Creation
B. Testing
C. Implementation
D. Auditing

A

B. Testing

Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:

Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before them. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.

Auditing is not one of the three stages of developing a BCP/DRP.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Which sort of testing watches and analyzes application performance while analyzing the code that is in use in real time to identify potential security issues?

A. Runtime Application Self-Protection (RASP)
B. Interactive Application Security Testing (IAST)
C. Static Application Security Testing (SAST)
D. Dynamic Application Security Testing (DAST)

A

B. Interactive Application Security Testing (IAST)

Explanation:
Interactive Application Security Testing (IAST) is a testing technique that has an application active and running and allows the tester to see what code is in use at any specific moment.

Static Application Security Testing (SAST) analyzes the source code. It is static because the application is sitting still on the computer. Dynamic Application Security Testing (DAST) is watching the application in a running condition.

Runtime Application Self-Protection (RASP) is a tool that can be added to software to protect it in real time. It can spot vulnerabilities and prevent real-time attacks.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

Many cloud customers have legal requirements to protect data that they place on the cloud provider’s servers. There are some legal responsibilities for the cloud provider to protect that data. Therefore, it is normal for the cloud provider to have their data centers audited using which of the following?

A. Internal auditor
B. External auditor
C. Cloud architect
D. Cloud operators

A

B. External auditor

Explanation:
An external auditor is not employed by the company being audited. An external auditor will often use industry standards such as ISO 27001 and SOC2 to perform an audit of a cloud provider. Due to the legal requirements, this work needs to be done by an independent party. Therefore, internal auditors are not the correct answer here.

Cloud architects design cloud structures, and cloud operators do the daily maintenance and monitoring of the cloud, according to the Cloud Security Alliance (CSA).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

Adelita is the cloud administrator, and she is beginning the process of introducing a new information security tool to the business. There is a concern that data is on servers that it should not be on, so they have decided to use a Data Loss Prevention (DLP) system. As she is at the beginning of the implementation, she is at the first stage of DLP. This is a very difficult and critical stage. It must be done carefully and effectively.

What is the FIRST stage of DLP implementation?

A. Monitoring
B. Enforcement
C. Discovery and classification
D. Data de-identification

A

C. Discovery and classification

Explanation:
DLP is made up of three common stages: discovery and classification, monitoring, and finally enforcement.

Discovery and classification is the first phase, as the security requirements of the data must be addressed. Data must be understood for the DLP tool to do its job. This is not building a classification scheme within the business as that should already exist. This is teaching the DLP tool about the data that the business has, where it can be, where it can be sent, and in what format.

If data is understood, then monitoring can begin. This is when the DLP tool is able to watch and analyze data. DLP traditionally analyzed traffic that was flowing out of the business so that it could prevent a loss or leak of data. It can now analyze servers to look for data that should not be on a particular machine.

If data is being sent incorrectly or improperly or if it is at rest on a server it should not be on, then enforcement can occur. Enforcement can drop data, stop data, encrypt data, delete data, and so on.

Data de-identification is the removal of direct identifiers. Anonymization is the removal of direct and indirect identifiers.
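At its core, the discovery-and-classification stage scans content for patterns that indicate sensitive data. As a rough sketch, here is what that pattern matching might look like; the rules and labels are hypothetical examples, not any DLP product's actual rule set.

```python
import re

# Hypothetical classification rules: label -> pattern for sensitive data.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    # Return every label whose pattern appears in the scanned content.
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

doc = "Contact jdoe@example.com; SSN on file: 123-45-6789."
print(sorted(classify(doc)))  # ['EMAIL', 'SSN']
```

Once content carries labels like these, the monitoring and enforcement stages can decide whether it may leave the network or rest on a given server.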

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

Padma and her team have been updating the information security policies with information related to their new corporate Infrastructure as a Service (IaaS) cloud structure. For the Identity and Access Management (IAM) policy, it is critical to add the right cloud specific details. It is critical that…

A. The policy specifies that the management plane is controlled with multi-factor authentication and that each department has its own distinct login
B. The policy must specify that all users have their accounts set up with multi-factor authentication and that only trusted administrators should be able to log in into the shared corporate account
C. The policy specifies that the primary corporate account is carefully controlled with multi-factor authentication and that each department has its own separate account under corporate accounts
D. The policy must specify that all users are set up with multi-factor authentication for their email accounts and that each network administrator must set up all network equipment with multi-factor authentication as well

A

C. The policy specifies that the primary corporate account is carefully controlled with multi-factor authentication and that each department has its own separate account under corporate accounts

Explanation:
The primary corporate account should be tightly controlled with multi-factor authentication, and it should be with a hardware token. Once that account is set up, then each department or possibly each project has their own sub-account controlled by the primary. That way, if a bad actor accesses one of the sub-accounts, they will not be able to access and destroy all the corporate systems.

The answer that has each department with its own login is not wise. It implies that the department will be sharing a single login. Shared accounts are not wise. For that same reason, the answer with a “shared corporate account” is equally unwise.

The answer that includes “each network administrator…” is odd because it focuses on putting multi-factor authentication on network equipment. The question is about IaaS, which could include virtual network routers, switches, and the like, but that answer implies actual hardware equipment.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

A cloud service provider has published a SOC 2 report. Which of the following cloud considerations is this MOST relevant to?

A. Regulatory Oversight
B. Security
C. Auditability
D. Governance

A

C. Auditability

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations. A SOC 2 report shows that a cloud service provider meets certain requirements regarding the protection of the customer's data.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

In which cloud service model is the customer’s responsibility limited to correctly configuring settings provided by the cloud provider?

A. IaaS
B. PaaS
C. All service models
D. SaaS

A

D. SaaS

Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.

However, at the software level, responsibility depends on the cloud service model in use, including:

Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer’s responsibility, not the CSP’s.
42
Q

During a cyber investigation, Martin was involved in the collection of evidence from the hypervisor’s Virtual Machine Introspection (VMI) capability. Once the memory contents are collected from a running virtual machine, they must be protected from any inappropriate alteration.

What is used as proof that the evidence has not been left unprotected at any point in its history?

A. Chain of custody
B. Incident management
C. Security Operations Center (SOC)
D. E-discovery

A

A. Chain of custody

Explanation:
During an investigation, it’s important that there is a paper trail that can document where evidence was and who was handling it at any given time. This process is known as chain of custody. Chain of custody is crucial in investigations so that evidence is usable in court.

E-discovery is electronic discovery. ISO 27050 defines how e-discovery should be performed. It is the collection of the evidence, not the management of the evidence over time.

Incident management may have started the investigation that used e-discovery to collect the evidence that is protected by a chain of custody.

Incident management may have started because there was an incident detected by the Security Operations Center (SOC).
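As a study aid, the integrity side of chain of custody can be sketched in a few lines of Python (a hypothetical illustration with made-up names, not part of any real forensic tool): a digest is recorded when the evidence is collected, then recomputed at each custody transfer to show nothing has changed.

```python
import hashlib

def fingerprint(evidence: bytes) -> str:
    """Return a SHA-256 digest of the evidence, recorded at collection time."""
    return hashlib.sha256(evidence).hexdigest()

def verify_unchanged(evidence: bytes, recorded_digest: str) -> bool:
    """Re-hash the evidence and compare it to the digest in the custody record."""
    return fingerprint(evidence) == recorded_digest

# Digest recorded when the memory image is first collected
memory_image = b"\x00\x01\x02 example VM memory contents"
original = fingerprint(memory_image)

# At each later custody transfer, the digest is recomputed and compared
assert verify_unchanged(memory_image, original)             # untouched: matches
assert not verify_unchanged(memory_image + b"x", original)  # altered: mismatch
```

The digest alone does not replace the paper trail; the chain of custody still documents who held the evidence and when, while the hash proves the contents were not modified in between.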

43
Q

Wahib has noticed that there is an issue with one of the devices monitored by the Security Operations Center (SOC). In responding to an Indication of Compromise (IoC), he began to research the systems that may have been compromised. He then noticed that all the event logs on one of the devices had been completely wiped. Since there are no logs to analyze on that device, he is unable to confirm whether the IoC is true or false.

According to the STRIDE threat model, which type of threat is this?

A. Data loss
B. Spoofing identity
C. Repudiation
D. Denial of service

A

C. Repudiation

Explanation:
The STRIDE threat model has six threat categories: spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privileges.

Repudiation is denial of the truth. If the logs are erased or wiped, then the user or the bad actor can deny that they did something. Keeping accurate and comprehensive logs is vital to an organization, as it can prevent a user from denying that they made a change when they actually did.

A spoofed identity is one that is faked. If someone pretended to be you and logged in as you, then they have access at your level. This could happen because of weak passwords, passwords stored in the clear, etc.

Denial of service is when the user cannot gain access to do their job. Having logs wiped out does not stop a user from doing their job and even doing it well. It just prevents the tracking of problems and issues.

Having the logs wiped out is technically a loss of a particular type of data. Logs are data that need to be protected. But, repudiation is a closer match to the question because the question is about an IoC that cannot be confirmed now. That is a repudiation issue. The logs being wiped is possibly a data loss issue. The key is figuring out what the question is actually asking.

44
Q

If either Structured Query Language (SQL) injection or cross-site scripting vulnerabilities exist within any Software as a Service (SaaS) implementation, customers’ data is at risk. Of the following, what is the BEST method for preventing this type of security risk?

A. Input validation
B. Bounds checking
C. Output validation
D. Data sanitization

A

A. Input validation

Explanation:
Cross-Site Scripting (XSS) occurs on webpages. SQL injection can occur on a webpage or any user-facing form that has a SQL database on the backend. Both can be discovered or prevented with input validation. SQL commands are very recognizable, and the software can be coded to look for and block any user inputs that contain SQL commands. XSS is also detectable within the HTML of a webpage. If the page that a user is directed to is on a different domain, it can be blocked, or the user can at least be notified that they are being directed to another site.

Bounds checking is a technique used in computer programming to ensure that an index or pointer accessing an array or data structure remains within the valid range of the data it is accessing. It is primarily used to prevent buffer overflows, array out-of-bounds errors, and other related vulnerabilities that can lead to security vulnerabilities or program crashes.

Output validation, also known as output verification or output validation testing, is a process in software development that involves verifying and validating the correctness, integrity, and quality of the output produced by a system, application, or module.

Data sanitization is the process of removing data from the media in some manner, such as overwrites or physical destruction.
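A minimal Python sketch of input validation (hypothetical names; a real application would lean on its framework's validation layer and database driver) combines an allowlist check with a parameterized query, so injection payloads never reach the database as SQL syntax:

```python
import re
import sqlite3

def is_valid_username(value: str) -> bool:
    """Allowlist validation: accept only short alphanumeric usernames."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{1,30}", value))

# An in-memory database for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

def find_user(name: str):
    if not is_valid_username(name):  # reject bad input before it reaches the DB
        raise ValueError("invalid input")
    # The ? placeholder binds the value as data; "' OR '1'='1" stays inert text
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

assert find_user("alice") == [("alice",)]
assert not is_valid_username("' OR '1'='1")  # classic injection payload rejected
```

The same pattern applies to XSS: validate on input, and encode on output, rather than trusting anything the user submits.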

45
Q

Obert is building a private cloud for his corporation with the assistance of many different departments. As they install hypervisors and begin to provision accounts to create virtual machines for different departments, he wonders whether this would be a single-tenant or a multi-tenant environment. In a private cloud environment, are there still multiple tenants?

A. No, there would not be. A tenant isolates customers from each other, and there is technically only one customer involved in a private cloud.
B. Yes, there could be. A tenancy isolates data and virtual machines from other tenants. These would be the different users within each department.
C. No, there would not be. A tenant isolates data services from each other. There is only one company involved when you build a private cloud.
D. Yes, there could be. A tenancy isolates data and virtual machines from other tenants. These can be different groups within the organization.

A

D. Yes, there could be. A tenancy isolates data and virtual machines from other tenants. These can be different groups within the organization.

Explanation:
According to ISO/IEC 17788, multi-tenancy allows virtual and physical resources to be allocated so that each tenant does not see the others’ computations, virtual machines, applications, or data. In public clouds, it is normal that there are different customers of the cloud provider. However, in a private cloud, these tenants could/would represent the different groups within the organization.

ISO/IEC 17788 is a free document and a good, quick read for preparation for this test.

46
Q

An organization is using VMware ESXi. Which of the following is this an example of?

A. Type 2 hypervisor
B. Software based
C. Type 1 hypervisor
D. Application based

A

C. Type 1 hypervisor

Explanation:
A type 1 hypervisor, also known as a bare-metal hypervisor, runs directly on the machine’s physical hardware, unlike type 2 hypervisors, which are software-based. VMware ESXi is an example of a type 1 hypervisor.

Vendor-specific questions like this will not be on the test. This question is here in case you are unfamiliar with the two types of hypervisors. VMware ESXi is a type 1, and VMware Workstation is a type 2. Reading about these two (or any other vendor's products) can help make sense of hypervisors.

A type 1 hypervisor is essentially the operating system for the server it is loaded on. It is often called a bare-metal hypervisor because it is the OS that you load onto the physical server.

A type 2 hypervisor is software based. It relies on a full operating system, such as Mac OS or Windows desktop, to load on top and is most likely found on users’ desktop computers, not the servers in the data center. They could be nested hypervisors that are loaded on to the virtual server in a cloud environment though.

47
Q

Which of the following regulations protects the data of EU citizens and restricts it from being moved outside of certain jurisdictional areas?

A. PCI DSS
B. SOX
C. GDPR
D. HIPAA

A

C. GDPR

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. The US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens’ data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes Oxley (SOX): SOX is a US regulation that applies to publicly-traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
48
Q

Blythe has been working for a Fortune 500 healthcare company for many years now. They are beginning to transition from their on-prem data center to a cloud-based solution. She and her team are working to put together information to present to the Board of Directors (BoD) regarding what they can expect from a move to the cloud.

Which of the following statements is most likely true when moving from an on-prem data center to Infrastructure as a Service (IaaS)?

A. A traditional data center will have lower costs on the Operational Expenditures (OpEx) side and higher Capital Expenditures (CapEx)
B. A traditional data center has a more secure operating environment than a cloud environment
C. Moving to the cloud will have a predictable OpEx. However, the security in the cloud is higher.
D. The pricing for cloud computing will be less predictable than that of a traditional data center

A

D. The pricing for cloud computing will be less predictable than that of a traditional data center

Explanation:
A traditional on-prem data center has a higher CapEx, but its OpEx is not lower; it is likely the same or higher. The operating environment could be more secure in either case. The security of cloud-based IaaS depends on two factors: the security of the cloud provider’s data center and the configurations within the IaaS, so it could be more secure in the cloud. Cloud OpEx may eventually become predictable, but especially during a move to the cloud, it is not as predictable as some may prefer.

So, with each of those thoughts in mind, the best answer is that the pricing in the cloud is less predictable than in an on-prem data center.

49
Q

In ITIL, which type of plan would be created to prepare an organization for what needs to be done in the event of a disaster or critical failure?

A. Business Continuity Plan (BCP)
B. Disaster Recovery Plan (DRP)
C. Incident Response Plan (IRP)
D. Continuity management plan

A

D. Continuity management plan

Explanation:
ITIL uses the term continuity management instead of disaster recovery or business continuity. They do use the term incident.

Continuity management is the practice of ensuring that the level of service and availability that is needed by the corporation is maintained in the event of a disaster.

DRP is defined by NIST as “processing critical applications in the event of a major hardware or software failure or destruction of facilities.”

BCP is defined by NIST as ensuring that the “mission/business processes will be sustained during and after a significant disruption.”

Incident management is part of ITIL. They define it as the practice of being prepared to respond to an incident quickly and returning systems to a normal condition.

50
Q

Virtualization hosts, along with which of the following, have Basic Input/Output System (BIOS) settings in place that control hardware configurations as well as security technologies that assist in preventing access to the BIOS?

A. Random Access Memory (RAM)
B. Hardware Security Modules (HSMs)
C. Trusted Platform Modules (TPMs)
D. Secure Shell (SSH)

A

C. Trusted Platform Modules (TPMs)

Explanation:
Trusted Platform Modules (TPMs) and virtualization hosts have BIOS settings in place that control hardware configurations and security technologies to prevent unauthorized access to the BIOS. It’s important to ensure that access to the BIOS is locked down for all systems to prevent unauthorized changes to the systems at the BIOS level. TPMs are designed to store the encryption/decryption key for the hard drive.

Hardware Security Modules (HSMs) are designed to store encryption keys, but they are designed as rack mountable (usually) devices to store keys for servers and the like. HSMs are specialized physical devices designed to provide secure storage, management, and processing of cryptographic keys and sensitive data. They are used to enhance the security of various applications and systems that require encryption, digital signatures, authentication, and other cryptographic operations.

Random Access Memory (RAM), also known as main memory or system memory, is a type of computer memory that is used to temporarily store data and instructions that are actively being accessed by the computer’s processor.

Secure Shell (SSH) is a network protocol that provides secure encrypted communication between two networked devices. It is widely used for remote administration, secure file transfers, and secure command-line access to servers and other networked devices.

51
Q

TEEs are a part of which of the following emerging technologies?

A. Internet of Things
B. Blockchain
C. Confidential Computing
D. Containers

A

C. Confidential Computing

Explanation:
Cloud computing is closely related to many emerging technologies. Some examples include:

Blockchain: Blockchain technology creates an immutable digital ledger in a decentralized fashion. It is used to support cryptocurrencies, track ownership of assets, and implement various other functions without relying on a centralized authority or single point of failure. Cloud computing is related to blockchain because many of the nodes used to maintain and operate blockchain networks run on cloud computing platforms.
Internet of Things (IoT): IoT systems include smart devices that can perform data collection or interact with their environments. These devices often have poor security and rely on cloud-based servers to process collected data and issue commands back to the IoT systems (which have limited computational power, etc.).
Containers: Containerization packages an application along with all of the dependencies that it needs to run in a single package. This container can then be moved to any platform running the container software, including cloud platforms.
Confidential Computing: While data is commonly encrypted at rest and in transit, it is often decrypted while in use, which creates security concerns. Confidential computing involves the use of trusted execution environments (TEEs) that protect and isolate sensitive data from potential threats while in use.
52
Q

An organization is building a new data center. They need to ensure that proper heating and cooling are implemented. What is the recommended minimum and maximum temperature for a data center?

A. 62.2-81.0 degrees F/16-27 degrees C
B. 60.1-75.2 degrees F/15-24 degrees C
C. 64.4-80.6 degrees F/18-27 degrees C
D. 59.5-79.5 degrees F/15-26 degrees C

A

C. 64.4-80.6 degrees F/18-27 degrees C

Explanation:
According to ASHRAE (American Society of Heating, Refrigeration, and Air Conditioning Engineers), the recommended temperature for a data center is a minimum of 64.4 degrees F, and a maximum of 80.6 degrees F. This is 18 - 27 degrees C.

It is possible that you need this for the test. A common question is “Do I need to learn the other measurement standards?” (If I know Fahrenheit, do I have to learn Celsius and vice versa?) If it is on the test, you’ll want to know both measurements.
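The conversion behind these numbers is simple arithmetic, sketched here in Python:

```python
def c_to_f(celsius: float) -> float:
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

# ASHRAE recommended data center range: 18-27 C maps to 64.4-80.6 F
assert round(c_to_f(18), 1) == 64.4
assert round(c_to_f(27), 1) == 80.6
```

Going the other way, C = (F - 32) * 5/9, so memorizing one pair of endpoints lets you reconstruct the other scale under exam pressure.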

53
Q

The process of managing and provisioning data centers through machine-readable definition files is called:

A. Infrastructure as a Service (IaaS)
B. Development, Security, Operations (DevSecOps)
C. Infrastructure as Code (IaC)
D. Continuous Integration/Continuous Deployment (CI/CD)

A

C. Infrastructure as Code (IaC)

Explanation:
Think of Infrastructure as Code as a network with no hardware. The operating systems that act as routers, switches, firewalls, and so on are still there; all the machines are simply virtual. This also allows the image for each of these machines to be stored as a golden image, so that all the machines run the same latest, patched version.

Infrastructure as a Service is in a way the same. It is the building of a virtual data center. IaC adds the logic of automation and declarative definitions rather than the option of manual processes, which is possible in IaaS.

CI/CD is the logic of DevOps. It bridges the development and operations together, allowing for incremental code changes.

DevSecOps then adds Security to DevOps for a well-rounded group of teams working together.
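The declarative idea behind IaC can be sketched in plain Python (a toy illustration with hypothetical names, not how a real tool such as Terraform works): the desired infrastructure is expressed as data, and a provisioning function reconciles the environment to that definition idempotently.

```python
# Hypothetical, minimal sketch of IaC: the desired infrastructure is declared
# as machine-readable data, and provisioning makes the environment match it.
desired_state = {
    "web-vm": {"cpus": 2, "image": "golden-web-v1.4"},
    "db-vm":  {"cpus": 4, "image": "golden-db-v2.1"},
}

def provision(current: dict, desired: dict) -> dict:
    """Idempotently reconcile existing machines with the declared definition."""
    result = dict(current)
    for name, spec in desired.items():
        if result.get(name) != spec:   # create missing or fix drifted machines
            result[name] = dict(spec)
    for name in list(result):
        if name not in desired:        # remove machines no longer declared
            del result[name]
    return result

state = provision({}, desired_state)             # first run builds everything
assert state == desired_state
assert provision(state, desired_state) == state  # re-running changes nothing
```

The second assertion is the key property: because the definition is declarative, applying it repeatedly is safe, unlike an imperative script of manual steps.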

54
Q

Thorn is the information security manager working for a company that has a subscription service that allows users to watch popular TV shows. He is looking for a storage technology that would be the most effective. What form of storage is used when content is saved in object storage and then dispersed to multiple geographical hosts to increase internet consumption speed?

A. Storage Area Network (SAN)
B. Software Defined Network (SDN)
C. Software Designed Storage (SDS)
D. Content Delivery Network (CDN)

A

D. Content Delivery Network (CDN)

Explanation:
A Content Delivery Network (CDN) provides globally-distributed object storage, allowing an organization to keep data as close to users as possible. As a result, end users benefit from reduced bandwidth consumption and decreased latency because they can pull from a server closer to their geographic location, an edge server.

SDS allows for the abstraction of the storage that exists within a physical server. Once abstracted, it can be allocated using the software that is the cloud.

SDN is a method of managing a switch-based network in a more efficient and effective manner. It adds a controller to the network that can plan all traffic flows more effectively than the switch can. It also allows administrators to add rules to control traffic flows according to corporate policies.

A SAN is a dedicated network composed of servers and devices that have storage as their primary function.

55
Q

Under which of the following cloud service models does the cloud provider control the LARGEST portion of the infrastructure stack?

A. PaaS
B. SaaS
C. FaaS
D. IaaS

A

B. SaaS

Explanation:
Cloud services are typically provided under three main service models:

Software as a Service (SaaS): Under the SaaS model, the cloud provider offers the customer access to a complete application developed by the cloud provider. Webmail services like Google Workspace and Microsoft 365 are examples of SaaS offerings.
Platform as a Service (PaaS): In a PaaS model, the cloud provider offers the customer a managed environment where they can build and deploy applications. The cloud provider manages compute, data storage, and other services for the application.
Infrastructure as a Service (IaaS): In IaaS, the cloud provider offers an environment where the customer has access to various infrastructure building blocks. AWS, which allows customers to deploy virtual machines (VMs) or use block data storage in the cloud, is an example of an IaaS platform.

Function as a Service (FaaS) is a form of PaaS in which the customer creates individual functions that can run in the cloud. Examples include AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions.

56
Q

Finn is working at a cloud provider, installing new servers as their customer base is growing. What is a Keyboard, Video, Mouse (KVM) used for?

A. As a storage method for cloud hosted servers
B. To prevent attacks from gaining unauthorized access to physical servers
C. To connect a laptop to a physical server
D. As a method for backing up data within a cloud environment

A

C. To connect a laptop to a physical server

Explanation:
A KVM switch is used to connect a keyboard, video display, and mouse (or, in today’s terms, a laptop) to physical servers in a data center to provide direct access. It’s important that a data center has security measures in place to prevent unauthorized access through the KVM.

The KVM (or laptop) is not used for storage, backups, or preventing attacks. It is critical to control who can connect to a physical server: a bad actor in the data center who connects to a physical server gains a great deal of control and could cause all kinds of problems.

There is some confusion with the acronym KVM because it also stands for Kernel-based Virtual Machine, which is an open-source Linux-based hypervisor.

57
Q

Ledger is setting up storage within their Infrastructure as a Service (IaaS) environment. He is building a system that will divide the storage into equal-sized pieces. He is doing this so that there will be efficient, fast, and reliable access to the data. What type of storage is this?

A. Block storage
B. Object storage
C. File storage
D. Volume storage

A

A. Block storage

Explanation:
Block storage allocates the storage space into equal-sized pieces. This can hold any type of data, whether it is a file or a database. It is the most common format setup in Storage Area Networks (SANs).

Cloud volume storage presents storage to a virtual machine as an attached drive, much like a physical disk; block storage is typically the underlying allocation method for such volumes.

Cloud object storage is designed to scale seamlessly to handle large volumes of data. It can accommodate virtually unlimited storage capacity by distributing data across multiple storage nodes or data centers, so users can store and retrieve massive amounts of data without worrying about infrastructure limitations. Objects are files of any kind, accessed by unique identifiers in a flat address space rather than a hierarchical file structure.

Cloud file storage allows users to access and manage files and directories using standard file protocols, such as Network File System (NFS) or Server Message Block (SMB). This makes it easy to integrate cloud storage with existing applications and systems that rely on file-based access.
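The "equal-sized pieces" idea behind block storage can be illustrated with a short Python sketch (a toy model, not how a real SAN is implemented): data of any kind is divided into fixed-size blocks, with the final block padded to full size.

```python
BLOCK_SIZE = 4  # bytes here for readability; real systems use e.g. 4 KiB blocks

def to_blocks(data: bytes, size: int = BLOCK_SIZE) -> list[bytes]:
    """Divide data into equal-sized blocks, zero-padding the final block."""
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    if blocks and len(blocks[-1]) < size:
        blocks[-1] = blocks[-1].ljust(size, b"\x00")
    return blocks

blocks = to_blocks(b"hello world")  # 11 bytes -> three 4-byte blocks
assert len(blocks) == 3
assert all(len(b) == BLOCK_SIZE for b in blocks)
assert blocks[0] == b"hell"
```

Because every block is the same size, the storage system can address, cache, and relocate them uniformly, which is what makes block storage fast and reliable for files and databases alike.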

58
Q

At which point in the cloud data lifecycle does mapping and securing data flows become relevant?

A. Share
B. Use
C. Store
D. Create

A

B. Use

Explanation:
The cloud data lifecycle has six phases, including:

Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data.
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure (crypto-shredding).
59
Q

You are working with the team leader for a specific software development project. The members of the software development team are geographically dispersed and will work in a variety of time zones. Multiple developers will modify the configuration and source code files.

How does your organization ensure that changes to the source code are tracked and managed carefully?

A. Fuzz testing
B. Software Configuration Management (SCM)
C. Software assurance and validation
D. Static Application Security Testing (SAST)

A

B. Software Configuration Management (SCM)

Explanation:
Software Configuration Management (SCM) technologies are used to manage software assets and to ensure that changes are made in a timely and accurate manner. SCM enables changes to be rolled back. SCM tools are employed at the time of deployment as well as during updates and patches.

Software assurance and validation comes close as a possible answer, as it does provide assurance in the source code. But the question is specific to tracking changes, so SCM is a more specific answer.

SAST and fuzz testing are specific types of software testing, which provides assurance of the quality of the source code, but it does not track changes.

60
Q

Which of the following considerations MOST closely relates to ensuring that customers’ personal data is not accessed by unauthorized users?

A. Privacy
B. Security
C. Governance
D. Regulatory Oversight

A

A. Privacy

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.
61
Q

The capacity to independently verify the origin of data with a high degree of assurance is referred to as:

A. Chain of custody
B. Hashing
C. Digital signature
D. Non-repudiation

A

D. Non-repudiation

Explanation:
The capacity to affirm the origin of data with a high degree of assurance is referred to as non-repudiation. This is accomplished through the use of digital signatures and hashing to ensure that data has not been altered in any way. There must also be a high level of trust in the X.509 digital certificates, the storage and protection of the private key used for the signature, and authenticity of the owner of the private key before they use it. If done correctly, it could be used in a court of law to help in the presentation of evidence.

Hashing is a mathematical calculation run against the binary digits within a message/file/movie/etc. It does not alter the data but produces a value, called a message digest, that can be sent with the message/file/movie/etc. so that it can be recalculated on the receiving end. If the recalculated digest matches the value that was received, there is a level of trust that the message/file/movie/etc. has not been changed.

A digital signature is created when something is encrypted with a private key. Usually, it is the message digest that is encrypted with the private key.

A chain of custody must be created and maintained for evidence to ensure that it has not been altered or changed.
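The hashing and keyed-digest ideas above can be sketched with Python's standard library. Note the hedge in the comments: an HMAC uses a shared key, so it provides integrity and authentication but not true non-repudiation, which requires signing the digest with an asymmetric private key that only the sender controls.

```python
import hashlib
import hmac

message = b"Transfer $100 to account 42"

# Message digest: a fixed-size value the receiver recomputes to detect changes
digest = hashlib.sha256(message).hexdigest()
assert hashlib.sha256(message).hexdigest() == digest         # unchanged: match
assert hashlib.sha256(message + b"0").hexdigest() != digest  # any change differs

# HMAC binds the digest to a key. Because the key is *shared*, either holder
# could have produced the tag, so this alone is NOT non-repudiation; that
# requires an asymmetric signature (e.g. RSA/ECDSA over the digest).
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
```

In a real public-key scheme, only the sender's private key can produce the signature while anyone with the certificate can verify it, which is what lets a court attribute the data to one party.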

62
Q

Itsuki has been working with his team to determine the risks associated with a Software as a Service (SaaS) provider. The question that they have to address next is what environment is this SaaS provider using. They believe that they have purchased their own cloud service from a Platform as a Service (PaaS) provider.

What type of risk does this introduce to the organization?

A. Privacy risk
B. Outsourced risk
C. Legal risk
D. Fourth-party risk

A

D. Fourth-party risk

Explanation:
The term “fourth party” refers to a third-party’s third-party, such as when your vendors outsource service provision to a separate, independent vendor. The risks faced by the organization now include those related to the SaaS solution itself as well as any additional risks posed by the PaaS CSP that the SaaS provider is using.

Outsourced risk is a close term, but the term adopted for use today is actually fourth party.

There may be legal or privacy concerns for this customer, but the question points to the vendor the SaaS provider is using. So the fourth party is an immediate concern here.

63
Q

The organization has deployed a federated single sign-on system (SSO) and is configured to generate tokens for users and send them to the service provider. Which BEST describes this organization’s role?

A. Service Provider (SP)
B. Certificate Authority (CA)
C. Identity Provider (IdP)
D. Domain Registrar

A

C. Identity Provider (IdP)

Explanation:
The organization would act as the identity provider, while the relying party would act as the service provider. The identity provider is the organization that generates tokens for users because it has the ability to authenticate the users. In this scenario, the organization is authenticating their own employees.

The SP is the organization that provides the service that will be used by the users, for example, Salesforce.

The CA is used to verify the X.509 certificates. Encryption should be used within the SSO system, but the question doesn’t mention anything encryption related.

A domain registrar is the business that corporations go to in order to register a domain name, for example, PocketPrep.com.

64
Q

Diedra is responsible as the information security manager for protecting the data that the business owns. As a real estate business, they have an immense number of photos and videos that they have taken over the years of homes that they have helped their customers sell. They also have all the signed contractual documents for the homes that their customers have both bought and sold. They also offer a home improvement service, so there is a large number of photos for inspiration that they can show their customers as they design their dream homes. In addition to all that, there is a large database of their former, current, and potential customers.

If all this was stored together, it would be called which of the following?

A. Personal data
B. Semi-structured data
C. Structured data
D. Unstructured data

A

B. Semi-structured data

Explanation:
Semi-structured data is a combination of both structured and unstructured data in one place.

Unstructured data refers to any data that cannot be qualified as structured data. Unstructured data doesn’t conform to any defined data structures or formats. Examples of unstructured data include emails, pictures, videos, and text files.

Structured data is predictable in format and size. A database is a classic example: every record (row) has exactly the same attributes (columns), with the same data type in each column. Semi-structured data retains some of that predictability, just not throughout.

Personal data is information about a natural human being, such as name, address, and phone number. There is some of this in the question because of the customer information, but that is only a small piece of it.

65
Q

A software vendor who sells firewall and intrusion detection software wants to be able to prove to their customers that they have a quality product. Which standard can they use to have their product evaluated that will result in an Evaluation Assurance Level?

A. International Standards Organization (ISO) 15408
B. Federal Information Processing Standard (FIPS) 140-3
C. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53
D. Federal Risk and Authorization Management Program (FedRAMP)

A

A. International Standards Organization (ISO) 15408

Explanation:
ISO 15408 is also known as the Common Criteria. It uses Protection Profiles (PPs) and Security Targets (STs) to define the type of product and the test conditions. The result is an Evaluation Assurance Level from EAL1 to EAL7.

FedRAMP is a program to guide US federal agencies into the cloud carefully and securely. FIPS 140-2/140-3 is used to validate the security of cryptographic modules. NIST SP 800-53 is effectively a catalog and description of security controls.

66
Q

Which of the following provides access to the underlying storage rather than a storage service?

A. Raw
B. Ephemeral
C. Object
D. Volume

A

A. Raw

Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:

Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer’s virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling.
67
Q

Data stored in a database is considered to be which of the following?

A. Semi-Structured
B. Unstructured
C. Structured
D. Mostly Structured

A

C. Structured

Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:

Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.

Mostly structured is not a common classification for data.
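
The tag-based flexibility of semi-structured data can be illustrated with a short Python sketch. The two JSON records below are invented examples: each field is self-describing via its tag, but unlike database rows, the records do not share a schema.

```python
import json

# Two tagged records with no shared schema: semi-structured data.
records = json.loads("""
[
  {"type": "contract", "address": "12 Elm St", "signed": true},
  {"type": "photo", "file": "kitchen.jpg", "tags": ["inspiration"]}
]
""")

def fields(record):
    """List a record's field names; rows in a structured table would all match."""
    return sorted(record)
```

A data discovery tool can still key off the tags (for example, looking for an `address` field), which is why semi-structured data sits between the structured and unstructured extremes in difficulty.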

68
Q

A corporation has made the difficult decision to move their virtual data center (vDC) that they built using Infrastructure as a Service (IaaS) technology to another cloud provider. One of their first tasks is to move their Structured Query Language (SQL) database.

The security term that best describes the movement of the database is which of the following?

A. Auditability
B. Resiliency
C. Interoperability
D. Portability

A

D. Portability

Explanation:
Portability is defined in ISO/IEC 17788 as the “ability to easily transfer data from one system to another without being required to re-enter the data.” This is the concern in the question because it is about moving the database.

Interoperability is defined in ISO/IEC 17788 as “the ability of two or more systems or applications to exchange information and to mutually use the information that has been exchanged.” This is the ability to use data on a different system. The SQL database would be the same on both cloud providers, so this is not the topic of the question. If the question mentioned moving data from an SQL database to a MongoDB database, then interoperability would be the topic.

Resiliency is defined in ISO/IEC 17788 as the “ability of a system to provide and maintain an acceptable level of service in the face of faults (unintentional, intentional, or naturally caused) affecting normal operation.”

Auditability is defined in ISO/IEC 17788 as “the capability of collecting and making available necessary evidential information related to the operation and use of a cloud service, for the purpose of conducting an audit.”

69
Q

Stacie is responsible for creating a cloud data archiving strategy. She works for a medium-sized real estate company that must have access to the records regarding the sales of properties for as long as the company exists, possibly even longer. There are many critical elements that must be taken into consideration to ensure that the archival of data will be successful.

Which of the following is a critical element that Stacie must take into consideration to ensure data will be retrievable?

A. Size
B. Classification
C. Format
D. Amount

A

C. Format

Explanation:
It is crucial that format is taken into consideration when developing a cloud data archiving strategy. If format is not thought about during the strategy, then the archived data may become very difficult to retrieve if needed in the long run. Other important considerations include technologies used to create and maintain the archives, regulatory requirements, and testing procedures.

The classification of data can have an impact on many things. Depending on the classification, the security around the backups may be different than expected.

The size or amount of data being backed up can have an impact on costs. But none of the three incorrect answers matches the question’s focus on retrievability the way format does.

The question is about success: if the data cannot be retrieved due to a format issue, that is potentially a big problem. This has been an issue for companies that stored tapes but did not keep the physical tape reader and its software, or that scanned images in the 1980s or 1990s and did not retain the software that read the now-outdated format. Today it may seem that our .docx, .mp3, .mp4, and similar formats will always be with us, but history has shown that technology changes. Companies need to work at not repeating those mistakes.

70
Q

At which stage of the incident response process should the organization determine the members of the IRT?

A. Recover
B. Respond
C. Prepare
D. Detect

A

C. Prepare

Explanation:
An incident response plan (IRP) should lay out the steps that the incident response team (IRT) should carry out during each step of the incident management process. This process is commonly broken up into several steps, including:

Prepare: During the preparation stage, the organization develops and tests the IRP and forms the IRT.
Detect: Often, detection is performed by the security operations center (SOC), which performs ongoing security monitoring and alerts the IRT if an issue is discovered. Issues may also be raised by users, security researchers, or other third parties.
Respond: At this point, the IRT investigates the incident and develops a remediation strategy. This phase will also involve containing the incident and notifying relevant stakeholders.
Recover: During the recovery phase, the IRT takes steps to restore the organization to a secure state. This could include changing compromised passwords and similar steps. Additionally, the IRT works to address and remediate the underlying cause of the incident to ensure that it is completely fixed.
Post-Incident: After the incident, the IRT should document everything and perform a retrospective to identify potential room for improvement and try to identify and remediate the root cause to stop future incidents from happening.
71
Q

An information security manager, Lou, has been asked to determine how much data and information must be restored to get to a minimum acceptable operating level after a disaster. Lou works for a manufacturing company that has a critical set of servers that must be operating to be able to run the equipment on the manufacturing floor. This equipment builds products based on the orders that are being received.

What has Lou been asked to determine?

A. Mean Time to Recover (MTR)
B. Recovery Time Objective (RTO)
C. Recovery Point Objective (RPO)
D. Recovery Service Level (RSL)

A

C. Recovery Point Objective (RPO)

Explanation:
The RPO is defined as the amount of data and information that must be restored and recovered after a disaster to meet business continuity and disaster recovery objectives. It is defined as the amount of data that can be lost in a unit of time. For example, we can lose the last two minutes worth of orders.

The RTO is the amount of time that it takes to do the recovery work. This is the time from when an event has been declared a disaster to the point it must be back up and working at some level of functionality. It may not get back to normal conditions at the end of the RTO.

The RSL is the amount of functionality that must be achieved at the end of the RTO. For example, at 90%: if we could normally process 10 orders an hour, we must regain the ability to handle nine an hour to survive this incident/disaster.

The MTR (sometimes MTTR) is the average time it takes to repair something. For example, it normally takes 10 minutes to replace a drive in a server.
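
The relationship between these metrics can be sketched as a toy feasibility check. The numbers and function names below are invented for illustration; real BIA figures come from stakeholder requirements, not code.

```python
def meets_rpo(backup_interval_min, rpo_min):
    """Worst case, a failure lands just before the next backup, losing
    everything written since the last one, so the backup interval must
    fit inside the RPO (max tolerable data loss, in minutes of data)."""
    return backup_interval_min <= rpo_min

def meets_rto(recovery_step_durations_min, rto_min):
    """All recovery work (damage assessment, restore, restart) must fit
    within the RTO."""
    return sum(recovery_step_durations_min) <= rto_min
```

For the scenario above: an RPO of two minutes of orders means backups (or replication) at least every two minutes, and every step of the recovery plan must sum to less than the RTO.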

72
Q

An organization is concerned that moving certain applications to the cloud was not a good choice. Which of the following does this relate to?

A. Portability
B. Reversibility
C. Interoperability
D. Availability

A

B. Reversibility

Explanation:
Some important cloud considerations have to do with its effects on operations. These include:

Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure.
Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
Reversibility: Reversibility refers to the ability to recover from a change that went wrong. For example, how difficult it is to restore on-site operations after a transition to an outsourced service (like a cloud provider).
Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization’s cloud environment relies too much on a provider’s unique implementation or the provider doesn’t offer easy export, the company may be stuck with that provider due to vendor lock-in.
Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers’ environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
Outsourcing: Using cloud environments requires handing over control of a portion of an organization’s infrastructure to a third party, which introduces operational and security concerns.
73
Q

Adrina has been working with the team that has been building the Disaster Recovery Plans (DRP) for several different scenarios. They have determined that, for their most critical server, they can lose two minutes of data, be offline for 30 seconds, and must come back online with at least 85% of normal functionality.

What have they determined and at what phase of the planning lifecycle are they in?

A. Recovery Point Objective (RPO), Maximum Tolerable Downtime (MTD), Recovery Service Level (RSL), Business Impact Analysis
B. Recovery Time Objective (RTO), Recovery Point Objective (RPO), Maximum Tolerable Downtime (MTD), Recovery Strategies
C. Recovery Time Objective (RTO), Maximum Tolerable Downtime (MTD), Recovery Service Level (RSL), Business Impact Analysis
D. Mean Time to Repair (MTR), Recovery Point Objective (RPO), Maximum Tolerable Downtime (MTD), Recovery Strategies

A

A. Recovery Point Objective (RPO), Maximum Tolerable Downtime (MTD), Recovery Service Level (RSL), Business Impact Analysis

Explanation:
Correct answer: Recovery Point Objective (RPO), Maximum Tolerable Downtime (MTD), Recovery Service Level (RSL), Business Impact Analysis

The Business Impact Analysis (BIA) is the process of performing a risk assessment and determining the critical time values that must be met by any backup/recovery systems to ensure the corporation will succeed if there is a disaster or major business interruption. Those time values include the RTO, RPO, MTD, and MTO, plus the RSL, which is not actually a time value.

RPO is the amount of data that must be retained and recovered for an organization to function at a level acceptable to stakeholders. It is expressed as the maximum amount of time’s worth of data that can be lost. For example, can your business lose the last two hours’ worth of emails forever? In this scenario, it is two minutes of data.

RTO is the time allocated to do the work of recovery, and it must be less than the MTD. Care should be taken not to underestimate the chaos at the beginning of a failure; time needs to be left for analysis and damage assessment. The RTO should not equal the MTD; a common recommendation is that the RTO be no more than half the MTD. There is no value in the question that matches this topic.

MTD is the maximum amount of time that the business can tolerate a critical system being offline completely. In this scenario, it is 30 seconds.

RSL is the level of functionality required in the alternate state. The RSL in this scenario is 85% functionality. If failing over to other systems, does the functionality have to be the same as normal or can it be a little less for a short period of time? That period of time is known as the Maximum Tolerable Outage (MTO).

MTR, or sometimes MTTR, is the average amount of time it actually takes to recover or repair something—not business requirements but rather actual work time. This is not a time frame that is determined in the BIA. This is a statistic that tells us the average amount of time. The BIA is about finding what the business requires, not what is average in the industry or with a specific piece of hardware or software.

74
Q

Which of the following data classification labels indicates the importance of data to an organization’s operations?

A. Type
B. Ownership
C. Sensitivity
D. Criticality

A

D. Criticality

Explanation:
Data owners are responsible for data classification, and data is classified based on organizational policies. Some of the criteria commonly used for data classification include:

Type: Specifies the type of data, including whether it has personally identifiable information (PII), intellectual property (IP), or other sensitive data protected by corporate policy or various laws.
Sensitivity: Sensitivity refers to the potential results if data is disclosed to an unauthorized party. The Unclassified, Confidential, Secret, and Top Secret labels used by the U.S. government are an example of sensitivity-based classifications.
Ownership: Identifies who owns the data if the data is shared across multiple organizations, departments, etc.
Jurisdiction: The location where data is collected, processed, or stored may impact which regulations apply to it. For example, GDPR protects the data of EU citizens.
Criticality: Criticality refers to how important data is to an organization’s operations.
75
Q

Which of the following solutions is designed to manage access to SaaS and other cloud-hosted applications?

A. IdP
B. MFA
C. CASB
D. SSO

A

C. CASB

Explanation:
Correct answer: CASB

Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
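
The one-time password mentioned under MFA is usually a TOTP, and the algorithm is standardized (RFC 6238). Below is a minimal stdlib-only sketch of the SHA-1 variant that authenticator apps implement; the secret used in the test is the RFC's published test key, not a real credential.

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, for_time, step=30, digits=6):
    """Time-based one-time password (RFC 6238, SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int(for_time) // step                 # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the server and the authenticator app share the secret and compute the same function of the current time, the code works offline and changes every 30 seconds, which is what makes it useful as a second factor.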
76
Q

Zuma has been provisioning virtual machines in an Infrastructure as a Service (IaaS) environment for his manufacturing company. Part of that provisioning is configuring the cloud storage that the virtual machines will connect to. Which storage type appears as a (virtual) drive that the virtual machine attaches to?

A. Object storage
B. Unstructured storage
C. Structured storage
D. Volume storage

A

D. Volume storage

Explanation:
Used in IaaS cloud environments, volume storage involves a virtual hard drive, which is attached to the virtual host. The host is able to access the virtual hard drive in the same way a computer accesses a traditional hard drive. Volume storage is a block storage method.

Object storage is a flat storage system that does not have a directory system. It is accessible over an Application Programming Interface (API) because it is not directly attached to an instance.

Unstructured storage is most commonly associated with Big Data and data lakes. It can be associated with object or file storage though.

Structured storage is most commonly associated with databases and data warehouses but can be associated with block storage.

A good reference on cloud storage can be found on Ubuntu’s website.
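
The contrast with volume storage can be sketched in Python: object storage is a flat key space with per-object metadata, addressed through an API rather than mounted as a drive. The class, keys, and metadata fields below are invented for illustration.

```python
class ObjectStore:
    """Toy object store: a flat namespace of unique keys, no directories.
    Each object carries data plus arbitrary metadata (e.g., for labeling)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        self._objects[key] = {"data": data, "metadata": metadata}
        return key

    def get(self, key):
        return self._objects[key]["data"]

    def metadata(self, key):
        return self._objects[key]["metadata"]
```

Note that keys like "photos/2021/kitchen.jpg" only look like paths; there is no real directory tree, just unique identifiers, which is why access goes through an API instead of a file system mount.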

77
Q

Structured and unstructured storage pertain to which of the three cloud service models?

A. Platform as a Service (PaaS)
B. Software as a Service (SaaS)
C. DataBase as a Service (DBaaS)
D. Infrastructure as a Service (IaaS)

A

A. Platform as a Service (PaaS)

Explanation:
Each cloud service model uses a different method of storage as shown below:

Platform as a Service (PaaS) uses the terms of structured and unstructured to refer to different storage types.
Infrastructure as a Service (IaaS) uses the terms of volume and object to refer to different storage types.
Software as a Service (SaaS) uses content and file storage and information storage and management to refer to different storage types.

The use of these terms begins with the Cloud Security Alliance, and it would be a good idea to read the CSA guidance document. As of the time this question was written in 2022, the CSA put out version 4. Version 5 is expected soon.

DBaaS is not one of the three cloud service models.

78
Q

A movie and TV streaming company that uses a public Infrastructure as a Service (IaaS) cloud deployment model is looking for a technology solution to improve how effective the service is in serving up their shows to their customers. Which technology utilizes edge servers to facilitate services like this?

A. Raw Device Mapping (RDM)
B. Software Defined Network (SDN)
C. Software Defined Storage (SDS)
D. Content Delivery Network (CDN)

A

D. Content Delivery Network (CDN)

Explanation:
A Content Delivery Network (CDN) improves content delivery by caching copies of frequently viewed content on edge servers closer to the customers. Its major purpose is to accelerate users’ access to web resources that are geographically dispersed. CDNs enable users in remote places to retrieve web data from servers located closer to them than the origin server.

Software Defined Storage (SDS) is a way to allocate storage that separates capacity provisioning, data protection, and data placement from the physical hardware. This makes it easier to allocate storage and to upgrade hardware without impacting the storage presented to software.

Software Defined Networking (SDN) uses software-defined controllers to manage traffic flows on a switched network.

Raw Device Mapping (RDM) uses a separate VMFS volume as a proxy for a raw physical storage device.

79
Q

A marketing company is looking for the best Application Programming Interface (API) to use in their new application. They have been able to narrow it down to SOAP or REpresentational State Transfer (REST). Which of the following statements regarding SOAP and REST is TRUE that would help them make their final decision?

A. REST is typically only used when technical limitations prevent the use of SOAP
B. SOAP supports a wide variety of data formats, including both JSON and XML
C. SOAP does not allow for caching, making it less scalable and having lower performance than REST
D. REST only allows the use of XML-formatted data

A

C. SOAP does not allow for caching, making it less scalable and having lower performance than REST

Explanation:
SOAP does not allow for caching, making it less scalable and having lower performance than REST. Because of this, SOAP is typically used only when there are restrictions that prevent the use of REST in the environment.

REST is more flexible and supports a variety of data formats, including both JSON and XML, while SOAP only allows the use of XML-formatted data.
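
The format difference is easy to see side by side. Below is a sketch of the same hypothetical "get customer" call as a REST JSON body and as a SOAP envelope; the operation name and field are invented, but the envelope namespace is the standard SOAP 1.1 one.

```python
import json
import xml.etree.ElementTree as ET

def rest_body(customer_id):
    """REST: a plain JSON representation; GET responses can be cached."""
    return json.dumps({"customerId": customer_id})

def soap_envelope(customer_id):
    """SOAP: every call wraps an XML Envelope/Body, even a simple read."""
    ns = "http://schemas.xmlsoap.org/soap/envelope/"
    env = ET.Element("{%s}Envelope" % ns)
    body = ET.SubElement(env, "{%s}Body" % ns)
    req = ET.SubElement(body, "GetCustomer")  # hypothetical operation name
    req.set("id", str(customer_id))
    return ET.tostring(env, encoding="unicode")
```

Even for this trivial request, the SOAP message carries envelope and namespace overhead that the REST body does not, which contributes to the performance difference the explanation describes.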

80
Q

Which of the following is a system that provides access to a private network from a less secure one?

A. Container
B. Honeypot
C. Sandbox
D. Bastion Host

A

D. Bastion Host

Explanation:
Bastion hosts are systems designed to provide access to a private network from a less secure one.

A honeypot is a dummy system designed to attract an attacker’s attention and waste their time while allowing defenders to detect and observe the attack.

Sandboxing involves running software in an isolated environment where it can’t cause damage to production systems.

Containers wrap an application and its dependencies in a package to improve portability.

81
Q

Which of the following refers to a cloud customer’s ability to grow or shrink their cloud footprint on demand?

A. Scalability
B. Mobility
C. Elasticity
D. Agility

A

C. Elasticity

Explanation:
Correct answer: Elasticity

Elasticity refers to a system’s ability to grow and shrink on demand.
Scalability refers to its ability to grow as demand increases.
Agility and mobility are not terms used to describe cloud environments.
82
Q

Complete the following sentence with the MOST accurate statement: Cloud environments . . .

A. consist of far fewer systems and servers
B. are generally operated out of one physical location
C. take the level of concern away from the cloud customer and place it onto the cloud provider
D. are built of components that are completely different from those used in a traditional environment

A

C. take the level of concern away from the cloud customer and place it onto the cloud provider

Explanation:
Correct answer: take the level of concern away from the cloud customer and place it onto the cloud provider

While it may seem that a cloud infrastructure is completely different from that of a traditional data center, all the components that exist in a traditional data center are still needed in the cloud. The main difference is that within a cloud environment, responsibility and the level of concern move away from the cloud customer to the cloud provider. Not every concern moves, but this answer is still the best of the four statements. The cloud provider is responsible for the physical data center and its security and, depending on the level of service the customer buys, possibly also for the virtual servers and applications.

One way to look at the CCSP exam is that it is a data center exam whose language has been updated to newer cloud terminology.

If you look at the bigger cloud providers, they have multiple large data centers.

83
Q

It is vital to have an understanding of how data located in cloud storage is being accessed by members of an organization. What should be maintained to preserve visibility and promote monitoring?

A. Classification log scheme
B. Application-specific logs
C. Centralized logs
D. Chain of custody

A

C. Centralized logs

Explanation:
Logging is the process of documenting events or activities that occur against an asset. Logging is crucial for any business since it serves as the primary repository of information about previous events. Security Information and Event Management (SIEM) technology is used to centralize these logs. A SIEM technology enables the collection, analysis, aggregation, correlation, and reporting of suspected security incidents in a centralized manner. SIEM solutions can ingest a variety of different forms of log data from hardware, applications, and data sources. Logging and a SIEM solution operate in tandem to centralize data and make it visible where it is needed.

Application-specific logs would be sent to the centralized log, along with all the others.

Chain of custody is necessary if the logs have been collected as evidence in an investigation. This is not an investigation, simply normal logging conditions.

Logs can certainly be classified, but that only indicates the sensitivity of the data in the logs. It would not reveal information about the visibility of the logs.
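
The centralization idea can be sketched with Python's standard logging module: many application loggers feed one collector, the way application-specific logs feed a SIEM. The handler class and logger names here are invented stand-ins for a real SIEM ingestion endpoint.

```python
import logging

class MemorySIEM(logging.Handler):
    """Toy central collector standing in for a SIEM ingestion endpoint."""

    def __init__(self):
        super().__init__()
        self.events = []

    def emit(self, record):
        # A real SIEM would parse, correlate, and alert; we just aggregate.
        self.events.append((record.name, record.getMessage()))

siem = MemorySIEM()
for app in ("storage-gateway", "web-portal"):   # hypothetical applications
    log = logging.getLogger(app)
    log.addHandler(siem)
    log.setLevel(logging.INFO)

logging.getLogger("storage-gateway").info("object bucket read by u123")
logging.getLogger("web-portal").warning("failed login for u123")
```

Because both applications report to one place, an analyst can correlate the bucket read with the failed login, visibility that per-application logs alone would not provide.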

84
Q

The term IaC relates to which of the following?

A. Confidential Computing
B. ML/AI
C. DevSecOps
D. Blockchain

A

C. DevSecOps

Explanation:
Cloud computing is closely related to many emerging technologies. Some examples include:

Machine Learning and Artificial Intelligence (ML/AI): Machine learning is a subset of AI and includes algorithms that are designed to learn from data and build models to identify trends, perform classifications, and other tasks. Cloud computing is linked to the rise of ML/AI because it provides the computing power needed to train the models used by ML/AI and operate these technologies at scale.
Blockchain: Blockchain technology creates an immutable digital ledger in a decentralized fashion. It is used to support cryptocurrencies, track ownership of assets, and implement various other functions without relying on a centralized authority or single point of failure. Cloud computing is related to blockchain because many of the nodes used to maintain and operate blockchain networks run on cloud computing platforms.
Confidential Computing: While data is commonly encrypted at rest and in transit, it is often decrypted while in use, which creates security concerns. Confidential computing involves the use of trusted execution environments (TEEs) that protect and isolate sensitive data from potential threats while in use.
DevSecOps: DevSecOps is the practice of building security into automated DevOps workflows. DevSecOps can be used to secure cloud-hosted applications. Also, infrastructure as code (IaC) involves automating the configuration of cloud-based systems and servers to reduce errors and improve scalability.
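
The core idea of IaC, declaring desired state and letting tooling converge the environment to it, can be sketched in a few lines. This is a conceptual toy, not any real tool's API; tools like Terraform or Ansible implement the same declare-then-converge pattern.

```python
def apply(desired, actual):
    """One converge pass: make the actual state match the declared config.
    Returns the names of resources that had to change (idempotent: a second
    run on an already-converged state changes nothing)."""
    changes = []
    for name, cfg in desired.items():
        if actual.get(name) != cfg:
            actual[name] = dict(cfg)  # create or correct the resource
            changes.append(name)
    return changes
```

Idempotence is the property that reduces configuration errors: re-running the same declaration is always safe, and drift from the declared state is automatically corrected on the next pass.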
85
Q

Some communication policies are required by law or regulation. What law requires reporting within 72 hours after an incident is discovered?

A. Gramm Leach Bliley Act (GLBA)
B. Sarbanes Oxley (SOX)
C. Health Information Portability and Accountability Act (HIPAA)
D. General Data Protection Regulation (GDPR)

A

D. General Data Protection Regulation (GDPR)

Explanation:
Some post-incident communication policies are mandated by legislation or regulation. GDPR has a very short time frame of 72 hours; the clock starts when an incident involving a breach of personal data is first discovered.

SOX requires the disclosure or reporting of events applying exclusively to financial records. However, it is not a 72-hour window; it is measured in days.

GLBA requires reporting after a privacy breach, but it, too, is measured in days. For both SOX and GLBA, the best advice is to consult a lawyer to ensure correct compliance with reporting.

HIPAA does require notification as well, but again the window is not measured in hours; it is 60 days.

86
Q

When developing a business continuity and disaster recovery (BC/DR) plan, what step should be completed after the scope has been defined?

A. Embed in the user community
B. Test the plan
C. Recovery strategies
D. Business Impact Assessment (BIA)

A

D. Business Impact Assessment (BIA)

Explanation:
After defining the scope, the next step of developing a BC/DR plan is to perform a business impact assessment. This stage determines what should be included in the plan and looks at items such as the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). It will be necessary during this stage to identify critical systems within the environment.

Based on the knowledge gained during the BIA, it is then necessary to develop the recovery strategy. For example, when a failure occurs, will the business fail over to a different region within that cloud provider or to a different cloud provider?

The solution must be tested to ensure that it will work when needed.

At the end of the BC/DR planning process, the plan is embedded in the user community so that everyone who needs to know about it is aware.
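The RPO figures produced by the BIA translate directly into operational checks. A minimal sketch, with hypothetical system names and values, compares each system's backup interval against the RPO the BIA assigned to it:

```python
from datetime import timedelta

# Hypothetical BIA outputs for two systems; names and values are illustrative.
systems = {
    "order-db":  {"rpo": timedelta(minutes=15), "backup_interval": timedelta(minutes=5)},
    "reporting": {"rpo": timedelta(hours=4),    "backup_interval": timedelta(hours=6)},
}

def meets_rpo(system: dict) -> bool:
    # Worst-case data loss equals the gap between backups, so the backup
    # interval must not exceed the RPO agreed on during the BIA.
    return system["backup_interval"] <= system["rpo"]

violations = [name for name, s in systems.items() if not meets_rpo(s)]
```

A recovery strategy that cannot satisfy the BIA's RPO and RTO numbers would show up as a violation here, which is why the BIA has to come before strategy selection.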

87
Q

The global nature of the cloud adds complexity to a cloud customer’s relationships with which of the following?

A. Cloud Service Partners
B. Cloud Service Brokers
C. Regulators
D. Cloud Service Providers

A

C. Regulators

Explanation:
Some of the important roles and responsibilities in cloud computing include:

Cloud Service Provider: The cloud service provider offers cloud services to a third party. They are responsible for operating their infrastructure and meeting service level agreements (SLAs).
Cloud Customer: The cloud customer uses cloud services. They are responsible for the portion of the cloud infrastructure stack under their control.
Cloud Service Partners: Cloud service partners are distinct from the cloud service provider but offer a related service. For example, a cloud service partner may offer add-on security services to secure an organization’s cloud infrastructure.
Cloud Service Brokers: A cloud service broker may combine services from several different cloud providers and customize them into packages that meet a customer’s needs and integrate with their environment.
Regulators: Regulators ensure that organizations — and their cloud infrastructures — are compliant with applicable laws and regulations. The global nature of the cloud can make regulatory and jurisdictional issues more complex.

88
Q

Which organization is responsible for developing the Infinity Paradigm?

A. National Fire Protection Association (NFPA)
B. National Institute of Standards and Technology (NIST)
C. Building Industry Consulting Service International (BICSI)
D. International Data Center Authority (IDCA)

A

D. International Data Center Authority (IDCA)

Explanation:
The International Data Center Authority (IDCA) is responsible for developing the Infinity Paradigm, a framework intended to be used for data center design and operations. It covers aspects of data center design that include location, security, connectivity, and much more.

Building Industry Consulting Service International (BICSI) is a professional association and global organization that specializes in the advancement of Information and Communications Technology (ICT) infrastructure. BICSI provides education, training, certification, and networking opportunities for individuals and companies working in the field of ICT infrastructure.

NFPA stands for the National Fire Protection Association. It is a nonprofit organization in the United States that is dedicated to promoting fire and life safety. NFPA develops and publishes a wide range of codes and standards related to fire protection, electrical safety, building design, hazardous materials, and other areas that impact public safety.

NIST stands for the National Institute of Standards and Technology. It is a non-regulatory agency of the United States Department of Commerce. NIST’s mission is to promote innovation and industrial competitiveness by advancing measurement science, standards, and technology.

89
Q

A cloud architect wants to move all their organization’s physical hardware to the cloud. This includes routers, switches, firewalls, and servers. They are looking for a service that will allow them to manage the operating systems of the servers and all the applications that will be installed on the servers. However, they no longer want to have to manage any physical hardware.

Which type of cloud provider would BEST suit this cloud architect’s needs?

A. Infrastructure as a Service (IaaS)
B. Database as a Service (DBaaS)
C. Platform as a Service (PaaS)
D. Software as a Service (SaaS)

A

A. Infrastructure as a Service (IaaS)

Explanation:
Infrastructure as a Service (IaaS) providers will provide cloud customers with everything they need from a hardware standpoint, including routers, switches, firewalls, and servers. The customer will still be responsible for managing all the software and operating systems but will not need to manage any hardware. IaaS allows a customer to effectively build a virtual data center in the cloud. The actual hardware purchase and maintenance is removed from the customer's point of view, but the customer manages the Operating Systems (OS) and the virtual equivalents of routers, switches, firewalls, etc.

PaaS provides a platform, typically an OS on a server, on which the customer deploys applications. It would not include routers and switches, though. There are also serverless offerings, in which case the customer does not even need to worry about the OS at all.

DBaaS is just a database, as the name implies. There is no configuration of routers or switches. It is generally considered a form of PaaS.

SaaS is the furthest removed from the question. The network equipment (routers, switches, and so on), the OS for the servers, and the administration of the software itself are all under the cloud provider's control. The customer can do nothing other than use the software.
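The split in responsibilities between the models can be sketched as a simple lookup table. The layer names are simplified for illustration; real provider shared-responsibility matrices are more granular:

```python
# Illustrative shared-responsibility mapping; layer names are simplified.
LAYERS = ["hardware", "virtual network", "operating system", "application"]

RESPONSIBILITY = {
    "IaaS": {"hardware": "provider", "virtual network": "customer",
             "operating system": "customer", "application": "customer"},
    "PaaS": {"hardware": "provider", "virtual network": "provider",
             "operating system": "provider", "application": "customer"},
    "SaaS": {"hardware": "provider", "virtual network": "provider",
             "operating system": "provider", "application": "provider"},
}

def customer_managed(model: str) -> list[str]:
    """Layers the customer still manages under a given service model."""
    return [layer for layer in LAYERS if RESPONSIBILITY[model][layer] == "customer"]
```

Under IaaS the customer keeps everything above the hardware, which matches the scenario in the question: no physical equipment, but full control of the OS and applications.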

90
Q

Waheed has been hired by a corporation to improve their information security. Waheed has realized that the corporation definitely needs to upgrade what they are doing to protect their business and their customers. What should Waheed include in a business plan that will enable real-time investigations of security incidents?

A. Security Information & Event Manager (SIEM)
B. Web Application Firewall (WAF)
C. Intrusion Detection System (IDS)
D. Security Operations Center (SOC)

A

D. Security Operations Center (SOC)

Explanation:
By building a Security Operations Center (SOC), corporations gain the ability to proactively detect and prevent cyber attacks. SOCs use a combination of technologies and processes to detect anomalies and threats in real time and investigate security incidents when they occur. This allows security analysts to respond quickly to security incidents and minimize the impact of any breaches.

Web Application Firewall (WAF), Intrusion Detection System (IDS), and Security Information & Event Manager (SIEM) are all tools that the SOC would rely on for its information. The question is about investigations, though, and those require people, so SOC is the more complete answer.

An Intrusion Detection System (IDS) is a security tool or software that monitors network traffic or system activities to detect and respond to potential security threats and attacks. The primary function of an IDS is to identify unauthorized or malicious activities within a network or system and raise alerts or take appropriate actions to mitigate detected threats.

A Web Application Firewall (WAF) is a security technology that monitors and filters incoming and outgoing traffic between a web application and the internet. It operates at the application layer (layer 7) of the Open Systems Interconnection (OSI) model and protects web applications from a variety of attacks such as SQL injection, cross-site scripting (XSS), and other common attacks.

A Security Information and Event Management (SIEM) system is a comprehensive security solution that combines Security Information Management (SIM) and Security Event Management (SEM) functionalities. SIEM systems provide real-time monitoring, analysis, and reporting of security events and incidents across an organization’s IT infrastructure.
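A SIEM-style correlation rule of the kind a SOC analyst would then investigate can be sketched in a few lines. The event fields and threshold here are illustrative, not from any particular product:

```python
from collections import Counter

# Toy SIEM-style correlation rule: alert when one source IP produces
# more than FAILED_LOGIN_THRESHOLD failed logins in a batch of events.
FAILED_LOGIN_THRESHOLD = 3

def failed_login_alerts(events: list[dict]) -> list[str]:
    failures = Counter(e["src_ip"] for e in events if e["action"] == "login_failed")
    return [ip for ip, count in failures.items() if count > FAILED_LOGIN_THRESHOLD]

# Simulated event batch (documentation IP addresses, made-up activity).
events = (
    [{"src_ip": "203.0.113.9", "action": "login_failed"}] * 5
    + [{"src_ip": "198.51.100.2", "action": "login_failed"}]
    + [{"src_ip": "198.51.100.2", "action": "login_ok"}]
)
```

The rule raises the alert; it is the SOC analyst who decides whether five failed logins are an attack or a user with a forgotten password, which is why the human element makes SOC the complete answer.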

91
Q

Dom, the information security manager, has been working with Vicky, the primary developer, on a new application. They have begun the testing phase of their new product. They are testing their new web application through the use of automated tools to simulate what a malicious actor would be able to do. What type of security testing are they doing?

A. Dynamic Application Security Testing (DAST)
B. Penetration testing
C. Vulnerability scanning
D. Static Application Security Testing (SAST)

A

A. Dynamic Application Security Testing (DAST)

Explanation:
Dynamic Application Security Testing (DAST) is a type of security test that uses the running application. That means that the tester is not given the source code. They are attacking a web application like a malicious actor. It is dynamic because the application is in a running condition.

Static Application Security Testing (SAST) is a type of test that is used on the source code. SAST is performed in an offline manner and can be performed as soon as there is source code.

Vulnerability scanning is a test that is run on systems to ensure that they are properly hardened and free of known vulnerabilities.

Penetration testing is a type of test in which the tester attempts to break into systems using the same tools that an attacker would to discover vulnerabilities.
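Because a DAST tool works against the running application, its basic unit of work is "send a crafted request, inspect the live response." A minimal sketch of one such check (the target URL is hypothetical, and real scans require the application owner's permission):

```python
import urllib.parse

# A classic probe payload; DAST tools send many such payloads per parameter.
XSS_PAYLOAD = "<script>alert(1)</script>"

def probe_url(base_url: str, param: str) -> str:
    """Build the attack URL that would be fetched against the live app."""
    return f"{base_url}?{urllib.parse.urlencode({param: XSS_PAYLOAD})}"

def is_reflected(response_body: str) -> bool:
    # If the payload comes back unencoded in the live response, the page
    # may be vulnerable to reflected cross-site scripting (XSS).
    return XSS_PAYLOAD in response_body

# probe_url("https://app.example.test/search", "q") would be fetched with an
# HTTP client, and is_reflected() run on the body of the live response.
```

Note what is absent: no source code. That is the dividing line between DAST and SAST.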

92
Q

Through sustained cooperation with a cloud service provider, the third-party file hosting and sharing platform extends its reach to service areas where it lacks infrastructure. What functional cloud computing role does the third-party file hosting and sharing platform play in this scenario?

A. Cloud Service Broker
B. Cloud Service Partner
C. Cloud Service Provider (CSP)
D. Cloud Service Customer (CSC)

A

B. Cloud Service Partner

Explanation:
A cloud service partner is a third-party provider of cloud-based services (infrastructure, storage and application, and platform services) through the CSP with which it is associated. The third-party cloud service partner makes use of the cloud service provider’s service in this scenario.

The CSC is considered party one. The CSP is considered party two. A Cloud Service Partner, cloud service broker, or an external auditor is considered the third party. And, while we are here, if you see a fourth party, they will be a contractor.

93
Q

At which stage of the incident response process will the IRT work to contain the incident and inform stakeholders?

A. Detect
B. Post-Incident
C. Respond
D. Recover

A

C. Respond

Explanation:
An incident response plan (IRP) should lay out the steps that the incident response team (IRT) should carry out during each step of the incident management process. This process is commonly broken up into several steps, including:

Prepare: During the preparation stage, the organization develops and tests the IRP and forms the IRT.
Detect: Often, detection is performed by the security operations center (SOC), which performs ongoing security monitoring and alerts the IRT if an issue is discovered. Issues may also be raised by users, security researchers, or other third parties.
Respond: At this point, the IRT investigates the incident and develops a remediation strategy. This phase will also involve containing the incident and notifying relevant stakeholders.
Recover: During the recovery phase, the IRT takes steps to restore the organization to a secure state. This could include changing compromised passwords and similar steps. Additionally, the IRT works to address and remediate the underlying cause of the incident to ensure that it is completely fixed.
Post-Incident: After the incident, the IRT should document everything and perform a retrospective to identify potential room for improvement and try to identify and remediate the root cause to stop future incidents from happening.

94
Q

The “CIA triad” is MOST closely related to which of the following cloud considerations?

A. Privacy
B. Governance
C. Regulatory Oversight
D. Security

A

D. Security

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.
95
Q

Which of the following network security controls could be used to ensure that ONLY on-site employees can access corporate cloud resources?

A. Network Security Groups
B. Traffic Inspection
C. Geofencing
D. Zero Trust Network

A

C. Geofencing

Explanation:
Network security controls that are common in cloud environments include:

Network Security Groups: Network security groups (NSGs) limit access to certain resources, such as firewalls or sensitive VMs or databases. This makes it more difficult for an attacker to access these resources during their attacks.
Traffic Inspection: In the cloud, traffic monitoring can be complex since traffic is often sent directly to virtual interfaces. Many cloud environments have traffic mirroring solutions that allow an organization to see and analyze all traffic to its cloud-based resources.
Geofencing: Geofencing limits the locations from which a resource can be accessed. This is a helpful security control in the cloud, which is accessible from anywhere.
Zero Trust Network: Zero trust networks apply the principle of least privilege, where users, applications, systems, etc. are only granted the access and permissions that they need for their jobs. All requests for access to resources are individually evaluated, so an entity can only access those resources for which they have the proper permissions.
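A geofencing check reduces to "map the source address to a location, then compare it against an allowlist." A sketch with stubbed GeoIP results (a real deployment would query a GeoIP service or the cloud provider's access-policy engine):

```python
# Hypothetical policy: only the country where the office sits is allowed.
ALLOWED_COUNTRIES = {"NL"}

# Stubbed GeoIP lookup results (documentation IP addresses), standing in
# for a real GeoIP database or service.
GEOIP = {
    "192.0.2.10": "NL",
    "203.0.113.9": "BR",
}

def allow_request(src_ip: str) -> bool:
    # Unknown addresses resolve to None, which is never in the allowlist,
    # so the check fails closed.
    return GEOIP.get(src_ip) in ALLOWED_COUNTRIES
```

Pairing this with an allowlist of the office's egress IP ranges is how "on-site employees only" is typically enforced in practice.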

96
Q

A corporation follows the Privacy Management Framework (PMF), which replaced the Generally Accepted Privacy Principles (GAPP), as the foundation of their privacy program. Which component of the PMF applies most closely when that corporation is going to use a payment processor for their payment transactions?

A. Monitoring and enforcement
B. Collection and creation
C. Disclosure to third parties
D. Security for privacy

A

C. Disclosure to third parties

Explanation:
There are nine components of the PMF:

Management
Agreement, notice, and communication
Collection and creation
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Data integrity and quality
Monitoring and enforcement

The disclosure to third parties component applies to any scenario where an organization shares personal information with third-party vendors, partners, contractors, or any other external entities. Examples of such scenarios include outsourcing data processing or storage, sharing personal information with a marketing partner, or using a third-party payment processor to process transactions.

The Security for privacy component is aimed at safeguarding personal information against unauthorized access, use, or disclosure. In other words, this component focuses on implementing security measures to protect personal information from data breaches, cyber attacks, or other forms of malicious actions.

The collection and creation component focuses on establishing a framework for the appropriate collection and creation of personal information. This component provides guidance on establishing processes that ensure that individuals’ personal information is collected and created in a transparent, lawful, and ethical manner.

The monitoring and enforcement component focuses on establishing a framework for monitoring compliance with an organization’s privacy policies, laws, and regulations and ensuring that consequences are in place for non-compliance.

97
Q

Carolyn is working for a medium-sized business located in Amsterdam, Netherlands. As a part of the European Union (EU), the Netherlands has a law compliant with the General Data Protection Regulation (GDPR). She is configuring the protection mechanisms for a database that contains data about their customers. The information in that database contains the customer names, addresses, and phone numbers.

This is an example of what type of personal data or personally identifiable information (PII)?

A. Indirect identifier
B. Simple identifier
C. Direct identifier
D. Health identifier

A

C. Direct identifier

Explanation:
Personally Identifiable Information (PII) is broken up into direct and indirect identifiers. Personal data is considered a direct identifier if a single person can be identified from just the information presented. A name by itself is usually considered a direct identifier, and combined with the address and phone number, the data here is definitely a direct identifier.

Indirect identifiers contain information that alone will not get you to the individual. This information is about gender, sex, religious preferences, opinions, and so on.

A simple identifier is just one piece of information as opposed to the combination of name/address/phone number, which would be considered a composite identifier.

A health identifier would be Protected Health Information (PHI).

This topic can be confusing. Further information can be found on Infranet’s website.

98
Q

An information security administrator has implemented Data Loss Prevention (DLP) solutions that are installed on each of the systems that house and store data. This includes any servers, workstations, and mobile devices which hold data. These DLP solutions are used to protect data in which state?

A. Data at rest
B. Data in use
C. Data in motion
D. Data in transit

A

A. Data at rest

Explanation:
To protect data at rest, Data Loss Prevention (DLP) solutions must be deployed on each of the systems that house data, including any servers, workstations, and mobile devices. This is the simplest of DLP solutions, but to be most effective, it may also require network integration. Traditionally, DLP is used to protect data in transit, but deployed on the endpoints themselves, it protects data at rest.

An Information Rights Management (IRM) tool would be good to protect data in use.

Data in motion and data in transit are the same.
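An agent-based, data-at-rest DLP check boils down to scanning the files stored on the host for sensitive patterns. A toy sketch that flags anything shaped like a US Social Security number (real DLP agents use far richer detectors, but the placement is the point: the scan runs where the data rests):

```python
import re
from pathlib import Path

# Simplistic detector for data shaped like a US SSN (illustrative only).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_at_rest(root: Path) -> list[Path]:
    """Walk stored text files under root and flag those containing SSN-like data."""
    flagged = []
    for path in root.rglob("*.txt"):
        if SSN_PATTERN.search(path.read_text(errors="ignore")):
            flagged.append(path)
    return flagged
```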

99
Q

Jonas has been working with the lawyers within his business to ensure that the contract they are accepting with their new cloud provider is acceptable. They are purchasing a Platform as a Service (PaaS) on which their software developers will create server-based virtual machines to build their new software platform. Once the software is created, they will offer Software as a Service (SaaS) to their customers. To ensure that their own business is protected, they need a method of properly removing the customers' data to ensure that it is sanitized when the contracts are over.

What method of sanitizing the data would work?

A. Crypto Erasure
B. Data dispersion
C. Anonymization
D. Degaussing

A

A. Crypto Erasure

Explanation:
There are many types of data sanitization, but not all of them are applicable to cloud environments. For example, physical methods of data destruction, such as incineration or degaussing, are not available to the customer in cloud environments. Crypto erasure, encrypting the data and then destroying the encryption key, is a method that the PaaS customer can use to safely remove their customers' data when they need to.

Data dispersion is something that typically happens when data is stored in the cloud. A file is divided into small pieces, or blocks, similar to how RAID 3, 4, and 5 work. The difference is that the drives that the pieces are stored on are not all in the same server. So, this is how to store data, not how to erase data.

Anonymization is a removal method. However, its purpose is to remove both direct and indirect identifiers. There is nothing in the question about removing Personally Identifiable Information (PII) or privacy, so crypto erasure is the better removal method for this scenario.
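Crypto erasure (also called crypto-shredding) works because encrypted data is unreadable without its key, so destroying the key sanitizes the data wherever the ciphertext physically lives. The sketch below illustrates only the concept; the toy SHA-256 counter-mode keystream is not production cryptography, which would use a vetted cipher such as AES-GCM from an audited library:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter-mode keystream (concept demo only)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

key = secrets.token_bytes(32)
stored = keystream_xor(key, b"customer record")   # only ciphertext is ever stored

recovered = keystream_xor(key, stored)            # readable while the key exists
key = None                                        # "erase" by destroying the key
# With the key gone, `stored` is irrecoverable noise: the data is sanitized
# without ever touching, or even locating, the physical media in the cloud.
```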

100
Q

If an Internet Service Provider (ISP) fails, the customer is responsible for ensuring communication with the Cloud Service Provider (CSP). Which of the following would be the BEST strategy for ensuring that a means of communication with the CSP is always available?

A. Redundant server
B. Redundant topology
C. Redundant path
D. Redundant cloud carrier

A

D. Redundant cloud carrier

Explanation:
The best strategy for ensuring that a means of communication with a cloud vendor is always available when an interruption occurs is a redundant cloud carrier, that is, a redundant ISP. The carrier is what enables access between the customer and the cloud provider. It is an old networking word: "carrier" literally meant that there was power on the wire.

Redundant topology would be the topology within the data center. It could be the customer’s on-premises data center or it could be at the cloud provider’s data center, but it does not cover the connection between those two sites.

Redundant path is the language of the Uptime Institute and its data center tiers. A redundant distribution path is needed to reach Tier III: the paths that the two power and cooling systems take must be separate from each other. So, a redundant path is for power and cooling, not the path between the customer and the cloud.

A redundant server could help at times but not with an ISP failure. The redundant server only helps within the data center if a server fails.

101
Q
A