Pocket Prep 1 Flashcards

1
Q

Sophia has been working with the DevSecOps teams as they are working to deploy new servers into the production environment of the cloud data center. This cloud provider primarily provides Platform as a Service (PaaS) solutions to its customers. What management strategy encompasses this?

A. Deployment management
B. Configuration management
C. Release management
D. Continuity management

A

A. Deployment management

Explanation:
Deployment management is the practice of moving new or changed hardware, software, or any other service into production.

Configuration management is managing the configuration items that work together to deliver a product or service, including the parameters of those products. Once the servers are deployed, configuration work will need to be done, but the question only goes as far as deploying them.

Release management is making new or changed services and features available for use. It does not fit here because all we know is that servers are being deployed; that may eventually lead to new services, but the question does not say so.

Continuity management involves ensuring there is continuity of services in the event of a disaster.

Note: These answer options come from ITIL. The clue is that they are all practices for managing services in the data center.

2
Q

Joan, a cloud architect, wants to ensure that the management plane of her cloud environment is well protected. What security method is the MOST important to implement to accomplish this?

A. Service Level Agreement (SLA)
B. Transport Layer Security (TLS)
C. Intrusion detection systems (IDS)
D. Multi-factor authentication (MFA)

A

D. Multi-factor authentication (MFA)

Explanation:
The management plane is a high-priority target for attackers. It is the connection administrators and operators use to configure and control their cloud environment. It is very important to implement Multi-Factor Authentication (MFA) so that only the individuals who need access are able to gain it. If the plane is protected by a password alone, bad actors will eventually be able to get in. For these connections, even moving from a software token to a hardware token would be wise.

An IDS might alert when the management plane is accessed by a bad actor, but depending on its placement and type, there is a good chance that it will miss the connection. Access logs should be sent to the SIEM, and tracking source IP addresses is a good start toward distinguishing a bad actor from an administrator. More could be configured within a cloud, but MFA is the most important starting point.

TLS is a useful tool for encrypting traffic in transit. It could even be used to protect the management plane (depending on the tools and configuration), but it would not prevent a bad actor from brute forcing a password-only protected management plane.

SLAs are useful for defining the contractual requirements of any cloud environment, but they are only contractual requirements and typically address the delivered services rather than management connections. On their own, they would not stop a bad actor.
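As an aside on the software-token style of MFA mentioned above, the sketch below implements the Time-based One-Time Password (TOTP) algorithm from RFC 6238 using only the Python standard library. The shared secret, time step, and digit count are generic example parameters, not any particular provider's settings.

# Minimal TOTP sketch (RFC 6238) using only the Python standard library.
# The shared secret below is a made-up example value.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // time_step           # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# Both the authenticator (software or hardware token) and the verifier derive
# the same short-lived code from the shared secret, so access requires more
# than a guessable password.
print(totp("JBSWY3DPEHPK3PXP"))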

3
Q

A corporation is looking for a security solution that will provide a way to understand activities within their Infrastructure as a Service (IaaS) environment. They need to have a way to assess all the packets that are traversing their network to ensure that bad actors have not infiltrated the environment.

Which of the following tools would allow the company to discover such bad actors?

A. eXtensible Markup Language (XML) gateway
B. Intrusion Detection System (IDS)
C. Database Integrity Monitor (DIM)
D. Security Information Event Manager (SIEM)

A

D. Security Information Event Manager (SIEM)

Explanation:
SIEM solutions include features such as aggregation, correlation, alerting, reporting, compliance, and dashboards.

An IDS could be the critical tool that feeds logs into the SIEM for detection of the intruder and their activities, but the firewall and server logs may also be needed to understand what has happened.

An XML gateway is a type of firewall that analyzes XML traffic. This could be a critical source of logs to detect the intruder.

DIMs are used to monitor user activity on a database by sitting directly in front of the database. This could also provide logs that help to detect the intruder.

The SIEM takes all of those logs and correlates the events to ensure detection of the bad actors.
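To make the aggregation-and-correlation idea concrete, here is a minimal sketch that joins events from two hypothetical log sources by source IP address, which is the kind of cross-source correlation a SIEM automates. The log records, field names, and alert threshold are invented for illustration.

# Toy correlation of events from two log sources by source IP address.
# All records, field names, and the threshold are invented for illustration.
from collections import Counter

firewall_events = [
    {"src": "203.0.113.7", "action": "DENY"},
    {"src": "203.0.113.7", "action": "DENY"},
    {"src": "198.51.100.4", "action": "ALLOW"},
]
server_events = [
    {"src": "203.0.113.7", "event": "ssh_auth_failure"},
    {"src": "203.0.113.7", "event": "ssh_auth_failure"},
    {"src": "198.51.100.4", "event": "ssh_auth_success"},
]

suspicion = Counter()
for e in firewall_events:
    if e["action"] == "DENY":
        suspicion[e["src"]] += 1
for e in server_events:
    if e["event"] == "ssh_auth_failure":
        suspicion[e["src"]] += 1

# Correlation: the same source showing up in both feeds raises its score.
for src, score in suspicion.items():
    if score >= 3:
        print(f"ALERT: possible intrusion attempt from {src} (score {score})")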

4
Q

Oliver is working with a corporation that has recently experienced an attack by a bad actor against their cloud environment. The bad actor managed to access their account within their public cloud provider of choice. What kind of vulnerability was this?

A. Weak control plane
B. Applistructure failure
C. Infostructure weakness
D. Infrastructure failure

A

A. Weak control plane

Explanation:
The control plane is the interface used to manage the cloud account and its resources, so it should be protected with at least two-factor authentication to make it harder for a bad actor to access the account.

Applistructure refers to the applications deployed in the cloud and the underlying services used to build them. There is no mention of any application failure in the question.

Infostructure refers to the data and the storage that holds it, such as a Storage Area Network (SAN) or Network Attached Storage (NAS). That information is accessible from within the account, but it was the account itself that the bad actor got into.

The infrastructure is the routers, switches, and servers that make up the physical data center. It can also refer to the virtual Data Center (vDC). The vDC is accessible and configurable from within the account if this is Infrastructure as a Service (IaaS). Again, the question says it was the account that was accessed, making the control plane the issue.

5
Q

Bina works for a retail corporation. She has been working with the Information Technology (IT) department to ensure that she brings her security knowledge to their daily operations. A recent vulnerability scan was performed on the Infrastructure as a Service (IaaS) cloud environment. They discovered that there were a number of different types of servers that were in need of a patch.

As they download the available patches from the vendors, they should be checking which of the following?

A. Hash value encrypted with the vendor’s private key to create a digital signature
B. Hash value encrypted with the vendor’s symmetric key to create a digital signature
C. Hash value encrypted with the vendor’s symmetric and private key to create a digital signature
D. Hash value encrypted with the vendor’s public key to create a digital signature

A

A. Hash value encrypted with the vendor’s private key to create a digital signature

Explanation:
It’s very important to ensure that downloaded security patches actually come from the vendor and have not been modified by an attacker. In many cases, vendors provide a hash value that can be used to validate the downloaded patch file; when available, it should be used to confirm that the file matches what the vendor published. The hash should be signed by the vendor with their private key and validated with the vendor’s matching public key.

Symmetric keys are not used to create or validate digital signatures, although they could be used to encrypt the transmission of the patch from the vendor to the customer.
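As a rough sketch of that verification step, assuming the vendor publishes an RSA public key and a detached signature alongside the patch (the file names, padding scheme, and hash algorithm here are assumptions for illustration), the check could look like this using the third-party cryptography package:

# Hedged sketch: verify a vendor-signed patch before installing it.
# File names, padding, and hash algorithm are assumptions for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("vendor_public_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())
with open("patch.bin", "rb") as f:
    patch_data = f.read()
with open("patch.bin.sig", "rb") as f:
    signature = f.read()

try:
    # Verifies that the hash of the patch matches the hash the vendor
    # encrypted (signed) with their private key.
    public_key.verify(signature, patch_data, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: patch is authentic and unmodified.")
except InvalidSignature:
    print("Signature check FAILED: do not install this patch.")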

6
Q

A large organization specializing in market research has a large hybrid cloud with many different security controls. The lead information security manager knows how important it is to monitor each of the controls to ensure the effectiveness of their security controls. Which component of the organization’s security control monitoring is the MOST fundamental?

A. Security Information Event Manager (SIEM)
B. Security Operations Center (SOC)
C. Documentation
D. Vulnerability assessment

A

C. Documentation

Explanation:
Monitoring your security controls should begin with documentation that details the purpose and implementation of each control. There should be policy, process, baseline, and procedure documentation on how to monitor each security control.

A vulnerability assessment is an effective method of determining the effectiveness of your controls, but it is not monitoring of the controls. It is an assessment of the controls.

A SIEM would help to monitor an environment, but it does not ensure that each of the controls is fully monitored.

A Security Operations Center (SOC) is a critical component of security monitoring, but the documentation it relies on is fundamental to how it operates.

7
Q

Edlyn has been architecting the data storage scheme for the hospital where she works. Her concern is that if a drive fails, which happens often, some of the data that the hospital needs may be lost. There is a scheme that allows data to be broken into smaller pieces and then effectively striped across many different servers [similar to a Redundant Array of Inexpensive Disks (RAID)].

What feature is she looking for in the cloud provider’s environments?

A. Storage cluster
B. Data dispersion
C. Erasure coding
D. Instance Isolation

A

B. Data dispersion

Explanation:
Data dispersion is when a customer’s data is distributed (or dispersed) across many drives in different servers. This is similar to how RAID 0 works within a single server.

If parity is added, then it is called erasure coding. Since there is no mention of parity in the question, the best answer here is data dispersion.

A storage cluster is a group of many servers that work together to provide data storage. In a tightly coupled architecture, the devices are connected to the same backplane.

Instance isolation is when a virtual machine is isolated for some reason from the rest of the devices in the virtual environment, often using firewalls to create the boundary.
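A minimal sketch of the striping idea follows: the data is broken into fixed-size chunks, spread round-robin across hypothetical storage nodes, and given a simple XOR parity chunk so that any single lost chunk can be rebuilt. The node names and chunk size are invented; the parity part corresponds to the erasure-coding variant distinguished above.

# Stripe data across nodes, RAID-style, with one XOR parity chunk.
# Node names and chunk size are invented for illustration.
def stripe(data: bytes, nodes: int = 3, chunk_size: int = 4) -> dict:
    padded_len = ((len(data) + chunk_size - 1) // chunk_size) * chunk_size
    data = data.ljust(padded_len, b"\x00")
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Round-robin placement of chunks across the data nodes (data dispersion).
    placement = {f"node-{i}": [] for i in range(nodes)}
    for idx, chunk in enumerate(chunks):
        placement[f"node-{idx % nodes}"].append(chunk)

    # XOR parity over all chunks (the erasure-coding addition): if one chunk
    # is lost, XOR-ing the parity with the surviving chunks rebuilds it.
    parity = bytes(chunk_size)
    for chunk in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    placement["parity-node"] = [parity]
    return placement

print(stripe(b"patient record 12345"))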

8
Q

Which of the following is a NIST-defined method of media sanitization that would ONLY be possible in a private cloud environment?

A. Destroy
B. Purge
C. Clear
D. Wipe

A

A. Destroy

Explanation:
When data is no longer needed, it should be disposed of using an approved and appropriate mechanism. NIST SP 800-88, Guidelines for Media Sanitization, defines three levels of data destruction:

Clear: Clearing is the least secure method of data destruction and involves mechanisms such as deleting files and emptying the Recycle Bin. The files still exist on the storage media but are no longer visible through the file system. This form of data destruction is inappropriate for sensitive information.
Purge: Purging destroys data by overwriting it with random or dummy data or performing cryptographic erasure (cryptoshredding). Often, purging is the only available option for sensitive data stored in the cloud, since an organization doesn’t have the ability to physically destroy the disks where their data is stored. However, in some cases, data can be recovered from media where sensitive data has just been overwritten with other data.
Destroy: Destroying damages the physical media in a way that makes it unusable and the data on it unreadable. The media could be pulverized, incinerated, shredded, dipped in acid, or subjected to similar methods. Destroying media is only possible with media that the company owns, such as systems in an on-prem data center or private cloud.

Wipe is not a NIST-defined method of media sanitization.
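The cryptographic erasure (cryptoshredding) option mentioned under Purge can be illustrated with a short sketch: the data is only ever stored encrypted, so destroying the key leaves the ciphertext permanently unreadable even if it still sits on the provider's disks. This uses the Fernet recipe from the third-party cryptography package, with deliberately simplified key handling.

# Cryptoshredding sketch: destroy the key, and the ciphertext becomes useless.
# Uses Fernet from the third-party 'cryptography' package; key handling is
# deliberately simplified (a real deployment would keep the key in a KMS/HSM).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive customer record")

# Normal operation: the key decrypts the stored ciphertext.
print(Fernet(key).decrypt(ciphertext))

# "Purge" via cryptographic erasure: discard every copy of the key.
key = None
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # any other key fails
except InvalidToken:
    print("Cryptographically erased: ciphertext is unreadable without the key.")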

9
Q

The Disaster Recovery (DR) team is working with Greta, the information security manager, to determine if they have taken into consideration all the right security concerns for their company. They are located in Frankfurt, Germany, and know they must be in compliance with a variety of laws as a bank.

Which of the following is the MAIN concern for using a DR solution in the cloud?

A. The cost and timeline for recovery
B. The number of individuals who have access to the data
C. The organizations that share the same cloud environment
D. The location where the data is stored

A

D. The location where the data is stored

Explanation:
It’s important to take into consideration the laws the corporation must comply with [e.g., the German data protection law that supplements the European Union (EU) General Data Protection Regulation (GDPR)]. GDPR restricts storing or transferring personal data outside the EU except to a few countries with comparable protections, such as Switzerland and Argentina, and the German and banking laws are likely even more restrictive. With large providers such as Amazon Web Services (AWS), it is possible to specify the location where data will be stored.

The cost and timeline for recovery are something to consider, but the legal requirement on data location is the bigger initial concern, at least until the specific systems are analyzed.

The number of individuals who have access is always a concern, but that is something that can be configured once the systems are set up.

As with most exam questions, it is likely that the question centers on a public cloud provider since it does not specify otherwise. Other organizations will always share that cloud, and it is the cloud provider’s job to isolate the tenants, although that could turn out to be an issue later.

10
Q

An administrator is moving an application from their current cloud provider to a new cloud provider. Which of the following gives them the ability to do this?

A. Interoperability
B. Multi-tenancy
C. Rapid elasticity
D. Portability

A

D. Portability

Explanation:
The ability to move an application between multiple cloud providers is known as cloud portability. To port is to move without having to recreate or reenter the data.

Interoperability is when data can be used by two different systems. For example, a PDF created on a Mac can be read on a Windows-based system.

Rapid elasticity refers to the ability to quickly (or rapidly) expand and contract to match the needs of the user/application/system. The resources such as CPU, memory, or storage can be increased as needed by the users and decreased when they are no longer needed.

Multi-tenancy is always present in cloud environments. In a public cloud, the tenants are different customers; in a private cloud, they are different departments or projects. The hypervisor has the responsibility of isolating tenants from each other.

11
Q

Jonas is working for a corporation as an information security manager. He is working with his team to determine the risk associated with moving a critical server to the cloud. This particular server is used to store data required by Research & Development (R&D). They are working to determine the risk of having this data in the cloud versus having it at the corporate data center. They are concerned with not having access when they need it versus their concern that their reputation could be damaged if they do not have the data when needed.

What type of risk assessment would work the best for this assessment?

A. Bow tie assessment
B. Fault tree analysis
C. Quantitative risk assessment
D. Qualitative risk assessment

A

D. Qualitative risk assessment

Explanation:
A qualitative assessment is better because the concern is reputational.

Reputation is hard to put a financial value on, which makes a quantitative risk assessment difficult, although not impossible.

Fault tree analysis is used to analyze failures and their effects on a system. A bow tie assessment helps visualize proactive (preventive) controls on one side of an event and reactive controls on the other. The question involves neither failures and their effects nor proactive and reactive risk management techniques.
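For contrast, a quantitative assessment would put monetary figures on the risk using the standard Single Loss Expectancy (SLE) and Annualized Loss Expectancy (ALE) formulas. A minimal sketch with invented numbers:

# Quantitative risk sketch: SLE = asset value x exposure factor,
# ALE = SLE x annualized rate of occurrence. All figures are invented.
asset_value = 500_000              # value of the R&D data set, in dollars
exposure_factor = 0.40             # fraction of the value lost per incident
annual_rate_of_occurrence = 0.25   # expected incidents per year (1 in 4 years)

sle = asset_value * exposure_factor
ale = sle * annual_rate_of_occurrence
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f} per year")
# Reputational damage has no obvious dollar value, which is why the
# qualitative approach fits this scenario better.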

12
Q

Communications with which of the following are MOST likely to be characterized by one party sending out requirements and the other sending proof that they have met those requirements?

A. Vendors
B. Regulators
C. Customers
D. Partners

A

B. Regulators

Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:

Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect the brand image.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.

Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.

13
Q

Which of the following is also referred to as a continuity of operations plan (COOP)?

A. DIA
B. BCP
C. DRP
D. BIA

A

B. BCP

Explanation:
A business continuity plan (BCP) sustains operations during a disruptive event, such as a natural disaster or network outage. It can also be called a continuity of operations plan (COOP).

A disaster recovery plan (DRP) works to restore the organization to normal operations after such an event has occurred.

What needs to be included in a business continuity plan is determined by a business impact analysis (BIA), which identifies what is necessary for the business to function versus what is merely “nice to have.”

14
Q

Belle works for a medium-sized manufacturing organization that has been moving into the cloud. They have a significant number of Platform as a Service (PaaS) solutions from a major cloud provider. They have also moved some of their functionality to a couple of different Software as a Service (SaaS) providers. She is looking for someone to help bring order to the chaos that this has caused.

Which type of person/company does she need to help bring order to this variety of different providers?

A. Cloud service auditor
B. Cloud service operations manager
C. Cloud service business manager
D. Cloud service integrator

A

D. Cloud service integrator

Explanation:
A cloud service integrator, also known as a cloud services integrator or cloud solutions integrator, is a specialized entity or organization that helps businesses integrate and manage various cloud services from multiple providers into a cohesive and unified solution. They act as intermediaries between cloud service providers and their clients, assisting in the selection, configuration, implementation, and ongoing management of cloud services.

A cloud service auditor, also known as a cloud auditor or cloud service assessment provider, is an independent entity or organization that conducts audits and assessments of cloud service providers to evaluate their compliance, security, and overall service quality. The primary role of a cloud service auditor is to provide assurance to clients or customers that the cloud service provider adheres to industry standards, best practices, and regulatory requirements.

A cloud service business manager is a professional responsible for overseeing the strategic planning, implementation, and management of cloud services within an organization. Their primary role is to align cloud services with the organization’s business objectives and ensure that they deliver value, efficiency, and innovation.

A cloud service operations manager is a professional responsible for managing and overseeing the day-to-day operations of cloud services within an organization. They ensure that cloud services are delivered efficiently, securely, and in line with the organization’s objectives and Service-Level Agreements (SLAs).

15
Q

Which of the following is NOT one of the three main audit mechanisms for cloud environments?

A. Access Controls
B. Log Collection
C. Correlation
D. Packet Capture

A

A. Access Controls

Explanation:
Three essential audit mechanisms in cloud environments include:

Log Collection: Log files contain useful information about events that can be used for auditing and threat detection. In cloud environments, it is important to identify useful log files and collect this information for analysis. However, data overload is a common issue with log management, so it is important to collect only what is necessary and useful.
Correlation: Individual log files provide a partial picture of what is going on in a system. Correlation looks at relationships between multiple log files and events to identify potential trends or anomalies that could point to a security incident.
Packet Capture: Packet capture tools collect the traffic flowing over a network. This is often only possible in the cloud in an IaaS environment or using a vendor-provided network mirroring capability.

Access controls are important but not one of the three core audit mechanisms in cloud environments.
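As an illustration of the packet capture mechanism listed above, here is a minimal sketch using the third-party scapy library, one common way to capture and summarize traffic on an IaaS instance where you control the network stack. The BPF filter and packet count are example values, and capturing requires administrative privileges.

# Minimal packet capture sketch using the third-party scapy library.
# Requires root/administrator privileges; filter and count are example values.
from scapy.all import sniff

def show(pkt):
    # Print a one-line summary of each captured packet.
    print(pkt.summary())

# Capture 20 TCP packets and summarize each one.
sniff(filter="tcp", prn=show, count=20)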

16
Q

Organizations wishing to process payment card data in the cloud need to make sure that their CSP’s infrastructure is compliant with which of the following?

A. PCI DSS
B. G-Cloud
C. FedRAMP
D. ISO/IEC 27017

A

A. PCI DSS

Explanation:
Cloud service providers may have their environments verified against certain standards, including:

ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO/IEC 27017 describes how the information security management systems and controls from ISO/IEC 27001 and 27002 should be applied in cloud environments, and ISO/IEC 27018 describes how PII should be protected in the cloud.
PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources.
17
Q

At which phase of the SDLC are the software’s default configurations reviewed and hardened?

A. Operations and Maintenance
B. Testing
C. Development
D. Deployment

A

D. Deployment

Explanation:
The Software Development Lifecycle (SDLC) describes the main phases of software development from initial planning to end-of-life. While definitions of the phases differ, one commonly used description includes these phases:

Requirements: During the requirements phase, the team identifies the software's role and the applicable requirements. This includes business, functional, and security requirements.
Design: During this phase, the team creates a plan for the software that fulfills previously identified requirements. Often, this is an iterative process as the design moves from high-level plans to specific ones. Also, the team may develop test cases during this phase to verify the software against requirements.
Development: This phase is when the software is written. It includes everything up to the actual build of the software, and unit testing should be performed regularly through the development phase to verify that individual components meet requirements.
Testing: After the software has been built, it undergoes more extensive testing. This should verify the software against all test cases and ensure that they map back to and fulfill all of the software’s requirements.
Deployment: During the deployment phase, the software moves from development to release. During this phase, the default configurations of the software are defined and reviewed to ensure that they are secure and hardened against potential attacks.
Operations and Maintenance (O&M): The O&M phase covers the software from release to end-of-life. During O&M, the software should undergo regular monitoring, testing, etc., to ensure that it remains secure and fit for purpose.
18
Q

Which of the following is a technique used to improve the resiliency of data?

A. Data labeling
B. Data dispersion
C. Data flow diagram
D. Data mapping

A

B. Data dispersion

Explanation:
Data dispersion is when data is distributed across multiple locations to improve resiliency. Overlapping coverage makes it possible to reconstruct data if a portion of it is lost.

A data flow diagram (DFD) maps how data flows between an organization’s various locations and applications. This helps to maintain data visibility and implement effective access controls and regulatory compliance.

Data mapping identifies data requiring protection within an organization. This helps to ensure that the data is properly protected wherever it is used.

Data labels contain metadata describing important features of the data. For example, labels could include information about ownership, classification, limitations on use or distribution, and when the data was created and should be disposed of.

19
Q

Data discovery is a vital tool used within businesses today. It is useful for big data, and it is part of Data Loss Prevention (DLP) tools. If a team has been looking for patterns and trends that can be used to support business decisions, which step of the data discovery process are they in?

A. Data exploration
B. Data analysis
C. Data preparation
D. Data profiling

A

A. Data exploration

Explanation:
The data discovery process typically involves several steps, including:

Data profiling: This involves analyzing the structure, content, and quality of the data to identify any errors, inconsistencies, or anomalies.
Data exploration: This involves visualizing and analyzing the data to identify patterns, trends, and relationships that can be used to support business decisions.
Data preparation: This involves cleaning, transforming, and integrating the data into a usable format for analysis.
Data analysis: This involves applying statistical, machine learning, or other techniques to the data to identify insights and patterns.
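A minimal sketch of the profiling and exploration steps using the third-party pandas library (the data set and column names are invented): profiling summarizes structure, content, and quality, while exploration looks for the patterns and trends the question describes.

# Data profiling and exploration sketch with pandas.
# The data set and column names are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "region":  ["north", "south", "north", "west", "south"],
    "revenue": [120, 95, 130, None, 110],
})

# Profiling: structure, content, and quality (types, missing values, stats).
print(df.dtypes)
print(df.isnull().sum())
print(df.describe())

# Exploration: look for patterns and trends that support business decisions,
# e.g. average revenue per region.
print(df.groupby("region")["revenue"].mean().sort_values(ascending=False))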
20
Q

Jon is working for a new corporation. He was hired because of his skill at building and maintaining large data centers. In which phase of implementing a cloud data center should security first be considered?

A. Testing
B. Design
C. Maintenance
D. Implementation

A

B. Design

Explanation:
Security is extremely important to consider when implementing a cloud data center. Because of that importance, it should be taken into account in the design phase so that it does not have to be bolted on as an afterthought later. Since the question asks where security should first be considered, design is the correct answer.

Security remains a consistent consideration throughout implementation, testing, and maintenance of anything being created, from software to a data center.