Pocket Prep 19 Flashcards

1
Q

Dawson is an information security manager for a Fortune 500 company. He and his team have been working on revising their data governance strategy and the resulting policy. They have decided that they will need to deploy more Data Loss Prevention systems to inspect data on their file systems. They have been experiencing small breaches of data, and they are looking for the source.

What phase of the cloud data lifecycle are they in?

A. Store
B. Use
C. Archive
D. Share

A

A. Store

Explanation:
Since the data is sitting on a file server, it is in the Store phase. Archival is a type of storage, but nothing in the question points to archiving, so Store fits the scenario better.

They might be losing control of data when it is shared, but a DLP system inspecting the file systems is examining data at rest, not data in transit. Traditionally, DLP systems only helped us when data was in transit, but that is no longer the case.

The data breach may be caused by some action a user is taking when they are in the use phase, but again, the DLP system is inspecting the file system, which is data at rest.

2
Q

Which of the following common contractual terms is MOST related to customers’ efforts to avoid vendor lock-in?

A. Compliance
B. Litigation
C. Right to Audit
D. Access to Cloud/Data

A

D. Access to Cloud/Data

Explanation:
A contract between a customer and a vendor can have various terms. Some of the most common include:

Right to Audit: CSPs rarely allow customers to perform their own audits, but contracts commonly include acceptance of a third-party audit in the form of a SOC 2 or ISO 27001 certification.
Metrics: The contract may define metrics used to measure the service provided and assess compliance with service level agreements (SLAs).
Definitions: Contracts will define various relevant terms (security, privacy, breach notification requirements, etc.) to ensure a common understanding between the two parties.
Termination: The contract will define the terms by which it may be ended, including failure to provide service, failure to pay, a set duration, or with a certain amount of notice.
Litigation: Contracts may include litigation terms such as requiring arbitration rather than a trial in court.
Assurance: Assurance requirements set expectations for both parties. For example, the provider may be required to provide an annual SOC 2 audit report to demonstrate the effectiveness of its controls.
Compliance: Cloud providers will need to have controls in place and undergo audits to ensure that their systems meet the compliance requirements of regulations and standards that apply to their customers.
Access to Cloud/Data: Contracts may ensure access to services and data to protect a customer against vendor lock-in.
3
Q

A corporation has systems that apply the principles of data science to uncover useful information from the data that enables them to make better business decisions. It is using which of the following?

A. Artificial intelligence
B. Quantum computing
C. Blockchain
D. Machine learning

A

D. Machine learning

Explanation:
Machine Learning (ML) applies the logic of data science to uncover information hidden within the massive quantities of data that corporations collect today. ML can be used to analyze data to confirm a hypothesis or to uncover information without a preconceived one.

ML is a subset of Artificial Intelligence (AI). AI seeks to mimic human thought processes. Arguably, we have only achieved narrow AI today. AI is not the correct answer because there is nothing in the question about mimicking human brain capability.

Blockchain is a technology that creates an unalterable record of transactions.

Quantum computing uses a different physical structure for computation. Instead of processing electrical bits like current computers, quantum computers process multidimensional quantum bits, or qubits.

4
Q

An information security manager has been working with the Security Operations Center (SOC) to prepare plans and put processes in place that will allow the impact of something like ransomware to be minimized if/when it does occur. What type of management process is this engineer involved in?

A. Incident management
B. Problem management
C. Change management
D. Deployment management

A

A. Incident management

Explanation:
Any event that causes disruptions within an organization is known as an incident; this includes security events as well. Processes and procedures put in place to limit the effects of these incidents are known as incident management.

Problem management includes the processes that allow an organization to get to the root cause of incidents that continue to happen.

Deployment management involves the processes of adding products to the production environment.

Change management is about managing alterations that need to be made to the production environment.

Each of these processes is defined within ITIL and IT Service Management (ITSM), per ISO/IEC 20000.

5
Q

Nicole has been evaluating a potential Cloud Service Provider (CSP). She has been looking at the requirements needed in the contract, what kind of ongoing monitoring they will need to put in place, what audits and certifications the CSP has been through, and their exit strategy. What has she been doing?

A. Vendor risk management
B. Dynamic software management
C. Due process
D. Verified secure software

A

A. Vendor risk management

Explanation:
Vendor risk management involves all of the items listed in the question: it asks how risky it is to use this CSP and what can be done to satisfy the cloud customer’s needs.

Due process is a legal principle that ensures fair treatment and protection of individual rights in legal proceedings, including the right to a fair trial. It refers to the set of procedures and safeguards that must be followed by the government, or any entity with authority, when depriving a person of life, liberty, or property.

Dynamic software management, also known as dynamic application management, refers to the process of managing and controlling software applications in a dynamic and flexible manner. It involves the deployment, configuration, monitoring, and updating of software applications to ensure optimal performance, security, and efficiency.

Verified secure software refers to software that has undergone rigorous testing and verification processes to ensure its security and reliability. It involves the use of formal verification methods, code analysis, and testing techniques to detect and eliminate vulnerabilities and weaknesses in the software.

6
Q

An organization has implemented a new client-server application. The security and compliance officer has been tasked with the responsibility of ensuring that the foundations for all security actions are covered in documentation by setting purpose, scope, roles, and responsibilities. What control is being described?

A. Transport Layer Security (TLS)
B. Policy and baselines
C. Guidelines
D. Procedures and guidelines

A

B. Policy and baselines

Explanation:
The question is asking about documentation. Policies and baselines are a critical start to the controls needed to ensure systems are protected properly. By defining the purpose, scope, roles, and responsibilities of all security actions, policies and baselines establish a codified framework for those actions.

Guidelines are documentation, but they only serve as suggestions and are not where purpose, scope, roles, and responsibilities are set.

Procedures and guidelines should be defined, but this does not get us to purpose and scope. Procedures are step-by-step instructions on how to do something.

TLS should be part of what is defined within the baselines to fulfill the policy requirement. But again, the question is asking about documentation. TLS is the technology.

7
Q

When Alastair decided to use a Software as a Service (SaaS) provider, he inquired into how the data would be handled. He was told that pieces of his file are stored on different servers throughout a data center (DC). What technology is the cloud provider describing?

A. Data dispersion
B. Data mining
C. Neural networks
D. Bit splitting

A

A. Data dispersion

Explanation:
Data dispersion takes a file and breaks it into many pieces, sometimes called fragments, shards, or chunks. These pieces are then stored on different servers/storage nodes.
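The splitting-and-placement idea can be sketched in a few lines. This is a toy illustration only, not any CSP's actual implementation; the chunk size, node names, and round-robin placement are arbitrary choices, and real systems add erasure coding and redundancy.

```python
# Toy sketch of data dispersion: split a byte string into fixed-size
# chunks and round-robin them across storage nodes. Chunk size and node
# names are arbitrary illustrative choices, not a real CSP's design.

def disperse(data: bytes, chunk_size: int, nodes: list[str]) -> dict[str, list[bytes]]:
    """Round-robin the chunks of `data` across the given storage nodes."""
    placement: dict[str, list[bytes]] = {node: [] for node in nodes}
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for index, chunk in enumerate(chunks):
        placement[nodes[index % len(nodes)]].append(chunk)
    return placement

def reassemble(placement: dict[str, list[bytes]], nodes: list[str]) -> bytes:
    """Walk the nodes in round-robin order to rebuild the original file."""
    result = bytearray()
    position = 0
    while True:
        node = nodes[position % len(nodes)]
        queue_index = position // len(nodes)
        if queue_index >= len(placement[node]):
            break  # ran out of chunks; the file is complete
        result += placement[node][queue_index]
        position += 1
    return bytes(result)
```

For example, dispersing `b"abcdefghij"` in 3-byte chunks across three nodes places `abc` and `j` on the first node, yet `reassemble` still recovers the original bytes.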

Bit splitting is a related technology that also spreads data throughout a data center, but it operates at the bit level rather than on larger pieces, fragments, shards, or chunks.

Neural networks are found within machine learning. They are the layers of decision points that a computer attempting to emulate the human brain passes through to make a decision about something.

Data mining is searching for useful information within databases or other storage mechanisms.

8
Q

Which of the following is a seven-step threat model that views things from the attacker’s perspective?

A. ATASM
B. PASTA
C. DREAD
D. STRIDE

A

B. PASTA

Explanation:
Several different threat models can be used in the cloud. Common examples include:

STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization’s attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that tries to look at infrastructure and applications from the viewpoint of an attacker.
9
Q

Your company is looking for a way to ensure that their most critical servers are online when needed. They are exploring the options that their Platform as a Service (PaaS) cloud provider can offer them. The one that they are most interested in has the highest level of availability possible. After a cost-benefit analysis based on their threat assessment, they think that this will be the best option. The cloud provider describes the option as a grouping of resources with a coordinating software agent that facilitates communication, resource sharing, and routing of tasks.

What term matches this option?

A. Storage controller
B. Server redundancy
C. Security group
D. Server cluster

A

D. Server cluster

Explanation:
Server clusters are a collection of resources linked together by a software agent that enables communication, resource sharing, and task routing. Server clusters are considered active-active since they include at least two servers (and any other needed resources) that are both active at the same time.

Server redundancy is usually considered active-passive. Only one server is active at a time. The second waits for a failure to occur; then, it will take over.

Storage controllers are used for storage area networks. The servers in the question could be storage servers, but more likely they host the applications that users and/or customers require. Therefore, server clustering is the correct answer.

Security groups are effectively virtualized local area networks protected by a firewall.
10
Q

Containerization is an example of which of the following?

A. Serverless
B. Microservices
C. Application virtualization
D. Sandboxing

A

C. Application virtualization

Explanation:
Application virtualization creates a virtual interface between an application and the underlying operating system, making it possible to run the same app in various environments. One way to accomplish this is containerization, which combines an application and all of its dependencies into a container that can be run on an OS running the containerization software (Docker, etc.). Microservices and containerized applications commonly require orchestration solutions such as Kubernetes to manage resources and ensure that updates are properly applied.

Sandboxing is when applications are run in an isolated environment, often without access to the Internet or other external systems. Sandboxing can be used for testing application code without placing the rest of the environment at risk or evaluating whether a piece of software contains malicious functionality.

Serverless applications are hosted in a Platform as a Service (PaaS) cloud environment, where management of the underlying servers and infrastructure is the responsibility of the cloud provider, not the cloud customer.

11
Q

A cloud architect is designing a Disaster Recovery (DR) solution for the bank that they work at. For their most critical server, they have determined that it can only be offline at any point in time for no more than 10 minutes, and they cannot lose more than 2 seconds worth of data.

When choosing whether they should fail over to another region within their current cloud provider or to a different cloud provider, they need to base that decision mainly on which of the following?

A. The Recovery Point Objective (RPO)
B. The Maximum Tolerable Downtime (MTD)
C. The Recovery Service Level (RSL)
D. The Recovery Time Objective (RTO)

A

B. The Maximum Tolerable Downtime (MTD)

Explanation:
The MTD is the maximum amount of time that a server can be offline. There are a variety of considerations between the two options, such as: 1) Will it be possible to fail over to another region, or will it also be offline? and 2) How long will it take to fail over to the other provider? There are other considerations for choosing the best option, such as a cost/benefit analysis, but that is not an option within this question.

The RPO is how much data they can lose; in this question, it is two seconds’ worth. Since the question is not asking which data backup service to choose, that is not the right answer.

The RSL is the percentage of functionality that must be available at the DR alternative. That would be a consideration, but the question does not go far enough to indicate that we are talking about the RSL.

The RTO is the time the administrators have to do the work of switching services to the other region or provider. It is shorter than the MTD, but it is not what the question is asking about.
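The decision logic boils down to simple arithmetic: each failover option's estimated switchover time and replication lag must fit within the scenario's 10-minute MTD and 2-second RPO. The option names and timing figures below are hypothetical, purely for illustration.

```python
# Hypothetical failover options checked against the scenario's
# constraints: MTD of 10 minutes and RPO of 2 seconds. The option names
# and estimated timings are illustrative, not from the question.

MTD_SECONDS = 10 * 60   # maximum tolerable downtime
RPO_SECONDS = 2         # maximum tolerable data loss

options = {
    "same-provider, other region": {"failover_seconds": 240, "replication_lag_seconds": 1},
    "different provider":          {"failover_seconds": 900, "replication_lag_seconds": 30},
}

def acceptable(option: dict) -> bool:
    """An option is viable only if it meets both the MTD and the RPO."""
    return (option["failover_seconds"] <= MTD_SECONDS
            and option["replication_lag_seconds"] <= RPO_SECONDS)

viable = [name for name, opt in options.items() if acceptable(opt)]
print(viable)  # only the same-provider region fits both constraints here
```

With these made-up numbers, only the same-provider region passes: the cross-provider option blows both the 10-minute MTD and the 2-second RPO.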

12
Q

Which of the following blockchain types requires permission to join but can be open and utilized by a group of different organizations working together?

A. Private
B. Permissioned
C. Public
D. Consortium

A

D. Consortium

Explanation:
Consortium blockchains are a hybrid of public and private blockchains. They are operated by a consortium or a group of organizations that have a shared interest in a particular industry or use case. Consortium blockchains provide a controlled and permissioned environment while still allowing multiple entities to participate in the consensus and decision-making process.

Public blockchains, such as Bitcoin and Ethereum, are open to anyone and allow anyone to participate in the network, verify transactions, and create new blocks. They are decentralized and provide a high level of transparency and security. Public blockchains use consensus mechanisms, such as Proof of Work (PoW) or Proof of Stake (PoS), to validate transactions and secure the network.

Private blockchains are restricted to a specific group of participants who are granted access and permission to the network. They are typically used within organizations or consortia where participants trust each other and require more control over the network. Private blockchains offer higher transaction speeds and privacy but sacrifice decentralization compared to public blockchains.

Permissioned blockchains require users to have permission to join and participate in the network. They are typically used in enterprise settings where access control and governance are critical. Permissioned blockchains offer faster transaction speeds and are more scalable than public blockchains, but they sacrifice some decentralization and censorship resistance.

13
Q

Olivia, an information security manager, is working on the Disaster Recovery (DR) team for a medium-sized government contractor. They provide a service for the government that has a requirement of being highly available. Which cloud-based strategy can provide the fastest Recovery Time Objective (RTO) for a critical application in the event of a disaster?

A. Creating regular backups of the application and data to an on-premises storage system
B. Replicating the application and data to multiple geographically dispersed regions within a cloud provider’s infrastructure
C. Implementing a hybrid cloud model with a secondary data center for failover and recovery
D. Leveraging a cloud provider’s infrastructure for real-time replication and failover of the application and data

A

D. Leveraging a cloud provider’s infrastructure for real-time replication and failover of the application and data

Explanation:
Leveraging the cloud provider’s infrastructure with real-time replication allows for immediate failover in case of a disaster. With real-time replication, the application and data are continuously synchronized between primary and secondary environments, ensuring minimal data loss and the ability to quickly switch to the secondary environment for seamless operation.

Regular backups are always a good idea, and an even better idea is to test those backups. However, the question is about the speed of the recovery work. If the question were about the Recovery Point Objective (RPO), then the data backup strategy would be critical to look at.

A secondary data center is an expensive option, especially when we are trying to leverage the cloud.

Replicating the application and data to multiple geographically dispersed regions is the next-best answer. However, the question does not give specifics that drive us to that answer, so the more generic “leveraging a cloud provider’s infrastructure” is a better answer.
14
Q

Which cloud service role negotiates relationships between cloud customers and cloud providers?

A. Cloud service partner
B. Cloud service broker
C. Cloud auditor
D. Cloud service user

A

B. Cloud service broker

Explanation:
The cloud service broker is responsible for negotiating relationships between the customer and the provider. They would be considered independent of both.

Cloud service partners are defined in ISO/IEC 17788 as a party that is engaged in support of, or auxiliary to, either the cloud service customer or the cloud service provider.

The cloud auditor is defined in ISO/IEC 17788 as a cloud service partner responsible for conducting an audit of the provision and use of cloud services.

The cloud auditor and cloud service broker would both be considered cloud service partners; partner is the more generic role.

The cloud service user is defined in ISO/IEC 17788 as a natural person, or an entity acting on their behalf, associated with the cloud service customer.

15
Q

When enforcing OS baselines, which of the following is LEAST likely to be covered?

A. Data retention
B. Approved protocols
C. Approved access methods
D. Compliance requirements

A

A. Data retention

Explanation:
OS baselines establish and enforce known-good system configurations and focus on ensuring least privilege and other OS and application security best practices. Each configuration option should map to a risk mitigation (a security control objective), and security objectives often address compliance requirements. Reaching an acceptable level of risk could require certain protocols, such as telnet or ICMP echo (ping), to be disabled within the OS.

Data retention and other data-specific requirements are not commonly part of an OS baseline.
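One way to picture baseline enforcement is as a diff between a known-good configuration and a host's current state. The setting names and values below are hypothetical examples, not taken from any real hardening standard.

```python
# Toy baseline check: compare a host's current settings against a
# known-good OS baseline and report deviations. The setting names are
# hypothetical examples, not a real hardening standard.

baseline = {
    "telnet_enabled": False,       # insecure protocol disabled
    "icmp_echo_enabled": False,    # ping disabled per risk decision
    "ssh_allowed": True,           # approved access method
}

current = {
    "telnet_enabled": True,        # drift: someone re-enabled telnet
    "icmp_echo_enabled": False,
    "ssh_allowed": True,
}

def deviations(baseline: dict, current: dict) -> dict:
    """Return settings whose current value differs from the baseline."""
    return {key: current[key] for key in baseline if current.get(key) != baseline[key]}

print(deviations(baseline, current))  # {'telnet_enabled': True}
```

Each reported deviation corresponds to a configuration option that no longer matches its risk-mitigation objective and should be remediated.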

16
Q

Which of the following tools attempts to identify vulnerabilities with NO knowledge of an application’s internals?

A. SCA
B. SAST
C. DAST
D. IAST

A

C. DAST

Explanation:
Some common tools for application security testing include:

Static Application Security Testing (SAST): SAST tools inspect the source code of an application for vulnerable code patterns. It can be performed early in the software development lifecycle but can’t catch some vulnerabilities, such as those visible only at runtime.
Dynamic Application Security Testing (DAST): DAST bombards a running application with anomalous inputs or attempted exploits for known vulnerabilities. It has no knowledge of the application’s internals, so it can miss vulnerabilities. However, it is capable of detecting runtime vulnerabilities and configuration errors (unlike SAST).
Interactive Application Security Testing (IAST): IAST places an agent inside an application and monitors its internal state while it is running. This enables it to identify unknown vulnerabilities based on their effects on the application.
Software Composition Analysis (SCA): SCA is used to identify the third-party dependencies included in an application and may generate a software bill of materials (SBOM). This enables the developer to identify vulnerabilities that exist in this third-party code.
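To make the SAST idea concrete, here is a toy pattern scan over source text. Real SAST tools parse the code and model data flow rather than grep it, so this is only a sketch, and the two "risky" patterns are hypothetical examples.

```python
import re

# Toy illustration of the SAST idea: flag vulnerable-looking code
# patterns in source text. Real SAST tools build a parse tree and
# data-flow model; this regex scan is only a sketch, and the patterns
# are hypothetical.

RISKY_PATTERNS = {
    "use of eval": re.compile(r"\beval\s*\("),
    "hardcoded password": re.compile(r"password\s*=\s*['\"]"),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line number, finding) pairs for each matched pattern."""
    findings = []
    for line_number, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((line_number, name))
    return findings

sample = 'user = "bob"\npassword = "hunter2"\nresult = eval(user_input)\n'
print(scan_source(sample))  # [(2, 'hardcoded password'), (3, 'use of eval')]
```

Note how the scan works entirely on the source text without running the program, which is exactly why SAST can run early in the SDLC but cannot catch runtime-only vulnerabilities.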

17
Q

A cloud security architect and administrator are working together to determine the best configuration for their virtual machines in an Infrastructure as a Service (IaaS) environment. They are looking for a technology that would allow their Virtual Machines (VM) to be dynamically managed and moved as necessary across a cluster of physical servers.

What would you recommend?

A. Transport Layer Security (TLS)
B. Dynamic Optimization (DO)
C. Software Defined Networking (SDN)
D. Distributed Resource Scheduling (DRS)

A

D. Distributed Resource Scheduling (DRS)

Explanation:
DRS and DO are two similar yet distinct technologies. DRS allows for automatic load balancing of VMs across a cluster of physical servers. DO allows for the automatic adjustment of Virtual Machine (VM) resources, such as CPU, memory, and storage, based on changing workload demands.

SDN optimizes the way that routers and switches function. It adds a centralized management server called a controller to manage flow path decisions, among a few other capabilities.

TLS is a network protocol that establishes secure (encrypted) sessions, most commonly for web traffic but also for other communications.

18
Q

Which of the following focuses on personally identifiable information (PII) as it pertains to financial institutions?

A. Gramm-Leach-Bliley Act (GLBA)
B. General Data Protection Regulation (GDPR)
C. Sarbanes-Oxley (SOX)
D. Health Insurance Portability and Accountability Act (HIPAA)

A

A. Gramm-Leach-Bliley Act (GLBA)

Explanation:
The Gramm-Leach-Bliley Act is a U.S. act officially named the Financial Modernization Act of 1999. It focuses on PII as it pertains to financial institutions, such as banks.

HIPAA is a U.S. regulation concerned with the privacy of Protected Health Information (PHI) held by healthcare facilities and related entities.

GDPR is an EU-specific regulation that encompasses organizations across all industries.

SOX is a U.S. regulation about protecting financial data.

19
Q

Which essential characteristic of the cloud says that an organization only pays for what it uses rather than maintaining dedicated servers, operating systems, virtual machines, and so on?

A. On-demand self-service
B. Measured service
C. Multi-tenancy
D. Broad network access

A

B. Measured service

Explanation:
Measured service means that Cloud Service Providers (CSP) bill for resources consumed. With a measured service, everyone pays for the resources they are using.

On-demand self-service means that the user/customer/tenant can go to a web portal, select their service, configure it, and get it up and running without interaction with the CSP.

Broad network access means that as long as the user/customer/tenant has access to the network that the cloud service is on, they will be able to use that service through standard mechanisms.

Multi-tenancy is a characteristic that exists in all cloud deployment models (public, private, and community). It means that multiple users/customers/tenants share the same physical server, and the hypervisor has the responsibility of isolating them from each other. In a private cloud, the different tenants would be different business units or projects. A good read is the freely available ISO/IEC 17788 standard; pay particular attention to the definition of multi-tenancy.
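Measured service boils down to usage multiplied by a unit rate, summed per resource. The resource names and rates below are hypothetical, not any CSP's real price list.

```python
# Toy metered-billing sketch for "measured service": the bill is simply
# usage multiplied by a unit rate, summed per resource. Resource names
# and rates are hypothetical, not a real CSP price list.

rates = {
    "vm_hours": 0.10,          # $ per VM-hour
    "storage_gb_month": 0.02,  # $ per GB-month stored
    "egress_gb": 0.05,         # $ per GB of outbound traffic
}

usage = {
    "vm_hours": 720,           # one VM running for a 30-day month
    "storage_gb_month": 500,
    "egress_gb": 40,
}

def monthly_bill(usage: dict, rates: dict) -> float:
    """Sum usage * unit rate across all metered resources."""
    return round(sum(usage[resource] * rates[resource] for resource in usage), 2)

print(monthly_bill(usage, rates))  # 84.0
```

Because only consumed units are billed, shutting down the VM for half the month would halve the `vm_hours` line without any contract change, which is the essence of paying only for what you use.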

20
Q

Controlling your corporation’s intellectual property (IP) is an essential element of information security. Your organization is considering using a data rights management (DRM) solution that provides persistent protection. One of the biggest concerns is that once the IP is in the customers’ hands, it could be stolen and used inappropriately.

Which characteristic of DRM would be of most interest to the corporation?

A. Permissions can be modified after a document has been shared
B. The illicit or unauthorized copying of data is prohibited
C. Dates and time-limitations can be applied
D. Data is secure no matter where it is stored

A

D. Data is secure no matter where it is stored

Explanation:
DRM tools can provide many options, including applying date and time limitations, modifying permissions after a document has been shared, and prohibiting the illicit or unauthorized copying of data.

This question matches how (ISC)² handles “all of the above” answers: “Data is secure no matter where it is stored” is a summary of the other three options.

21
Q

During which stage of the DLP process might pattern matching be used to identify locations containing sensitive data?

A. Enforcement
B. Discovery
C. Monitoring
D. Mapping

A

B. Discovery

Explanation:
Data loss prevention (DLP) solutions are designed to prevent sensitive data from being leaked or accessed by unauthorized users. In general, DLP solutions consist of three components:

Discovery: During the Discovery phase, the DLP solution identifies data that needs to be protected. Often, this is accomplished by looking for data stored in formats associated with sensitive data. For example, credit card numbers are usually 16 digits long, and US Social Security Numbers (SSNs) have the format XXX-XX-XXXX. The DLP will identify storage locations containing these types of data that require monitoring and protection.
Monitoring: After completing discovery, the DLP solution will perform ongoing monitoring of these identified locations. This includes inspecting access requests and data flows to identify potential violations. For example, a DLP solution may be integrated into email software to look for data leaks or monitor for sensitive data stored outside of approved locations.
Enforcement: If a DLP solution identifies a violation, it can take action. This may include generating an alert for security personnel to investigate and/or block the unapproved action.

Mapping is not a stage of the DLP process.
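The Discovery stage's pattern matching can be sketched with two regular expressions. These are deliberately simplified: real DLP engines also validate candidates (e.g., a Luhn check on card numbers) to cut false positives.

```python
import re

# Toy sketch of the DLP Discovery stage: pattern-match text for data
# that looks like a credit card number or a US SSN. Real DLP engines
# add validation (e.g., a Luhn check on card numbers); these regexes
# are deliberately simplified.

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),  # 16 digits, optional separators
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # XXX-XX-XXXX format
}

def discover(text: str) -> dict[str, list[str]]:
    """Return each sensitive-data category found and the matching strings."""
    return {name: pattern.findall(text)
            for name, pattern in PATTERNS.items()
            if pattern.search(text)}

sample = "Card 4111 1111 1111 1111 on file; SSN 078-05-1120."
print(discover(sample))
```

Running `discover` over file contents would tell the DLP which storage locations hold these data types and therefore need Monitoring and Enforcement.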

22
Q

Load and stress testing are examples of which type of testing?

A. Non-Functional Testing
B. Usability Testing
C. Functional Testing
D. Unit Testing

A

A. Non-Functional Testing

Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:

Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.

Non-functional testing evaluates the quality of the software and verifies necessary properties that are not explicitly listed in the requirements. Load and stress testing, or verifying that sensitive data is properly secured and encrypted, are examples of non-functional testing.
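A minimal load-test sketch: call the operation under test many times, collect latencies, and compare a percentile against a target. The workload and the 50 ms threshold are hypothetical; real load testing uses dedicated tools (e.g., JMeter or Locust) against a deployed system.

```python
import statistics
import time

# Minimal load-test sketch: exercise a function repeatedly, collect
# latencies, and compare the 95th-percentile latency to a target.
# The workload and threshold are hypothetical stand-ins.

def workload() -> int:
    """Stand-in for the operation under test."""
    return sum(range(10_000))

def run_load_test(iterations: int, threshold_seconds: float) -> tuple[bool, float]:
    """Run the workload `iterations` times; pass if p95 latency is under threshold."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - start)
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
    return p95 <= threshold_seconds, p95

passed, p95 = run_load_test(iterations=200, threshold_seconds=0.05)
print(f"p95={p95:.6f}s passed={passed}")
```

Stress testing follows the same shape but deliberately pushes iterations or concurrency past expected peaks to find the breaking point, which is why both sit under non-functional testing.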

23
Q

Ardal is the information security manager working for a manufacturing company that specializes in molded silicon kitchen products. They are moving their customer data and product information into a Platform as a Service (PaaS) public cloud environment. Ardal and his team have been analyzing the risks associated with this move so that they can ensure the most appropriate security controls are in place.

Which of the following is TRUE regarding the transfer of risk?

A. Transfer of risk is often the cheapest option for responding to risk
B. Risk transfer should always be the first avenue that an organization takes to respond to risk
C. Risk transfer can only be done when the organization has exhausted all other risk responses
D. Risk is never truly transferred. Transference simply shares the risk with another company.

A

D. Risk is never truly transferred. Transference simply shares the risk with another company.

Explanation:
Risk transference is better stated as risk sharing, although transfer is the common word in use. When data is placed on a cloud provider’s infrastructure, it does not remove the risk for the customer. It does not give the risk to the provider. The customer is always responsible for their data.

Risk transfer/share simply means that the cloud provider here also has a responsibility to care for the data; the critical word is also. Under GDPR, the cloud provider is required to care for the data, and a Data Processing Agreement (DPA) should be created to inform the provider of their responsibilities. More generically, a DPA is a form of Privacy Level Agreement (PLA).

Risk transfer can be done at any time and is not necessarily the cheapest of the options.

Risk transfer is not the first avenue for risk management; it is one of four possible responses. The other three are risk reduction/mitigation, risk avoidance, and risk acceptance.
24
Q

A bad actor working for an enemy state has created malware that has the purpose of stealing data from the other country regarding their military and its products and capabilities. The bad actor has planted malware on the enemy’s systems and has left it, undetected, for eight months. What is the name of this type of attack?

A. Human error
B. Malicious insider
C. Insecure Application Programming Interface (API)
D. Advanced persistent threat (APT)

A

D. Advanced persistent threat (APT)

Explanation:
Many types of malware and malicious programs are loud and aim to disrupt a system or network. Advanced Persistent Threats (APTs) are the opposite. APTs are attacks that attempt to steal data and stay hidden in the system or network for as long as possible. The longer the APT can stay in the system, the more data it is able to collect. The advanced part of APT is in reference to the skill level of the bad actor.

A malicious insider performs bad actions from within the targeted organization, without its knowledge. Here, the attacker is an external actor working for an enemy state, not an insider within the victim's organization.

Human error is a problem for a business, but it is an accident. Creating malware is not accidental; it is intentional and malicious.

An insecure API is not an attack. It is a vulnerability. There is some weakness in the coding or implementation that leaves it vulnerable.

25
Q

When data is stored on a device, but not being used by an application or actively traversing the network, it is called:

A. Unstructured data
B. Data at rest
C. Data in transit
D. Structured data

A

B. Data at rest

Explanation:
Data that is stored on a device but not being used by an application is known as data at rest. So, data stored on a Hard Disk Drive (HDD), Solid State Drive (SSD), tape, microfiche, or any other persistent storage mechanism contains data at rest.

Data that is actively traversing a network is known as data in transit, and data being processed by an application is known as data in use.

Unstructured and structured data describe the nature of how the data is organized. Structured data is predictable: in a database, for example, every field has the same name, use, and size for each and every record. Unstructured data is not predictable. It is more like a file folder holding Word documents, spreadsheets, PowerPoint presentations, videos, etc., with no common format between them.
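The difference can be sketched in a few lines (the field names and file names are illustrative):

```python
# Structured data: every record follows the same predictable schema,
# like rows in a database table.
structured_records = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob", "email": "bob@example.com"},
]

# Unstructured data: a mixed folder of files with no common schema.
unstructured_files = ["report.docx", "budget.xlsx", "pitch.pptx", "demo.mp4"]

def is_structured(records):
    """Return True if every record shares the same set of fields."""
    keys = set(records[0])
    return all(set(r) == keys for r in records)

print(is_structured(structured_records))  # True
```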

26
Q

Rufus is working for a growing manufacturing business. Over the years, they have been upgrading their manufacturing equipment to newer models that include internet connectivity for maintenance and management information. This has increased the volume of log data generated by their systems, which makes it a challenge for his organization to perform log reviews efficiently and effectively.

What can his organization implement to help solve this issue?

A. Security Information and Event Manager (SIEM)
B. System Logging protocol (syslog) server
C. Secure Shell (SSH)
D. Data Loss Prevention (DLP)

A

A. Security Information and Event Manager (SIEM)

Explanation:
An organization’s logs are valuable only if the organization makes use of them to identify activity that is unauthorized or compromising. Due to the volume of log data generated by systems, the organization can implement a Security Information and Event Management (SIEM) system to overcome these challenges. The SIEM system provides the following:

Log centralization and aggregation
Data integrity
Normalization
Automated or continuous monitoring
Alerting
Investigative monitoring

A syslog server is a centralized logging system that collects, stores, and manages log messages generated by various devices and applications within a network. It provides a way to consolidate and analyze logs from different sources, allowing administrators to monitor system activity, troubleshoot issues, and maintain security. However, it does not help to correlate the logs as the SIEM does.

SSH is a networking protocol that encrypts transmissions. It works at layer 5 of the OSI model. It is commonly used to transmit logs to the syslog server. It is not helpful when analyzing logs. It only secures the transmission of the logs.

DLP tools are used to monitor and manage the transmission or storage of data to ensure that it is done properly. With DLP, the concern is that there will be a data breach/leak unintentionally by the users.
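A SIEM's core functions (centralization, normalization, and correlation) can be sketched with a toy example; the log formats, hostnames, and alert threshold here are purely illustrative:

```python
import re
from collections import Counter

# Toy normalizer: a SIEM's first job is to pull logs from many sources
# and normalize them into one common event schema.
SSH_RE = re.compile(r"Failed password for (\w+) from ([\d.]+)")

def normalize(source, line):
    """Map a raw log line to a common event schema."""
    m = SSH_RE.search(line)
    if m:
        return {"source": source, "event": "auth_failure",
                "user": m.group(1), "ip": m.group(2)}
    return {"source": source, "event": "other", "raw": line}

raw_logs = [
    ("web01", "Failed password for root from 203.0.113.9"),
    ("web02", "Failed password for admin from 203.0.113.9"),
    ("app01", "service started"),
]

events = [normalize(src, line) for src, line in raw_logs]

# Simple correlation/alerting: repeated failures from one IP across hosts.
failures = Counter(e["ip"] for e in events if e["event"] == "auth_failure")
for ip, count in failures.items():
    if count >= 2:
        print(f"ALERT: {count} auth failures from {ip}")
```

A standalone syslog server would stop after collecting the raw lines; the correlation step at the end is what distinguishes a SIEM.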

27
Q

Tristan is the cloud information security manager working for a pharmaceutical company. They have connected to the community cloud that was built by the government health agency to advance science, diagnosis, and patient care. They also have stored their own data with a public cloud provider in the format of both databases and data lakes.

What have they built?

A. Storage area network
B. Public cloud
C. Hybrid cloud
D. Private cloud

A

C. Hybrid cloud

Explanation:
A hybrid cloud deployment model is a combination of two of the three options: public, private, and community. It could be public and private, private and community, or public and community as in the question. A public cloud example is Amazon Web Service (AWS). A private cloud is built for a single company. Fundamentally, it means that all the tenants on a single server are from the same company. A community example is the National Institute of Health (NIH), which built a community cloud to advance science, diagnosis, and patient care.

A Storage Area Network (SAN) is the physical and virtual structure that holds data at rest. SAN protocols include Fibre Channel and iSCSI.

28
Q

Ulric is a cloud data architect professional. He has also been studying information security to ensure his cloud deployment designs are resilient. He has determined so far that he will design a Storage Area Network (SAN) using Fibre Channel (FC). However, he is concerned that a server failure could cause a severe impact on their data retention.

How can he design the servers to work together to increase performance, capacity, and reliability?

A. IP-based Small Computer System Interface
B. Stand-alone hosts
C. Daily backups
D. Clustered storage

A

D. Clustered storage

Explanation:
A cluster is taking two or more systems and treating them as one entity. Clustered storage is the process of taking two or more storage servers and combining them to increase performance, capacity, and reliability. Storage clusters are used in cloud environments because high availability is extremely important. Clusters work together to ensure the loss of data is minimized to as close to zero loss as possible.

A stand-alone host will not help in this situation. He has determined he is building a SAN. That is, by definition, many devices connected and working together in some fashion. A stand-alone host is a device that is either not connected to a network or alone on a network.

IP-based Small Computer System Interface (iSCSI) is a different SAN protocol. He has already made a decision to use FC. iSCSI would have been a different choice.

Daily backups can be a good way to back up data for some companies, but this question is about server design.

29
Q

In which of the following cloud service models does the cloud provider offer an environment where the customer can build and deploy applications and the provider manages compute, data storage, and other dependencies?

A. FaaS
B. PaaS
C. SaaS
D. IaaS

A

B. PaaS

Explanation:
Cloud services are typically provided under three main service models:

Software as a Service (SaaS): Under the SaaS model, the cloud provider offers the customer access to a complete application developed by the cloud provider. Webmail services like Google Workspace and Microsoft 365 are examples of SaaS offerings.
Platform as a Service (PaaS): In a PaaS model, the cloud provider offers the customer a managed environment where they can build and deploy applications. The cloud provider manages compute, data storage, and other services for the application.
Infrastructure as a Service (IaaS): In IaaS, the cloud provider offers an environment where the customer has access to various infrastructure building blocks. AWS, which allows customers to deploy virtual machines (VMs) or use block data storage in the cloud, is an example of an IaaS platform.

Function as a Service (FaaS) is a form of PaaS in which the customer creates individual functions that can run in the cloud. Examples include AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions.
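The FaaS model described above can be illustrated with a minimal handler in the AWS Lambda style; the function name and payload fields are illustrative:

```python
# Minimal sketch of a FaaS handler: the provider manages the runtime;
# the customer supplies only this function.
def handler(event, context):
    """Echo a greeting; 'event' carries the invocation payload."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally we can invoke it directly (in the cloud, the runtime does this).
print(handler({"name": "CCSP"}, None))
```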

30
Q

Recently, your organization has decided it will be using a third party for its cloud migration. This third-party organization requires access to many of your organization’s file servers. You must ensure that the third party has access to the necessary resources. What is the FIRST action your organization should take?

A. Monitor third-party access to resources
B. Establish a written IT security policy for the third party
C. Provide minimal access for the third party
D. Conduct vendor due diligence on the third party

A

D. Conduct vendor due diligence on the third party

Explanation:
Before granting access to any resource, you should conduct vendor due diligence for the third-party organization. This diligence is very similar to a risk assessment, but it is usually in the form of a questionnaire completed by the vendor and analyzed by the organization.

The question implies that the business has been operating for some time, so the IT security policy for third-party vendors should already exist. That policy should include a requirement that the third party be given only minimal access. This is the logic of least privilege and is part of a zero trust architecture. Those vendors should be monitored at all times as well.

The other answer options should occur after the due diligence has been conducted on the vendor.

31
Q

Abigail is designing the infrastructure of Identity and Access Management (IAM) for their future Platform as a Service (PaaS) environment. As she is setting up identities, she knows that which of the following is true of roles?

A. Roles are temporarily assumed by another identity
B. Roles are assigned to specific users permanently and occasionally assumed
C. Roles are the same as user identities
D. Roles are permanently assumed by a user or group

A

A. Roles are temporarily assumed by another identity

Explanation:
Cloud roles are not identical to roles in traditional data centers, although they are similar in that they grant a user or group a certain amount of access. A cloud group is closer to what we traditionally called a role in Role Based Access Control (RBAC). In the cloud, roles are assumed temporarily. You can assume roles in a variety of ways, but, again, they are temporary.

The user is not permanently assigned a specific role. A user logs in with their user identity and then assumes a role. This is temporary (e.g., for a set number of hours or only for the life of that session).

Note the distinction between assigning and assuming roles — you might have access to certain permissions, but you only use the role and those permissions occasionally.

An additional resource for your review/study is on the AWS website. Look for the user guide regarding roles.
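The temporary nature of role assumption can be sketched as a toy model (this is illustrative Python, not a real cloud SDK call; the names are made up):

```python
import time

# Toy model of temporary role assumption: a user identity assumes a
# role and receives time-limited credentials for that session only.
def assume_role(user, role, duration_seconds=3600):
    """Return temporary credentials that expire after the session."""
    return {
        "user": user,
        "role": role,
        "expires_at": time.time() + duration_seconds,
    }

def is_valid(creds):
    """Credentials are honored only until their expiry time."""
    return time.time() < creds["expires_at"]

creds = assume_role("alice", "BillingReadOnly", duration_seconds=900)
print(is_valid(creds))  # True: the role is held only for this session
```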

32
Q

Which of the following is NOT one of the three stages of developing a BCP/DRP?

A. Auditing
B. Creation
C. Testing
D. Implementation

A

A. Auditing

Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:

Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before them. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.

Auditing is not one of the three stages of developing a BCP/DRP.

33
Q

What is an essential layer around a virtual machine, subnet, or cloud resource as part of a layered defense strategy?

A. Ingress and egress monitoring
B. Network security group
C. Cloud gateway
D. Contextual-based security

A

B. Network security group

Explanation:
A Network Security Group (NSG) protects a group of cloud resources. It provides a set of security rules or virtual firewall for those resources. This gives the customer additional control over security.

A cloud gateway adds an additional layer of security by transferring data between the customer and the CSP away from the public internet.

Contextual-based security leverages contextual information such as identification to assist in securing cloud resources.

External access attempts from the public internet can be blocked by ingress controls. Egress controls are a technique for preventing internal resources from connecting to unauthorized and potentially harmful websites.
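The rule-based behavior of an NSG can be sketched as a toy default-deny evaluator; the rule schema here is illustrative and not any specific provider's format:

```python
import ipaddress

# Toy network security group: a list of allow rules; anything that
# matches no rule is denied by default.
rules = [
    {"direction": "ingress", "port": 443, "source": "0.0.0.0/0", "action": "allow"},
    {"direction": "ingress", "port": 22,  "source": "10.0.0.0/8", "action": "allow"},
]

def is_allowed(direction, port, source_ip):
    """Default-deny evaluation of the rule list."""
    ip = ipaddress.ip_address(source_ip)
    for r in rules:
        if (r["direction"] == direction and r["port"] == port
                and ip in ipaddress.ip_network(r["source"])):
            return r["action"] == "allow"
    return False  # nothing matched: deny by default

print(is_allowed("ingress", 443, "198.51.100.7"))  # True
print(is_allowed("ingress", 22, "198.51.100.7"))   # False: SSH only from 10/8
```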

34
Q

Which phase of the SDLC should take the LONGEST?

A. Deployment
B. Design
C. Development
D. Operations and Maintenance

A

D. Operations and Maintenance

Explanation:
The Software Development Lifecycle (SDLC) describes the main phases of software development from initial planning to end-of-life. While definitions of the phases differ, one commonly-used description includes these phases:

Requirements: During the requirements phase, the team identifies the software's role and the applicable requirements. This includes business, functional, and security requirements.
Design: During this phase, the team creates a plan for the software that fulfills the previously identified requirements. Often, this is an iterative process as the design moves from high-level plans to specific ones. Also, the team may develop test cases during this phase to verify the software against requirements.
Development: This phase is when the software is written. It includes everything up to the actual build of the software, and unit testing should be performed regularly through the development phase to verify that individual components meet requirements.
Testing: After the software has been built, it undergoes more extensive testing. This should verify the software against all test cases and ensure that they map back to and fulfill all of the software’s requirements.
Deployment: During the deployment phase, the software moves from development to release. During this phase, the default configurations of the software are defined and reviewed to ensure that they are secure and hardened against potential attacks.
Operations and Maintenance (O&M): The O&M phase covers the software from release to end-of-life. During O&M, the software should undergo regular monitoring, testing, etc., to ensure that it remains secure and fit for purpose.
35
Q

Amina has been determining what level of access the sales manager and their team require to enter the customer database. She has determined that she will use an access mechanism that facilitates the setting-up process. What has she been doing?

A. Identity and Access Management (IAM)
B. Attribute Based Access Control (ABAC)
C. Role Based Access Control (RBAC)
D. Account governance and policy establishment

A

A. Identity and Access Management (IAM)

Explanation:
IAM is the management of accounts across their lifecycle, from provisioning through de-provisioning.

ABAC or RBAC might be technologies that could be used for access control, but what she has been doing is managing the account establishment.

Policy and governance come first before there would be any establishment of accounts.

36
Q

If an application accepts XML directly or XML uploads, especially from untrusted sources, or inserts untrusted data into XML documents, which is then parsed by an XML processor, it is susceptible to which attack?

A. Cross-site scripting
B. Security misconfiguration
C. Server-side request forgery
D. Injection

A

B. Security misconfiguration

Explanation:
Security misconfiguration now includes the older XML External Entities (XXE) category. An application is susceptible if it accepts XML directly, among other conditions.

Cross-Site Scripting (XSS) involves unvalidated user-controlled input. There are three types of XSS: reflected, stored, and DOM-based.

Server-Side Request Forgery (SSRF) occurs when a server accepts content in the user-supplied URL. This forces the server to send a crafted request to another site.

Injection includes SQL and command injection. It happens when user input is not validated. A user should not enter any SQL commands in the application.
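The standard injection defense, parameterized queries, can be demonstrated with a small sketch using SQLite (the table and input are illustrative):

```python
import sqlite3

# Sketch: parameterized queries prevent SQL injection because user
# input is bound as data, never spliced into the SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"

# Unsafe (DO NOT do this): the input becomes part of the SQL statement,
# so the OR '1'='1' clause matches every row.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'").fetchall()

# Safe: the ? placeholder binds the input as a plain string value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # 1 0 -- the injection only works unsafely
```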

37
Q

Which of the following is NOT one of the three main types of policies that may be defined for a cloud environment?

A. Functional
B. Organizational
C. Administrative
D. Cloud Computing

A

C. Administrative

Explanation:
The CCSP classifies policies as organizational, functional, and cloud computing. Administrative is not a type of policy on the CCSP exam.

38
Q

A user is running a couple of VMs on their computer for malware analysis. What type of hypervisor are they likely using?

A. Type 3
B. Type 1
C. Type 2
D. Type 4

A

C. Type 2

Explanation:
Virtualization allows a single physical computer to host multiple different virtual machines (VMs). The guest computers are managed by a hypervisor, which makes it appear to each VM that it is running directly on physical hardware. The two types of hypervisors are:

Type 1: The hypervisor runs on bare metal and hosts virtual machines on top of it. Most data centers use Type 1 hypervisors.
Type 2: The hypervisor is a program that runs on top of a host operating system alongside other applications. Type 2 hypervisors like VirtualBox, Parallels, and VMware are often used on personal computers.

Type 3 and 4 hypervisors do not exist.

39
Q

SAST is classified as which of the following types of security testing?

A. Black-box
B. White-box
C. Gray-box
D. Red-box

A

B. White-box

Explanation:
Software testing can be classified as one of a few different types, including:

White-box: In white-box or clear-box testing, the tester has full access to the software and its source code and documentation. Static application security testing (SAST) is an example of this technique.
Gray-box: The tester has partial knowledge of and access to the software. For example, they may have access to user documentation and high-level architectural information.
Black-box: In this test, the attacker has no specialized knowledge or access. Dynamic application security testing (DAST) is an example of this form of testing.

Red-box is not a classification of security testing.

40
Q

A cloud administrator is going through an old file server and moving data to a repository where it can be preserved for the next couple of years in case it’s needed again. Which step of the cloud secure data lifecycle is this?

A. Use
B. Archive
C. Store
D. Destroy

A

B. Archive

Explanation:
Step 5 of the cloud secure data lifecycle is archive. At this step, the data is taken from a location of active access and moved to a static repository. Here, the data can be preserved for a long period of time in case it is needed in the future.

The first step is create. This is the creation of a new file or data of some type, including voice and video. When a file is altered and saved, it is effectively saving a new file. So, according to the Cloud Security Alliance, this phase includes modification or alteration of the data.

As soon as the data is created, it should be stored somewhere. Ephemeral storage in a virtual machine is temporary. It needs to be moved to persistent storage somewhere.

The use phase is when data is accessed and utilized by a user of some kind.

The share phase is when it is exchanged or sent to someone else.

The last phase is destroy. Data can be destroyed through overwriting, degaussing a drive, shredding a drive, cryptographic erasure, etc. The goal is for the data to be gone for good; because these techniques are not all equally reliable, ensuring the data is truly unrecoverable is the point of this phase.

41
Q

Regarding data privacy, different roles and responsibilities exist between the cloud customer and cloud provider. In a Platform as a Service (PaaS) environment, where does the responsibility fall for platform security?

A. Responsibility is shared between the cloud customer and the cloud provider
B. The cloud customer is solely responsible
C. The regulators are responsible if personal data is involved
D. The cloud provider is solely responsible

A

A. Responsibility is shared between the cloud customer and the cloud provider

Explanation:
In a Platform as a Service (PaaS) model, platform security is a responsibility that is shared between both the cloud provider and the cloud customer.

In an SaaS model, platform security is solely the responsibility of the cloud provider, whereas in an IaaS model, platform security is solely the responsibility of the cloud customer.

The regulators are not responsible for security of the cloud.

42
Q

Which of the following threat models classifies threats based on their goals/effects?

A. ATASM
B. PASTA
C. STRIDE
D. DREAD

A

C. STRIDE

Explanation:
Several different threat models can be used in the cloud. Common examples include:

STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization’s attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that tries to look at infrastructure and applications from the viewpoint of an attacker.
43
Q

Miguel is a user who is logging on to an online Software as a Service (SaaS) application through a federated identification system. The Single Sign On (SSO) system will utilize the corporate authentication system to verify Miguel is who he claims to be and approve access to the corporate account at the SaaS provider.

In a federation environment, the entity that takes the authentication tokens from an identity provider and grants access to resources is known as which of the following?

A. The SaaS provider is considered the identification partner
B. The SaaS provider is considered the relaying party
C. The SaaS provider is considered the identity provider
D. The SaaS provider is considered the relying party

A

D. The SaaS provider is considered the relying party

Explanation:
When a user who is part of a federation needs access to an application, they authenticate through their own organization’s authentication process. That corporate authentication system, probably Microsoft Active Directory (AD), is the Identity Provider (IdP).

Miguel requests access, the IdP verifies his identity and issues a token, and the SaaS provider consumes that token to grant access. The entity issuing the token is called an identity provider, not an identification partner.

The SaaS provider is both the service provider and the relying party.
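The IdP/relying-party token flow can be sketched as a toy model; this uses a simple HMAC-signed token, not a real SAML or OIDC implementation, and the key and names are made up:

```python
import base64
import hashlib
import hmac
import json

# Trust anchor established when the organizations federate (illustrative).
SHARED_KEY = b"idp-and-rp-trust-anchor"

def idp_issue_token(user):
    """IdP: sign an assertion about who the authenticated user is."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode())
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def relying_party_verify(token):
    """Relying party (the SaaS provider): verify before granting access."""
    payload, sig = token.rsplit(b".", 1)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # reject forged or tampered tokens
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]

token = idp_issue_token("miguel")
print(relying_party_verify(token))  # miguel
```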

44
Q

Which of the following solutions is designed to improve security and decrease account takeover attack risk by making it harder to use stolen/compromised passwords?

A. Secrets Management
B. Multi-Factor Authentication
C. Single Sign-On
D. Federated Identity

A

B. Multi-Factor Authentication

Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
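The one-time passwords used in MFA are commonly generated with HOTP (RFC 4226); TOTP, used by authenticator apps, feeds a time-step counter into the same algorithm. A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, then dynamic truncation."""
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector for the secret "12345678901234567890".
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because each code is derived from a moving counter, a stolen password alone is not enough to take over the account; the attacker would also need the current OTP.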
45
Q

Rhonda is working for a retail clothing store as their information security manager. She has been working with the legal department to ensure that they are in compliance with any laws or contracts that they need to be in compliance with. Which of the following applies?

A. They must comply with the United States law referred to as the Payment Card Industry - Data Security Standard (PCI-DSS)
B. They must protect employee medical data that they store, according to the Health Information Portability and Accountability Act (HIPAA)
C. They must be in compliance with their contract with the payment card companies that requires them to follow the Payment Card Industry - Data Security Standard (PCI-DSS)
D. They must be in compliance with the European Union’s (EU) contractual requirement of the General Data Protection Regulation (GDPR)

A

C. They must be in compliance with their contract with the payment card companies that requires them to follow the Payment Card Industry - Data Security Standard (PCI-DSS)

Explanation:
The Payment Card Industry Data Security Standard (PCI DSS) is a contractual requirement that applies to companies that accept and process payment cards. As a retail store, this definitely applies to the data that they have in their possession.

As a retail clothing store, it is unlikely that they store health data about their employees. It is possible, but since payment card data is definitely in their possession, PCI-DSS is the better answer.

PCI-DSS is a contractual requirement, not a law, nor is it US specific.

The question does not specify where the store is located, so it is possible that they are within the EU. If they are in the EU, the GDPR would apply. However, that answer describes GDPR as a contractual requirement, which it is not; it is a law.

46
Q

Which of the following is the LEAST vulnerable to attack when a resource is not in use?

A. Hypervisors
B. Containers
C. Serverless
D. Ephemeral Computing

A

D. Ephemeral Computing

Explanation:
Some important security considerations related to virtualization include:

Hypervisor Security: The primary virtualization security concern is isolation or ensuring that different VMs can’t affect each other or read each other’s data. VM escape attacks occur when a malicious VM exploits a vulnerability in the hypervisor or virtualization platform to accomplish this.
Container Security: Containers are self-contained packages that include an application and all of the dependencies that it needs to run. Containers improve portability but have security concerns around poor access control and container misconfigurations.
Ephemeral Computing: Ephemeral computing is a major benefit of virtualization, where resources can be spun up and destroyed at need. This enables greater agility and reduces the risk that sensitive data or resources will be vulnerable to attack when not in use. However, these systems can be difficult to monitor and secure since they only exist briefly when they are needed, so their security depends on correctly configuring them.
Serverless Technology: Serverless applications are deployed in environments managed by the cloud service provider. Outsourcing server management can make serverless systems more secure, but it also means that organizations can’t deploy traditional security solutions that require an underlying OS to operate.
47
Q

Which of the following event attributes provides a quick way of identifying anomalous events?

A. User Identity
B. MAC Address
C. IP Address
D. Geolocation

A

D. Geolocation

Explanation:
An event is anything that happens on an IT system, and most IT systems are configured to record these events in various log files. When implementing logging and event monitoring, event logs should include the following attributes to identify the user:

User Identity: A username, user ID, globally unique identifier (GUID), process ID, or other value that uniquely identifies the user, application, etc. that performed an action on a system.
IP Address: The IP address of a system can help to identify the system associated with an event, especially if the address is a unique, internal one. With public-facing addresses, many systems may share the same address.
Geolocation: Geolocation information can be useful to capture in event logs because it helps to identify anomalous events. For example, a company that doesn’t allow remote work should have few (if any) attempts to access corporate resources from locations outside the country or region.

The CCSP doesn’t identify the MAC address as an important attribute to include in event logs.
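A toy geolocation check against event logs might look like the following (the region codes, IPs, and policy are illustrative):

```python
# Toy anomaly check on event-log attributes: flag logins from outside
# the regions where the company operates.
ALLOWED_REGIONS = {"US", "CA"}

events = [
    {"user": "alice", "ip": "198.51.100.4", "geo": "US"},
    {"user": "alice", "ip": "203.0.113.77", "geo": "RU"},
]

# Geolocation gives a quick first-pass filter: any event outside the
# expected regions is worth investigating.
anomalies = [e for e in events if e["geo"] not in ALLOWED_REGIONS]
for e in anomalies:
    print(f"Anomalous login: {e['user']} from {e['geo']} ({e['ip']})")
```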

48
Q

Satria works in the Security Operations Center (SOC), and they have just found that some software has been compromised. A plugin was automatically added to the software the user is utilizing. The plugin has introduced a serious vulnerability that a bad actor has exploited. What problem has this company experienced?

A. Data breaches
B. Insecure interfaces and APIs
C. Insecure deserialization
D. Software and data integrity failures

A

D. Software and data integrity failures

Explanation:
This is a new category in the OWASP 2021 Top 10 list, and it includes the former insecure deserialization entry. There are many ways this could occur; this is one scenario. The category covers failures caused by code, plugins, and software updates that are introduced without being verified for integrity. Insecure deserialization happens when serialization is involved and not handled properly.

The Cloud Security Alliance (CSA) has data breaches at the top of the list of problems that companies experience today, and a breach may well be what the bad actor accomplished in this scenario. However, the question is specific to the integrity failure.

Insecure interfaces and APIs are high on our list of problems today as well. The Application Programming Interface (API) is a request/response protocol such as Representation State Transfer (REST) or SOAP.

49
Q

A real estate corporation has performed a risk analysis and determined that they are susceptible to software and data integrity failures. In their analysis, they have come to the conclusion that this particular threat has a low likelihood of occurrence and a low impact if experienced. They have also determined that the cost of fixing their software would be particularly high.

In this scenario, which risk response is the organization likely to take?

A. Mitigate the risk
B. Avoid the risk
C. Accept the risk
D. Transfer the risk

A

C. Accept the risk

Explanation:
For some risks, the cost of mitigation outweighs the cost of accepting the risk and dealing with the potential fallout if the risk is ever realized. In these scenarios, the organization will often simply accept the risk and respond to an exploit if and when it occurs.

Transferring the risk might be a plausible option, but nothing in the question drives us to that answer. The question tells us only that the fix is expensive and that the threat has both a low likelihood and a low impact, so accepting the risk makes more sense.

Mitigating the risk would involve purchasing tools to protect the environment the software is in or spending time fixing the software. Again, with a low impact, it is not worth it.

There is no avoidance since this is software the corporation already has. If they were considering purchasing or installing the software, this might be a good answer.
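The trade-off described above is often framed quantitatively: compare the Annualized Loss Expectancy (ALE = Single Loss Expectancy × Annualized Rate of Occurrence) against the cost of mitigating. A sketch with hypothetical figures chosen to match the scenario (low impact, low likelihood, expensive fix):

```python
def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

# Hypothetical figures: $5,000 per incident (low impact), once every
# 10 years (low likelihood), versus a $200,000 fix amortized over 5 years.
ale = annualized_loss_expectancy(sle=5_000, aro=0.1)  # $500 per year
mitigation_cost_per_year = 200_000 / 5                # $40,000 per year

response = "accept" if mitigation_cost_per_year > ale else "mitigate"
print(ale, response)  # 500.0 accept
```

When the annualized cost of the fix dwarfs the expected annual loss, acceptance is the rational response.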

50
Q

Orchestration tools are critical in today’s cloud environments. What are some of the protocols that could be in use by orchestration tools?

A. Secure Shell (SSH), RESTful APIs (Representational State Transfer Application Programming Interface) and Advanced Message Queuing Protocol (AMQP)
B. Secure Shell (SSH), RESTful APIs (Representational State Transfer Application Programming Interface) and Dynamic Host Configuration Protocol (DHCP)
C. Point-to-Point Protocol (PPP), RESTful APIs (Representational State Transfer Application Programming Interface) and Advanced Message Queuing Protocol (AMQP)
D. Secure Shell (SSH), Multiprotocol Label Switching (MPLS) and Advanced Message Queuing Protocol (AMQP)

A

A. Secure Shell (SSH), RESTful APIs (Representational State Transfer Application Programming Interface) and Advanced Message Queuing Protocol (AMQP)

Explanation:
Correct answer: Secure Shell (SSH), RESTful APIs (Representational State Transfer Application Programming Interface) and Advanced Message Queuing Protocol (AMQP)

Orchestration tools use some of the following protocols:

SSH is frequently used for secure remote access, command execution, and configuration management tasks.
Orchestration tools often interact with RESTful APIs to communicate with different systems and services. RESTful APIs allow the exchange of data using standard HTTP methods, making them widely supported and flexible for integration.
Orchestration tools can leverage AMQP for message-based communication and coordination between different components or services. AMQP provides reliable and efficient messaging capabilities, enabling event-driven architectures and workflow management.
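As an illustrative sketch of the RESTful pattern above (the endpoint paths and payload fields are invented, not from any real orchestration product), an orchestration tool might map provisioning actions onto standard HTTP methods like this:

```python
# Hypothetical sketch: expressing orchestration actions as RESTful calls.
import json

def build_rest_request(action, resource, body=None):
    """Map an orchestration action to an HTTP method, path, and JSON payload."""
    methods = {"create": "POST", "read": "GET",
               "update": "PUT", "delete": "DELETE"}
    payload = json.dumps(body) if body is not None else None
    return methods[action], f"/api/v1/{resource}", payload

# e.g. provisioning a new compute instance (fields are illustrative):
method, path, payload = build_rest_request(
    "create", "instances", {"image": "ubuntu-22.04", "size": "small"})
print(method, path)  # POST /api/v1/instances
```

The same mapping is what makes RESTful APIs so widely supported: any system that speaks HTTP can participate.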

Dynamic Host Configuration Protocol (DHCP) automatically assigns IP addresses and other network configuration to devices on a network, which allows that information to be managed centrally. DHCP is widely used in networks today, but not directly by orchestration tools.

Orchestration tools typically do not directly use Multiprotocol Label Switching (MPLS), as it is a network protocol used by service providers to create Virtual Private Networks (VPNs) with Quality of Service (QoS) guarantees and traffic engineering capabilities.

Orchestration tools typically do not directly utilize Point-to-Point Protocol (PPP), as it is a network protocol used for establishing a direct connection between two nodes over a serial link. PPP is commonly used in scenarios such as dial-up connections or PPP over Ethernet (PPPoE) for DSL connections.

51
Q

Hemi is working for a New Zealand bank, and they are growing nicely. They really need to carefully address their information security program, especially as they grow into their virtual data center that they are building using Infrastructure as a Service (IaaS) technology. As they are planning their information security carefully to ensure they are in compliance with all relevant laws and they provide the level of service their customers have come to expect, they are looking for a document that contains best practices.

What would you recommend?

A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27017
B. Federal Information Processing Standard (FIPS) 140-2/3
C. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018
D. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53

A

A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27017

Explanation:
Correct answer: International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27017

ISO/IEC 27017 is Information technology — Security techniques — Code of practice for information security controls based on ISO/IEC 27002 for cloud services. This document pulls the security controls from ISO/IEC 27002 that apply to the cloud.

ISO/IEC 27018 is Information technology — Security techniques — Code of practice for protection of Personally Identifiable Information (PII) in public clouds acting as PII processors. A processor is defined in the European Union (EU) General Data Protection Regulation (GDPR) as “a person who processes data solely on behalf of the controller, excluding the employees of the data controller.” Processing is defined to include storage of data, which then applies to cloud services.

NIST SP 800-53 is Security and Privacy Controls for Information Systems and Organizations. It is effectively a list of security controls and is similar to ISO/IEC 27002.

FIPS 140-2/3 is Security Requirements for Cryptographic Modules. It applies to products such as Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs) that store cryptographic keys.

Since the question is about security in the cloud, ISO/IEC 27017 is the best fit of these four documents.