CCSK Practice Exam 1 (WhizLabs) Flashcards

1
Q

Which of the following is the key difference between cloud computing and traditional computing?

A. Infrastructure
B. Metastructure
C. Infostructure
D. Applistructure

A

B. Metastructure

Explanation:
The key difference between cloud and traditional computing is the metastructure.
Cloud metastructure includes the management plane components, which are network-enabled and remotely accessible

At a high level, both cloud and traditional computing adhere to a logical model that helps identify different layers based on functionality. This is useful to illustrate the differences between the different computing models themselves:

Infrastructure: The core components of a computing system: compute, network and storage. The foundation that everything else is built on. The moving parts

Metastructure: The protocols and mechanisms that provide the interface between the infrastructure layer and the other layers. The glue that ties the technologies and enables management and configuration

Infostructure: The data and information. Content in a database, file storage etc.

Applistructure: The applications deployed in the cloud and the underlying application services used to build them. For example, Platform as a Service features like message queues, artificial intelligence analysis, or notification services.

2
Q

The data security lifecycle includes six phases from creation to destruction. Which of the following lists these stages in the correct order?

A. Create, Use, Store, Share, Archive, Destroy
B. Create, Store, Use, Share, Archive, Destroy
C. Create, Use, Store, Archive, Share, Destroy
D. Create, Process, Store, Share, Archive, Destroy
E. Create, Process, Store, Archive, Share, Destroy

A

B. Create, Store, Use, Share, Archive, Destroy

Explanation:
The lifecycle includes six phases from creation to destruction. Although it is shown as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages.

Create - Creation is the generation of new digital content, or the alteration/updating/modifying of existing content
Store - Storing is the act of committing the digital data to some sort of storage repository and typically occurs nearly simultaneously with creation
Use - Data is viewed, processed, or otherwise used in some sort of activity, not including modification
Share - Information is made accessible to others, such as between users, to customers, and to partners
Archive - Data leaves active use and enters long-term storage
Destroy - Data is permanently destroyed using physical or digital means (e.g., crypto-shredding)
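Purely as a study aid (not from the CSA guidance itself), the six phases can be sketched as a small Python enum; the lifecycle is not strictly linear, so the order below is only the canonical exam ordering:

```python
from enum import Enum

class DataPhase(Enum):
    # The six phases of the data security lifecycle, in canonical
    # order from creation to destruction. Real data can bounce
    # between phases and may skip some entirely.
    CREATE = 1
    STORE = 2
    USE = 3
    SHARE = 4
    ARCHIVE = 5
    DESTROY = 6

LIFECYCLE_ORDER = [p.name.title() for p in DataPhase]
print(LIFECYCLE_ORDER)  # ['Create', 'Store', 'Use', 'Share', 'Archive', 'Destroy']
```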

3
Q

What are the three main aspects of business continuity and disaster recovery in the cloud?

A. Ensuring continuity and recovery within a cloud provider, Preparing for and managing cloud provider outages, Considering options for portability, in case you need to migrate providers or platforms
B. Ensuring continuity and recovery within a cloud provider, Preparing for and managing cloud provider services, Considering options for portability, in case you need to migrate providers or platforms
C. Ensuring continuity and recovery within a cloud provider, Preparing for and managing cloud provider services, Considering options for portability, in case you need to migrate providers or platforms
D. Ensuring continuity and recovery within a given cloud provider, Preparing for and managing cloud provider services, Considering options for availability, in case you need to migrate providers or platforms

A

A. Ensuring continuity and recovery within a cloud provider, Preparing for and managing cloud provider outages, Considering options for portability, in case you need to migrate providers or platforms

Explanation:
Business Continuity and Disaster Recovery (BC/DR) is just as important in cloud computing as it is for any other technology. Aside from differences resulting from the potential involvement of a third party (something we often deal with in BC/DR), there are additional considerations due to the inherent differences when using shared resources.

The three main aspects of BC/DR in the cloud are:

Ensuring continuity and recovery within a given cloud provider.
These are the tools and techniques to best architect your cloud deployment to keep things running if either what you deploy breaks, or a portion of the cloud provider breaks.

Preparing for and managing cloud provider outages.
This extends from the more constrained problems that you can architect around within a provider to the wider outages that take down all or some of the provider in a way that exceeds the capabilities of inherent DR controls.

Considering options for portability, in case you need to migrate providers or platforms.
This could be due to anything from desiring a different feature set to the complete loss of the provider if, for example, they go out of business or you have a legal dispute.

4
Q

Which of the following is not a security benefit of immutable workloads?

A. Security testing can be managed during image creation
B. You no longer patch running systems or worry about dependencies
C. You can enable remote logins to running workloads
D. It is much faster to roll out updated versions
E. It is easier to disable services and whitelist applications

A

C. You can enable remote logins to running workloads

Explanation:
You can, and should, disable remote logins to running workloads (if logins are even an option).
This is an operational requirement to prevent changes that aren't consistent across the stack, which also has significant security benefits.

Auto-scaling and containers, by nature, work best when you run instances launched dynamically based on an image: Those instances can be shut down when no longer needed for capacity without breaking an application stack.
This is core to the elasticity of compute in the cloud.
Thus, you no longer patch or make other changes to a running workload, since that wouldn't change the image; new instances would be out of sync with any manual changes made on whatever is running.

We call these virtual machines immutable.
Immutable workloads enable significant security benefits:

You no longer patch running systems or worry about dependencies, broken patch processes, etc. You replace them with a new gold master

It is much faster to roll out updated versions, since applications must be designed to handle individual nodes going down (remember, this is fundamental to any auto-scaling).
You are less constrained by the complexity and fragility of patching a running system. Even if something breaks, you just replace it.

It is easier to disable services and whitelist applications/processes, since the instance should never change.
Most security testing can be managed during image creation, reducing the need for vulnerability assessment on running workloads, since their behavior should be completely known at the time of creation. This doesn't eliminate all security testing for production workloads, but it is a means of offloading large portions of testing.
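A minimal sketch of the replace-don't-patch pattern behind immutable workloads; every name here is a hypothetical stand-in, not a real cloud SDK:

```python
# Sketch of immutable deployment: instead of patching running
# instances, build a new gold-master image and replace the fleet.
# build_image and roll_out are illustrative stand-ins only.

def build_image(base: str, patches: list[str]) -> str:
    # Security testing would happen here, at image-creation time.
    return f"{base}+{'+'.join(patches)}"

def roll_out(image: str, count: int) -> list[str]:
    # Old instances are terminated and replaced, never modified.
    return [f"{image}#inst{i}" for i in range(count)]

new_image = build_image("web-v1", ["openssl-fix"])
fleet = roll_out(new_image, 3)
print(fleet)
```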

5
Q

Which of the following is not a primary security responsibility of the cloud user in a virtualized environment?

A. Monitoring and logging
B. Image Asset Management
C. Identity management to the virtual resources
D. Use of dedicated hosting
E. Isolation

A

E. Isolation

Explanation
Isolation is the primary security responsibility of the cloud provider in compute virtualization

Cloud User Responsibilities:
Security Settings
Settings such as identity management for the virtual resources.
This is not the identity management within the resource, such as the operating system login credentials, but the identity management of who is allowed to access the cloud management of the resource - for example, stopping or changing the configuration of a virtual machine.

Monitoring and Logging
Cloud users must decide how to handle system logs from virtual machines or containers; the cloud platform will likely offer additional logging and monitoring at the virtualization level. This can include the status of a virtual machine, management events, performance, etc.

Image Asset Management
Cloud compute deployments are based on master images - be it a virtual machine, container or other code - that are then run in the cloud.
This is often highly automated and results in a larger number of images to base assets on, compared to traditional computing master images.
Managing these - including which images meet security requirements, where they can be deployed, and who has access to them - is an important security responsibility.

Use of Dedicated Hosting
If available, based on the security context of the resource.
In some situations you can specify that your assets run on hardware dedicated to only you (at higher cost), even on a multitenant cloud.
This may help meet compliance requirements or satisfy security needs in special cases where sharing hardware with another tenant is considered a risk

6
Q

Which common component of big data is focused on the mechanisms used to ingest large volumes of data, often of a streaming nature?

A. Distributed data collection
B. Distributed Attribution
C. Distributed Processing
D. Distributed Storage
E. Distributed Data Information

A

A. Distributed data collection

Explanation:
Distributed data collection is the mechanism used to ingest large volumes of data, often of a streaming nature

There are three common components of big data, regardless of the specific toolset used:

Distributed Data Collection
Mechanisms to ingest large volumes of data, often of a streaming nature.
This could be as “lightweight” as web-click streaming data and as complex as highly distributed scientific imaging or sensor data
Not all big data relies on distributed or streaming data collection, but it is a core big data technology

Distributed Storage
The ability to store large data sets in distributed file systems (such as Google File System, Hadoop Distributed File System, etc.) or databases (often NoSQL), which is often required due to the limitations of non-distributed storage technologies

Distributed Processing
Tools capable of distributing processing jobs (such as MapReduce, Spark, etc.) for the effective analysis of data sets so massive and rapidly changing that single-origin processing can't effectively handle them
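The map/reduce idea can be shown in miniature with Python's standard library (single-process here, whereas real frameworks distribute these steps across many nodes):

```python
from collections import Counter
from functools import reduce

# Toy word count: "map" each record to partial counts, then
# "reduce" the partial results into one total. Big-data frameworks
# run these same steps in parallel across many machines.
records = ["cloud security", "cloud data", "data security"]

partials = [Counter(r.split()) for r in records]           # map step
totals = reduce(lambda a, b: a + b, partials, Counter())   # reduce step
print(totals["cloud"], totals["security"], totals["data"])
```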

7
Q

What are the three main components of an encrypted system?

A. User, data and encryption engine
B. User, encryption, and key management
C. User, data, and encryption
D. Data, encryption, and decryption algorithm
E. Data, encryption engine and key management

A

E. Data, encryption engine and key management

Explanation:
There are three components of an encryption system: data, the encryption engine and key management

The data is, of course, the information that you're encrypting.
The engine is what performs the mathematical process of encryption.
The key manager handles the keys for the encryption.
The overall design of the system focuses on where to put each of these components.
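A toy sketch of where each component sits; the "engine" below is a simple XOR stream and is NOT real cryptography, it only illustrates the separation of data, engine, and key management:

```python
import secrets

# Toy model of the three components of an encryption system.
# Do not use XOR like this for real encryption.

class KeyManager:                        # component 3: key management
    def __init__(self):
        self._keys = {}
    def new_key(self, key_id: str) -> None:
        self._keys[key_id] = secrets.token_bytes(32)
    def get(self, key_id: str) -> bytes:
        return self._keys[key_id]

def engine(data: bytes, key: bytes) -> bytes:   # component 2: the engine
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

km = KeyManager()
km.new_key("k1")
data = b"customer record"                # component 1: the data
ciphertext = engine(data, km.get("k1"))
assert engine(ciphertext, km.get("k1")) == data  # XOR round-trips
```

Design decisions in a real system revolve around where each piece lives: for example, keeping the key manager separate from the engine so a compromise of one does not expose the other.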

8
Q

In a cloud provider and user relationship, the virtual or abstracted infrastructure is managed by which entity?

A. Cloud user
B. Cloud Provider
C. As per the contract between the cloud provider and cloud user
D. It's a shared responsibility
E. It is managed by a third party

A

A. Cloud user

Explanation:
In cloud computing there are two macro layers to infrastructure:
The fundamental resources pooled together to create a cloud.
This is the raw, physical and logical compute (processors, memory, etc.), networks, and storage used to build the cloud's resource pools.
For example, this includes the security of the networking hardware and software used to create the network resource pool.

The virtual/abstracted infrastructure managed by a cloud user.
That's the compute, network, and storage assets that they use from the resource pools.
For example, the security of the virtual network, as defined and managed by the cloud user.

9
Q

Which of the following statements best describes an identity federation?

A. Interconnection of disparate directory services
B. Cloud service providers with the same identity store
C. Identities that share similar access rights
D. Shared use of single cloud services
E. Role based access provisioning

A

A. Interconnection of disparate directory services

Explanation:
Conceptually speaking, federation is the interconnection of disparate directory services.
In cloud computing, the fundamental problem is that multiple organizations are now managing the identity and access management to resources, which can greatly complicate the process.
For example, imagine having to provision the same user on dozens - or hundreds - of different cloud services.
Federation is the primary tool used to manage this problem, by building trust relationships between organizations and enforcing them through standards-based technologies.

10
Q

Which of the following items is NOT an example of Security as a Service (SecaaS)?

A. Identity
B. IDS/IPS
C. Provisioning
D. Email
E. Web Services

A

C. Provisioning

Explanation:
Provisioning is not one of the most common SecaaS categories.

There are a large number of products and services that fall under the heading of SecaaS.
While the following is not a canonical list, it describes many of the more common categories:

Identity, Entitlement, and Access Management services
Cloud Access and Security Broker (CASB, also known as Cloud Security Gateways)
Web Security
Email Security
Web Application Firewalls
Intrusion Detection/Prevention
SIEM
Encryption and Key Management
BC/DR
Security Management
Distributed Denial of Service Protection
11
Q

Identity brokers handle federating between identity providers and relying parties

A. True
B. False

A

A. True

Explanation:
Identity brokers handle federating between identity providers and relying parties (which may not always be a cloud service)

They can be located on the network edge or even in the cloud in order to enable web-SSO

Identity providers don't need to be located only on-premises; many cloud providers now support cloud-based directory servers that support federation internally and with other cloud services.

For example, more complex architectures can synchronize or federate a portion of an organization's identities from an internal directory through an identity broker and then to a cloud-hosted directory, which then serves as an identity provider for other federated connections.

12
Q

Which of the following is a valid statement regarding entitlement?

A. Entitlement is the same thing as authorization
B. Entitlement maps identities to authorizations and any required attributes
C. Entitlement is the same thing as access control
D. Entitlement is permission to do something
E. Entitlement allows or denies the expression of authorization

A

B. Entitlement maps identities to authorizations and any required attributes

Explanation:
Entitlement maps identities to authorizations and any required attributes

The terms entitlement, authorization, and access control all overlap somewhat and are defined differently depending on the context.

An authorization is permission to do something - access a file, or perform a certain function like an API call on a particular resource.

An access control allows or denies the expression of that authorization, so it includes aspects like assuring that the user is authenticated before allowing access.

An entitlement maps identities to authorizations and any required attributes (i.e., user x is allowed access to resource y when z attributes have designated values).

We commonly refer to a map of these entitlements as an entitlement matrix.
Entitlements are often encoded as technical policies for distribution and enforcement.
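A minimal sketch of an entitlement matrix as data plus a check function (the identities, resources, and attributes are illustrative only):

```python
# Entitlement matrix sketch: (identity, resource) -> required attributes.
# "User x is allowed access to resource y when z attributes have
# designated values." Names below are purely illustrative.
ENTITLEMENTS = {
    ("alice", "reports"): {"mfa": True},  # alice needs MFA for reports
    ("bob", "billing"): {},               # bob may access billing unconditionally
}

def is_entitled(identity: str, resource: str, attrs: dict) -> bool:
    required = ENTITLEMENTS.get((identity, resource))
    if required is None:
        return False  # no entitlement at all
    return all(attrs.get(k) == v for k, v in required.items())

print(is_entitled("alice", "reports", {"mfa": True}))   # True
print(is_entitled("alice", "reports", {"mfa": False}))  # False
```

Encoding the matrix as data like this mirrors how entitlements are distributed as technical policies for enforcement.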

13
Q

When using federation, the cloud provider is responsible for mapping attributes, including roles and groups, to the cloud user.

A. True
B. False

A

B. False

Explanation:
When using federation, the cloud user is responsible for mapping attributes, including roles and groups, to the cloud provider and ensuring that these are properly communicated during authentication
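A small sketch of this mapping responsibility; the group and role names are hypothetical, and a real deployment would send these as attributes in a federation assertion (e.g., SAML):

```python
# Sketch: the cloud user maps internal directory groups to the
# roles the cloud provider expects, and communicates them during
# authentication. All names below are illustrative only.
GROUP_TO_ROLE = {
    "eng-admins": "ProviderAdmin",
    "eng-devs": "ProviderDeveloper",
}

def assertion_attributes(user_groups: list[str]) -> list[str]:
    # Unmapped groups (e.g. "hr") simply don't appear in the assertion.
    return sorted(GROUP_TO_ROLE[g] for g in user_groups if g in GROUP_TO_ROLE)

print(assertion_attributes(["eng-devs", "hr"]))  # ['ProviderDeveloper']
```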

14
Q

In a cloud-based WAF, traffic is redirected to a service that analyzes and filters it before passing it to the web application

A. True
B. False

A

A. True

Explanation:
In a cloud-based WAF, customers redirect traffic (using DNS) to a service that analyzes and filters traffic before passing it through to the destination web application.
Many cloud WAFs also include anti-DDoS capabilities.
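A toy sketch of the analyze-then-forward flow (the patterns and requests are illustrative; real WAFs use far more sophisticated detection):

```python
# Toy cloud-WAF flow: traffic arrives at the filtering service
# (after DNS redirection), and only clean requests are forwarded
# to the destination web application. Patterns are illustrative.
BLOCKED_PATTERNS = ("<script>", "' OR 1=1")

def waf_filter(request: str):
    if any(p in request for p in BLOCKED_PATTERNS):
        return None          # dropped before reaching the application
    return request           # forwarded to the web application

print(waf_filter("GET /home"))           # GET /home
print(waf_filter("GET /?q=<script>"))    # None
```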

15
Q

Which of the following statements best describes the potential advantages of security as a service?

A. Many areas of security as a service are ready for adoption, with the notable exceptions of anti-malware and web security gateway programs
B. The advantages may include deployment flexibility, extensive domain knowledge, and the ability of SecaaS providers to scale
C. The standardization of security software makes the outsourcing of security as a service nearly obsolete
D. The higher costs and reduced flexibility are more than compensated by the ability to pass the security responsibilities on to another firm

A

B. The advantages may include deployment flexibility, extensive domain knowledge, and the ability of SecaaS providers to scale

Explanation:
Potential benefits of SecaaS include cloud computing benefits, staffing and expertise, intelligence sharing, deployment flexibility, insulation of clients, and scaling and cost.

Cloud Computing Benefits
The normal potential benefits of cloud computing - such as reduced capital expenses, agility, redundancy, high availability, and resiliency - all apply to SecaaS.
As with any other cloud provider, the magnitude of these benefits depends on the pricing, execution, and capabilities of the security provider.

Staffing and Expertise
Many organizations struggle to employ, train and retain security professionals across relevant domains of expertise.
This can be exacerbated due to limitations of local markets, high costs for specialists, and balancing day-to-day needs with the high rate of attacker innovation.
As such, SecaaS providers bring the benefit of extensive domain knowledge and research that may be unattainable for many organizations that are not solely focused on security or the specific security domain.

Intelligence-Sharing
SecaaS providers protect multiple clients simultaneously and have the opportunity to share intelligence and data across them.
For example, finding a malware sample in one client allows the provider to immediately add it to their defensive platform, thus protecting all other customers.
Practically speaking this isn't a magic wand, as the effectiveness will vary across categories, but since intelligence-sharing is built into the service, the potential upside is there.

Deployment Flexibility
SecaaS may be better positioned to support evolving workplaces and cloud migrations, since it is itself a cloud-native model delivered using broad network access and elasticity.
Services can typically handle more flexible deployment models, such as supporting distributed locations without the complexity of multi-site hardware

Insulation of Clients
In some cases, SecaaS can intercept attacks before they hit the organization directly.
For example, spam filtering and cloud-based Web Application Firewalls are positioned between the attackers and the organization. They can absorb certain attacks before they ever reach the customer's assets.

Scaling and Costs
The cloud model provides the customer with a “Pay as You Grow” model, which also helps organizations focus on their core business and lets them leave security concerns to the experts

16
Q

By nature, most DDoS protections are not cloud-based, and they do not operate by rerouting traffic

A. False
B. True

A

A. False

Explanation:
By nature, most DDoS protections are cloud-based.
They operate by rerouting traffic through the DDoS service in order to absorb attacks before they can affect the customer's own infrastructure.

17
Q

Which of the following is not a security concern of serverless computing?

A. Serverless places a much higher security burden on the cloud user
B. The cloud user will not have access to commonly used monitoring and logging levels
C. Serverless will result in high levels of access to the cloud provider's management plane
D. Vulnerability assessment must comply with the provider's terms of service
E. Incident response will be more complicated

A

A. Serverless places a much higher security burden on the cloud user

Explanation:
Choosing your provider and understanding security SLAs and capabilities is absolutely critical.
Although the cloud provider is responsible for security below the serverless platform level, the cloud user is still responsible for properly configuring and using the products

From a security standpoint, key serverless issues include:
Using serverless, the cloud user will not have access to commonly used monitoring and logging levels, such as server or network logs. Applications will need to integrate more logging, and cloud providers should provide the necessary logging to meet core security and compliance requirements.

Although the provider's services may be certified or attested for various compliance requirements, not every service is necessarily covered, and customers need to ensure they only use services within their compliance scope.

There will be high levels of access to the cloud provider's management plane, since that is the only way to integrate and use the serverless capabilities.

Serverless can dramatically reduce attack surface and pathways and integrating serverless components may be an excellent way to break links in an attack chain, even if the entire application stack is not serverless.

Any vulnerability assessment or other security testing must comply with the provider's terms of service.
Cloud users may no longer have the ability to directly test applications, or must test with a reduced scope, since the provider's infrastructure is now hosting everything and can't distinguish between legitimate tests and attacks.

Incident response may also be complicated and will definitely require changes in process and tooling to manage a serverless based incident

18
Q

The incident response plan followed by cloud users does not need to change when using serverless technology

A. False
B. True

A

A. False

Explanation:
Serverless places a much higher security burden on the cloud provider.
Choosing your provider and understanding security SLAs and capabilities is absolutely critical.
Incident response may also be complicated and will definitely require changes in process and tooling to manage a serverless based incident

19
Q

In the SecaaS relationship, who is responsible for the majority of the security?

A. Application User
B. Cloud User
C. Cloud Provider
D. Application Owner
E. Application Developer

A

C. Cloud Provider

Explanation:
Security as a Service (SecaaS) providers offer security capabilities as a cloud service.
This includes dedicated SecaaS providers, as well as packaged security features from general cloud computing providers.
SecaaS encompasses a very wide range of possible technologies, but they must meet the following criteria:

SecaaS includes security products or services that are delivered as a cloud service

To be considered SecaaS, the services must still meet the essential NIST characteristics for cloud computing

20
Q

What should every cloud customer set up with its cloud provider that can be utilized in the event of an incident?

A. Contract
B. Communication Plan
C. Remediation Kit
D. A data destruction plan
E. Communication Officer

A

B. Communication Plan

Explanation:
Cloud customers must set up proper communication paths with the provider that can be utilized in the event of an incident.

Existing open standards can facilitate incident communication

21
Q

Which of the following facilitates the underlying communications method for components within a cloud, some of which are exposed to the cloud user to manage their resources and configurations?

A. Cloud Service Provider
B. Cloud Management Plane
C. Cloud Control Plane
D. Application Programming Interface
E. Hypervisor

A

D. Application Programming Interface

Explanation:
APIs are typically the underlying communications method for components within a cloud, some of which (or an entirely different set) are exposed to the cloud user to manage their resources and configurations.

The cloud resources are pooled using abstraction and orchestration.
Abstraction, often via virtualization, frees the resources from their physical constraints to enable pooling.
Then a set of core connectivity and delivery tools (orchestration) ties these abstracted resources together, creates the pools, and provides the automation to deliver them to customers.

All this is facilitated using Application Programming Interfaces. APIs are typically the underlying communications method for components within a cloud, some of which (or an entirely different set) are exposed to the cloud user to manage their resources and configurations. Most cloud APIs these days use REST (Representational State Transfer), which runs over the HTTP protocol, making it extremely well suited for Internet services.

In most cases, those APIs are both remotely accessible and wrapped into a web-based user interface.
This combination is the cloud management plane, since consumers use it to manage and configure the cloud resources, such as launching virtual machines (instances) or configuring virtual networks.
From a security perspective, it is both the biggest difference from protecting physical infrastructure (since you can't rely on physical access as a control) and the top priority when designing a cloud security program.
If an attacker gets into your management plane, they potentially have full remote access to your entire cloud deployment.

22
Q

Which of the following is the primary tool of governance between a cloud provider and a cloud customer (true for both public and private cloud)?

A. Audit
B. Cloud provider assessment
C. Compliance Reports
D. Contract
E. Non-Disclosure Agreements

A

D. Contract

Explanation:
The primary tool of governance is the contract between a cloud provider and a cloud customer (this is true for public and private cloud)

As with any other area, there are specific management tools used for cloud governance.
This list focuses more on tools for external providers, but these same tools can often be used internally for private deployments

Contracts
The primary tool of governance is the contract between a cloud provider and a cloud customer (this is true for both public and private cloud).
The contract is your only guarantee of any level of service or commitment - assuming there is no breach of contract, which tosses everything into a legal scenario.
Contracts are the primary tool used to extend governance to business partners and providers.

Supplier (Cloud Provider) Assessments
These assessments are performed by the potential cloud customer using available information and allowed processes/techniques.
They combine contractual and manual research with third-party attestations (legal statements often used to communicate the results of an assessment or audit) and technical research.
They are very similar to any supplier assessment and can include aspects like financial viability, history, feature offerings, third-party attestations, feedback from peers, and so on.

Compliance Reporting
Compliance Reporting includes all the documentation on a providers internal and external compliance assessments.
They are the reports from audits of controls, which an organization can perform itself, a customer can perform on a provider (although this usually isn't an option in the cloud), or have performed by a trusted third party.
Third-party audits and assessments are preferred since they provide independent validation (assuming you trust the third party).

23
Q

When associating the functions to an actor, which of the following is used to restrict a list of possible actions down to allowed actions?

A. Controls
B. Functions
C. Locations
D. Permissions
E. Actions

A

A. Controls

Explanation:
A control restricts a list of possible actions down to allowed actions.
Functions can be performed on the data, by a given actor (person or system), in a particular location.

Functions
There are three things we can do with a given datum:
Read - View/read the data, including creating, copying, file transfers, dissemination, and other exchanges of information
Process - Perform a transaction on the data: update it, use it in a business processing transaction, etc.
Store - Hold the data (in a file, database, etc.)

Actor
An actor (person, application, or system/process, as opposed to the access device) performs each function in a location.

Controls
A control restricts a list of possible actions down to allowed actions.
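A toy model of a control narrowing the possible actions down to the allowed actions for a given actor and location (the policy itself is purely illustrative):

```python
# Toy model: a control takes the full set of possible actions and
# restricts it based on actor and location. The policy below is
# purely illustrative.
POSSIBLE_ACTIONS = {"read", "process", "store"}

def control(actor: str, location: str) -> set:
    # Illustrative policy: outside the corporate network, only read.
    if location != "corporate":
        return {"read"}
    return POSSIBLE_ACTIONS

allowed = POSSIBLE_ACTIONS & control("alice", "home")
print(sorted(allowed))  # ['read']
```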

24
Q

Which of the following statements are false about Serverless Computing? (Select 2)

A. The cloud provider manages all the underlying layers
B. The cloud user manages all the underlying layers
C. The cloud provider manages the security functions and controls
D. The cloud user manages the security functions and controls
E. The cloud user accesses the exposed function

A

B. The cloud user manages all the underlying layers
D. The cloud user manages the security functions and controls

Explanation:
Serverless is merely a combined term that covers containers and platform-based workloads, where the cloud provider manages all the underlying layers, including foundational security functions and controls.
Serverless computing is a broad category that refers to any situation where the cloud user doesn't manage any of the underlying hardware or virtual machines, and just accesses exposed functions.
For example, there are serverless platforms for directly executing application code.
Under the hood, these still utilize capabilities such as containers, virtual machines, or specialized hardware platforms.
From a security perspective, serverless is merely a combined term that covers containers and platform-based workloads, where the cloud provider manages all the underlying layers, including foundational security functions and controls.

25
Q

Which of the following are the primary security responsibilities of the cloud provider in compute virtualization? (Select two)

A. Isolation
B. Identity and Access management
C. Encryption
D. Securing the underlying infrastructure
E. Monitoring and Logging

A

A. Isolation
D. Securing the underlying infrastructure

Explanation:
The primary security responsibilities of the cloud provider in compute virtualization are to enforce isolation and maintain a secure virtualization infrastructure

Cloud Provider Responsibilities
The primary security responsibilities of the cloud provider in compute virtualization are to enforce isolation and maintain a secure virtualization infrastructure

Isolation
Isolation ensures that compute processes or memory in one virtual machine/container are not visible to another. It is how we separate different tenants, even when they are running processes on the same physical hardware

The cloud provider is also responsible for securing the underlying infrastructure and the virtualization technology from external attack or internal misuse.
This means using patched and up-to-date hypervisors that are properly configured and supported with processes to keep them up to date and secure over time.
The inability to patch hypervisors across a cloud deployment could create a fundamentally insecure cloud when a new vulnerability in the technology is discovered.

26
Q

Which action is part of the preparation phase of the incident response lifecycle?

A. Evaluating infrastructure by proactive scanning and network monitoring, vulnerability assessments and performing risk assessments
B.Designating a person who will communicate the incident containment and recovery status to senior management
C.Determining the extent of the potential data loss
D.Notification and coordination of activities
E.Configuring and validating alerts

A

A. Evaluating infrastructure by proactive scanning and network monitoring, vulnerability assessments and performing risk assessments

Explanation:
Preparation
Establishing an incident response capability so that the organization is ready to respond to incidents

  • Process to handle incidents
  • Handler communications and facilities
  • Incident analysis hardware and software
  • Internal documentation (port lists, asset lists, network diagrams, current baselines of network traffic)
  • Identifying training
  • Evaluating infrastructure by proactive scanning and network monitoring, vulnerability assessments, and performing risk assessments
  • Subscribing to third party threat intelligence services
27
Q

Which of the following is true of data collection forensics in a cloud environment?

A.Forensics is not allowed in private or hybrid cloud configurations due to the sensitive nature of the data
B.Forensics is allowed in private or hybrid cloud configurations after taking approval from the cloud service providers
C.Forensics is allowed in private or hybrid cloud configurations after putting proper clauses in the contracts
D.Bit-by-bit imaging of a cloud data source is typically difficult or impossible
E.If the data is hosted by the same provider, it is easy to conduct a thorough analysis

A

D.Bit-by-bit imaging of a cloud data source is typically difficult or impossible

Explanation:
Bit-by-bit imaging of a cloud data source is generally difficult or impossible
For obvious security reasons, providers are reluctant to allow access to their hardware, particularly in a multitenant environment where a client could gain access to other clients' data
Even in a private cloud, forensics may be extremely difficult, and clients may need to notify opposing counsel or the courts of these limitations

Luckily, this type of forensic analysis is rarely warranted in cloud computing because the environment often consists of a structured data hierarchy or virtualization that does not provide significant additional relevant information in a bit-by-bit analysis

28
Q

Which layer of the logical stack includes code and message queues?

A.Infrastructure
B.Metastructure
C.Infostructure
D.Applistructure

A

D.Applistructure

Explanation:

Applistructure
The applications deployed in the cloud and the underlying application services used to build them.
For example, Platform as a Service features like message queues, artificial intelligence analysis, or notification services.
At a high level, both cloud and traditional computing adhere to a logical model that helps identify different layers based on functionality.
This is useful to illustrate the differences between the computing models themselves

Infrastructure
The core components of a computing system: compute, network, and storage. The foundation that everything else is built on. The moving parts.

Infostructure
The data and information. Content in a database, file storage etc.

Applistructure
The applications deployed in the cloud and the underlying application services used to build them.
For example, Platform as a Service features like message queues, AI, or notification services

29
Q

Which of the following is one of the challenges of application security in a
cloud environment?

A.Responsiveness
B.Isolated environments
C.Elasticity
D.DevOps
E.Limited Detailed Visibility
A

E.Limited Detailed Visibility

Explanation:
Visibility and the availability of monitoring and logging are impacted, requiring new approaches to gathering security related data.
The rest of the options are opportunities

Challenges of application security in a cloud environment:

Limited Detailed Visibility
Visibility and the availability of monitoring and logging are impacted, requiring new approaches to gathering security-related data. This is especially true when using PaaS, where commonly available logs, such as system or network logs, are often no longer accessible to the cloud user.

Increased Application Scope
The management plane/metastructure security directly affects the security of any applications associated with that cloud account.
Developers and operations will also likely need access to the management plane, as opposed to always going through a different team.
Data and sensitive information is also potentially exposable within the management plane.
Lastly, modern cloud applications often connect with the management plane to trigger a variety of automated actions, especially when PaaS is involved.
For all those reasons, management plane security is now within the scope of the application's security and a failure on either side could bridge into the other

Changing Threat Models
The cloud provider relationship and the shared security model will need to be included in the threat model, as well as in any operational and incident response plans.
Threat models also need to adapt to reflect the technical differences of the cloud provider or platform in use.

Reduced Transparency
There may be less transparency as to what is going on within the application, especially as it integrates with external services.
For example, you rarely know the entire set of security controls for an external PaaS service integrated with your application

Overall, there will be changes to application security due to the shared security model.
Some of these are directly tied to governance and operations, but there are many more in terms of how you think and plan for the application's security.

30
Q

Which of the following is one of the five essential characteristics of cloud computing as defined by NIST?

A.On-demand Pricing
B.Public Cloud
C.Unlimited Resources
D.Measured Service
E.Multi-tenancy
A

D.Measured Service

Explanation:
Measured service meters what is provided, to ensure that consumers use what they are allotted, and if necessary, to charge them for it

NIST defines cloud computing by describing five essential characteristics, three cloud service models and four cloud deployment models.

These are the characteristics that make a cloud a cloud. If something has these characteristics, we consider it cloud computing. If it lacks any of them, it is likely not a cloud.

Resource Pooling
This is the most fundamental characteristic, as discussed above.
The provider abstracts resources and collects them into a pool, portions of which can be allocated to different consumers (typically based on policies)

Consumers provision the resources from the pool using on-demand self-service.
They manage their resources themselves, without having to talk to a human admin

Broad network access means that all resources are available over a network, without any need for direct physical access; the network is not necessarily part of the service

Rapid elasticity allows consumers to expand or contract the resources they use from the pool (provisioning and de-provisioning), often completely automatically.
This allows them to more closely match resource consumption with demand (for example, adding virtual servers as demand increases, then shutting them down when demand drops)

Measured Service meters what is provided, to ensure that consumers only use what they are allotted, and, if necessary, to charge them for it.
This is where the term utility computing comes from, since computing resources can now be consumed like water and electricity, with the client only paying for what they use
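
The metering-and-billing idea behind measured service can be shown with a trivial calculation; the instance types and hourly rates below are made up for illustration only.

```python
# A toy meter for "measured service": track hours of resource consumption
# per instance type, then bill only for what was actually used.
# All instance names and rates here are hypothetical.

usage_hours = {"vm-small": 120, "vm-large": 30}   # hours consumed this month
rates = {"vm-small": 0.02, "vm-large": 0.08}      # $ per hour, illustrative

# The client pays only for metered consumption, like a water or power bill.
bill = sum(usage_hours[kind] * rates[kind] for kind in usage_hours)
print(f"monthly bill: ${bill:.2f}")  # monthly bill: $4.80
```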

ISO/IEC 17788 lists six key characteristics, the first five of which are identical to the NIST characteristics.
The only addition is multi-tenancy, which is distinct from resource pooling

31
Q

What factors should be considered about the data specifically due to regulatory, contractual and other jurisdictional issues?

A.Owner of the data who has the accountability
B.Logical and physical locations of data
C.Algorithm that is used to encrypt the data
D.Size of the data and the type of storage
E.The channel the data uses while in transit

A

B.Logical and physical locations of data

Explanation:
Due to all the potential regulatory, contractual and other jurisdictional issues, it is extremely important to understand both the logical and physical locations of data.
Data is accessed and stored in multiple locations, each with its own lifecycle

32
Q

How will you ensure that all data has been removed from a public cloud environment including all media such as back-up tapes?

A.Encrypt the data while storing and allow decryption rights to authorized individuals
B. Maintain local key management and revoke or delete keys from the key management system to prevent the data from being accessed again
C.Practice Segregation of duties so that only you can delete the data
D.Use key management system provided by CSP (Cloud Service Provider) and revoke/delete keys to prevent the data from being accessed again
E. Work with CSP and get all the data purged from main storage and back-ups

A

B. Maintain local key management and revoke or delete keys from the key management system to prevent the data from being accessed again

Explanation:
Where data is stored in a public cloud environment, there are problems when exiting that environment: being able to prove that all data (especially PII or SPI data, or data subject to regulatory assurance regimes) has been deleted from the public cloud environment, including from all other media, such as back-up tapes

Maintaining local key management allows such assurance by revoking (or just deleting/losing) the key from the key management system, thus assuring that any data remaining in the public cloud cannot be decrypted

33
Q

If, after all your assessments and the controls that you implement yourself there is still residual risk, what are your only options?

A.You should contact your insurance partner and have a contract on residual risk.
B. You can accept the risk by informing the senior management
C.You can contact the cloud service provider as risk in the cloud is a shared responsibility
D.You can change the cloud service provider and choose the one which has no risk
E.You can transfer, accept or avoid risks.

A

E.You can transfer, accept or avoid risks.

Explanation:
After reviewing and understanding what risks the cloud provider manages, what remains is residual risk.
Residual risk may often be managed by controls that you implement (i.e. encryption)
The availability and specific implementation of risk controls vary greatly across cloud providers, particular services/features, service models, and deployment models.
If, after all your assessments and the controls that you implement yourself, there is still residual risk, your only options are to transfer it, accept the risk, or avoid it

34
Q

What must the monitoring scope cover in addition to the deployed assets?

A.The data plane
B.The application plane
C.The service plane
D.The access plane
E.The management plane
A

E.The management plane

Explanation:
In all cases, the monitoring scope must cover the cloud's management plane, not merely the deployed assets.
Detection and analysis in a cloud environment may look nearly the same (for IaaS) and quite different (for SaaS).

35
Q

Which of the following statements is not true regarding securely building and deploying applications in cloud computing environments?

A.Management plane security directly affects the security of any applications associated with the cloud account
B.Data and sensitive information is not exposed within the management plane
C.Cloud applications often connect with the management plane to trigger a variety of automated actions
D.Developers and operations will also likely need access to the management plane
E.Management plane security is now within the scope of the applications security

A

B.Data and sensitive information is not exposed within the management plane

Explanation:
Data and sensitive information is also potentially exposable within the management plane.
The management plane/metastructure security directly affects the security of any applications associated with that cloud account.
Developers and operations will also likely need access to the management plane, as opposed to always going through a different team.
Lastly, modern cloud applications often connect with the management plane to trigger a variety of automated actions, especially when PaaS is involved.
For all of those reasons, management plane security is now within the scope of the application's security and a failure on either side could bridge into the other

36
Q

Dynamic Application Security Testing (DAST) may be limited and/or require pre-testing permission from the provider.

A.True
B.False

A

A.True

Explanation:
DAST tests running applications and includes testing such as web vulnerability testing and fuzzing.
Due to the terms of service with the cloud provider, DAST may be limited and/or require pre-testing permission from the provider.
With cloud and automated deployment pipelines, it is possible to stand up entirely functional test environments using infrastructure as code and then perform deep assessments before approving changes for production

37
Q

Which of the following statements about Cloud Access and Security Brokers is not true?

A.They help discover internal use of cloud service
B.They use various mechanisms such as network monitoring
C.They integrate with an existing network gateway
D.They monitor DNS queries
E.They cannot do man in the middle monitoring

A

E.They cannot do man in the middle monitoring

Explanation:
CASBs can do inline interception (man-in-the-middle monitoring)
CASBs discover internal use of cloud services using various mechanisms such as network monitoring, integrating with an existing network gateway or monitoring tool, or even by monitoring DNS queries
After discovering which services your users are connecting to, most of these products then offer monitoring of activity on approved services through API connections (when available) or inline interception (man-in-the-middle monitoring)
Many support DLP and other security alerting and even offer controls to better manage use of sensitive data in cloud services
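
One of the discovery mechanisms mentioned above, monitoring DNS queries, can be sketched in a few lines; the service catalog and query log below are purely illustrative, not how any particular CASB product works.

```python
# Toy sketch of CASB-style cloud service discovery via DNS monitoring:
# match observed DNS queries against a catalog of known cloud service
# domains to reveal which services users are connecting to.
# The catalog and the query log are hypothetical examples.

KNOWN_SERVICES = {
    "drive.google.com": "Google Drive",
    "dropbox.com": "Dropbox",
    "app.box.com": "Box",
}

observed_queries = ["dropbox.com", "intranet.corp.local", "drive.google.com"]

# Internal-only names fall through; known cloud domains are flagged.
discovered = {KNOWN_SERVICES[q] for q in observed_queries if q in KNOWN_SERVICES}
print(sorted(discovered))  # ['Dropbox', 'Google Drive']
```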

38
Q

Which of the following will not help detect actual migrations, monitor cloud usage, and any data transfer to the cloud?

A.CASB
B.URL Filtering
C. DLP
D.Data encryption in transit

A

D.Data encryption in transit

Explanation:
You can detect actual migrations, monitor cloud usage, and detect any data transfer using CASB, URL filtering, and DLP

Data encryption in transit will help to secure the data while in motion but will not help to detect actual migrations, monitor cloud usage, or detect any data transfers to the cloud
To detect actual migrations and monitor cloud usage and any data transfers, you can use the following tools:

CASB
CASBs discover internal use of cloud services using various mechanisms such as network monitoring, integrating with an existing network gateway or monitoring tool, or even by monitoring DNS queries. After discovering which services your users are connecting to, most of these products then offer monitoring of activity on approved services via API connections or inline interception.
Many support DLP and other security alerting and even offer controls to better manage use of sensitive data in cloud services

URL Filtering
While not as robust as CASB, a URL filter/web gateway may help you understand which cloud services your users are using (or trying to use)

DLP
If you monitor web traffic (and look inside SSL connections), a DLP tool may also help detect data migrations to cloud services.
However, some cloud SDKs and APIs may encrypt portions of data and traffic that DLP tools cannot unravel, and thus they won't be able to understand the payload

39
Q

Tokenization is often used when preserving the format of the data is not important

A.True
B.False

A

B.False

Explanation:
Tokenization is often used when the format of the data is important (i.e. replacing credit card numbers in an existing system that requires the same format text string)

Format Preserving Encryption encrypts data with a key while keeping the same structural format, as tokenization does, but it may not be as cryptographically secure due to the compromises involved

40
Q

Which of the following statements is not true regarding “instance managed encryption”?

A.The encryption engine runs within the instance
B.The key is stored outside the volume
C.The volume can be protected by a passphrase
D.The volume can be protected by a key pair

A

B.The key is stored outside the volume

Explanation:
The key is stored in the volume but protected by a passphrase or key pair
IaaS volumes can be encrypted using different methods, depending on your data

Volume Storage Encryption
Instance Managed encryption: The encryption engine runs within the instance and the key is stored in the volume but protected by a passphrase or key pair

Externally Managed Encryption:
The encryption engine runs in the instance, but the keys are managed externally and issued to the instance on request
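
The idea of a key stored in the volume but protected by a passphrase can be illustrated with a toy key-wrapping sketch. Real volume encryption systems (LUKS, for example) use authenticated key-wrap algorithms; the PBKDF2-plus-XOR wrap below is purely conceptual and not secure in practice, and all names are hypothetical.

```python
import hashlib
import secrets

# Conceptual sketch of instance-managed encryption key protection:
# the volume (data) key lives *in* the volume, but wrapped under a
# key-encryption key (KEK) derived from a passphrase, so the stored
# key is unusable without the passphrase. Illustrative only.

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte KEK from the passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def wrap(volume_key: bytes, passphrase: str, salt: bytes) -> bytes:
    """Toy wrap: XOR the volume key with the KEK (not a real key-wrap)."""
    kek = derive_kek(passphrase, salt)
    return bytes(a ^ b for a, b in zip(volume_key, kek))

def unwrap(wrapped: bytes, passphrase: str, salt: bytes) -> bytes:
    return wrap(wrapped, passphrase, salt)  # XOR is its own inverse

salt = secrets.token_bytes(16)
volume_key = secrets.token_bytes(32)
stored_on_volume = wrap(volume_key, "correct horse", salt)
assert unwrap(stored_on_volume, "correct horse", salt) == volume_key
assert unwrap(stored_on_volume, "wrong pass", salt) != volume_key
```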

41
Q

Which of the following is not part of the PaaS encryption?

A.Application Layer Encryption
B.Proxy Encryption
C.Database Encryption
D.Provider-managed layers in the application such as the messaging queue

A

B.Proxy Encryption

Explanation:
Proxy encryption is part of IaaS or SaaS encryption
PaaS encryption varies tremendously due to all the different PaaS platforms

Application Layer Encryption
Data is encrypted in the PaaS application or the client accessing the platform

Database Encryption
Data is encrypted in the database using encryption that's built in and supported by the database platform, like Transparent Database Encryption (TDE), or at the field level

Other: These are provider managed layers in the application, such as the messaging queue

42
Q

Which of the following is not the potential option of handling key management?

A.HSM
B. Virtual Appliance
C. Hybrid
D. Proxy
E.Cloud Service Provider
A

D. Proxy

Explanation:
There are four potential options for handling key management: HSM/Appliance, Virtual Appliance/Software, Cloud Provider Service, and Hybrid

HSM/Appliance
Use a traditional hardware security module (HSM) or appliance-based key manager, which will typically need to be on-premises, and deliver the keys to the cloud over a dedicated connection

Virtual Appliance/Software
Deploy a virtual appliance or software-based key manager in the cloud

Cloud Provider Service
This is a key management service offered by the cloud provider.
Before selecting this option, make sure you understand whether your keys could be exposed

Hybrid
You can also use a combination, such as using an HSM as the root of trust for keys but then delivering application-specific keys to a virtual appliance that's located in the cloud and only manages keys for its particular context

43
Q

The hub and spoke architecture uses internal identity providers or sources connected directly to cloud providers

A.True
B.False

A

B.False

Explanation:
Hub and spoke - internal identity providers/sources communicate with a central broker or repository that then serves as the identity provider for federation to cloud providers
When using federation, the cloud user needs to determine the authoritative source that holds the unique identities they will federate.
This is often an internal directory server.
The next decision is whether to directly use the authoritative source as the identity provider, or use a different identity source that feeds from the authoritative source

44
Q

You cannot have a cloud without virtualization

A.True
B.False

A

A.True

Explanation:
At its most basic, virtualization abstracts resources from their underlying physical assets.
You can virtualize nearly anything in technology, from entire computers to networks to code.
As mentioned in the introduction, cloud computing is fundamentally based on virtualization; its how we abstract resources to create pools.
Without virtualization, there is no cloud.

45
Q

In federation, which party makes assertions to which party?

A. Identity provider makes assertions to a relying party after building a trust relationship
B. Relying party makes assertions to Identity Provider after building a trust relationship
C. Relying party makes assertions to identity broker
D.Identity broker makes assertions to Identity Provider

A

A. Identity provider makes assertions to a relying party after building a trust relationship

Explanation:
How Federated Identity Management Works:
Federation involves an identity provider making assertions to a relying party after building a trust relationship.
At the heart are a series of cryptographic operations to build the trust relationship and exchange credentials.
A practical example is a user logging in to their work network, which hosts a directory server for accounts.
That user then opens a browser connection to a SaaS application.
Instead of logging in, there are a series of behind the scenes operations, where the identity provider (the internal directory server) asserts the identity of the user, and that the user authenticated as well as any attributes.
The relying party trusts those assertions and logs the user in without the user entering any credentials.
In fact, the relying party doesn't even have a username or password for that user; it relies on the identity provider to assert successful authentication.
To the user, it appears that they simply go to the website for the SaaS application and are logged in, assuming they have successfully authenticated with the internal directory
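
The assertion flow described above can be sketched as a toy exchange. Real federation uses standards such as SAML or OpenID Connect with public-key signatures and richer assertion formats; the shared-secret HMAC scheme and every name below are hypothetical, meant only to show the trust relationship at work.

```python
import hashlib
import hmac
import json

# Toy federation sketch: the identity provider (IdP) signs an assertion
# about an authenticated user; the relying party (RP) accepts it based on
# a previously established trust relationship (here, a shared secret).
SHARED_SECRET = b"trust-relationship-key"  # established out of band

def idp_make_assertion(user: str, attributes: dict) -> dict:
    """The IdP asserts that `user` authenticated, plus any attributes."""
    claims = json.dumps({"user": user, "attrs": attributes}, sort_keys=True)
    sig = hmac.new(SHARED_SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def rp_verify_assertion(assertion: dict) -> bool:
    """The RP trusts the assertion only if the signature checks out."""
    expected = hmac.new(SHARED_SECRET, assertion["claims"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])

assertion = idp_make_assertion("alice", {"group": "engineering"})
print(rp_verify_assertion(assertion))  # True: RP logs the user in
assertion["claims"] = assertion["claims"].replace("alice", "mallory")
print(rp_verify_assertion(assertion))  # False: tampered assertion rejected
```

Note that the RP never sees a password: it only checks that the assertion was produced by the trusted IdP, which is the core of federated identity.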

46
Q

ISO/IEC 17788 lists six key characteristics for cloud, the first five of which are identical to the NIST characteristics.

Which of the following is the additional one?

A.Resource Pooling
B.Broad Network Access
C.Multi-tenancy
D.On-Demand Self-Service
E.Measured Service
F.Rapid Elasticity
A

C.Multi-tenancy

Explanation:
ISO/IEC 17788 lists six key characteristics, the first five of which are identical to the NIST characteristics.
The only addition is multi-tenancy, which is distinct from resource pooling

47
Q

When there are gaps in network logging data, what step could be taken?

A. Keep the logs in one location
B.Keep the log digest along with original log files
C.Instrument the technology stack with your own logging
D.Work with cloud provider and fix the gaps
E.Encrypt the logs; keep the log digest along with the original files

A

C.Instrument the technology stack with your own logging

Explanation:
Where there are gaps you can sometimes instrument the technology stack with your own logging
Cloud platform logs are not universally available.
Ideally they should show all management-plane activity.
It's important to understand what is logged and the gaps that could affect incident analysis.
Do they include automated system activities (like auto-scaling) or cloud provider management activities?
Is all management activity recorded?
In the case of a serious incident, providers may have other logs that are not normally available to customers
One challenge in collecting information may be limited network visibility.
Network logs from a cloud provider will tend to be flow records, but not full packet capture
Where there are gaps you can sometimes instrument the technology stack with your own logging.
This can work within instances, containers, and application code in order to gain telemetry important for the investigation.
Pay particular attention to PaaS and serverless application architectures; you will likely need to add custom application-level logging
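
The custom application-level logging suggested here might look like the following sketch, which emits structured JSON events to stdout for the platform's log collector to pick up; the event fields and function name are illustrative, not a prescribed schema.

```python
import json
import logging
import sys
import time

# Minimal sketch of application-level telemetry for PaaS/serverless
# workloads, where system and network logs may be unavailable: emit
# each security-relevant event as one JSON line on stdout, which most
# platforms capture and ship automatically. Fields are illustrative.

logger = logging.getLogger("app-telemetry")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def log_event(action: str, principal: str, **details) -> str:
    """Record one security-relevant event as a structured JSON line."""
    event = {"ts": time.time(), "action": action,
             "principal": principal, **details}
    line = json.dumps(event, sort_keys=True)
    logger.info(line)
    return line

line = log_event("object.read", "alice", bucket="invoices", status="allowed")
```

Because each line is self-describing JSON, an investigator can reconstruct who did what even when the provider's own logs have gaps.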

48
Q

Which statement best describes the options for PaaS encryption?

A.PaaS is limited to hybrid networks only
B. PaaS encryption is limited to APIs built into the platform, external encryption services and other variations
C.PaaS is limited to client/application and database encryption
D.PaaS would most likely include file/folder encryption and enterprise digital rights management
E.PaaS is very diverse and may include client/application, database and proxy encryption as well as other options

A

E.PaaS is very diverse and may include client/application, database and proxy encryption as well as other options

Explanation:
PaaS is very diverse; the following list may not cover all potential options:

Client/Application Encryption
Data is encrypted in the PaaS application or the client accessing the platform

Database Encryption
Data is encrypted in the database using encryption built in and supported by the database platform

Proxy Encryption
Data passes through an encryption proxy before being sent to the platform

Other
Additional options may include APIs built into the platform, external encryption services and other variations

49
Q

How can you monitor and filter data in a virtual network when traffic might not cross the physical network?

A.Route traffic to a virtual network monitoring or filtering tool on the same hardware
B.Route it to a virtual appliance on the same virtual network
C. Route traffic to the physical network device for capturing
D.Route the traffic through a virtual network interface
E.A and B

A

E.A and B

Explanation:
In particular, monitoring and filtering (including firewalls) change extensively due to the differences in how packets move around the virtual network.

Resources may communicate on a physical server without traffic crossing the physical network.
For example, if two virtual machines are located on the same physical machine there is no reason to route network traffic off the box and onto the network.
Thus, they can communicate directly and monitoring and filtering tools inline on the network (or attached to the routing/switching hardware) will never see the traffic

To compensate, you can route traffic to a virtual network monitoring or filtering tool on the same hardware (including a virtual machine version of a network security product)
You can also bridge all network traffic back out to the network, or route it to a virtual appliance on the same virtual network

50
Q

Which of the following is not a reason for a public cloud provider to maintain a higher baseline security?

A. Cloud providers have significant economic incentives to maintain higher baseline security
B. Not maintaining a baseline will undermine the trust that a public cloud provider needs
C.Cloud providers are subject to a wider range of regulatory and industry compliance requirements
D. Higher baseline security is needed to attract customers
E.Maintaining a higher baseline security is a shared responsibility and should be handled accordingly

A

E.Maintaining a higher baseline security is a shared responsibility and should be handled accordingly

Explanation:
Cloud computing mostly brings security benefits to applications, but as with most areas of cloud technology, it does require commensurate changes to existing practices, processes, and technologies that were not designed to operate in the cloud.
At a high level, this balance of opportunities and challenges includes:

Opportunities:
Higher baseline security. Cloud providers, especially major IaaS and PaaS providers, have significant economic incentives to maintain higher baseline security than most organizations.
In a cloud environment, major baseline security failures completely undermine the trust that a public cloud provider needs in order to maintain relationships with its customer base.

Cloud providers are also subject to a wider range of security requirements in order to meet all the regulatory and industry compliance baselines needed to attract customers from those verticals.

These combine to strongly motivate cloud providers to maintain extremely high levels of security

51
Q

Which security advantage considers that anything that goes into the production is created by the CI/CD pipeline on approved code and configuration templates?

A.Standardization
B.Automated Testing
C.Immutable Infrastructure
D.Improved auditing and change management
E.SecDevOps/DevSecOps and Rugged DevOps
A

A.Standardization

Explanation:
Standardization
With DevOps, anything that goes into production is created by the CI/CD pipeline on approved code and configuration templates.
Dev/Test/Prod are all based on the exact same source files, which eliminates any deviation from known-good standards

Automated Testing
As discussed, a wide variety of security testing can be integrated into the CI/CD pipeline, with manual testing added as needed to supplement

Immutable:
CI/CD pipelines can produce master images for virtual machines, containers and infrastructure stacks very quickly and reliably. This enables automated deployments and immutable infrastructure

Improved Auditing and Change Management
CI/CD pipelines can track everything down to individual character changes in source files that are tied to the person submitting the change, with the entire history of the application stack (including infrastructure) stored in a version control repository. This offers considerable audit and change tracking benefits

52
Q

Which security advantage can produce master images for virtual machines, containers, and infrastructure stacks very quickly and reliably by using CI/CD pipelines?

A.Standardization
B.Automated Testing
C.Immutable 
D.Improved auditing and change management
E.SecDevOps/DevSecOps/Rugged DevOps
A

C.Immutable

Explanation:
Immutable
CI/CD pipelines can produce master images for virtual machines, containers and infrastructure stacks very quickly and reliably

53
Q

Leveraging “manual” data transfer methods such as Secure File Transfer Protocol (SFTP) is often more secure and cost effective than mechanisms provided by the cloud provider to transfer data

A.True
B.False

A

B. False

Explanation:
Sending data to a provider's object storage over an API is likely much more reliable and secure than setting up your own SFTP server on a virtual machine in the same provider.

Ensure that you are protecting your data as it moves to the cloud.
This necessitates understanding your provider's data migration mechanisms, as leveraging provider mechanisms is often more secure and cost effective than manual data transfer methods such as SFTP

For example, sending data to a provider's object storage over an API is likely much more reliable and secure than setting up your own SFTP server on a virtual machine in the same provider

54
Q

Which of the following statements is not true of the mechanisms used to secure data storage at rest?

A.Encryption and tokenization are separate technologies
B.Encryption scrambles the data
C.Tokenization replaces the data with a fixed value
D.Encryption results in a cipher text
E.Tokenization stores the original and the randomized version in a secure database

A

C.Tokenization replaces the data with a fixed value

Explanation:
Tokenization takes the data and replaces it with a random value, not a fixed one.
Encryption and tokenization are two separate technologies.
Encryption protects data by applying a mathematical algorithm that scrambles the data, which can then be recovered only by running it through an unscrambling (decryption) process with a corresponding key.
The result is a blob of ciphertext.
Tokenization, on the other hand, takes the data and replaces it with a random value.
It then stores the original and the randomized version in a secure database for later recovery
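The contrast above can be sketched in Python. The in-memory dict standing in for the "secure database" is purely illustrative; a real token vault would be an access-controlled datastore:

```python
import secrets

# Illustrative token vault: maps random tokens back to original values.
_vault = {}

def tokenize(value: str) -> str:
    """Replace sensitive data with a random value, storing the original."""
    token = secrets.token_hex(16)  # new random token each call
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Recover the original value from the secure store."""
    return _vault[token]

pan = "4111111111111111"
tok = tokenize(pan)  # random, bears no mathematical relation to pan
```

Note the key difference from encryption: the token cannot be reversed by any algorithm or key; recovery is only possible via the lookup database.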

55
Q

Which of the following are the types of Volume Storage Encryption?

A.Instance-managed encryption and Externally managed encryption
B.Client side encryption and server side encryption
C.Proxy encryption
D.Instance-managed encryption and internally managed encryption

A

A.Instance-managed encryption and Externally managed encryption

Explanation:
Instance-managed encryption and externally managed encryption are the types of volume storage encryption.
IaaS volumes can be encrypted using different methods, depending on your data.

Volume Storage Encryption
Instance-managed encryption: The encryption engine runs within the instance, and the key is stored in the volume but protected by a passphrase or keypair

Externally managed encryption: The encryption engine runs in the instance, but the keys are managed externally and issued to the instance on request

Object and File Storage
Client-side encryption: When object storage is used as the back end for an application (including mobile applications), encrypt the data using an encryption engine embedded in the application or client

Server-side encryption: Data is encrypted on the server (cloud) side after being transferred in. The cloud provider has access to the key and runs the encryption engine

Proxy encryption: In this model, you connect the volume to a special instance or appliance/software and then connect your instance to the encryption instance.
The proxy handles all crypto operations and may keep keys either onboard or externally
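The externally managed model can be sketched as follows. The key service here is a hypothetical stand-in for a real KMS, and the XOR routine is a toy "cipher" for illustration only, not real encryption:

```python
import secrets
from itertools import cycle

# Hypothetical external key service: keys live outside the instance
# and are issued on request (a stand-in for a real KMS).
_key_service = {"vol-1": secrets.token_bytes(32)}

def request_key(volume_id: str) -> bytes:
    """Instance asks the external service for its volume key."""
    return _key_service[volume_id]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration only; symmetric, so it both
    encrypts and decrypts. Do not use for real data protection."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = request_key("vol-1")  # key issued on request, never stored on the volume
ciphertext = xor_crypt(b"secret block", key)
plaintext = xor_crypt(ciphertext, key)
```

The point of the pattern is the key's location: because the key is held outside the instance and volume, compromising a volume snapshot alone does not expose the data.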

56
Q

In proxy encryption, the proxy handles all crypto operations and may keep keys either internally or externally

A.True
B.False

A

A.True

Explanation:
Proxy encryption
In this model, you connect the volume to a special instance or appliance/software and then connect your instance to the encryption instance.
The proxy handles all crypto operations and may keep keys either onboard or externally

57
Q

Which of the following should be the main considerations for key management?

A.Performance, access control, latency, non-repudiation
B.Performance, accessibility, latency, security
C.Performance, access control, speed, non-repudiation
D.Performance, availability, speed, security

A

B.Performance, accessibility, latency, security

58
Q

When data or operations are transferred to a cloud, the responsibility for protecting and securing the data typically remains with the collector or custodian of the data

A.True
B.False

A

A.True

59
Q

Virtual machines abstract the running of the code, not including the operating systems, from the underlying hardware

A.True
B.False

A

B.False

Explanation:
The virtual machine manager (hypervisor) also abstracts the operating system from the underlying hardware

60
Q

Which of the following statements is not true about Security Assertion Markup Language (SAML) 2.0?

A.OASIS standard for federated identity management
B. Uses XML to make assertions between an identity provider and a relying party
C.Assertions can contain authentication statements and attribute statements
D.Assertions can contain authorization decisions statements
E.Supports only authentication and not authorization

A

E.Supports only authentication and not authorization

Explanation:
SAML assertions can contain authentication statements, attribute statements, and authorization decision statements
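A hand-written minimal assertion (illustrative values only, not real IdP output) shows all three statement types, parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# Minimal illustrative assertion containing all three SAML 2.0
# statement types: authentication, attribute, and authz decision.
assertion_xml = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:AuthnStatement AuthnInstant="2024-01-01T00:00:00Z"/>
  <saml:AttributeStatement>
    <saml:Attribute Name="email"/>
  </saml:AttributeStatement>
  <saml:AuthzDecisionStatement Resource="https://app.example" Decision="Permit"/>
</saml:Assertion>
"""

root = ET.fromstring(assertion_xml)
statements = {tag: root.findall(f"saml:{tag}", NS)
              for tag in ("AuthnStatement", "AttributeStatement",
                          "AuthzDecisionStatement")}
```

The presence of AuthzDecisionStatement is exactly why answer E is false: SAML carries authorization decisions as well as authentication assertions.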