8. Virtualization and Containers - Flashcards

1
Q

Virtualization adds two new layers for security controls:

A

*Security of the virtualization technology itself, such as hypervisor security. This rests with the provider.
*Security controls for the virtual assets. The responsibility for implementing available controls rests with the customer. Exposing controls for the customers to leverage is the provider’s responsibility.

2
Q

The main areas of virtualization that you need to know for your exam are:

A

Compute, network, and storage. Each of the three creates its own resource pools, and those pools are possible only as a result of virtualization.

Virtualization is how compute, network, and storage pools are created and is the enabling technology behind the multitenancy aspect of cloud services.

3
Q

What is compute virtualization?

A

Compute virtualization abstracts the running of code (including operating systems) from the underlying hardware. Instead of running code directly on the hardware, it runs on top of an abstraction layer (such as a hypervisor) that isolates (not just segregates!) one virtual machine (VM) from another. This enables multiple operating systems (guest OSs) to run on the same hardware.

4
Q

An older form of virtualization that you may be aware of is the Java Virtual Machine (JVM). What does it do?

A

The JVM creates an environment for a Java application to run in. The JVM abstracts the underlying hardware from the application. This allows for more portability across hardware platforms because the Java app does not need to communicate directly with the underlying hardware, only with the JVM.

There are many other examples of virtualization out there, but the big takeaway is that virtualization performs abstraction.

5
Q

What are the primary cloud provider responsibilities in compute virtualization?

A

The primary security responsibilities of the cloud provider in compute virtualization are to enforce isolation and maintain a secure virtualization infrastructure. Isolation ensures that compute processes or memory in one virtual machine/container are not visible to another. This isolation supports a secure multitenant model, where multiple tenants can run processes on the same physical hardware (such as a single server).

The cloud provider is also responsible for securing the underlying physical infrastructure and the virtualization technology from external attack or internal misuse. Like any other software, hypervisors need to be properly configured and will require the latest patches installed to address new security issues.

Cloud providers should also have strong security in place for all aspects of virtualization for cloud users. This means creating a secure chain of processes from the image (or other source) used to run the virtual machine through a boot process, with security and integrity being top concerns. This ensures that tenants cannot launch machines based on images that they shouldn’t have access to, such as those belonging to another tenant, and that when a customer runs a virtual machine (or another process), it is the one the customer expects to be running.
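That secure chain from image to running instance can be sketched as a digest check before boot. This is a minimal illustration, assuming a simple allowlist of known-good SHA-256 digests; the names and data are made up, not any provider's real mechanism:

```python
import hashlib

# Hypothetical allowlist mapping image names to known-good SHA-256 digests.
# In a real cloud, the provider maintains this chain of trust for tenants.
TRUSTED_IMAGES = {
    "ubuntu-22.04-base": hashlib.sha256(b"ubuntu-22.04-base-bytes").hexdigest(),
}

def verify_image(name, image_bytes):
    """Allow a VM to boot only if the image matches its recorded digest."""
    expected = TRUSTED_IMAGES.get(name)
    if expected is None:
        return False  # image is not in the trusted chain at all
    return hashlib.sha256(image_bytes).hexdigest() == expected
```

A tampered image, or an image a tenant shouldn't have access to, fails the check and never launches.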

Finally, cloud providers should also assure customers that volatile memory is safe from unapproved monitoring since important data could be exposed if another tenant, a malicious employee, or a bad actor can access running memory belonging to another tenant.

6
Q

What is volatile memory?

A

Volatile memory contains all kinds of potentially sensitive information (think unencrypted data, credentials, and so on) and must be protected from unapproved access. Volatile memory must also have strong isolation implemented and maintained by the provider.

7
Q

What are some cloud consumer responsibilities around virtualization?

A

The primary responsibility of the cloud user is to implement security properly for everything deployed and managed in a cloud environment. Cloud customers should take advantage of the security controls exposed by their providers for managing their virtual infrastructures. Of course, there are no rules or regulations as to what a provider must offer customers, but some controls are usually offered.

Cloud providers offer security settings such as identity and access management (IAM) to manage virtual resources. When you're considering the IAM offered by the provider, remember that this is generally at the management plane, not the applistructure. In other words, we're talking about giving your organization's users the appropriate management-plane permissions (to start or stop an instance, for example), not the ability to log on to the server itself.
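The management-plane IAM idea can be sketched as a deny-by-default permission check. This is a toy allowlist model with made-up user and action names, not any provider's real policy language:

```python
# Hypothetical management-plane policies: which users may perform which
# management-plane actions (start/stop an instance), not OS logins.
POLICIES = {
    "alice": {"instance:Start", "instance:Stop"},
    "bob": {"instance:Describe"},
}

def is_allowed(user, action):
    """Deny by default: only explicitly granted actions pass."""
    return action in POLICIES.get(user, set())
```

An unknown user, or a user without an explicit grant, is denied without needing a deny rule.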

Cloud providers will also likely offer logging of actions performed at the metastructure layer and monitoring of workloads at the virtualization level. This can include the status of a virtual machine, performance (such as CPU utilization), and other actions and workloads.

Another option that providers may offer is that of “dedicated instances” or “dedicated hosting.” This usually comes at an increased cost, but it may be a useful option if the perceived risk of running a workload on hardware shared with another tenant is deemed unacceptable, or if there is a compliance requirement to run a workload on a single-tenant server.

Finally, the customer is responsible for the security of everything within the workload itself. All the standard stuff applies here, such as starting with a secure configuration of the operating system, securing any applications, applying patches, using agents, and so on. The big difference for the cloud has to do with proper management of the images used to build running server instances as a result of the automation of cloud computing. It is easy to make the mistake of deploying older configurations that may not be patched or properly secured if you don’t have strong asset management in place.
Other general compute security concerns include these:
*Virtualized resources tend to be more ephemeral and can change at a more rapid pace. Any corresponding security, such as monitoring, must keep up with the pace.
*Host-level monitoring/logging may not be available, especially for serverless deployments. Alternative log methods such as embedding logging into your applications may be required.

8
Q

What are cloud compute deployments based on?

A

Cloud compute deployments are based on master images—a virtual machine, container, or other code—that are then run as an instance in the cloud. Just as you would likely build a server in your data centre by using a trusted, preconfigured image, you would do the same in a cloud environment. Some Infrastructure as a Service (IaaS) providers may have “community images” available. But unless they are supplied by a trusted source, I would be very hesitant to use these in a production environment, because they may not be inspected by the provider for malicious software or back doors being installed by a bad actor who’s waiting for someone to use them. Managing images used by your organization is one of your most vital security responsibilities.

9
Q

You know there are multiple network virtualization technologies out there. Examples include:

A

They range from virtual LANs (VLANs) to software-defined networking (SDN). You now understand that “software-driven everything” is how the industry is going. The software-driven aspect is a key contributor to resource pooling, elasticity, and all other aspects that make the cloud work at the scale it does.

10
Q

We have to perform inspection and filtering of network traffic, but we can no longer use the same security controls we have used in the past. What are some other options?

A

Back in the early days of virtualization, some people thought it was a good idea to send all virtual network traffic out of the virtual environment, inspect the traffic using a physical firewall, and then reintroduce it back to the virtual network.

Newer virtual approaches to address this problem could include routing the virtual traffic to a virtual inspection machine on the same physical server or routing the network traffic to a virtual appliance on the same virtual network. Both approaches are feasible, but they still introduce bottlenecks and require less efficient routing.

The provider will most likely offer some form of filtering capability, be it through the use of an SDN firewall or within the hypervisor.

11
Q

From a network monitoring perspective, don’t be surprised if you can’t get the same level of detail about network traffic from the provider that you had in the past in your own environment. Why is that?

A

This is because the cloud platform/provider may not support access for direct network monitoring. They will state that this is because of complexity and cost. Access to raw packet data will be possible only if you collect it yourself in the host or by using a virtual appliance. This accounts only for network traffic that is directed to, or originates from, a system that you control. In other environments, such as systems managed by the provider, you will not be able to gain access to monitor this network traffic, because this would be a security issue for the provider.

12
Q
A

By default, the virtual network management plane is available to the entire world, and if it’s accessed by bad actors, they can destroy the entire virtual infrastructure in a matter of seconds via an API or web access. It is therefore paramount that this management plane be properly secured.

13
Q

As with compute virtualization in a cloud environment, virtual networks have a shared responsibility. What are some responsibilities of the provider?

A

The absolute top security priority is segregation and isolation of network traffic to prevent tenants from viewing another tenant’s traffic. At no point should one tenant ever be able to see traffic from another tenant unless this is explicitly allowed by both parties (via cross-account permissions, for example). This is the most foundational security control for any multitenant network.

Next, packet sniffing (such as using Wireshark), even within a tenant’s own virtual networks, should be disabled to reduce the ability of an attacker to compromise a single node and use it to monitor the network, which is common in traditional networks. This is not to say that customers cannot use some packet-sniffing software on a virtual server, but it means the customers should be able to see traffic sent only to a particular server.
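The reason sniffing only shows a tenant its own traffic can be sketched as a virtual switch that delivers each frame solely to its destination port (unlike a hub or a mirrored physical segment). This is an illustrative toy model; the MAC addresses and field names are made up:

```python
# Toy virtual switch delivery: a tenant NIC receives only frames
# addressed to it, so a sniffer on that NIC sees nothing else.
def deliver(frames, my_mac):
    """Return only the frames a NIC with address `my_mac` would see."""
    return [f for f in frames if f["dst"] == my_mac]
```

Even with a packet-capture tool running, the tenant's view is limited to what the virtual switch chose to deliver.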

In addition, all virtual networks should offer built-in firewall capabilities for cloud users without the need for host firewalls or external products. The provider is also responsible for detecting and preventing attacks on the underlying physical network and virtualization platform. This includes perimeter security of the cloud itself.

14
Q

As with compute virtualization in a cloud environment, virtual networks have a shared responsibility. What are some responsibilities of the cloud consumer?

A

The consumer is ultimately responsible for adhering to their own security requirements. This will require consuming and configuring security controls that are created and managed by the cloud provider, especially any virtual firewalls. Here are some recommendations for consumers when it comes to securing network virtualization.

Take advantage of new network architecture possibilities. For example, compartmentalizing application stacks in their own isolated virtual networks to enhance security can be performed at little to no cost. Such an implementation may be cost-prohibitive in a traditional physical network environment.

Next, software-defined infrastructure (SDI) includes the ability to create templates of network configurations. You can essentially take a known-good network environment and save it as software. This approach enables you to rebuild an entire network environment incredibly quickly if needed. You can also use these templates to ensure that your network settings remain in a known-good configuration.
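Saving a known-good network configuration "as software" also makes drift detection a simple comparison. A minimal sketch, assuming an illustrative template with made-up keys and values:

```python
# Known-good network template saved as data (SDI). Comparing the live
# configuration against it reveals drift; fields here are illustrative.
KNOWN_GOOD = {
    "subnet": "10.0.1.0/24",
    "ingress": ["443/tcp"],
    "flow_logs": True,
}

def find_drift(current):
    """Return the settings that differ from the known-good template."""
    return {k: current.get(k) for k in KNOWN_GOOD if current.get(k) != KNOWN_GOOD[k]}
```

Running this regularly (or on every change) is one way to ensure settings remain in a known-good configuration.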

Finally, when the provider doesn’t expose appropriate controls for customers to meet their security requirements, customers will need to implement additional controls (such as virtual appliances or host-based security controls) to meet their requirements.

15
Q

What are cloud overlay networks?

A

Cloud overlay networks are a function of the Virtual Extensible LAN (VXLAN) technology, and they enable a virtual network to span multiple physical networks across a wide area network (WAN). This is possible because VXLAN encapsulates packets in a routable format.

16
Q

What is the classic redundant array of independent disks (RAID)?

A

A storage virtualization method.
RAID 0 (stripe set), for example, enables you to take three 1TB hard drives and make them look like a single 3TB hard drive. What could you call that? How about a pool of storage? Yes, that should work. Using software RAID, you virtualize your storage by joining drives together virtually to form a storage pool.
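The striping idea can be sketched in a few lines. This is a toy model of RAID 0 (round-robin chunks across drives), not a real implementation:

```python
# Toy RAID 0: stripe data across drives so several small disks present
# one larger pool (three 1TB drives appearing as a single 3TB volume).
def stripe(data, drives, chunk=4):
    """Round-robin fixed-size chunks across `drives` drives."""
    out = [b""] * drives
    for i in range(0, len(data), chunk):
        out[(i // chunk) % drives] += data[i:i + chunk]
    return out

def read_back(striped, chunk=4):
    """Reassemble by interleaving chunks in the original order."""
    result = b""
    pos = [0] * len(striped)
    i = 0
    while any(p < len(d) for p, d in zip(pos, striped)):
        d = i % len(striped)
        result += striped[d][pos[d]:pos[d] + chunk]
        pos[d] += chunk
        i += 1
    return result
```

No single drive holds the whole file, yet the pool presents it as one contiguous volume (with no redundancy, which is why RAID 0 alone is risky).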

17
Q

How is storage virtualization done in a cloud environment?

A

Cloud providers are likely using network-attached storage (NAS) or storage area networks (SANs) to form these pools. A SAN is a dedicated network of storage devices.

18
Q

Storage Virtualization Security

A

Most cloud platforms use highly redundant and durable storage mechanisms that make multiple copies of data and spread those copies across multiple storage locations. This is called data dispersion. This approach enables the provider to offer incredible levels of resiliency
Resiliency and availability aren’t the same thing. Data can be inaccessible if the network is down. The data is still there (resiliency), but it cannot be accessed (availability).
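The dispersion idea, and the resiliency/availability distinction, can be sketched as follows. Location names and the copy count are illustrative assumptions:

```python
# Data dispersion sketch: keep several copies in different locations.
# The data survives (resiliency) as long as any copy exists, but it is
# reachable (availability) only through a location whose network is up.
def disperse(obj, locations, copies=3):
    """Store `copies` identical copies across the first few locations."""
    return {loc: obj for loc in locations[:copies]}

def fetch(stored, reachable):
    """Return the object via any reachable location, else None."""
    for loc, obj in stored.items():
        if loc in reachable:
            return obj
    return None
```

When `reachable` is empty the data still exists in `stored` (resilient) but `fetch` returns nothing (unavailable), which is exactly the distinction above.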

Providers will usually encrypt all customer data at the physical level. This doesn’t protect data at the virtual level, but it does protect data on a drive that is decommissioned and awaiting destruction.

19
Q

What is a container and how do they differ from virtual machines?

A

You know that containers are a compute virtualization technology and that they differ from virtual machines in that only the application and required dependencies are bundled in a container, which is then run in an isolated user space on a shared kernel. Containers can run directly on a physical server (even a laptop), or they can run in a virtual machine.

A container is an abstraction at the application layer that isolates software from its environment. Containers don’t necessarily provide full-stack security isolation, but they do provide task segregation. On the other hand, virtual machines typically do provide security isolation. You can put tasks of equivalent security context on the same set of physical or virtual hosts to provide greater security segregation.

20
Q

Container systems always have the following components:

A

*Container This is the execution environment itself. The container provides code running inside a restricted environment with access only to the processes and capabilities defined in the container configuration via a configuration file (covered later in this chapter). While a VM is a full abstraction of an operating system, a container is a constrained place to run segregated processes while still utilizing the kernel and other capabilities of the base OS.

*Engine Also referred to as the container runtime, this is the environment on top of which a container is run. A very popular example of a container runtime is Docker Engine.

*Orchestration and scheduling controller Container orchestration deals with managing the lifecycle of containers. Orchestration deals with items such as provisioning and deployment of containers, scaling, movement of containers, and container health monitoring. When a container needs to be deployed, the orchestration tool schedules the deployment and identifies an appropriate system to run the container on. It knows how to deploy and manage containers based on a configuration file that tells the orchestration software where to find the container image (repository) and configuration items such as networking, mounting of storage space, and where to store container logs. Examples of container orchestration and scheduling tools include Kubernetes and Docker Swarm.

*Image repository This is where all of the images and code that can be deployed as containers are stored. Docker Hub is a popular example of a container image repository. Image repositories can be public or private.
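The orchestration controller's job of reading a configuration file and picking a system to run the container on can be sketched as follows. The spec fields and node names are made up for illustration, loosely inspired by real orchestrators' config files, not any actual schema:

```python
import json

# Hypothetical minimal container spec: where the image lives, plus
# networking, storage, and logging settings (all field names invented).
SPEC = """
{
  "image": "registry.example.com/team/web:1.4",
  "ports": [443],
  "mounts": ["/var/data"],
  "log_dir": "/var/log/containers"
}
"""

def schedule(spec_text, nodes):
    """Parse the spec and place the container on the first node
    with free capacity (a crude first-fit scheduler)."""
    spec = json.loads(spec_text)
    for node, free_slots in nodes.items():
        if free_slots > 0:
            return {"node": node, "image": spec["image"], "ports": spec["ports"]}
    return None  # no capacity anywhere
```

Real schedulers weigh far more (affinity, resources, health), but the flow is the same: config in, placement decision out.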

21
Q

Container Security Recommendations

These are general best practices. Always consult vendor documentation for the latest product-dependent security recommendations.

A

*Securing the underlying infrastructure
Security always begins with the underlying infrastructure, and in a cloud environment, this is the provider’s responsibility. Just as the provider is responsible for the security of the physical infrastructure and the hypervisors in a virtual machine world, the provider is responsible for the physical infrastructure and the container platform hosting consumer containers.

*Securing the orchestration and scheduling service You know that orchestration and scheduling are critical components of container deployments and management. CSA Guidance refers to this as the “management plane” for containers.

*Securing the image repository The image repository for containers is analogous to the image store used for virtual machines. Images must be stored in a secure location, and appropriate access controls should be configured to ensure that only approved access is granted to modify images or configuration files.

*Securing the tasks/code in the container Containers hold software code. Weak application security remains weak regardless of whether the code runs in a container or on a VM. Weak security isn’t limited to the code in the container; it can also apply to the definition files you read about in the “Container Definitions Backgrounder.” Appropriately configuring network ports, file storage, secrets, and other settings can increase the security of the container environment and therefore the application as a whole.

22
Q
A

*Cloud providers must make strong isolation of workloads their primary duty.
*Providers are responsible for all physical security and any virtualization technologies that customers use. They must keep hypervisors secured and implement any required security patches.
*Providers must implement all customer-managed virtualization features with a “secure-by-default” (aka deny-by-default) configuration.
*Providers must ensure that any volatile memory is properly secured to prevent unintended access by other tenants or administrators.
*Providers must implement strong networking controls to protect customers at the physical level as well as at the level of any virtual networking the customers cannot control.
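The secure-by-default (deny-by-default) idea in the list above can be sketched as a virtual firewall that passes traffic only on an explicit allow rule. The rule format (protocol/port tuples) is an illustrative assumption, not any provider's real syntax:

```python
# Deny-by-default virtual firewall: traffic passes only when an explicit
# allow rule matches; everything else is dropped without a deny rule.
ALLOW_RULES = [("tcp", 443), ("tcp", 22)]

def permit(protocol, port):
    """True only for traffic explicitly allowed."""
    return (protocol, port) in ALLOW_RULES
```

Note that nothing had to be listed to block RDP or DNS; the absence of an allow rule is itself the deny.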

23
Q
A

*Providers must isolate virtual network traffic, even when networks are controlled by the same customer.
*Providers must secure the physical storage system in use. This can include encryption at the physical layer to prevent data exposure during drive replacements.
*Consumers always need to know what security is offered by the provider and what they need to do in order to meet their own security requirements.
*For container security, remember that all the various components (engine, orchestration, and repository) need to be properly secured.
*Containers offer application isolation, but not complete isolation. Containers with similar security requirements should be grouped together and run on the same physical or virtual host to provide greater security segregation.
*Proper access controls and strong authentication should be in place for all container components.
*Ensure that only approved, known, and secure container images or code can be deployed.

24
Q

Why must the provider encrypt hard drives at the physical layer?

A

Providers encrypt hard drives so that the data cannot be read if the drive is stolen or after it is replaced. Encryption at the physical layer does not protect data that is requested via the virtual layer.

25
Q

Which of the following are examples of compute virtualization?

A

Of the list, only containers can be considered compute virtualization. Software templates are used to build an entire environment quickly. Although you could use these templates in infrastructure as code (IaC) to build or deploy containers and VMs, this is not considered compute virtualization. A cloud overlay network enables a virtual network to span multiple physical networks.

26
Q

What are the benefits of a virtual network compared to physical networks?

A

The only accurate answer listed is that virtual networks can be compartmentalized, and this can increase security; this is expensive, if not impossible, in a physical network. SDN can offer a single management plane for physical network appliances, and the “ease” of filtering is quite subjective. Filtering in a virtual network is different, but it may or may not be more difficult.

27
Q

Nathan is trying to troubleshoot an issue with a packet capture tool on a running instance. He notices clear-text FTP usernames and passwords in the captured network traffic that is intended for another tenant’s machine. What should Nathan do?

A

Nathan is able to see network traffic destined for other machines, so there has been a failure of network isolation, and this should be the provider’s top security priority. If I were Nathan, I would change cloud providers as soon as possible. All the other answers are not applicable (although writing a bunch of screen captures to the other tenant’s FTP directory to advise them of their exposure would be pretty funny).