7. Infrastructure Security - DONE Flashcards

1
Q

Define network virtualization

A

Network virtualization abstracts the underlying physical network and is used to form the network resource pool. How these pools are formed, and their associated capabilities, will vary based on the particular provider.

2
Q

Underneath the virtualization are three networks that are created as part of an Infrastructure as a Service (IaaS) cloud.

List these networks and the traffic that each supports

A

The management network, the storage network, and the service network.

These three networks have no functional or traffic overlap, so they should run on three separate networks dedicated to their associated activity. Yes, this means that the provider needs to implement and maintain three sets of network cables and network infrastructure.

3
Q

What is VXLAN?

A

Virtual Extensible LAN (VXLAN) is a network virtualization technology standard designed to address the scalability and isolation issues of VLANs. VXLAN encapsulates layer 2 frames within UDP packets by using a VXLAN Tunnel End Point (VTEP), essentially creating a tunneling scenario where the layer 2 frames are “hidden” while they traverse a network using layer 3 (such as IP) addressing and routing capabilities. Inside these UDP packets, a VXLAN network identifier (VNI) is used for addressing. Unlike the VLAN model, VXLAN uses 24 bits for tagging purposes, allowing approximately 16.7 million network identifiers, thus addressing the scalability issue faced by normal VLANs.
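The 24-bit VNI lives in an 8-byte VXLAN header prepended to the encapsulated frame. A minimal sketch of packing and parsing that header, with the field layout per RFC 7348 (the helper names here are illustrative, not from any library):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN header (RFC 7348): a flags byte with the
    I bit set, 3 reserved bytes, the 24-bit VNI, and a reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag: VNI field is valid
    # B = flags byte, 3x = reserved, I = VNI (upper 24 bits) + reserved byte
    return struct.pack("!B3xI", flags, vni << 8)

def parse_vni(header: bytes) -> int:
    # The VNI occupies bytes 4-6; byte 7 is reserved, so shift it off.
    return struct.unpack("!I", header[4:8])[0] >> 8

hdr = vxlan_header(5001)
print(parse_vni(hdr))  # 5001
print(2**24)           # 16777216 possible VNIs, vs. 4096 VLAN IDs
```

The 24-bit width of the VNI field is exactly where the "16.7 million vs. 4096" comparison in the answer above comes from.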

4
Q

What is a VLAN?

A

Virtual local area network (VLAN) technology essentially uses tagging of network packets (usually at the port on the switch to which a system is connected) to create separate broadcast domains. This creates a form of network segmentation, not isolation. Segmentation can work in a single-tenant environment like a trusted internal network but isn’t optimal in a cloud environment that is multitenant by nature.

Another issue with the use of VLANs in a cloud environment is address space. Per the IEEE 802.1Q standard, the VLAN ID field is 12 bits, supporting 4096 VLAN IDs. That’s not a whole lot.

5
Q

What is software-defined networking?

A

SDN is an architectural concept that enables centralized management and emphasizes the role of software in running networks to dynamically control, change, and manage network behaviour.

Centralized management is achieved by breaking out the control plane (the brains) and making this plane part of an SDN controller that manages the data plane, which remains on the individual networking components (physical or virtual). Dynamic change and management are supplied through the application plane. All three of these planes (mostly) communicate via APIs. Figure 7-6 shows the various planes in an SDN environment.

SDN is an architectural concept that can be realized by using a protocol such as VXLAN.

6
Q

So SDN separates the control plane from the data plane. Wait… hold on. Those are already separate, right?

A

Exactly so, but as I said in the previous section, in traditional networking gear all three planes exist in a single hardware appliance. SDN moves the control plane from the actual networking device to an SDN controller. This consolidation and centralization of control result in a more agile and flexible networking environment. Remember that SDN isn’t a networking protocol, but VXLAN is. Quite often, as in the CSA Guidance, people will combine the two technologies when talking about SDN, but just remember that SDN is an architectural concept that can be realized by using a protocol such as VXLAN.

7
Q

What is an OpenFlow switch?

A

The OpenFlow protocol, developed at Stanford and stewarded by the Open Networking Foundation (ONF) since 2011, is considered the enabler of SDN. It is a protocol through which a logically centralized controller can control an OpenFlow switch. Each OpenFlow-compliant switch maintains one or more flow tables, which are used to perform packet lookups.

Every network device (physical or virtual) has a data plane that contains a flow table, which is managed by the control plane (the SDN controller in this case).
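As a toy illustration (not the actual OpenFlow wire protocol), a flow table can be modeled as match-to-action entries installed by a controller, with packets that match no entry dropped on a table miss:

```python
# Minimal model of an OpenFlow-style flow table: the controller installs
# match -> action entries; packets with no matching entry are dropped
# (a "table miss"), mirroring the deny-by-default behavior of SDN networks.

flow_table = {}  # (src_ip, dst_ip, dst_port) -> action, set by the controller

def install_flow(src, dst, port, action):
    """The controller programs the switch's flow table."""
    flow_table[(src, dst, port)] = action

def forward(src, dst, port):
    """The data plane looks up the packet; no entry means drop."""
    return flow_table.get((src, dst, port), "drop")

install_flow("10.0.0.5", "10.0.0.9", 443, "output:2")
print(forward("10.0.0.5", "10.0.0.9", 443))  # output:2
print(forward("10.0.0.5", "10.0.0.9", 22))   # drop (table miss)
```

The key point the sketch captures is the split: the lookup runs on the device (data plane), while the entries come from a logically centralized controller (control plane).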

8
Q

How does an OpenFlow SDN controller work?

A

The OpenFlow SDN controller communicates with OpenFlow-compliant networking devices over the southbound interface, using the OpenFlow specification to configure and manage the flow tables. Communication between the controller and applications occurs over the northbound interface. There is no standard communication method established for northbound interfaces, but typically APIs are used.

9
Q

How is SDN beneficial to cloud providers and cloud clients?

A

Through the implementation of SDN (and enabling technologies), cloud providers can offer clients much higher flexibility and isolation.

By design, cloud providers offer clients what they are generally accustomed to getting. For example, clients can select whatever IP range they want in the cloud environment, create their own routing tables, and architect the metastructure networking exactly the way they want it. This is all possible through the implementation of SDN (and related technologies).

The SDN implementation not only hides all the underlying networking mechanisms from customers, but it also hides the network complexities from the virtual machines running in the provider’s network. All the virtual instance sees is the virtual network interface provided by the hypervisor, and nothing more.

10
Q

“How Security Changes with Cloud Networking”

A

Back in the good old days of traditional networking, security was a whole lot more straightforward than it is today. Back then, you may have had two physical servers with physical network cards that would send bits over a physical network, and then a firewall, intrusion prevention system (IPS), or another security control would inspect the traffic that traversed the network.

Now, virtual servers use virtual network cards and virtual appliances. Although cloud providers do have physical security appliances in their environments, you’re never going to be able to ask your provider to install your own physical appliances in their cloud environment.

11
Q

Pretty much the only commonality between the old days of physical appliance security controls and today’s virtual appliances is that both can be potential bottlenecks and single points of failure.

A

Not only can appliances become potential bottlenecks, but software agents installed in virtual machines can also impact performance. Keep this in mind when architecting your virtual controls in the cloud, be they virtual appliances or software agents.

After all, virtual machines can crash just like their associated physical servers, and an improperly sized virtual appliance may not be able to keep up with the amount of processing that is actually required. Also, remember the costs associated with the virtual appliances that many vendors now offer in many Infrastructure as a Service (IaaS) environments.

(Excerpt from CCSK Certificate of Cloud Security Knowledge All-in-One Exam Guide, by Graham Thompson.)

12
Q

Isolation is a benefit of SDN. Describe how.

A

You know that SDN (through associated technologies) offers isolation by default. You also know that thanks to SDN, you can run multiple networks in a cloud environment using the same IP range. There is no logical way these networks can directly communicate because of addressing conflicts. Isolation can be a way to segregate applications and services with different security requirements.

13
Q

SDN firewalls are a benefit of SDN. Describe them.

A

These may be referred to as “security groups.”

Different providers may have different capabilities, but SDN firewalls are generally applied to the virtual network card of a virtual server. They are just like a regular firewall in that you make a “policy set” (aka firewall ruleset) that defines acceptable traffic for both inbound (ingress) and outbound (egress) network traffic. This means SDN firewalls have the granularity of host-based firewalls but are managed like a network appliance. As part of the virtual network itself, these firewalls can also be orchestrated. How cool is it that you can create a system that notices a ruleset change, automatically reverts to the original setting and sends a notification to the cloud administrator? That’s the beauty of using provider-supplied controls that can be orchestrated via APIs.
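A minimal sketch of how such a policy set might be evaluated, with made-up rule tuples of (protocol, port, CIDR); real providers' rule formats and APIs differ:

```python
import ipaddress

# Hypothetical security group attached to a web server's virtual NIC.
web_sg = {
    "ingress": [("tcp", 443, "0.0.0.0/0"), ("tcp", 22, "10.0.0.0/8")],
    "egress":  [("tcp", 5432, "10.0.2.0/24")],
}

def allowed(sg, direction, proto, port, peer_ip):
    """Deny by default: traffic passes only if an explicit rule matches."""
    for r_proto, r_port, r_cidr in sg.get(direction, []):
        if (r_proto == proto and r_port == port
                and ipaddress.ip_address(peer_ip) in ipaddress.ip_network(r_cidr)):
            return True
    return False

print(allowed(web_sg, "ingress", "tcp", 443, "203.0.113.7"))   # True
print(allowed(web_sg, "ingress", "tcp", 3389, "203.0.113.7"))  # False
```

Because the ruleset is just data, it is exactly the kind of thing that orchestration can watch and automatically revert, as the answer above describes.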

14
Q

Deny by default is a benefit of SDN. Describe how.

A

SDN networks are typically deny-by-default-for-everything networks. If you don’t establish a rule that explicitly allows something, the packets are simply dropped.

15
Q

Identification tags are a benefit of SDN. Describe how.

A

The concept of identifying systems by IP address is dead in a cloud environment; instead, you need to use tagging to identify everything. This isn’t a bad thing; in fact, it can be a very powerful resource to increase your security. Using tags, you could automatically apply a security group to every server, where, for example, a tag states that a server is running web services.
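A sketch of tag-driven group assignment; the tag keys and group names are invented for illustration:

```python
# Map (tag key, tag value) pairs to the security groups that should be
# applied automatically. Identification is by tag, never by IP address.
TAG_TO_GROUPS = {
    ("role", "web"): ["web-sg"],            # e.g. open 443, deny the rest
    ("env", "prod"): ["prod-baseline-sg"],  # e.g. mandatory logging rules
}

def groups_for(tags: dict) -> list:
    """Resolve the security groups a new server should receive."""
    groups = ["default-sg"]  # every instance gets the deny-by-default baseline
    for key, value in tags.items():
        groups += TAG_TO_GROUPS.get((key, value), [])
    return groups

print(groups_for({"role": "web", "env": "prod"}))
# ['default-sg', 'web-sg', 'prod-baseline-sg']
```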

16
Q

Mitigation of network attacks is a benefit of SDN. Describe how.

A

Many low-level network attacks against your systems and services are eliminated by default. Network sniffing of an SDN network, for example, doesn’t exist because of inherent isolation capabilities. Other attacks, such as ARP spoofing (forging ARP messages to impersonate another system’s hardware address), can be eliminated by the provider using the control plane to identify and mitigate attacks. Note that this doesn’t necessarily stop all attacks immediately, but there are numerous research papers discussing the mitigation of many low-level attacks through the software-driven functionality of SDN.

17
Q

Describe the principle behind microsegmentation

A

You know that a VLAN segments networks. You can take that principle further to create zones, where groupings of systems are placed into their own zones. This moves network architecture from the typical “flat network,” where traffic is inspected in a “north–south” model (once past the perimeter, there is free lateral movement), toward a “zero-trust” network based on zones, where traffic can be inspected in both a “north–south” and an “east–west” (within the network) fashion.

That’s the same principle behind microsegmentation—except microsegmentation takes advantage of network virtualization to implement a fine-grained approach to creating these zones.
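The zone idea can be sketched as follows; the hosts, zones, and allowed zone pair are hypothetical:

```python
# Toy model of microsegmented zones: east-west traffic is permitted only
# between members of the same zone, or across an explicitly allowed
# zone pair. Everything else is dropped.

zones = {
    "web": {"web-1", "web-2", "web-3"},
    "db":  {"db-1"},
}
allowed_zone_pairs = {("web", "db")}  # web tier may reach the db tier

def zone_of(host):
    return next(z for z, members in zones.items() if host in members)

def east_west_allowed(src, dst):
    zs, zd = zone_of(src), zone_of(dst)
    return zs == zd or (zs, zd) in allowed_zone_pairs

print(east_west_allowed("web-1", "web-2"))  # True  (same zone)
print(east_west_allowed("web-1", "db-1"))   # True  (explicitly allowed)
print(east_west_allowed("db-1", "web-1"))   # False (no rule for db -> web)
```

Note the asymmetry in the last two calls: allowing web to reach db does not open db to web, which is what limits lateral movement after a compromise.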

18
Q

What is the benefit of microsegmentation?

A

Consider grouping five web servers together in a microsegmented zone rather than creating a single demilitarized zone (DMZ) with hundreds of servers that shouldn’t need to access one another. This enables the implementation of fine-grained “blast zones,” where if one of the web servers is compromised, the lateral movement of an attacker would be limited to the five web servers, not every server in the DMZ.

This fine-grained approach isn’t very practical with the traditional zoning restrictions associated with using VLANs to group systems together.

19
Q

Building on the concepts discussed in SDN and microsegmentation, the CSA has developed a model called the Software Defined Perimeter (SDP).

What is it, and what are its three components?

A

The SDP combines both device and user authentication to provision network access to resources dynamically. There are three components in SDP:
*An SDP client (agent) installed on a device
*The SDP controller that authenticates and authorizes SDP clients based on both device and user attributes
*The SDP gateway that serves to terminate SDP client network traffic and enforces policies in communication with the SDP controller
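The interaction of the three components might be sketched like this; all identifiers and attribute checks are illustrative, not a real SDP implementation:

```python
# Toy SDP flow: the controller authorizes on BOTH device and user
# attributes; the gateway terminates client traffic and only forwards
# what the controller has approved.

REGISTERED_DEVICES = {"laptop-42"}   # devices with the SDP client installed
AUTHORIZED_USERS = {"alice"}

def controller_authorize(device_id, user):
    """SDP controller: device AND user must both check out."""
    return device_id in REGISTERED_DEVICES and user in AUTHORIZED_USERS

def gateway_forward(device_id, user, request):
    """SDP gateway: enforce the controller's decision on client traffic."""
    if not controller_authorize(device_id, user):
        return "dropped"
    return f"forwarded: {request}"

print(gateway_forward("laptop-42", "alice", "GET /app"))    # forwarded: GET /app
print(gateway_forward("laptop-42", "mallory", "GET /app"))  # dropped
```

The second call fails even though the device is registered, which is the point of combining device and user authentication: either factor alone is not enough.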

20
Q

What are some security considerations for CSPs or private clouds?

A

CSPs are required to properly secure the physical aspects of a cloud environment that everything is built upon. A security failure at the physical layer can lead to devastating results, where all customers are impacted.

As mentioned, SDN offers the ability to maintain segregation and isolation for a multitenant environment. Providers must always consider all tenants as being potentially hostile, and, as such, CSPs must address the additional overhead associated with properly establishing and maintaining SDN security controls. Providers must also expose security controls to cloud consumers so they can appropriately manage their virtual network security.

Perimeter security still matters in a cloud environment. Providers should implement standard perimeter security controls such as distributed denial of service (DDoS) protection, IPS, and other technologies to filter out hostile traffic before it can impact consumers in the cloud environment.

Finally, as far as the reuse of hardware is concerned, providers should always be able to properly clean or wipe any resources (such as volumes) that have been released by clients before reintroducing them into the resource pool to be used by other customers.

21
Q

What are some security concerns of the Hybrid cloud?

A

Recall from Chapter 1 an example of a hybrid cloud—when a customer has their own data centre and also uses cloud resources. As far as large organizations are concerned, this connection is usually made via a dedicated wide area network (WAN) link or across the Internet using a VPN. For your network architects to incorporate a cloud environment (especially IaaS), the provider has to support arbitrary network addressing as determined by the customer, so that cloud-based systems don’t use the same network address range used by your internal networks.
As a customer, you must ensure that both areas have the same levels of security applied. Consider, for example, a flat network in which anyone within a company can move laterally (east–west) through the network and can access cloud resources as a result. You should always consider the network link to your cloud systems as potentially hostile and enforce separation between your internal and cloud systems via routing, access controls, and traffic inspection (such as firewalls).

22
Q

How can you remediate security concerns around network access to cloud resources in a hybrid cloud?

A

The bastion (or transit) virtual network is a network architecture pattern mentioned in the CSA Guidance to provide additional security to a network architecture in the cloud. Essentially, a bastion network can be defined as a network that data must go through to get to a destination. With this in mind, creating a bastion network and forcing all cloud traffic through it can act as a chokepoint (hopefully in a good way). You can tightly control this network (as you would with a bastion host) and perform all the content inspection that is required to protect traffic coming in and out of the data centre and cloud environments. Figure 7-8 shows the bastion network integration between a cloud environment and a data centre.

23
Q

What is a workload?

A

A workload is a unit of processing. It can be executed on a physical server, on a virtual server, in a container, or as a function on someone else’s virtual server—you name it. Workloads will always run on some processor and will consume memory, and the security of those items is the responsibility of the provider.

24
Q

The CSA Guidance covers four types of “compute abstraction”

list these

A

virtual machines
containers
platform-based workloads
serverless computing

25
Q

Describe the abstraction process for virtual machines

A

A virtual machine manager (aka hypervisor) is responsible for creating a virtual environment that “tricks” a guest OS (an instance in cloud terminology) into thinking that it is talking directly to the underlying hardware, but in reality, the hypervisor takes any request for underlying hardware (such as memory) and maps it to a dedicated space reserved for that particular guest machine. This shows you the abstraction (the guest OS has no direct access to the hardware) that hypervisors perform.

26
Q

Describe the benefit of the isolation qualities of a hypervisor

A

Allocating particular memory spaces assigned to a particular guest environment demonstrates segregation and isolation (the memory space of one guest OS cannot be accessed by another guest OS on the same machine).
The isolation qualities of a hypervisor make multitenancy possible. If this isolation were to fail, the entire multitenant nature, and therefore business model, of cloud services would be thrown into question.

27
Q

what is the difference between containers and virtual machines?

A

As you can see in the figure, containers differ from traditional virtual machines in that a container doesn’t have all the “bloat” of an operating system to deal with. Instead of cramming an operating system, required libraries, and the application itself into a 30GB package with a VM, you can place the application and any dependencies (such as libraries) the application requires in a much smaller package, or container if you will. With containers, the application uses a shared kernel and other capabilities of the base OS. The container provides code running inside a restricted environment with access only to the processes and capabilities defined in the container configuration. This technology isn’t specific to cloud technologies; containers themselves can run in a virtualized environment or directly on a single server.

28
Q

Because a container is much smaller than a traditional virtual machine, it offers two primary benefits

What are the two benefits of a container?

A

First, a container can launch incredibly quickly because it involves no OS that needs time to boot up. This aspect can help you with agility.

Second, a container can help with portability. I said “help,” not “fully address,” portability. Moving a container is a quick operation, but the container itself will require a supported shared kernel. But that only addresses the runtime (engine) dependency itself, and many providers will support the Docker Engine, which has become pretty much the default container engine today. Where portability can get derailed is in all of the other elements of containerization technology, such as container hosts, images, and orchestration via management systems (such as Docker Swarm or Kubernetes).

29
Q

What are platform-based workloads?

A

Platform-based workloads are defined in the CSA Guidance as anything running on a shared platform that isn’t a virtual machine or a container.

As you can imagine, this encompasses an incredibly wide array of potential services. Examples include stored procedures running on a multitenant Platform as a Service (PaaS) database and a job running on a machine-learning PaaS.

The main thing to remember for your CCSK exam is that although the provider may expose a limited amount of security options and controls, the provider is responsible for the security of the platform itself and, of course, everything else down to the facilities themselves, just like any other PaaS offering.

30
Q

What is serverless computing?

A

“Serverless,” in this context, is essentially a service exposed by the provider, where the customer doesn’t manage any of the underlying hardware or virtual machines and accesses exposed functions (such as running your Python application) on the provider’s servers. It’s “serverless” to you, but it runs on a backend, which could be built using containers, virtual machines, or specialized hardware platforms. The provider is responsible for securing everything about the platform they are exposing, all the way down to the facilities

31
Q

“How the Cloud Changes Workload Security”

A

The main item that changes security in a cloud environment is multitenancy. It is up to the provider to implement segregation and isolation between workloads, and this should be one of the provider’s top priorities. Some providers may offer a dedicated instance capability, where one workload operates on a single physical server, but this option is usually offered at additional cost. This is true for both public and private cloud environments.

32
Q

What are immutable workloads?

A

You know that after an attacker successfully exploits a vulnerability, they’ll install a back door for continued access; it’s only a matter of time until you get around to patching the vulnerability, and meanwhile the attacker will slowly explore your network and siphon off data. It can take months until they achieve their ultimate goal. In the typical approach, your patch fixes the original vulnerability but does nothing about the back door that’s been installed. In an immutable approach, however, you replace the server with a new instance built from a patched server image, thus removing both the vulnerability and the back door.

33
Q

Why do you want to use immutable workloads?

A

Remember that using an immutable approach enables you to perform the bulk of security tests on the images before they go into production.

Whitelisting applications and processes requires that nothing on the server itself be changed. As such, arming the servers with some form of file integrity monitoring should also be easier, because if anything changes, you know you probably have a security issue on your hands.

34
Q

How does the immutable workload work at the metastructure level?

A

Typically, you need to take advantage of the elasticity supplied by the cloud provider through the use of auto-scaling. You would state that you require, say, three identical web server instances running in an auto-scaling group. When the time comes (say, every 45 days), you would update the image with the latest patches (not all patches are security patches) and test the new image. Once you are happy with the image itself, you tell the system to use the new image, which should be used for the auto-scaling group, and terminate a server. The auto-scaling group will detect that an instance is missing and will build a new server instance based on that new image. At that point, one of your server instances will be built from the new image.
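A toy simulation of that rotation (no real provider API, just the control logic described above):

```python
# An auto-scaling group keeps a desired number of instances running.
# Pointing it at a patched image and terminating old instances causes
# replacements to be built from the new image.

class AutoScalingGroup:
    def __init__(self, image, desired):
        self.image, self.desired = image, desired
        self.instances = [image] * desired  # each instance records its image

    def set_image(self, image):
        """Update the launch image (e.g. after patching and testing)."""
        self.image = image

    def terminate(self, index):
        """Kill an instance; the group heals itself from the current image."""
        del self.instances[index]
        while len(self.instances) < self.desired:
            self.instances.append(self.image)

asg = AutoScalingGroup("web-image-v1", desired=3)
asg.set_image("web-image-v2")  # the tested, patched image
asg.terminate(0)
print(asg.instances)  # ['web-image-v1', 'web-image-v1', 'web-image-v2']
```

Terminating the remaining old instances one by one (or letting normal scaling churn do it) converges the whole group onto the patched image, which is the rolling replacement the answer describes.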

35
Q

What is the downside to immutable workloads?

A

Manual changes made to instances. When you go immutable, you cannot allow any changes to be made directly to running instances, because those changes will be blown away when the instances are replaced. Any and all changes must be made to the image, and then you deploy the image. To achieve this, you would disable any remote logins to the server instances.

36
Q

What are the Requirements of Immutable Workloads?

A
  • A provider that supports this approach; that shouldn’t be an issue with larger IaaS providers.
  • A consistent approach to the process of creating images to account for patch and malware signature updates.
  • A plan for how security testing of the images themselves will be performed as part of the image creation and deployment process. This includes any source code testing and vulnerability assessments, as applicable.
  • A new approach to logging and log retention. In an immutable environment, you need to get the logs off the servers as quickly as possible, because the servers can disappear on a moment’s notice. This means you need to make sure that server logs are exported to an external collector as soon as possible.
  • Appropriate asset management for all of these images. This may introduce increased complexity and additional management of the service catalogue that maintains accurate information on all operational services and those being prepared to run in production.
  • Finally, running immutable servers doesn’t mean that you don’t need a disaster recovery plan. You need to determine how you can securely store the images outside of the production environment. Using cross-account permissions, for example, you could copy an image to another account, such as a security account, for safekeeping. If your production environment is compromised by an attacker, they will probably delete not only your running instances but any images as well. But if you have copied these images elsewhere for safekeeping, you can quickly rebuild.
37
Q

What are some potential challenges you’ll face with these new approaches to running workloads?

A

Implementing software agents (e.g., host-based firewalls) may be impossible, especially in serverless environments.

Even if running agents is possible, you need to ensure that they are lightweight (they don’t have a lot of overhead) and will work properly in a cloud environment (for example, they will keep up with auto-scaling). Remember that any application that relies on IP addresses to track systems (via either the agent side or the central console) is useless, especially in an immutable environment. CSA often refers to this as having an “ability to keep up with the rapid velocity of change” in a cloud environment. The bottom line is this: ask your current provider if they have “cloud versions” of their agents.

Though by no means cloud-specific, agents shouldn’t increase the attack surface of a server. Generally, the more ports that are open on a server, the larger the attack surface. Beyond this, if the agents consume configuration changes and/or signatures, you want to ensure that these are sourced from a trusted authoritative system such as your internal update servers.

Speaking of using internal update services, there is little reason why these systems shouldn’t be leveraged by the systems in a cloud environment. Consider how you do this today in your traditional environment. Do you have centralized management tools in every branch office, or do you have a central management console at the head office and have branch office computers updated from that single source? For these update services, you can consider a cloud server instance in the same manner as a server running in a branch office, assuming you have established appropriate network paths.

38
Q

How has monitoring and logging become more challenging with cloud?

A

Both security monitoring and logging of virtual machines are impacted by the traditional use of IP addresses to identify machines; this approach doesn’t work well in a highly dynamic environment like the cloud.

Other technologies such as serverless won’t work with traditional approaches at all, because they offer no ability to implement agents. This makes both monitoring and logging more challenging.

39
Q

Rather than using IP addresses as a unique identifier, what else can you use in cloud?

A

Rather than using IP addresses, you must be able to use some other form of unique identifier. This could, for example, involve collecting some form of tagging to track down systems. Such identifiers also need to account for the ephemeral nature of the cloud. Remember that this applies both to the agents and to the centralized console. Speak with your vendors to make sure that they have a cloud version that tracks systems by using something other than IP addresses.

40
Q

I mentioned that logs should be taken off a server as soon as possible in a virtual server environment. Why?

A

Logging in a serverless environment will likely require some form of logging being implemented in the code that you are running on the provider’s servers.

Also, you need to look at the costs associated with sending back potentially large amounts of traffic to a centralized logging system or security information and event management (SIEM) system. You may want to consider whether your SIEM vendor has a means to minimize the amount of network traffic this generates through the use of some kind of “forwarder” and ask for their recommendations regarding forwarding of log data. For example, does the vendor recommend that network flow logs not be sent to the centralized log system due to overhead? If network traffic costs are a concern, you may need to sit down and create a plan on what log data is truly required and what isn’t.
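One way to picture such a "forwarder" is a small filter that ships most events off-host immediately but holds back the chattiest record types; the event types and drop list here are invented for illustration:

```python
# Sketch of a log forwarder that trims volume before shipping events to a
# central SIEM: only selected record types leave the instance, reducing
# network traffic costs.

DROP_TYPES = {"netflow"}  # e.g. keep high-volume flow logs out of the SIEM

def forward_logs(events, ship):
    """Ship every event whose type isn't filtered; return the count sent."""
    shipped = 0
    for event in events:
        if event["type"] not in DROP_TYPES:
            ship(event)  # send off-host as soon as possible
            shipped += 1
    return shipped

sent = []
n = forward_logs(
    [{"type": "auth"}, {"type": "netflow"}, {"type": "app"}],
    sent.append,
)
print(n)  # 2
```

Which types to drop (if any) is exactly the planning exercise the answer above recommends: decide what log data is truly required before paying to move all of it.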

41
Q

There are two approaches to performing vulnerability assessments (VAs). Describe them.

A

On one hand, some companies prefer to perform a VA from the viewpoint of an outsider, so they will place a scanner on the general Internet and perform their assessment with all controls (such as firewall, IPS, and so on) taken into consideration.

Alternatively, some security professionals believe that a VA should be performed without those controls in place so that they can truly see all of the potential vulnerabilities in a system or application without a firewall hiding them.

42
Q

The CSA recommends that you perform a VA as close to the actual server as possible; specifically, VAs should focus on the image, not just the running instances. With this in mind, here are some specific recommendations from the CSA that you may need to know for your CCSK exam.

A

Providers will generally request that you notify them of any testing in advance. This is because the provider won’t be able to distinguish whether the scanning is being performed by a “good guy” or a “bad guy.” They’ll want the general information that is normally requested for internal testing, such as where the test is coming from (the IP address of the test station), the target of the test, and the start and end times.

The provider may also throw in a clause stating that you are responsible for any damages you cause, not just to your system but to the entire cloud environment. (If this is required, you’ll probably want someone with authority to sign off on the test request.)

As for testing the cloud environment itself, forget about it. The CSP will limit you to your use of the environment only. A PaaS vendor will let you test your application code but won’t let you test their platform. Same thing for SaaS; they won’t allow you to test their application. When you think of it, this makes sense, because in either case, if you take down the shared component (the platform in PaaS or the application in SaaS), everyone is impacted, not just you.

Performing assessments with agents installed on the server is best. In the deny-by-default world that is the cloud, controls may be in place (such as SDN firewalls or security groups) that block or hide vulnerabilities. Having deny-by-default blocking potentially malicious traffic is great, but what if the security group assigned to the instances changes, or the instances are accidentally opened to all traffic? This is why CSA recommends performing VAs as close to the workload as possible.

43
Q

Regarding vulnerability assessments versus penetration testing:

A

Generally, a VA is considered a test to determine whether there’s a potential vulnerability, and a pentest will try to exploit any discovered potential vulnerabilities. Quite often, the provider will not make a distinction between the two as far as approvals are concerned.

44
Q
Summarize the CSA's key recommendations for cloud network security.

A

*SDN networking should always be implemented by a cloud provider. A lack of a virtualized network means a loss of scalability, elasticity, orchestration, and, most importantly, isolation.

*Remember that the isolation capabilities of SDN allow for the creation of multiple accounts/segments to limit the blast radius of an incident impacting other workloads.

*CSA recommends that providers leverage SDN firewalls to implement a deny-by-default environment. Any activity that is not expressly allowed by a client will be automatically denied.

*Network traffic between workloads in the same virtual subnet should always be restricted by SDN firewalls (security groups) unless communication between systems is required. This is an example of microsegmentation to limit east–west traffic.

*Ensure that virtual appliances can operate in a highly dynamic and elastic environment (aka “keep up with velocity of change”). Just like physical appliances, virtual appliances can be bottlenecks and single points of failure if they’re not implemented properly.

*For compute workloads, an immutable environment offers incredible security benefits. This approach should be leveraged by customers whenever possible.

45
Q
Summarize the CSA's key recommendations for cloud workload security.

A

*When using immutable servers, you can increase security by patching and testing images and replacing nonpatched instances with new instances built off the newly patched image.

*If using immutable servers, you should disable remote access and integrate file integrity monitoring because nothing in the running instances should change.

*Security agents should always be “cloud aware” and should be able to keep up with the rapid velocity of change (such as never use IP addresses as system identifiers).

*As a best practice, get logs off servers and on to a centralized location as quickly as possible (especially in an immutable environment), because all servers must be considered ephemeral in the cloud.

*Providers may limit your ability to perform vulnerability assessments and penetration testing, especially if the target of the scans is their platform.”

46
Q

“Which of the following is/are accurate statement(s) about the differences between SDN and VLAN?”

A

A. SDN isolates traffic, which can help with microsegmentation. VLANs segment network nodes into broadcast domains.
B. VLANs have roughly 65,000 IDs, while SDN has more than 16 million.
C. SDN separates the control plane from the hardware device and allows for applications to communicate with the control plane.

(Statements A and C are accurate. B is not: per IEEE 802.1Q, VLANs support 4096 IDs, not 65,000.)

47
Q

Select two attributes that a virtual appliance should have in a cloud environment.

A

Auto-scaling and failover are the two most important attributes that a virtual appliance should have in a cloud environment. Any appliance can become a single point of failure and a bottleneck if not properly implemented.

48
Q

In a cloud provider and user relationship, the virtual or abstracted infrastructure is managed by which entity?

A

The cloud user

In cloud computing, there are two macro layers to the infrastructure:

*The fundamental resources pooled together to create a cloud, managed by the cloud provider. This is the raw, physical and logical compute (processors, memory, etc.), networks, and storage used to build the cloud’s resource pools. For example, this includes the security of the networking hardware and software used to create the network resource pool.

*The virtual/abstracted infrastructure, managed by a cloud user. That’s the compute, network, and storage assets that they use from the resource pools. For example, the virtual network’s security is defined and managed by the cloud user.