11. Data Security and Encryption Flashcards

1
Q

When considering data security controls in cloud environments, you need to consider three main components:

A

*Determine which data is allowed to be stored in the cloud, based on data classifications (covered in Chapter 5) that address your legal and regulatory compliance requirements. Pay particular attention to permitted jurisdictions and storage media.

*Protect and manage data security in the cloud. This will involve establishing a secure architecture, proper access controls, encryption, detection capabilities, and other security controls as needed.

*Ensure compliance, with proper audit logging established and backups and business continuity plans in place.

2
Q

List the most common ways storage can be consumed by customers of a cloud provider.

How will they affect how you secure your data in a cloud environment?

A
  • object storage
  • volume storage
  • database
  • application/platform

Each of these storage types has different threats and data protection options, which can differ depending on the provider. For example, typically you can give individual users access to individual objects, but a storage volume is allocated to a virtual machine (VM) in its entirety. This means your approach to securing data in a cloud environment will be based on the storage model used.

3
Q

Describe object storage

A

This storage type is presented like a file system and is usually accessible via APIs or a front-end interface (such as the Web). Files (stored as objects) can be made accessible to multiple systems simultaneously.

This storage type can be less secure, as object stores have often been accidentally made available to the public Internet.

Examples of common object storage include Amazon S3, Microsoft Azure Blob storage, and the Google Cloud Storage service.

NOTE: Blob storage is used to hold unstructured data such as video, audio, and other file types.

4
Q

Describe volume storage

A

This is a storage medium such as a hard drive that you attach to your server instance. Generally a volume can be attached only to a single server instance at a time.

5
Q

Describe database storage

A

Cloud service providers may offer customers a wide variety of database types, including commercial and open source options. Quite often, providers will also offer proprietary databases with their own APIs. These databases are hosted by the provider and use existing standards for connectivity.

Databases offered can be relational or nonrelational. Examples of nonrelational databases include NoSQL, other key/value storage systems, and file system–based databases such as Hadoop Distributed File System (HDFS).

6
Q

Describe Application/platform storage

A

This storage is managed by the provider. Examples of application/platform storage include content delivery networks (CDNs), files stored in Software as a Service (SaaS) applications (such as a customer relationship management [CRM] system), caching services, and other options.

7
Q

Regardless of the storage model, what are common practices the CSP will use when managing data?

A

Regardless of the storage model in use, most CSPs employ redundant, durable storage mechanisms that use data dispersion (also called "data fragmentation" or "bit splitting" in the CSA Guidance). This process takes data (say, an object), breaks it up into smaller fragments, makes multiple copies of these fragments, and stores them across multiple servers and multiple drives to provide high durability (resiliency). In other words, a single file would not be located on a single hard drive, but would be spread across multiple hard drives.
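The fragmentation-and-replication idea can be sketched as a toy model. This is purely illustrative; real CSPs use erasure coding and far more sophisticated placement logic:

```python
# Toy sketch of data dispersion: split a blob into fragments, then place
# replicas of each fragment across simulated drives so that no single
# drive holds the whole object and losing one drive loses no data.

def disperse(data: bytes, fragment_size: int, drives: int, copies: int):
    """Split data into fragments and place `copies` replicas of each
    fragment across `drives` simulated drives (round-robin)."""
    fragments = [data[i:i + fragment_size]
                 for i in range(0, len(data), fragment_size)]
    placement = {d: [] for d in range(drives)}
    for idx, frag in enumerate(fragments):
        for c in range(copies):
            drive = (idx + c) % drives   # spread replicas across drives
            placement[drive].append((idx, frag))
    return fragments, placement

def reassemble(placement, total_fragments: int) -> bytes:
    """Rebuild the object from whatever fragments survive on the drives."""
    recovered = {}
    for frags in placement.values():
        for idx, frag in frags:
            recovered[idx] = frag
    return b"".join(recovered[i] for i in range(total_fragments))

fragments, placement = disperse(b"hello cloud storage",
                                fragment_size=4, drives=3, copies=2)
placement.pop(0)  # simulate losing drive 0 entirely
assert reassemble(placement, len(fragments)) == b"hello cloud storage"
```

With two copies of every fragment spread over three drives, any single drive can fail and the object is still fully recoverable from the replicas.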

8
Q

Once acceptable storage locations are determined, you must monitor them for activity using tools such as:

A

Database activity monitors (DAM) and file activity monitors (FAM). These controls are not only detective in nature but may also prevent large data migrations from occurring.

9
Q

The following tools and technologies can be useful for monitoring cloud usage and any data transfers:

Cloud access security broker (CASB)

A
  • CASB systems were originally built to protect SaaS deployments and monitor their usage, but they have recently expanded to address some concerns surrounding Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) deployments as well.

You can use CASB to discover your actual usage of cloud services through multiple means such as network monitoring, integration with existing network gateways and monitoring tools, or even monitoring Domain Name System (DNS) queries. This could be considered a form of a discovery service. Once the various services in use are discovered, a CASB can monitor activity on approved services either through an API connection or inline (man-in-the-middle) interception. Quite often, the power of a CASB is dependent on its data loss prevention (DLP) capabilities (which can be either part of the CASB or an external service, depending on the CASB vendor’s capabilities).

10
Q

The following tools and technologies can be useful for monitoring cloud usage and any data transfers:

URL filtering

A

URL filtering (such as a web gateway) may help you understand which cloud services your users use (or try to use).

The problem with URL filtering, however, is that you are generally stuck in a game of “whack-a-mole” when trying to control which services are allowed to be used and which are not. URL filtering will generally use a whitelist or blacklist to determine whether or not users are permitted to access a particular website.

11
Q

What is the main difference between URL filtering and CASB?

A

The main difference between URL filtering and CASB is that, unlike traditional whitelisting or blacklisting of domain names, CASB can use DLP when it is performing an inline inspection of SaaS connections.

12
Q

The following tools and technologies can be useful for monitoring cloud usage and any data transfers:

Data loss prevention

A

A DLP tool may help detect data migrations to cloud services. You should, however, consider a couple of issues with DLP technology.

First, you need to “train” a DLP to understand what is sensitive data and what is not. Second, a DLP cannot inspect traffic that is encrypted. Some cloud SDKs and APIs may encrypt portions of data and traffic, which will interfere with the success of a DLP implementation.
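The "training" requirement can be illustrated with a toy pattern matcher. The patterns below are simplified examples, not a real DLP ruleset; real products add fingerprinting, dictionaries, and contextual analysis:

```python
import re

# Toy illustration of DLP "training": the engine only flags what its
# configured patterns describe, and it is blind to encrypted payloads.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN shape
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # rough card-number shape
}

def scan(text: str):
    """Return the names of the patterns that match the outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

assert scan("SSN is 123-45-6789") == ["ssn"]
assert scan("nothing sensitive here") == []
# An encrypted/encoded payload matches no pattern -- the DLP cannot see it:
assert scan("payload: 5f4dcc3b5aa765d61d8327deb882cf99") == []
```

The last assertion is the point of the second caveat: once an SDK or API encrypts the traffic, the content no longer matches any trained pattern.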

13
Q

To protect data as it is moving to a cloud, you need to focus on the security of data in transit. what are some examples?

A

Does your provider support Secure File Transfer Protocol (SFTP), or do they require you to use File Transfer Protocol (FTP), which sends clear-text credentials across the Internet? Your vendor may expose an API to you that has strong security mechanisms in place, so there is no requirement on your part to increase security.

Some data transfers may involve data that you do not own or manage, such as data from public or untrusted sources. You should ensure that you have security mechanisms in place to inspect this data before processing it or mixing it in with your existing data.

14
Q

To protect data as it is moving to a cloud, you need to focus on the security of data in transit. what are some examples of encryption of data in transit?

A

As far as encryption of data in transit is concerned, many of the approaches used today are the same approaches that have been used in the past. This includes Transport Layer Security (TLS), Virtual Private Network (VPN) access, and other secure means of transferring data. If your provider doesn’t offer these basic security controls, get a different provider—seriously.

Another option for ensuring encryption of data in transit is that of a proxy (aka hybrid storage gateway or cloud storage gateway). The job of the proxy device is to encrypt data using your encryption keys prior to it being sent out on the Internet and to your provider. This technology, while promising, has not achieved the expected rate of adoption. Your provider may offer software versions of this technology, however, as a service to its customers.

15
Q

When you're considering transferring very large amounts of data to a provider, do not overlook shipping hard drives to the provider if possible.

Why so?

A

Although data transfers across the Internet are much faster than they were ten years ago, I would bet that shipping 10 petabytes of data would be much faster than copying it over the Internet. Remember, though, that when you’re shipping data, your company may have a policy that states all data leaving your data center in physical form must be encrypted. If this is the case, talk with your provider regarding how best to do this.
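The back-of-the-envelope arithmetic behind that bet, assuming a hypothetical, fully saturated 10 Gbps link with no protocol overhead:

```python
# Rough transfer-time arithmetic for 10 PB over a (hypothetical) 10 Gbps link.
petabytes = 10
bits_to_move = petabytes * 10**15 * 8    # 10 PB expressed in bits
link_bps = 10 * 10**9                    # 10 Gbps, fully saturated
seconds = bits_to_move / link_bps
days = seconds / 86_400
print(f"{days:.0f} days")                # roughly three months at best
```

Three months of sustained line-rate transfer is the best case; a courier shipping encrypted drives wins comfortably.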

16
Q

You need to be aware of only two security controls for your CCSK exam:

A

The core data security controls are access controls and encryption. Access controls are your single most important controls.

Remember that access controls are your number-one controls; if you get them wrong, all the other controls fall apart. Once you get the basics right (meaning access controls), you can move on to implementing appropriate encryption of data at rest using a risk-based approach.

17
Q

Access controls must be implemented properly in three main areas:

A

*Management plane - These access controls are used to restrict access to the actions that can be taken in the CSP's management plane. Most CSPs have deny-by-default access control policies in place for any new accounts that may be created.

*Public and internal sharing controls - These controls must be planned and implemented when data is shared externally to the public or to partners.

*Application-level controls - Applications themselves must have appropriate controls designed and implemented to manage access. This includes both your own applications built in PaaS as well as any SaaS applications your organization consumes.

18
Q

With the exception of application-level controls, your options for implementing access controls will vary based on the cloud service model and the provider’s specific features.

What can assist you in this decision?

A

To assist with planning appropriate access controls, you can use an entitlement matrix built on platform-specific capabilities. This entitlement matrix is essentially a grid that lists the users, groups, and roles with access levels for resources and functions.

After entitlements are established, you must frequently validate that your controls meet requirements, with a particular focus on public-sharing controls. You can establish alerts for all new public shares or for changes in permissions that allow public access, to quickly identify any overly permissive entitlements.

It is important that you understand the capabilities exposed by the provider to support appropriate access controls on all data under your control and that you build your entitlement matrix and implement these controls in the environment. This spans all data, such as databases and all cloud data stores.
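A minimal sketch of such a grid as a deny-by-default lookup table. The roles, resources, and access levels here are hypothetical examples, not a provider-specific schema:

```python
# Entitlement matrix as a lookup table: (role, resource) -> allowed actions.
# Anything not listed is denied, matching the deny-by-default posture.
ENTITLEMENTS = {
    ("developers", "test-bucket"): {"read", "write"},
    ("developers", "prod-bucket"): set(),              # explicitly nothing
    ("operations", "prod-bucket"): {"read", "write"},
    ("auditors",   "prod-bucket"): {"read"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Deny-by-default check against the entitlement matrix."""
    return action in ENTITLEMENTS.get((role, resource), set())

assert is_allowed("developers", "test-bucket", "write")
assert not is_allowed("developers", "prod-bucket", "read")
assert not is_allowed("interns", "prod-bucket", "read")   # unlisted role -> deny
```

Validating entitlements then reduces to iterating over the grid and alerting on any entry that grants public or overly broad access.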

19
Q

What is the primary purpose of the entitlement matrix?

A

The primary purpose of an entitlement matrix is to implement application-level operational risk controls. If the provider doesn’t offer you the ability to fine-tune permissions (aka granularity) needed to implement your entitlements, you should look for a different provider.

20
Q

What are the various ways that data can be protected at rest?

A

The two technologies addressed in the CSA Guidance are encryption and tokenization. Both make data unreadable to unauthorized users or systems that try to read it.

Encryption scrambles the data to make it unreadable for the foreseeable future.

Tokenization replaces each element of a data set with a random value. The tokenization system stores both the original data and the randomized version in a secure database for later retrieval.

Tokenization is a method proposed by the payment card industry (PCI) as a means to protect credit card numbers. In a PCI tokenization system, for example, a publicly accessible tokenization server can be used as a front end to protect actual credit card information that is held in a secure database in the back end. When a payment is processed, the vendor receives a token that acts like a reference ID that can be used to perform actions on a transaction such as refunds. At no time does the vendor need to store actual credit card information; rather, they store these tokens.

The CSA Guidance states that tokenization is often used when the format of the data is important. Format-preserving encryption (FPE) encrypts data but keeps the same structural format.
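The token-as-reference-ID idea can be sketched as a toy vault; a real PCI tokenization service adds access controls, auditing, and hardened key storage:

```python
import secrets

# Toy tokenization vault: replaces a sensitive value with a random token
# and keeps the original only in a (simulated) secure back-end store.
class TokenVault:
    def __init__(self):
        self._store = {}    # token -> original value (the "secure database")

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)   # random value, no relation to the data
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
assert token != "4111 1111 1111 1111"           # vendor stores only the token
assert vault.detokenize(token) == "4111 1111 1111 1111"
```

The vendor holds only the token, which is useful as a reference ID for refunds but worthless to an attacker without the vault.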

21
Q

What is the difference between tokenization and encryption?

A

Encryption will often dramatically increase the length of a text string, while tokenization and data-masking techniques can keep the same length and format of the data while rendering it unusable to anyone who may access it.

22
Q

In the cloud, there are three components of an encryption system and two locations.

These are:

A

The three components are the data itself, the encryption engine, and key management that holds the encryption keys.

Any of these components can be run in any location. For example, your data could be in a cloud environment, and the encryption engine and key-management service that holds the keys could be within your data center. Any combination is possible. The combination will often be based on your risk appetite.

One organization could be perfectly fine with all three being in a cloud environment, whereas another organization would require that all data be stored in a cloud environment only after being encrypted locally.

23
Q

When designing an encryption system, you should start with a threat model and answer some basic questions such as these:

A

*Do you trust the cloud provider to store your keys?
*How could the keys be exposed?
*Where should you locate the encryption engine to manage the threats you are concerned with?
*Which option best meets your risk tolerance requirements: managing the keys yourself or letting the provider do that for you?
*Is there separation of duties between storing the data encryption keys, storing the encrypted data, and storing the master key?
The answers to these and other questions will help guide your encryption system design for cloud services.

24
Q

When considering encryption in IaaS, you need to think about the two main storage offerings: volume-storage and object- and file-storage encryption.

describe what volume storage encryption involves

A

Volume-storage encryption involves the following:
*Instance-managed encryption - The encryption engine runs inside the instance itself. An example of this is the Linux Unified Key Setup (LUKS). The issue with instance-managed encryption is that the key itself is stored in the instance and protected with a passphrase. In other words, you could have AES-256 encryption secured with a passphrase of 1234.

*Externally managed encryption - Externally managed encryption stores encryption keys externally, and a key to unlock data is issued to the instance on request.

25
Q

When considering encryption in IaaS, you need to think about the two main storage offerings: volume-storage and object- and file-storage encryption.

describe what Object-storage encryption involves

A

*Client-side encryption - In this case, data is encrypted using an encryption engine embedded in the application or client. In this model, you are in control of the encryption keys used to encrypt the data.

*Server-side encryption - Server-side encryption is supplied by the CSP, who has access to the encryption key and runs the encryption engine. Although this is the easiest way to encrypt data, this approach requires the highest level of trust in the provider. If the provider holds the encryption keys, they may be forced (compelled) by a government agency to decrypt and supply your data.

*Proxy encryption - This uses a hybrid storage gateway. This approach can work well with object and file storage in an IaaS environment, as the provider is not required to access your data in order to deliver services. In this scenario, the proxy handles all cryptography operations, and the encryption keys may be held within the appliance or by an external key-management service. Proxy encryption can be used with both IaaS and SaaS.

26
Q

Unlike IaaS, where there are a few dominant players, there are numerous PaaS providers, all with different capabilities as far as encryption is concerned. The CSA Guidance calls out three areas where encryption can be used in a PaaS environment:

A

*Application-layer encryption - When you're running applications in a PaaS environment, any required encryption services are generally implemented within the application itself or on the client accessing the platform.

*Database encryption - PaaS database offerings will generally offer built-in encryption capabilities that are supported by the database platform. Examples of common encryption capabilities include Transparent Database Encryption (TDE), which encrypts the entire database, and field-level encryption, which encrypts only sensitive portions of the database.

*Other - PaaS providers may offer encryption for various components that may be used by applications, such as message queuing services.

27
Q

How does encryption in SaaS differ from encryption in IaaS and PaaS?

A

Encryption of SaaS is quite different from that of IaaS and PaaS. Unlike the other models, SaaS generally is used by a business to process data to deliver insightful information (such as a CRM system). The SaaS provider may also use the encryption options available to IaaS and PaaS providers. CSPs are also encouraged to implement per-customer keys whenever possible to improve the enforcement of multitenancy isolation.

28
Q

Why may customers choose to use the encryption supplied by the provider?

A

Customers may choose to use the encryption supplied by the provider for many reasons. For example, data that is encrypted by the client (through the implementation of an encryption proxy) may not be able to be processed by the provider.

Using your own encryption (such as an encryption proxy device) will generally break SaaS. Unlike IaaS and PaaS providers, SaaS providers are often used to process data into valuable information. If a customer encrypts data prior to sending it to the SaaS provider, it may impact functionality. SaaS providers should offer customer-managed keys to enhance multitenancy isolation.

Provider-managed encryption can make encryption as simple as checking a box. This may address compliance, but you need to ensure that your provider's encryption system is acceptable from a security perspective.

29
Q

Strong key management is a critical component of encryption. After all, if you lose your encryption keys, you lose access to any encrypted data, and if a bad actor has access to the keys, they can access the data.

The main considerations concerning key-management systems according to the CSA Guidance are the performance, accessibility, latency, and security of the key-management system.

The following four key-management system deployment options are covered in the CSA Guidance:

HSM/appliance

A

HSM/appliance

Use a traditional hardware security module (HSM) or appliance-based key manager, which will typically need to be on premises (some vendors offer cloud HSM), and deliver the keys to the cloud over a dedicated connection. Given the size of the key material, many vendors will state that there is very little latency involved with this approach to managing keys for data held in a cloud environment.

30
Q

The main considerations concerning key-management systems according to the CSA Guidance are the performance, accessibility, latency, and security of the key-management system.

The following four key-management system deployment options are covered in the CSA Guidance:

Virtual appliance/software

A

Virtual appliance/software

A key-management system does not need to be hardware-based. You can deploy a virtual appliance or software-based key manager in a cloud environment to maintain keys within a provider’s environment to reduce potential latency or disruption in network communication between your data center and cloud-based systems. In such a deployment, you still own the encryption keys, and they cannot be used by the provider if legal authorities demand access to your data.

31
Q

The following four key-management system deployment options are covered in the CSA Guidance:

Cloud provider service

A

Cloud provider service

This key-management service is offered by the cloud provider. Before selecting this option, make sure you understand the security model and service level agreements (SLAs) to determine whether your key could possibly be exposed. You also need to understand that although this is the most convenient option for key management in a cloud environment, the provider has access to the keys and can be forced by legal authorities to hand over any data upon request.

32
Q

The main considerations concerning key-management systems according to the CSA Guidance are the performance, accessibility, latency, and security of the key-management system.

The following four key-management system deployment options are covered in the CSA Guidance:

Hybrid

A

Hybrid

This is a combination system, such as using an HSM as the root of trust for keys but then delivering application-specific keys to a virtual appliance that’s located in the cloud and manages keys only for its particular context.

33
Q

If the data you are storing is so sensitive that you cannot risk a government accessing it, you have two choices:

A

Use your own encryption that you are in complete control of,

or don’t process this data in a cloud environment.

34
Q

Many providers may offer encryption by default with some services (such as object storage). In this scenario, the provider owns and manages the encryption keys. The provider holding the encryption keys may be seen as a risk.

What is the solution?

A

Because the provider holding the encryption keys may be seen as a risk, most providers generally implement systems that impose separation of duties. Using keys to gain access to customer data would require collusion among multiple provider employees. Of course, if the provider is able to decrypt the data (which they can if they manage both the key and the engine), they must do so if required to by legal authorities.

Customers should determine whether they can replace default provider keys with their own that will work with the provider’s encryption engine

35
Q

Customer-managed encryption keys are controlled to some extent by the customer.

Give an example

A

For example, a provider may expose a service that either generates or imports an encryption key. Once it is in the system, the customer selects the individuals who can administer and/or use the key to encrypt and decrypt data. As the key is integrated with the provider's encryption system, it is used with the encryption engine created and managed by the provider.

As with provider-managed keys, because these keys are accessible to the provider, the provider must use them to deliver data to legal authorities if required to do so.

Data that is processed in a cloud environment needs to be decrypted in order to be processed. This will remain true until homomorphic encryption becomes a viable technology.

36
Q

You know that cloud providers spend a lot of time and money ensuring strong security in their environments. Using provider-supplied services as part of your architecture can result in an increase of your overall security posture.

Give an example of how.

A

For example, you can realize large architectural security benefits by something as simple as having cloud instances transfer data through a service supplied by the provider.

In many cases, you will be able to find opportunities to incorporate provider services into your architecture that will be more scalable, cheaper, and more secure than traditional architecture patterns used in your current computing environment.

37
Q

When considering monitoring of your cloud environment, you need to have access to telemetry data from both the applistructure and metastructure layers.

Where do you get this from?

A

You need to collect logging from the servers and applications (applistructure), as well as from the metastructure itself.

*Applistructure - Collect event logs from servers and applications and deliver them to a security information and event management (SIEM) system. To collect database activity, consider a DAM solution.

*Metastructure - Collect any data from the API activity occurring in your cloud environment, as well as logs from any service you may be consuming, such as any file access in object storage.

Strong monitoring, auditing, and alerting capabilities cannot be seen as optional. Copies of log data should be stored in a safe location, such as a separate logging account. You want to do this for multiple reasons, including chain of custody for legal purposes. Note, however, that log data will likely need to be accessible to administrators and engineers so they can troubleshoot issues. You may want to determine whether the provider offers the ability to replicate this data to the logging account. This way, administrators can access the logs without requiring any access to a centralized logging area.

38
Q

Additional Data Security Controls

Cloud Platform/Provider-Specific Controls

A

Providers may offer machine learning–based anomaly detection, intrusion prevention systems, layer 7 firewalls (such as web application firewalls), data classification systems, and more. In fact, provider offerings that are not designed for security can be leveraged to augment security.

Examples include increasing security architecture through the implementation of a service that removes direct data paths, or even services that can tell you the various services consumed by an application running in a serverless environment.

39
Q

What is a data loss prevention (DLP) system?

A

A DLP system is both a preventative and detective control that can be used to detect potential breaches or misuse of data based on information it is programmed to identify. It can work with data in use (installed on an endpoint or server, for example), in motion (such as on a network device), or at rest (such as scanning data in storage). DLP systems need to be configured to understand what is sensitive data and in what context it is considered sensitive.

DLP is generally considered a SaaS security control. You know that DLP services are often used in a CASB to identify and stop potentially sensitive data from being sent to a SaaS product through integration with inline inspection. DLP functionality in a CASB can either be supplied as part of the CASB itself or integrate with an existing DLP platform. When investigating potential CASB solutions, you should be sure to identify the DLP functionality and whether it will integrate with an existing DLP platform you own. The ability of your CASB to identify and protect sensitive data will rely heavily on the effectiveness of the DLP solution it is leveraging.

40
Q

What are enterprise rights management (ERM) and digital rights management (DRM)?

Enterprise rights management is also known as information rights management.

A

Enterprise rights management (ERM) and digital rights management (DRM) are both security controls that provide control over accessing data.

While DRM is more of a mass-consumer control, typically used with media such as books, music, video games, and other consumer offerings, ERM is typically used as an employee security control that restricts the actions that can be performed on files, such as copy-and-paste operations, taking screenshots, printing, and other actions. Need to send a sensitive file to a partner but want to ensure the file isn't copied? ERM can be used to protect the file (to a degree).

Both ERM and DRM technologies are not often offered or applicable in a cloud environment. Both rely on encryption, and as you know, encryption can break capabilities, especially with SaaS.

41
Q

What is data masking?

A

Data masking is an obfuscation technology that alters data while preserving its original format. Data masking can address multiple standards such as the Payment Card Industry Data Security Standard (PCI DSS), personally identifiable information (PII) standards, and protected health information (PHI) standards.

42
Q

Data masking is generally performed in one of two ways:

A

Test data generation (often referred to as static data masking) and dynamic data masking:

*Test data generation - A data-masking application is used to extract data from a database, transform the data, and then duplicate it to another database in a development or test environment. This is generally performed so production data is not used during development.

*Dynamic data masking - Data is held within its original database, but the data stream is altered in real time (on the fly), typically by a proxy, depending on who is accessing the data. Take a payroll database, for example. If a developer accesses the data, salary information is masked; but if an HR manager accesses the same data, she sees the actual salary data.
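Dynamic masking can be sketched as a role-dependent view over unchanged stored data. The roles and masking rule here are illustrative, not taken from any particular product:

```python
# Sketch of role-based dynamic data masking: the stored value never
# changes; only the value returned depends on who is asking.
def salary_view(salary: int, role: str):
    """Return the real salary for trusted roles, a masked value otherwise."""
    if role == "hr_manager":
        return salary          # trusted role sees the actual data
    return "******"            # everyone else gets a masked value

assert salary_view(85_000, "hr_manager") == 85_000
assert salary_view(85_000, "developer") == "******"
```

In a real deployment this decision is made by a proxy in front of the database, so the application and the stored data need no changes.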

43
Q

“Enforcing Lifecycle Management Security”

A

As mentioned, data residency can have significant legal consequences. Understand the methods exposed by your provider to ensure that data and systems are restricted to approved geographies. You will want to establish both preventative controls to prohibit individuals from accessing unapproved regions and detective controls to alert you if the preventative controls fail. Provider encryption can also be used to protect data that is accidentally moved across regions, assuming the encryption key is unavailable in the region to which the data was accidentally copied.

All controls need to be documented, tested, and approved. Artifacts of compliance may be required to prove that you are maintaining compliance in your cloud environment.

Security requires both protective and detective controls. If you can't detect an incident, you can never respond to it. Make sure you are logging both API- and data-level activity and that logs meet compliance and lifecycle policy requirements.

Although risk assessment of cloud providers is critical, this activity is not a data security control.

44
Q

How will you ensure that all the data has been removed from a public cloud environment including all media such as back-up tapes?

A

Maintain local key management and revoke or delete the keys from the key-management system to prevent the data from being accessed again. This technique is sometimes called crypto-shredding.