Domain 2 Sybex Flashcards
Angela is an information security architect at a bank and has been assigned to ensure that transactions are secure as they traverse the network. She recommends that all transactions use TLS. What threat is she most likely attempting to stop, and what method is she most likely using to protect against it?
A. Man-in-the-middle, VPN
B. Packet injection, encryption
C. Sniffing, encryption
D. Sniffing, TEMPEST
C. Encryption is often used to protect traffic like bank transactions from sniffing. While packet injection and man-in-the-middle attacks are possible, they are far less likely to occur, and a VPN, if used, would itself be providing encryption. TEMPEST is a specification for techniques used to prevent spying via electromagnetic emissions and wouldn't be needed to stop attacks at a typical bank.
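To make the TLS recommendation concrete, here is a minimal sketch using Python's standard ssl module to build a client context that verifies certificates and refuses weak protocol versions, so transaction data can't be sniffed in transit. The function names are illustrative, not from the flashcard.

```python
import socket
import ssl

def make_tls_client_context() -> ssl.SSLContext:
    """Build a TLS context that verifies server certificates and
    refuses anything older than TLS 1.2."""
    context = ssl.create_default_context()            # loads system CAs, CERT_REQUIRED
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # block downgrade to weak versions
    return context

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a plain TCP socket in TLS; plaintext never crosses the wire."""
    sock = socket.create_connection((host, port))
    return make_tls_client_context().wrap_socket(sock, server_hostname=host)
```

Because the context requires certificate validation, this also raises the bar against man-in-the-middle attacks, but the primary protection being bought here is confidentiality against sniffing.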
Control Objectives for Information and Related Technology (COBIT) is a framework for information technology (IT) management and governance. Which data management role is most likely to select and apply COBIT to balance the need for security controls against business requirements?
A. Business owners
B. Data processors
C. Data owners
D. Data stewards
A. Business owners have to balance the need to provide value with regulatory, security, and other requirements. This makes the adoption of a common framework like COBIT attractive. Data owners are more likely to ask that those responsible for control selection identify a standard to use. Data processors are required to perform specific actions under regulations like the EU GDPR. Finally, in many organizations, data stewards are internal roles that oversee how data is used.
Nadia’s company is operating a hybrid cloud environment with some on-site systems and some cloud-based systems. She has satisfactory monitoring on-site, but needs to apply security policies to both the activities her users engage in and to report on exceptions with her growing number of cloud services. What type of tool is best suited to this purpose?
A. A NGFW
B. A CASB
C. An IDS
D. A SOAR
B. The best option for Nadia is a cloud access security broker (CASB). A CASB is designed to sit between a cloud environment and the users who access it, and it provides monitoring and policy enforcement capabilities. A next-generation firewall (NGFW), an intrusion detection system (IDS), and a security orchestration, automation, and response (SOAR) tool could each provide some insight into what is going on, but they are not purpose-built for this the way a CASB is. The NGFW and IDS are most likely to provide insight into traffic patterns and behaviors, while the SOAR is primarily intended to monitor other systems and centralize data for response, making it potentially the least useful in this specific scenario.
When media is labeled based on the classification of the data it contains, what rule is typically applied regarding labels?
A. The data is labeled based on its integrity requirements.
B. The media is labeled based on the highest classification level of the data it contains.
C. The media is labeled with all levels of classification of the data it contains.
D. The media is labeled with the lowest level of classification of the data it contains.
B. Media is typically labeled with the highest classification level of data it contains. This prevents the data from being handled or accessed at a lower classification level. Data integrity requirements may be part of a classification process but don’t independently drive labeling in a classification scheme.
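The "highest classification wins" labeling rule is simple to express in code. This sketch assumes an illustrative four-level commercial scheme; the level names and ordering are not from the flashcard.

```python
# Illustrative commercial classification scheme, ordered lowest to highest.
LEVELS = ["public", "sensitive", "private", "proprietary"]

def media_label(data_classifications: list[str]) -> str:
    """Label media with the highest classification of any data it contains."""
    if not data_classifications:
        return LEVELS[0]
    return max(data_classifications, key=LEVELS.index)

# A tape holding public, sensitive, and proprietary files is labeled proprietary.
print(media_label(["public", "proprietary", "sensitive"]))  # proprietary
```

Labeling at the maximum level guarantees that handling procedures for the most sensitive file on the media are always applied, even if most of its contents are lower-classification data.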
Which one of the following administrative processes assists organizations in assigning appropriate levels of security control to sensitive information?
A. Data classification
B. Remanence
C. Transmitting data
D. Clearing
A. The need to protect sensitive data drives data classification. Classifying data allows organizations to focus on data that needs to be protected rather than spending effort on less important data. Remanence describes data left on media after an attempt is made to remove the data. Transmitting data isn’t a driver for an administrative process to protect sensitive data, and clearing is a technical process for removing data from media.
How can a data retention policy help to reduce liabilities?
A. By ensuring that unneeded data isn’t retained
B. By ensuring that incriminating data is destroyed
C. By ensuring that data is securely wiped so it cannot be restored for legal discovery
D. By reducing the cost of data storage required by law
A. A data retention policy can help to ensure that outdated data is purged, removing potential additional costs for discovery. Many organizations have aggressive retention policies to both reduce the cost of storage and limit the amount of data that is kept on hand and discoverable. Data retention policies are not designed to destroy incriminating data, and legal requirements for data retention must still be met.
Staff in an information technology (IT) department who are delegated responsibility for day-to-day tasks hold what data role?
A. Business owner
B. User
C. Data processor
D. Custodian
D. Custodians are delegated the role of handling day-to-day tasks by managing and overseeing how data is handled, stored, and protected. Users simply access and work with data to perform their job tasks and are not delegated management responsibilities. Data processors are systems used to process data. Business owners are typically project or system owners who are tasked with making sure systems provide value to their users or customers.
Helen’s company uses a simple data lifecycle as shown in the figure here. What stage should come first in their data lifecycle?
A. Data policy creation
B. Data labeling
C. Data collection
D. Data analysis
C. In a typical data lifecycle, collection is the first stage. Once collected, data can be analyzed, used, stored, and disposed of at the end of its useful life. Policies may be created at any time, and organizations often have data before they have policies. Labels are added to data during the analysis, usage, or retention cycle.
Ben has been tasked with identifying security controls for systems covered by his organization’s information classification system. Why might Ben choose to use a security baseline?
A. It applies in all circumstances, allowing consistent security controls.
B. They are approved by industry standards bodies, preventing liability.
C. They provide a good starting point that can be tailored to organizational needs.
D. They ensure that systems are always in a secure state.
C. Security baselines provide a starting point to scope and tailor security controls to your organization’s needs. They aren’t always appropriate to specific organizational needs, they cannot ensure that systems are always in a secure state, and they do not prevent liability.
Megan wants to prepare media to allow for its reuse in an environment operating at the same sensitivity level. Which of the following is the best option to meet her needs?
A. Clearing
B. Erasing
C. Purging
D. Sanitization
A. Clearing describes preparing media for reuse. When media is cleared, unclassified data is written over all addressable locations on the media. Once that's completed, the media can be reused. Erasing simply performs a delete operation on files or media, typically leaving the underlying data recoverable, which makes it the worst choice here. Purging is a more intensive form of clearing for reuse in lower-security areas, and sanitization is a series of processes that removes data from a system or media while ensuring that the data is unrecoverable by any means.
Mikayla wants to identify data that should be classified that already exists in her environment. What type of tool is best suited to identifying data like Social Security numbers, credit card numbers, and similar well-understood data formats?
A. Manual searching
B. A sensitive data scanning tool
C. An asset metadata search tool
D. A data loss prevention system (DLP)
B. Sensitive data scanning tools are designed to scan for and flag sensitive data types using known formatting and structure. Social Security numbers, credit card numbers, and other regularly structured data that follows known rules can be identified and then addressed as needed. Manual searching is a massive undertaking for an organization with even a relatively small amount of data; asset metadata searches rely on metadata that must already have been set, meaning the data would already have been identified; and a DLP system looks for data in transit using rules rather than hunting down data at rest in storage.
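The core of a sensitive data scanner is pattern matching against known formats. This minimal sketch flags SSN-like and 16-digit card-like strings with regular expressions; the patterns are illustrative, and real tools add context checks and checksum validation (such as the Luhn algorithm for card numbers) to reduce false positives.

```python
import re

# Illustrative patterns for well-understood data formats.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def scan_for_sensitive_data(text: str) -> dict[str, list[str]]:
    """Return each sensitive data type found, with the matching strings."""
    hits = {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

sample = "Customer SSN 123-45-6789, card 4111 1111 1111 1111."
print(scan_for_sensitive_data(sample))
```

In practice a scanner like this runs across file shares and databases at rest, then feeds its findings into the classification and labeling process.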
What issue is common to spare sectors and bad sectors on hard drives as well as overprovisioned space on modern SSDs?
A. They can be used to hide data.
B. They can only be degaussed.
C. They are not addressable, resulting in data remanence.
D. They may not be cleared, resulting in data remanence.
D. Spare sectors, bad sectors, and space provided for wear leveling on SSDs (overprovisioned space) may all contain data that was written to the space that will not be cleared when the drive is wiped. This is a form of data remanence and is a concern for organizations that do not want data to potentially be accessible. Many wiping utilities only deal with currently addressable space on the drive. SSDs cannot be degaussed, and wear leveling space cannot be reliably used to hide data. These spaces are still addressable by the drive, although they may not be seen by the operating system.
Naomi knows that commercial data is typically classified based on different criteria than government data. Which of the following is not a common criterion for commercial data classification?
A. Useful lifespan
B. Data value
C. Impact to national security
D. Regulatory or legal requirements
C. Commercial data classification often takes into account the value of the data, any regulatory or legal requirements that may apply to the data, and how long the data is useful—its lifespan. The impact to national security is more typically associated with government classification schemes.
Your organization regularly handles three types of data: information that it shares with customers, information that it uses internally to conduct business, and trade secret information that offers the organization significant competitive advantages. Information shared with customers is used and stored on web servers, while both the internal business data and the trade secret information are stored on internal file servers and employee workstations.
What term best describes data that is resident in system memory?
A. Data at rest
B. Buffered data
C. Data in use
D. Data in motion
C. Data is often considered based on the data state that it is in. Data can be at rest (on a drive or other storage medium), in use (in memory or a buffer, often decrypted for processing), or in transit over a network. Data that is resident in system memory is considered data in use.
Your organization regularly handles three types of data: information that it shares with customers, information that it uses internally to conduct business, and trade secret information that offers the organization significant competitive advantages. Information shared with customers is used and stored on web servers, while both the internal business data and the trade secret information are stored on internal file servers and employee workstations.
What technique could you use to mark your trade secret information in case it was released or stolen and you need to identify it?
A. Classification
B. Symmetric encryption
C. Watermarks
D. Metadata
C. A watermark is used to digitally label data and can be used to indicate ownership, as well as to assist a digital rights management (DRM) system in identifying data that should be protected. Encryption would have prevented the data from being accessed if it was lost, while classification is part of the set of security practices that can help make sure the right controls are in place. Finally, metadata is used to label data and might help a data loss prevention system flag it before it leaves your organization.
Your organization regularly handles three types of data: information that it shares with customers, information that it uses internally to conduct business, and trade secret information that offers the organization significant competitive advantages. Information shared with customers is used and stored on web servers, while both the internal business data and the trade secret information are stored on internal file servers and employee workstations.
What type of encryption is best suited for use on the file servers for the proprietary data, and how might you secure the data when it is in motion?
A. TLS at rest and AES in motion
B. AES at rest and TLS in motion
C. VPN at rest and TLS in motion
D. DES at rest and AES in motion
B. AES is a strong modern symmetric encryption algorithm that is appropriate for encrypting data at rest. TLS is frequently used to secure data when it is in transit. A virtual private network is not necessarily an encrypted connection and would be used for data in motion, while DES is an outdated algorithm and should not be used for data that needs strong security.
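As a sketch of the data-at-rest half of this answer, the snippet below uses Fernet from the third-party `cryptography` package (an assumption; install with `pip install cryptography`). Fernet provides authenticated symmetric encryption built on AES, which is the kind of algorithm appropriate for file server storage.

```python
# Minimal data-at-rest sketch; assumes the third-party `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store the key separately from the data
cipher = Fernet(key)

plaintext = b"proprietary: trade secret details"
stored = cipher.encrypt(plaintext)   # ciphertext is what lands on the file server
recovered = cipher.decrypt(stored)

assert stored != plaintext
assert recovered == plaintext
```

Data in motion would then be handled separately, for example by serving the files only over TLS connections, so both states get appropriate protection.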
What does labeling data allow a DLP system to do?
A. The DLP system can detect labels and apply appropriate protections based on rules.
B. The DLP system can adjust labels based on changes in the classification scheme.
C. The DLP system can modify labels to permit requested actions.
D. The DLP system can delete unlabeled data.
A. Data loss prevention (DLP) systems can use labels on data to determine the appropriate controls to apply to the data. Most DLP systems won't modify or adjust labels in real time, and they certainly won't change labels just to permit a requested action. Deleting unlabeled data would cause big problems for organizations that haven't labeled every piece of data!
Why is it cost effective to purchase high-quality media to contain sensitive data?
A. Expensive media is less likely to fail.
B. The value of the data often far exceeds the cost of the media.
C. Expensive media is easier to encrypt.
D. More expensive media typically improves data integrity.
B. The value of the data contained on media often exceeds the cost of the media, making more expensive media that may have a longer life span or additional capabilities like encryption support a good choice. While expensive media may be less likely to fail, the reason it makes sense is the value of the data, not just that it is less likely to fail. In general, the cost of the media doesn’t have anything to do with the ease of encryption, and data integrity isn’t ensured by better media.
Chris is responsible for workstations throughout his company and knows that some of the company’s workstations are used to handle both proprietary information and highly sensitive trade secrets. Which option best describes what should happen at the end of their life (EOL) for workstations he is responsible for?
A. Erasing
B. Clearing
C. Sanitization
D. Destruction
D. Destruction is the most complete method of ensuring that data cannot be exposed, and organizations often opt to destroy either the drive or the entire workstation or device to ensure that data cannot be recovered or exposed. Sanitization is a combination of processes that ensure that data from a system cannot be recovered by any means. Erasing and clearing are both prone to mistakes and technical problems that can result in remnant data and don’t make sense for systems that handled proprietary information.
Fred wants to classify his organization’s data using common labels: private, sensitive, public, and proprietary. Which of the following should he apply to his highest classification level based on common industry practices?
A. Private
B. Sensitive
C. Public
D. Proprietary
D. Common practice makes proprietary or confidential data the most sensitive data. Private data is internal business data that shouldn’t be exposed but that doesn’t meet the threshold for confidential or proprietary data. Sensitive data may help attackers or otherwise create risk, and public data is just that—data that is or can be made public.
What scenario describes data at rest?
A. Data in an IPsec tunnel
B. Data in an e-commerce transaction
C. Data stored on a hard drive
D. Data stored in RAM
C. Data at rest is inactive data that is physically stored. Data in an IPsec tunnel or part of an e-commerce transaction is data in motion. Data in RAM is ephemeral and is not inactive.
If you are selecting a security standard for a Windows 10 system that processes credit cards, what security standard is your best choice?
A. Microsoft’s Windows 10 security baseline
B. The CIS Windows 10 baseline
C. PCI DSS
D. The NSA Windows 10 Secure Host Baseline
C. The Payment Card Industry Data Security Standard (PCI DSS) provides the set of requirements for credit card processing systems. The Microsoft, NSA, and CIS baselines are all useful for building a Windows 10 security standard, but the PCI DSS standard is a better answer.
The Center for Internet Security (CIS) works with subject matter experts from a variety of industries to create lists of security controls for operating systems, mobile devices, server software, and network devices. Your organization has decided to use the CIS benchmarks for your systems. Answer the following questions based on this decision.
The CIS benchmarks are an example of what practice?
A. Conducting a risk assessment
B. Implementing data labeling
C. Proper system ownership
D. Using security baselines
D. The CIS benchmarks are an example of a security baseline. A risk assessment would help identify which controls were needed, and proper system ownership is an important part of making sure baselines are implemented and maintained. Data labeling can help ensure that controls are applied to the right systems and data.
The Center for Internet Security (CIS) works with subject matter experts from a variety of industries to create lists of security controls for operating systems, mobile devices, server software, and network devices. Your organization has decided to use the CIS benchmarks for your systems. Answer the following questions based on this decision.
Adjusting the CIS benchmarks to your organization’s mission and your specific IT systems would involve what two processes?
A. Scoping and selection
B. Scoping and tailoring
C. Baselining and tailoring
D. Tailoring and selection
B. Scoping involves selecting only the controls that are appropriate for your IT systems, while tailoring matches the controls from a selected baseline to your organization's mission. Baselining is the process of configuring a system or software to match a baseline, or of building the baseline itself. Selection isn't a technical term used for any of these processes.