Mock exam 1 - Qs got wrong Flashcards

(34 cards)

1
Q

Types of compliance recognized by the OSSTMM framework

A

Legislative
Contractual
Standards-based

The OSSTMM framework recognizes three types of compliance:

Legislative – Legislative compliance is enforced by government and regional regulatory bodies, and meeting those regulatory requirements is mandatory. Failing to comply can lead to heavy fines and charges.

Examples of legislative requirements are Sarbanes-Oxley (SOX), the EU General Data Protection Regulation (GDPR), and the Health Insurance Portability and Accountability Act (HIPAA).

Contractual – Contractual compliance is enforced by groups such as customers and vendors through documented contractual requirements in master service agreements (MSAs). Parties signing the contract must comply with contractual requirements. Failure to comply with contractual requirements may lead to fines, penalties, and loss of reputation.

An example of a contractual requirement is the Payment Card Industry Data Security Standard (PCI DSS), enforced by VISA and Mastercard. Merchants who handle credit card data must comply with it.

Standards-based – Compliance with standards is enforced within the organization or by the customer to whom the organization is providing services. Failure to comply with standards requirements may lead to loss of reputation or withdrawal of certification by the certifying body.

For example, ISO 27001 is the international standard for information security.

2
Q

Difference between internal and external audit

A

A security audit is used to evaluate the effectiveness of implemented security controls within your organization. Audits can be internal or external.

Internal audits are conducted by the organization's own audit team (independent of the functions being audited) to evaluate the effectiveness of controls.

External audits, in contrast, are conducted by independent third-party organizations to evaluate the effectiveness of controls against regulatory or standards requirements.

3
Q

Which asset management activity typically involves scanning to locate assets?

A

Enumeration

Unlike inventory management, which relies on existing records or information, enumeration actively scans systems and networks to identify and list all of the technology resources and devices within the organization.

This process helps ensure that all assets are discovered and accounted for, even if they were not previously documented in the inventory.
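For illustration, a minimal sketch of how enumeration might actively probe for listening services, using a plain TCP connect scan. The host and ports below are illustrative assumptions; real enumeration tools such as Nmap are far more capable.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe a few well-known ports on the local machine
print(tcp_connect_scan("127.0.0.1", [22, 80, 443]))
```

Each discovered port would then be recorded in the asset inventory, which is how enumeration catches devices and services the existing records missed.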

Inventory management involves keeping track of hardware, software, and data assets within the organization. It includes maintaining a comprehensive list of all assets, their configurations, and their locations.

While inventory management is essential for asset tracking, it does not necessarily involve scanning to locate assets. Instead, it focuses on cataloging and documenting assets based on existing records or information.

Ownership refers to the assignment of responsibility and accountability for specific assets within an organization. It involves identifying individuals or groups responsible for managing, maintaining, and securing assets throughout their lifecycle.

Ownership is crucial for ensuring that assets are properly managed, protected, and used according to organizational policies and guidelines. However, ownership does not involve scanning to locate assets but rather assigning responsibilities for existing assets.

Classification involves categorizing assets based on their importance, sensitivity, or criticality to the organization. It helps prioritize resources and allocate security controls based on the level of risk associated with different asset types. While classification is essential for understanding the security implications of assets, it does not directly involve scanning to locate assets.

4
Q

Which of these requirements would indicate that you needed to install a router as opposed to a NIPS/NIDS?

A) Anti-spoofing
B) Rules
C) in-band vs. out-of-band
D) Inline vs. passive

A

Anti-spoofing is a router function in which the incoming or outgoing IP address is compared against an access control list (ACL).

An ACL on a router is a set of rules that filters network traffic, allowing or denying packets based on specific criteria such as source/destination IP addresses or ports, acting much like a network firewall.

Other types of anti-spoofing perform similar functions on MAC addresses or switch ports. A NIDS or NIPS would not check IP address traffic for spoofing.
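As a sketch of the anti-spoofing idea: a packet arriving on the router's external interface should never claim a source address inside the protected network. The 10.0.0.0/8 range below is an assumed internal network for illustration, not from the question.

```python
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # assumed internal range

def is_spoofed(src_ip, arriving_external=True):
    """Flag packets that arrive from outside but claim an internal source."""
    inside = ipaddress.ip_address(src_ip) in INTERNAL_NET
    return arriving_external and inside

print(is_spoofed("10.1.2.3"))     # external packet claiming an internal source
print(is_spoofed("203.0.113.5"))  # legitimate external source
```

A real router performs this comparison against its configured ACLs in hardware; the logic is the same membership test.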

A MAC address, or Media Access Control address, is a unique identifier assigned to a network interface controller (NIC) for use in communication within a network segment. It’s a 12-digit hexadecimal number, often grouped into six pairs separated by hyphens, that identifies a device on a local network. Think of it as the device’s physical address on the network, while an IP address is its logical address.
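The format described above can be checked with a simple pattern, shown here as an illustrative sketch that accepts either hyphen or colon separators:

```python
import re

# Six hex pairs separated by hyphens or colons, e.g. 00-1A-2B-3C-4D-5E
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[-:]){5}[0-9A-Fa-f]{2}$")

def is_valid_mac(mac):
    """Return True if the string looks like a 12-hex-digit MAC address."""
    return bool(MAC_RE.match(mac))

print(is_valid_mac("00-1A-2B-3C-4D-5E"))  # well-formed
print(is_valid_mac("00-1A-2B-3C-4D"))     # too short
```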

Inline vs. passive is an installation decision made when you choose between a network-based intrusion prevention system (NIPS) and a network-based intrusion detection system (NIDS).

A NIPS is an active device that monitors and reacts to network intrusions.

A NIDS is a passive device that only provides notification in the event of a security breach.

In-band or out-of-band would also factor into a decision between a NIDS and a NIPS. In-band management of a NIDS/NIPS takes place over the production network itself, whereas out-of-band management uses a separate, dedicated management channel.

Rules define what a NIDS/NIPS monitors with regard to incoming network traffic.

5
Q

An attacker carried out an IP spoofing that included saturating your network with ICMP messages. Which attack occurred?

A) SYN flood
B) smurf
C) on-path
D) brute force

A

A smurf attack is a combination of Internet Protocol (IP) spoofing and the saturation of a network with Internet Control Message Protocol (ICMP) messages.

ICMP (Internet Control Message Protocol) messages are used for reporting errors and performing network diagnostics, primarily by devices such as routers and hosts. These messages are carried inside IP packets and communicate operational information and error conditions, such as "Destination Unreachable" or "Time Exceeded."

ICMP messages are crucial for network diagnostics and error reporting. They are used to determine whether data packets are reaching their destination and to identify network issues. For example, a "Destination Unreachable" message indicates that a packet could not reach its intended destination.

To initiate a smurf attack, a hacker sends ICMP messages from a computer outside a network with a spoofed IP address of a computer inside the network. The ICMP message is broadcast on the network, and the hosts on the network attempt to reply to the spurious ICMP message. A smurf attack causes a denial-of-service (DoS) on a network because computers are busy responding to the ICMP messages.

The IP spoofing part of a smurf attack can be countered by configuring a router to ensure that messages with IP addresses inside the network originate on the private network side of the router.

A brute force attack occurs when a hacker tries all possible values for such variables as usernames and passwords.

An on-path (formerly known as man-in-the-middle) attack occurs when a hacker intercepts messages from a sender, modifies those messages, and sends them to a legitimate receiver. This type of attack often involves interrupting network traffic to insert malicious code.

A SYN flood attack occurs when an attacker exploits the three-packet Transmission Control Protocol (TCP) handshake.

A SYN flood attack is a type of denial-of-service (DoS) attack.

6
Q

Your company decides to implement a RAID-5 array on several file servers. Which feature is provided by this deployment?

A) Elasticity
B) Scalability
C) High availability
D) Distributed allocation

A

A RAID-5 array provides high availability. Redundant Array of Independent Disks (RAID) combines multiple hard drives for redundancy, performance, and fault tolerance. There are several levels of RAID varying in configuration based on need.

RAID 5 requires at least three drives. Parity information, computed from the data blocks in each stripe, is distributed across all of the drives rather than stored on a single dedicated drive. In the event of a drive failure, the missing data is rebuilt from the remaining data and parity blocks while the system remains operational.
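The parity idea can be demonstrated with XOR, since RAID 5 parity is the XOR of the data blocks in each stripe: any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity. A toy sketch, with byte strings standing in for drive blocks (not a real RAID implementation):

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data blocks striped across three drives, parity computed from them
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# Drive 2 fails: rebuild its block from the survivors plus the parity
rebuilt = xor_blocks(d1, d3, parity)
print(rebuilt == d2)  # True
```

This is why a RAID 5 array survives exactly one drive failure: with two blocks missing from a stripe, the XOR equation no longer has a unique solution.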

RAID 0 stripes data across the drives so that they appear as a single drive. This is great for performance, but there is no redundancy: if one drive fails, all data in the array is lost.

RAID 1 is mirroring, writing to two drives simultaneously. If drive 1 fails, drive 2 keeps writing.

Elasticity is a cloud computing feature that allows the provider to add or delete (scale) resources as they are needed. If scaling can be accomplished easily, the system has high elasticity. Scalability is the ability of a system to grow (add resources) or shrink (remove resources). Scalability is a major factor when choosing a system or provider.

Distributed allocation is also known as load balancing. With distributed allocation, excessive traffic and file requests on one system can be diverted to other systems that are not as busy.

Other strategies include redundancy, fault tolerance, and high availability.

Redundancy occurs when you have systems in place ready to come online when a system fails.

Fault tolerance allows a system to remain online if a component fails. Additional NICs, multiple power supplies, extra cooling fans, and RAID storage systems are examples of fault tolerance. High availability is the incorporation of multiple resiliency mechanisms to minimize the amount of system downtime. The standard for high availability is to have the system up 99.999% of the time, which equates to a little over 5 minutes of downtime per year.
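The five-nines figure works out as follows, as a quick sanity check of the arithmetic:

```python
# Annual downtime allowed at "five nines" availability
availability = 0.99999
minutes_per_year = 365 * 24 * 60           # 525,600 minutes in a year
downtime = (1 - availability) * minutes_per_year
print(round(downtime, 2))                  # a little over 5 minutes
```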

Objective:
Security Architecture

Sub-Objective:
Compare and contrast security implications of different architecture models

References:

High availability Storage: https://networksandservers.blogspot.com/2011/09/high-availability-storage-i.html

7
Q

Match each type of device with the appropriate technique to harden it. Some techniques may apply to more than one device.

A

Every category of device should receive regular operating system, application, and firmware updates as they become available. While some specific systems (such as production servers) may not receive patches before they are tested in development, the overall category of device should be hardened in this way.

Implementing full-disk encryption and enabling remote wipe capabilities for mobile devices can help protect sensitive data in case the device is lost or stolen. Full-disk encryption ensures that all data stored on the device is encrypted, making it unreadable without the appropriate decryption key. Enabling remote wipe capabilities allows the device owner or administrator to remotely erase all data on the device, further safeguarding sensitive information. This feature is particularly useful in situations where the device cannot be physically recovered.

Mobile devices should also be secured with biometrics and updated or patched as often as the manufacturer releases updates. Multifactor authentication (MFA) requires users to provide two or more identification items for authentication. Special PINs, swipe patterns, and biometrics such as fingerprint verification and facial recognition are all excellent examples of MFA for mobile devices.

To protect workstations, implement strong authentication methods such as biometric recognition or multi-factor authentication and use the principle of least privilege to lock down user access. Biometric recognition utilizes unique physical characteristics such as fingerprints, iris patterns, or facial features to authenticate users. This provides a highly secure method of user authentication since biometric traits are difficult to forge or replicate. The principle of least privilege (PoLP) dictates that users should only be granted the minimum level of access necessary to perform their job functions, which means disabling or limiting administrator-level access on accounts used to perform routine tasks.

You should also restrict user permissions and access rights on switches and routers. By restricting user permissions and access rights on switches, organizations can limit the potential damage caused by unauthorized access or malicious activities. This involves implementing role-based access control (RBAC) policies, which assign specific privileges to user roles based on their job responsibilities. RBAC ensures that users only have access to the resources and data required to fulfill their duties, reducing the risk of privilege escalation attacks.

You should also disable unnecessary services and ports, and utilize access control lists (ACLs) to restrict traffic. Disabling unnecessary services and ports on routers reduces the attack surface and minimizes the risk of exploitation by malicious actors. By disabling unused services and ports, organizations can prevent unauthorized access and mitigate the potential impact of security vulnerabilities. You should also make sure that all patches and updates are current. Perhaps most importantly, you should CHANGE THE DEFAULT PASSWORD ON ROUTERS AND SWITCHES! These credentials are widely published on the Internet “to assist administrators.” One example of a router password site is https://www.softwaretestinghelp.com/default-router-username-and-password-list/

ACLs are security mechanisms used to control the flow of traffic entering or exiting a network. ACLs define rules that specify which packets are allowed or denied based on criteria such as source/destination IP addresses, protocols, and ports. By configuring ACLs on routers, organizations can enforce granular control over network traffic and protect against unauthorized access or malicious activity.
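A minimal sketch of how first-match ACL evaluation works, with the implicit deny at the end. The rules and fields here are illustrative assumptions, not any vendor's syntax:

```python
import ipaddress

# Each rule: (action, source network, destination port or None for any)
ACL = [
    ("deny",   "10.0.0.0/8", None),  # drop traffic claiming internal sources
    ("permit", "0.0.0.0/0",  443),   # allow HTTPS from anywhere else
]

def evaluate(src_ip, dst_port):
    """Return the action of the first matching rule; implicit deny otherwise."""
    addr = ipaddress.ip_address(src_ip)
    for action, net, port in ACL:
        if addr in ipaddress.ip_network(net) and (port is None or port == dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(evaluate("203.0.113.5", 443))  # permit
print(evaluate("203.0.113.5", 23))   # deny (no rule matched)
print(evaluate("10.1.2.3", 443))     # deny (first rule)
```

Rule order matters because evaluation stops at the first match, which is why the deny rule for spoofed internal sources is listed first.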

8
Q

Which element is created to ensure that your company is able to resume operation after unplanned downtime in a timely manner?

A) business impact analysis (BIA)
B) disaster recovery plan
C) business continuity plan
D) vulnerability analysis

A

B) disaster recovery plan

The disaster recovery plan is created to ensure that your company is able to resume operation in a timely manner. As part of the business continuity plan, it mainly focuses on alternative procedures for processing transactions in the short term. It is carried out when an emergency occurs and immediately following the emergency. The disaster recovery plan (DRP) should include a hierarchical list of critical systems. The first step in the development of the DRP is identification of critical systems.

A vulnerability analysis identifies your company’s vulnerabilities. It is part of the business continuity plan.

A business continuity plan is created to ensure that policies are in place to deal with long-term outages and disasters to sustain operations. Its primary goal is to ensure that the company maintains its long-term business goals both during and after the disruption, and mainly focuses on the continuity of the data, telecommunications, and information systems infrastructures. Multiple plans should be developed to cover all company locations. The business continuity plan is broader in focus than the disaster recovery plan and usually includes the following steps:

Policy statement initiation – includes writing the policy to give business continuity plan direction and creating business continuity plan committee, roles, and role definitions.
Business impact analysis (BIA) creation – includes identifying vulnerabilities, threats, and calculating risks. The risk management process is one of the core infrastructure and service elements required to support the business processes of the organization. This stage should also identify potential countermeasures associated with each threat. Recovery point objectives and recovery time objectives directly relate to the BIA.
Recovery strategies creation – includes creating plans to bring systems and functions online quickly.
Contingency plan creation – includes writing guidelines to ensure the company can operate at a reduced capacity.
Plan testing, maintenance, and personnel training – includes a formal test of the plan to identify problems, training the parties who have roles in the business continuity plan to fulfill those roles, and updating the plan as needed. The company should quantitatively measure the results of the test to ensure that the plan is feasible. This step ensures that the business continuity plan remains a constant focus of the company.
The major elements of the business continuity plan are the disaster recovery plan, BIA, risk management process, and contingency plan. Although a business continuity plan committee should be created, it is not considered a major element of the plan.

A BIA is created to identify the company’s vital functions and prioritize them based on need. It identifies vulnerabilities and threats and calculates the associated risks but does not include suggestions for how to address the risks.

One of the most critical elements in a business continuity plan is management support.

9
Q

Which of the following has Firewall as a Service (FWaaS) as a component?

A) On-premises
B) Software-defined networking
C) Secure Access Service Edge
D) Network segmentation

A

C) Secure Access Service Edge (SASE) has Firewall as a Service (FWaaS) as one of its components.

Other components include secure web gateways (SWG), a cloud access security broker (CASB), and zero trust network access (ZTNA).

SASE is used to ensure security in a software-defined wide area network (SD-WAN) environment, particularly in a cloud environment. SASE is often associated with the zero-trust model.

On-premises network architecture allows an organization to maintain control of its architecture and resources by hosting it on-site. With on-premises hardware, the organization can even host its own private cloud.

Network segmentation involves dividing the network at either Layer 2 (for example, with VLANs) or Layer 3 (for example, with subnets) to create desirable security barriers between devices in the network. It cannot route traffic from a device being flooded to a location where the traffic can be studied.

Software-defined networking (SDN) allows for dynamic reconfiguration of a network as a reaction to changes in volume, types of traffic, and security incidents.

10
Q

In a security investigation, which of the following would provide you with the best data source for detailed information about network transmissions?

A) IPS/IDS logs
B) Packet captures
C) Application logs
D) Dashboards

A

B) Packet captures

IDS/IPS focus on “what is happening,” while packet capture focuses on “what happened.”

12
Q

What are

1) file integrity checks,
2) application allow listing,
3) removable media control,
4) DEP, WAFs, DLP, UTM, and advanced malware tools.

A

1) File integrity checks examine selected files to see if there have been any changes, or attempted changes. It is important to review the logs to know when attempts have been made, even if the file integrity product returned the file to the original state.
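The core of a file integrity check is a hash comparison against a known-good baseline. A minimal sketch (the file contents below are illustrative):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Record a baseline hash while the file is known-good
baseline = sha256_of(b"original config contents")

# Later, recompute and compare; a mismatch signals a change
current = sha256_of(b"original config contents")
tampered = sha256_of(b"modified config contents")
print(current == baseline)   # file unchanged
print(tampered == baseline)  # integrity check fails
```

Real file integrity products hash entire directory trees on a schedule and log every mismatch, which is why reviewing those logs reveals attempted changes even after a file has been restored.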

2) Application allow listing is the practice of denying all applications except those that are approved. The approved applications are designated as being on the allow list.

Several products are available that check for applications that are not on the allow list, including attempts to install those applications. The logs the allow list product generates would tell you if someone had attempted (for example) to install a keylogger.

3) Removable media control (RMC) is important in many environments. USB drives, SD cards, CDs, DVDs and BluRay devices can all present dangers to the system. As an example, someone can use a USB drive to copy sensitive information and deliver it to someone outside the organization. Another example could be a CD that appears to be a music CD but is actually installation media for unauthorized software. Examine the RMC logs to determine attempts to violate removable media policies.

4) Advanced malware tools check for malicious code that would otherwise slip by standard antivirus and antimalware tools. Data loss prevention (DLP) examines outbound traffic for sensitive data, keywords, and specific files leaving the organization. Data execution prevention (DEP) marks memory regions as non-executable so that code cannot run from them, blocking many buffer-overflow exploits; logs will record blocked execution attempts. Notification of blocked attempts is important, as it could tell you that an attempt to run malicious code was successfully stopped. A web application firewall (WAF) uses a set of defined rules to manage incoming and outgoing web server traffic, as well as to prevent attacks. Organizations can define their own rules based on their particular vulnerabilities.

13
Q

A company wants to improve its ability to detect insider threats and identify anomalous behavior patterns among employees. Which item below would be the most appropriate to accomplish this?

A) SELinux
B) UBA
C) XDR
D) Group Policy

A

B) UBA

User behavior analytics (UBA) is the most appropriate solution for improving the ability to detect insider threats and identify anomalous behavior patterns among employees. By monitoring and analyzing user activities, UBA solutions can help organizations proactively identify and mitigate security risks, safeguard sensitive data, and protect against insider threats. User behavior analytics (UBA) involves monitoring and analyzing user activities to detect abnormal or suspicious behavior that may indicate a security threat. UBA solutions use machine learning algorithms and statistical analysis to identify patterns and deviations from normal behavior, allowing organizations to detect insider threats, compromised accounts, and other security incidents more effectively. By analyzing user actions, access patterns, and interactions with systems and data, UBA solutions can provide valuable insights into potential security risks and help organizations proactively mitigate threats.
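At its simplest, the statistical side of UBA flags values that deviate sharply from a user's established baseline. A toy sketch using a z-score-style test; the metric and threshold are illustrative assumptions, and real UBA products use far richer models:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_value - mean) > threshold * stdev

# Daily file-download counts for one employee over two weeks
history = [12, 9, 11, 10, 13, 8, 12, 11, 9, 10, 12, 11, 10, 9]
print(is_anomalous(history, 11))   # a typical day
print(is_anomalous(history, 250))  # possible data exfiltration
```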

Group Policy is a feature in Windows environments that helps with operating system security and allows administrators to manage user and computer configurations centrally. While Group Policy is effective for enforcing security settings and configurations, it focuses more on controlling access to resources and applying organizational policies rather than detecting insider threats or anomalous behavior patterns among users.

SELinux is an operating systems security feature available in various Linux distributions. SELinux provides mandatory access controls (MAC) to restrict the actions that users and processes can perform on the system. While SELinux enhances security by enforcing fine-grained access controls, it does not directly address the need for detecting insider threats or identifying anomalous behavior patterns among users.

Extended detection and response (XDR) is a security solution that integrates and correlates data from multiple security tools and sources to detect and respond to threats more effectively. While XDR solutions enhance overall security posture by providing comprehensive threat detection and response capabilities, they may not specifically focus on detecting insider threats or analyzing user behavior patterns within the organization.

14
Q

You have found that your system for validating keys has a latency period of 24-48 hours. As a result, a key that had been breached was accepted. You want to provide a real-time solution that will reduce this latency period. Which technology should you implement?

A) OCSP
B) CRL
C) OID
D) CSR

A

A) OCSP

Online Certificate Status Protocol (OCSP) is a real-time protocol for validating certificates. OCSP is replacing the CRL, which can take 24-48 hours to propagate.

Object identifiers (OIDs) appear in optional extensions for X.509 certificates. They are dotted-decimal number strings that assist with identifying objects.

A certificate signing request (CSR) is typically one of the first steps in getting a certificate for authentication from a Certificate authority (CA).

A certificate revocation list (CRL) is a method for listing certificates that have been revoked, for example because they were compromised or superseded. A web browser, for example, would check a CRL to verify whether or not the responding server's certificate is still valid. A CRL can take 24-48 hours to propagate, which could cause an invalid key to be accepted.

15
Q

Which is the best way to ensure risk levels remain within acceptable limits of the organization’s risk appetite?

A) Threat modeling
B) Vulnerability assessments
C) Business impact analysis
D) Continuous monitoring

A

D) Continuous monitoring

Risk management is an ongoing, cyclical process that recognizes the dynamic nature of risk and the need for continuous monitoring and assessment. It is the best way to ensure risk levels are within acceptable limits of the organization’s risk appetite.

A business impact analysis is best used to quantify likelihood and impact of risk scenarios, not to assess risk levels as compared to risk appetite.

Vulnerability assessments are designed to identify vulnerabilities, without regard to risk appetite.

Threat modeling is used to examine the nature of threats and potential threat scenarios, without regard to risk appetite.

CompTIA lists four types of risk assessment in the Security+ objectives: ad hoc, recurring, one-time, and continuous.

Ad hoc risk assessments are conducted on a sporadic or irregular basis, typically in response to specific events, changes, or emerging risks. These assessments are often unplanned and may be initiated in situations where there is a perceived need to evaluate a particular risk or issue, particularly in response to an event. Ad hoc risk assessments allow organizations to address immediate concerns or uncertainties that may arise unexpectedly. However, they may lack the systematic approach and consistency of regularly scheduled assessments.

Recurring risk assessments are conducted at regular intervals, such as monthly, quarterly, or annually, to evaluate and reassess the risk landscape over time. These assessments follow a predefined schedule and involve the systematic review of risks, controls, and mitigation strategies to ensure ongoing risk management effectiveness. Recurring risk assessments help organizations maintain awareness of evolving threats, vulnerabilities, and changes in the business environment, allowing for timely adjustments to risk management practices.

A one-time risk assessment, also known as a point-in-time assessment, is conducted as a single evaluation of risks within a specific scope or context. Unlike recurring assessments, which are conducted periodically, one-time risk assessments are performed as standalone exercises to address a particular need or objective. For example, a one-time risk assessment may be conducted prior to the implementation of a new system or process, during a merger or acquisition, or in response to a significant event or incident. While one-time assessments provide valuable insights into existing risks, they may not capture changes or developments over time.

Continuous risk assessment is an ongoing and dynamic process that involves real-time monitoring and evaluation of risks, enabling organizations to detect and respond to emerging threats and vulnerabilities promptly. Unlike traditional periodic assessments, continuous risk assessment leverages automated tools, technologies, and data sources to collect and analyze risk-related information continuously. This approach allows organizations to adapt their risk management strategies quickly in response to changing circumstances and evolving risk profiles, improving resilience and agility in addressing emerging threats and opportunities.

16
Q

Your organization has recently undergone a hacker attack. You have been tasked with preserving the data evidence. You must follow the appropriate eDiscovery process. You are currently engaged in the Preservation and Collection process. Which of the following guidelines should you follow? (Choose all that apply.)

A)Hashing of acquired data should occur only when the data is acquired and when the data is modified.

B)The data acquisition should be from a live system to include volatile data when possible.

C)The data acquisition should include both bit-stream imaging and logical backups.

D)The chain of custody should be preserved from the data acquisition phase to the presentation phase.

A

B)The data acquisition should be from a live system to include volatile data when possible.
C)The data acquisition should include both bit-stream imaging and logical backups.
D)The chain of custody should be preserved from the data acquisition phase to the presentation phase.

When following the eDiscovery process guidelines, you should keep the following points in mind regarding the Preservation and Collection process:

The data acquisition phase should be from a live system to include volatile data when possible.
The data acquisition should include both bit-stream imaging and logical backups.
The chain of custody should be preserved from the data acquisition phase to the presentation phase.
While it is true that the hashing of acquired data should occur when the data is acquired and when the data is modified, these are not the only situations that require hashing. Hashing should also be performed when a custody transfer of the data occurs.
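The hashing-at-each-transfer guideline can be sketched as follows: re-hash the evidence whenever custody changes and verify it against the original acquisition hash. The field names and log format below are illustrative assumptions:

```python
import hashlib
from datetime import datetime, timezone

def evidence_hash(data: bytes) -> str:
    """Hex SHA-256 digest of the evidence bytes."""
    return hashlib.sha256(data).hexdigest()

def record_transfer(log, data, from_person, to_person, expected_hash):
    """Re-hash the evidence at a custody transfer and verify it matches."""
    digest = evidence_hash(data)
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "from": from_person,
        "to": to_person,
        "hash": digest,
        "verified": digest == expected_hash,
    })
    return digest == expected_hash

image = b"bit-stream image contents"
original = evidence_hash(image)   # hash recorded at acquisition
chain_log = []
print(record_transfer(chain_log, image, "analyst A", "analyst B", original))
```

A failed verification at any transfer would indicate the evidence was altered somewhere along the chain, which is exactly what the chain of custody is meant to detect.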

Other points to keep in mind during the Preservation and Collection process include the following:

A consistent process and policy should be documented and followed at all times.
Forensic toolkits should be used.
The data should not be altered in any manner, within reason.
Logs, both paper and electronic, must be maintained.
At least two copies of collected data should be maintained.
The eDiscovery process is similar to the Forensic Discovery process. However, the eDiscovery process is usually slower.

The stages of Forensic Discovery include the following:

Verification – Confirm that an incident has occurred.
System Description – Collect detailed descriptions of the systems in scope.
Evidence Acquisition – Acquire the relevant data in scope, minimizing data loss, in a manner that is legally defensible. This is primarily concerned with the minimization of data loss, the recording of detailed notes, the analysis of collected data, and reporting findings.
Data Analysis – This includes media analysis, string/byte search, timeline analysis, and data recovery.
Results Reporting – Provide evidence to prove or disprove statements of facts.
The stages of eDiscovery include the following:

Identification – Verify the triggering event that has occurred. Find and assign potential sources of data, subject matter experts, and other required resources.
Preservation and Collection – Acquire the relevant data in scope, minimizing data loss, in a manner that is legally defensible. This is primarily concerned with the minimization of data loss, the recording of detailed notes, the analysis of collected data, and reporting findings.
Processing, Review, and Analysis – Process and analyze the data while ensuring that data loss is minimized.
Production – Prepare and produce electronically stored information (ESI) in a format that has already been agreed to by the parties.
Presentation – Provide evidence to prove or disprove statements of facts.
When preparing an eDiscovery policy for your organization, you need to consider the following facets:

Electronic inventory and asset control – You must ensure that all assets involved in the eDiscovery process are inventoried and controlled. Unauthorized users must not have access to any assets needed in eDiscovery.
Data retention policies – Data must be retained as long as required. Organizations should categorize data and then decide the amount of time that each type of data is to be retained. Data retention policies are the most important policies in the eDiscovery process. They also include systematic review, retention, and destruction of business documents.
Data recovery and storage – Data must be securely stored to ensure maximum protection. In addition, data recovery policies must be established to ensure that data is not altered in any way during the recovery. Data recovery and storage is the process of salvaging data from damaged, failed, corrupted, or inaccessible storage when it cannot be accessed normally.
Data ownership – Data owners are responsible for classifying data. These data classifications are then assigned data retention policies and data recovery and storage policies.
Data handling – A data handling policy should be established to ensure that the chain of custody protects the integrity of the data.
A data breach is a specific type of security incident that results in organizational data being stolen. Sensitive or confidential information must be protected against unauthorized copying, transferring, or viewing.

16
Q

You need to install a network device or component that ensures the computers on the network meet an organization’s security policies. Which device or component should you install?

A) DMZ
B) NAT
C) NAC
D) IPSec

A

C) NAC

Network Access Control (NAC) ensures that the computers on the network meet an organization’s security policies. NAC user policies can be enforced based on the location of the network user, group membership, or some other criteria. Media access control (MAC) filtering is a form of NAC. NAC provides host health checks for any devices connecting to the network. Hosts may be allowed or denied access or placed into a quarantined state based on this health check.

When connecting to a NAC-protected network, the user should be prompted for credentials. If the user is not prompted for credentials, the user’s computer is missing the authentication agent.

NAC can be permanent or dissolvable. Permanent or persistent NAC is installed on a device and runs continuously, while dissolvable NAC, also referred to as portal-based, downloads and runs when required and then disappears.

NAC can also be agent-based or agentless. With agent-based NAC, a piece of code installed on the host performs the NAC functions on behalf of the NAC server. Agentless NAC instead integrates with a directory service.

Network Address Translation (NAT) is an IETF standard that provides a transparent firewall solution between an internal network and outside networks. Using NAT, multiple internal computers can share a single Internet interface and IP address.

Internet Protocol Security (IPSec) is a protocol that secures IP communication over a private or public network. IPSec allows a security administrator to implement a site-to-site VPN tunnel between a main office and a remote branch office.

A demilitarized zone (DMZ) is a section of a network that is isolated from the rest of the network with firewalls. Servers in a DMZ are more secure than those on the regular network.

17
Q

You are required to isolate vulnerabilities and minimize errors when securing your company’s network. You decide to use redundant technologies from various suppliers so that the company is not dependent on any single system. Which strategy does this decision describe?

A)Separation of duties
B)Control diversity
C)Vendor diversity
D)Defense-in-depth

A

C) Vendor diversity

Engaging with multiple vendors of the same items is an example of vendor diversity. This is recommended, so that there is not a single platform or vendor that is the source of failure or compromise.

Control diversity is the utilization of multiple control types or categories, such as pairing a backup generator (a compensating control) with an uninterruptible power supply (UPS).

Defense in depth is a concept that prescribes creating multiple barriers to hackers. In this concept, there are controls at the outer perimeter, within various internal boundary groups, and within each system of the organization to provide layered security.

Separation of duties is a concept that says that any fraud-prone activity should be broken up into two or more jobs and assigned to different people so fraud attempts can be more easily recognized. This may mean taking a critical operation and requiring one person to input the data and another person to interpret the results.

18
Q

Your company has decided to implement a virtual private network (VPN), which will be used by remote employees to access Internal network resources. Which two protocols could you use? (Choose two.)

A) PPTP
B) L2TP
C) RAS
D) PPP

A

A) PPTP
B) L2TP

Point-to-Point Tunneling Protocol (PPTP) was created by Microsoft to work with the Point-to-Point Protocol (PPP) to create a virtual Internet connection so that networks can use the Internet as their WAN link. This connectivity method creates a virtual private network (VPN), allowing for private network security. In effect, PPTP creates a secure WAN connection using dial-up access.

PPTP is known as a tunneling protocol because it tunnels traffic through the PPP connection, which results in a secure connection between client and server.

Layer Two Tunneling Protocol (L2TP) is an enhancement of PPTP and can also be used to create a VPN. L2TP is a combination of PPTP and Cisco’s Layer 2 Forwarding (L2F) tunneling protocols and operates at the Data Link layer (Layer 2) of the Open Systems Interconnection (OSI) model. L2TP uses User Datagram Protocol (UDP) for sending packets as well as for maintaining the connection. Internet Protocol Security (IPSec) is used in conjunction with L2TP for encryption of the data.

PPP is a protocol used to establish dial-up network connections.

Remote Access Service (RAS) is a service provided by the network operating system that allows remote access to the network via a dial-up connection. L2TP can be combined with Internet Protocol Security (IPSec) to provide enhanced security. Both PPTP and L2TP create a single point-to-point, client-to-server communication link.

19
Q

Which of the following are key phases in implementing security awareness practices? (Choose two)

A)Execution
B)Development
C)Phishing
D)Anomalous behavior recognition

A

A)Execution
B)Development

Development involves the creation of security awareness materials, training modules, policies, and procedures. This phase typically includes conducting a needs assessment, identifying target audiences, designing content, and tailoring materials to address specific security risks and organizational requirements. While development is essential for laying the groundwork for security awareness initiatives, it is not the phase focused on implementing those practices.

Execution is the implementation phase where security awareness practices are put into action. This includes delivering training sessions, disseminating educational materials, conducting simulated phishing exercises, and promoting a culture of security awareness. Effective execution involves engaging employees, tracking progress, and continuously evaluating and refining the security awareness program to address evolving threats and challenges.

Anomalous behavior recognition is not a phase, but a possible item to be addressed in security awareness. Anomalous behavior recognition refers to the ability to identify and respond to suspicious activities or deviations from normal patterns of behavior. This helps you identify something that is “not right” and outside the ordinary. While this is an important aspect of overall cybersecurity, it is not specifically related to the implementation of security awareness practices aimed at educating and empowering employees to recognize and mitigate security risks.

Phishing awareness is also an item to be addressed during security awareness training. Phishing awareness educates employees about the dangers of phishing and teaches them how to recognize and avoid phishing attempts. This includes understanding common phishing techniques, identifying phishing indicators, and knowing how to report suspicious emails or messages. While phishing awareness is a critical element of security awareness training, it is not the phase focused on implementing security awareness practices across the organization.

20
Q

You should also be familiar with email certificates, SAN fields, code signing certificates, extended validation certificates, root certificates, and domain validation certificates.

A

Extended validation certificates, as the name suggests, provide additional validation for HTTPS web sites. The certificate provides the name of the legal entity responsible for the web site. These certificates require the most effort by the CA to validate and provide a higher level of trust than domain validation because they are validated using more than the domain information.

Email certificates are used to secure email. One such example is Secure Multipurpose Internet Mail Extensions (S/MIME), which provides a digital “signature” for that email. Root certificates define the root CA and validate all other certificates issued by that CA. They are at the top of the CA hierarchy. They are self-signed and are closely guarded.

Subject Alternative Name (SAN) is a field in the certificate definition that allows you to stipulate additional information, such as an IP address or host name, associated with the certificate. Code signing certificates are used for code that is distributed over the Internet, including programs or applications. Code signing certificates verify the code’s origin and help the user trust that the claimed sender is indeed the originator.

21
Q

Matching attacks and mitigations

A

The attacks and their mitigations should be matched in the following manner:

Cross-site request forgery (CSRF) – Validate both the client and server side.

Cross-site scripting (XSS) – Implement input validation.

Session hijacking – Encrypt communications between the two parties.

Malicious add-ons – Implement application allow lists.
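A minimal illustration of the XSS mitigation above, using Python's standard library to output-encode untrusted input (the `render_comment` function is hypothetical):

```python
import html

def render_comment(user_input: str) -> str:
    # Escape untrusted input so injected markup is displayed as
    # inert text instead of being executed by the browser.
    return "<p>" + html.escape(user_input) + "</p>"

# A script payload is neutralized:
# render_comment('<script>alert(1)</script>')
#   -> '<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>'
```

Real applications combine this output encoding with input validation (allow-listing expected characters or formats) at every trust boundary.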

22
Q

which term refers to the capability of automation and scripting to effectively streamline tasks and processes, allowing security teams to accomplish more with existing resources?

A)Scaling in a secure manner
B)Reaction time
C)Workforce multiplier
D)Employee retention

A

C) Workforce multiplier

A workforce multiplier, also known as a “force multiplier,” is something that creates efficiencies and allows a group to achieve more with the same or fewer resources. In essence, a workforce multiplier increases the effectiveness of a workforce. In the context of secure operations, automation and scripting serve as a workforce multiplier by enabling security teams to automate repetitive tasks, orchestrate complex processes, and scale their efforts to address a larger volume of security incidents and threats. By leveraging automation and scripting, organizations can effectively maximize the productivity and impact of their security teams, thereby enhancing overall security posture and resilience.

Employee retention refers to the ability of an organization to retain its employees over a certain period. While automation and scripting can contribute to employee satisfaction by reducing mundane tasks and allowing security professionals to focus on more strategic activities, this term does not directly relate to the benefits of automation in increasing efficiency and productivity within secure operations.

Reaction time refers to the speed at which an organization responds to security incidents or emerging threats. While automation and scripting can help improve reaction time by automating incident detection, response, and remediation processes, this term specifically addresses the timeliness of incident response rather than the broader benefits of automation in enhancing overall operational efficiency.

Scaling in a secure manner refers to the capability of automation and scripting to dynamically adjust resources and processes to accommodate changing workload demands while ensuring security and compliance. Automation and scripting enable security teams to scale their efforts effectively without sacrificing security controls or risking data breaches. By automating repetitive tasks and orchestrating complex workflows, organizations can scale their operations in a secure and efficient manner to address a larger volume of security incidents and threats.

23
Q

You are configuring network security measures to ensure secure communication between client devices and servers. Which of the following factors is MOST relevant?

A) Port selection
B) Transport method
C) Protocol selection
D) Connection establishment

A

B) Transport method

Configuring the transport method to use secure protocols such as HTTPS or TLS is the most appropriate solution for ensuring secure communication between client devices and servers.

The transport method refers to the technique used to transfer data between client devices and servers securely. By encrypting data in transit and providing secure communication channels, these protocols help mitigate the risk of data interception and unauthorized access, thereby enhancing network security.

Protocol selection involves choosing appropriate communication protocols to facilitate secure data transmission over a network. While selecting secure protocols is essential for ensuring the confidentiality, integrity, and authenticity of transmitted data, it may not directly address the need for configuring network security measures to establish secure communication between client devices and servers.

Port selection involves choosing specific network ports through which data will be transmitted between client devices and servers. While selecting appropriate ports can help prevent unauthorized access and mitigate the risk of network-based attacks, it may not directly address the need for configuring network security measures to ensure secure communication between client devices and servers.

Connection establishment involves the process of initiating and establishing secure connections between client devices and servers. While establishing secure connections is critical for ensuring secure communication, it is a broader concept that encompasses various security mechanisms and protocols, including protocol selection, port selection, and transport method configuration.
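As a brief sketch of configuring the transport method in Python (raising the minimum TLS version is an illustrative hardening step, not something the question itself requires):

```python
import ssl

# Build a client-side TLS context with certificate verification on
# and legacy protocol versions disabled.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# check_hostname and CERT_REQUIRED are enabled by default, so the
# server's certificate and host name are validated on connect.
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

A socket wrapped with this context (`ctx.wrap_socket(...)`) encrypts all data in transit, which is the property the transport method is meant to guarantee.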

24
Q

Provisioning requests for the IT department have been backlogged for months. You are concerned that employees are using unauthorized cloud services to deploy VMs and store company data. Which of the following services can be used to bring this shadow IT back under the corporate security policy?

A) VPN
B) CASB
C) SLA
D) SWG

A

B) CASB

**A cloud access security broker (CASB)** enforces proper security measures between a cloud solution and a customer organization. A CASB monitors user activities, notifies administrators about significant events, performs malware prevention and detection, and enforces compliance with security policies. Besides ensuring compliance with your security policy and reporting compliance issues in real time, a CASB should report risks, see any shadow IT, and do it all from one platform. A CASB uses an array of strategies to protect an organization from cyberattacks, such as data loss prevention, which protects against critical data leaks by labeling, tracking, and restricting access to files and specific information as it travels from a device to the cloud and beyond.

Shadow IT is IT applications, systems, and resources (such as cloud services) that are installed or used by internal staff without the knowledge or consent of the IT department. Shadow IT is neither authorized nor approved, and therefore poses a risk to the organization's security posture, risk management, data integrity, and regulatory compliance.

A secure web gateway (SWG) is a cloud-based web gateway that combines features of a next-generation firewall (NGFW) and a web application firewall (WAF). An SWG provides ongoing updates to filters and detection databases and is designed to provide filtering services between cloud-based resources and on-premises resources. An SWG uses standard WAF functions, TLS decryption, CASB functions, sandboxing features, and threat detection functions to protect enterprises from ever-evolving cloud-based risks and attacks.

A service level agreement (SLA) is an agreement between a company and a vendor in which the vendor agrees to provide certain functions for a specified period.
25
Q

Which of the following is not a cryptographic attack?

A)Downgrade
B)Spraying
C)Birthday
D)Collision

A

B)Spraying

A spraying attack is not a cryptographic attack, but rather a type of brute-force password attack. A spraying attack has a couple of different forms. It may use a common or default password for an organization and test that against multiple accounts. “P@$$w0rd” is often used as a secure password, and in a larger organization, you are likely to find an account that uses this password. Another form would be to use a variation of a company’s slogan against a user list.

A downgrade attack is a cryptographic attack that causes the system to use less-stringent security controls. When these less-stringent (downgraded) security controls, typically insecure protocols, are activated, the attacker takes advantage of those less-than-secure settings. An example would be an attack that disables HTTPS port 443. In order for web traffic to go through, HTTP port 80 is enabled. HTTP is a less secure protocol than HTTPS, and the attacker exploits HTTP.

A collision attack is a cryptographic attack that uses brute force to find two different inputs that produce the same hash value.

A birthday attack is a type of cryptographic attack. It is named after the birthday paradox: the surprisingly high probability that two people in a group share a birthday.
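The birthday paradox behind this attack can be checked numerically. The sketch below computes the probability that at least two of n people share one of 365 equally likely birthdays:

```python
def shared_birthday_probability(n: int, days: int = 365) -> float:
    """P(at least two of n people share a birthday)."""
    p_unique = 1.0
    for i in range(n):
        p_unique *= (days - i) / days  # i-th person avoids all prior birthdays
    return 1.0 - p_unique

# With only 23 people the probability already exceeds 50%, which is
# why hash collisions appear after far fewer attempts than the size
# of the hash space would suggest.
```

The same mathematics implies that finding a collision in an n-bit hash takes roughly 2^(n/2) attempts rather than 2^n.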
26
Q

Which aspect of effective security compliance primarily focuses on individuals' rights to control their personal information, including the ability to request its deletion?

A)Data subject
B)Data inventory and retention
C)Right to be forgotten
D)Legal implications

A

C) Right to be forgotten

The right to be forgotten, also known as the right to erasure, grants individuals the right to request the deletion or removal of their personal data from an organization's systems or records. This right is enshrined in privacy regulations such as the GDPR and allows individuals to exercise control over their personal information, particularly in cases where the data is no longer necessary for the purposes for which it was collected or processed. Compliance with the right to be forgotten requires organizations to establish procedures for handling data deletion requests and ensure that personal data is promptly and securely erased when requested by the data subject.

Legal implications of privacy compliance encompass the local/regional, national, and global laws and regulations that govern the collection, use, and protection of personal data. These legal frameworks define the rights and responsibilities of organizations regarding the processing of individuals' personal information and establish penalties for non-compliance. For example, in the European Union, the General Data Protection Regulation (GDPR) outlines strict requirements for data protection and privacy, with significant fines for violations. Similarly, laws such as the California Consumer Privacy Act (CCPA) in the United States impose legal obligations on organizations handling personal data, with potential legal consequences for non-compliance.

The data subject refers to the individual to whom the personal data relates, whose privacy rights are protected by privacy regulations and laws. Data subjects have the right to control their personal information, including the right to access, correct, and delete their data, as well as the right to be informed about how their data is processed. Compliance with privacy regulations requires organizations to respect the rights of data subjects and implement measures to ensure the lawful and fair processing of their personal data.

Data inventory and retention practices involve identifying and categorizing the personal data collected and stored by an organization, as well as establishing policies and procedures for its proper retention and disposal. Effective data inventory and retention practices enable organizations to manage personal data responsibly, minimize data collection, and comply with legal requirements regarding data retention periods. By maintaining an accurate inventory of personal data and implementing data retention schedules, organizations can reduce the risk of unauthorized access, misuse, or retention of unnecessary data.
27
Q

Your company's network consists of multiple subnetworks that each implements its own authentication system. Often users must log in separately to each subnetwork to which they want access. You have been asked to implement technology that allows users to freely access all systems to which their account has been granted access after the initial authentication. Which of the following should you implement?

A) smart cards
B) single sign-on
C) DAC
D) MAC
E) biometric device

A

B) single sign-on

Single sign-on (SSO) allows users to freely access all systems to which their account has been granted access after the initial authentication. The SSO process addresses the issue of multiple usernames and passwords. It is based on granting users access to all the systems, applications, and resources they need when they start a computer session. This is considered both an advantage and a disadvantage. It is an advantage because the user only has to log in once and does not have to constantly re-authenticate when accessing other systems. Multiple directories can be browsed using single sign-on. It is a disadvantage because the maximum authorized access is possible if a user account and its password are compromised. Single sign-on was created to dispose of the need to maintain multiple user accounts and passwords to access multiple systems. By contrast, in a decentralized privilege management environment, user accounts and passwords are stored on each individual server.

All the systems that are enrolled in the SSO system are referred to as a federation. In most cases, transitive trusts are configured between the systems for authentication. Systems that can be integrated into an SSO solution include Kerberos, LDAP, smart cards, Active Directory, and SAML. A federated identity management system provides access to multiple systems across different enterprises.

Discretionary access control (DAC) and mandatory access control (MAC) are access control models that help companies design their access control structure. They provide no authentication mechanism by themselves.

Smart cards are authentication devices that can provide increased security by requiring insertion of a valid smart card to log on to the system. They do not determine the level of access allowed to a system. Most smart cards have expiration dates. If a user was reissued a smart card after the previous smart card had expired and the user is able to log into the domain but is now unable to send digitally signed or encrypted e-mail, you should publish the new certificates to the global address list.

A biometric device can provide increased security by requiring verification of a personal asset, such as a fingerprint, for authentication. Biometric devices do not determine the level of access allowed to a system.
28
Q

An accounting job role requires separation of duties to reduce the risk of fraud, with tasks spread across two employees. Due to a staffing shortage, you only have one person available to perform all of the tasks. You ask your business’s bank to start sending you weekly statements instead of monthly, and to create an automated email that will alert you if a withdrawal above a certain threshold is made. Which type or category of control did you implement? Choose the BEST answer.

A)Managerial category
B)Preventative type
C)Operational category
D)Compensating type
E)Deterrent type

A

D)Compensating type

The BEST answer is that you implemented the compensating control type because the primary control, separation of duties, was not feasible due to a staffing shortage. You increased your auditing frequency and added a withdrawal alert to the account to compensate for the loss of the primary control. Note that both of these controls would be considered detective controls IF they were not being implemented in place of a primary control. A detective control alerts you to an incident that is occurring or has already occurred. In this scenario, the purpose of the new controls is to compensate for the absence of another control, which is separation of duties.

The category of operational controls supports daily operations. These controls are implemented by people. A step-by-step procedure describing how to perform the duties of the job role so that the risk of fraud is minimized would be an example of an operational control.

Preventative controls, like firewalls and door locks, block access to security weaknesses.

There is nothing in the scenario to indicate the employee knows about the increased auditing, so it is not a deterrent. Deterrent controls discourage behavior that would harm the enterprise, such as acceptable use policies (AUPs) and signs that announce the use of video surveillance.

Managerial is a category of controls based on oversight and risk management. These controls are implemented with policies and plans, and guide behaviors and actions to align with the organization’s security goals. If auditing were the primary control for this job role instead of separation of duties, then it would fall in the managerial category.

CompTIA defines four categories of access control: technical, operational, managerial, and physical.

Technical control – a category of controls that use software or hardware to restrict access, such as firewalls, encryption, and multi-factor authentication.
Physical control – a category of controls that are implemented in the physical realm, such as locks, fences, CCTV, backup media, and secured cabling.
Managerial (sometimes called administrative) control – a category of controls that dictate how management uses oversight to meet the company's security goals. Managerial controls include risk assessments, performance reviews, background checks, personnel controls, a supervisory structure, security training, and auditing.
Operational control – a category of controls that provide employees with best practices to follow and actions to implement to meet security goals. Examples are standard operating procedures, incident response policies, and password policies.

Controls can also be classified according to six different control types:

Preventative – A preventative control stops security issues before they occur.
Deterrent – A deterrent control affects human behavior to make security issues less likely to occur.
Detective – A detective control finds indicators of security issues that are occurring or have occurred.
Corrective – A corrective control restores control and attempts to correct any damage that was inflicted during a security issue that occurred.
Compensating – A compensating control is put into place when the recommended primary control cannot be used.
Directive – A directive control provides behavioral guidance, guidelines, and policies to follow regarding potential, current, or past security issues.
29
Q

In security architecture testing, which testing activities aid in the assessment of system resilience and performance when a failure might occur, or when the workload increases? (Choose two.)

A)Failover testing
B)Parallel processing
C)Vulnerability scanning
D)Penetration testing

A

A)Failover testing
B)Parallel processing

Failover testing and parallel processing would help in assessing system resilience and performance during a failure.

**In failover testing, you would deliberately trigger a failure in a system component to evaluate the effectiveness of the failover mechanism.** This method assesses the system's ability to switch to backup components or systems seamlessly in the event of a failure, ensuring continuity and minimal disruption to operations. An example would be shutting off the power to a facility to test the backup generator.

**In parallel processing, you would test the ability to handle increased workloads by simultaneously executing multiple tasks or processes to assess the system's ability to cope.** This method is particularly useful for evaluating performance under heavy loads and determining whether the system can efficiently distribute and process tasks in parallel. Parallel processing also ensures redundancy is built in to the system by duplicating components and workloads.

In vulnerability scanning, you would identify and assess security vulnerabilities in a system. While this test is important for security, it does not test the architecture for capacity.

Penetration testing would simulate cyberattacks to identify and exploit vulnerabilities in a system. This provides critically valuable information, but penetration testing does not evaluate capacity planning.
30
A healthcare organization wants to enhance the security of its electronic health record (EHR) system. Which solution, from the choices below, would be most appropriate for implementing multifactor authentication? A)Identification badges B)Security keys C)Biometrics D)Soft authentication tokens
B)Security keys Security keys would be the most appropriate solution to enhance the security of the EHR system. Security keys are physical devices that users insert into their computers or mobile devices to authenticate their identities. These devices contain cryptographic keys that are used to generate unique authentication codes for each login attempt. Security keys provide a strong level of security and are easy for users to use, making them suitable for protecting sensitive patient health information. Additionally, security keys can help prevent unauthorized access to EHR systems, reducing the risk of data breaches and ensuring compliance with healthcare privacy regulations such as HIPAA. Biometrics refers to the use of physiological or behavioral characteristics, such as fingerprints, iris scans, or facial recognition, to verify the identity of users. While biometrics offers a high level of security and convenience, it may not be the most appropriate solution for the scenario described. Biometric systems can be expensive to implement and may require specialized hardware and software. Additionally, there may be concerns about user privacy and data protection when implementing biometric authentication for healthcare systems. Soft authentication tokens involve the use of physical or software-based devices that generate one-time passwords (OTPs) or cryptographic keys for user authentication. The keys are sent to an application on a mobile device. While authentication tokens provide an additional layer of security, they may not be the most appropriate solution for the scenario described. Healthcare professionals often need quick and easy access to patient records, and requiring them to generate codes with a soft token that is read with another app would be less efficient than inserting a security key into a workstation. 
While identification badges are commonly used for physical access control in healthcare facilities, they are not suitable for implementing multifactor authentication for electronic systems like EHRs. Identification badges do not provide the same level of security as multifactor authentication solutions like biometrics, authentication tokens, or security keys. Additionally, requiring healthcare professionals to use identification badges for EHR access could introduce security risks if badges are lost or stolen.
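The soft-token OTPs discussed above are usually time-based one-time passwords (TOTP, RFC 6238), which hash a shared secret together with the current 30-second time step. A minimal sketch using only the Python standard library (the function name is hypothetical; real deployments use a vetted authenticator app or library):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of time steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" in base32, T = 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → 287082
```

Because both the server and the app derive the code from the same secret and clock, the code changes every 30 seconds, which is why a user must read it from a second app at each login.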
31
As a security professional, you have been asked to advise an organization on which access control model to use. You decide that role-based access control (RBAC) is the best option for the organization. What are two advantages of implementing this access control model? (Choose two.) A)low security cost B)highly secure environment C)discretionary in nature D)user friendly E)easier to implement
A) low security cost
E) easier to implement

**Role-based access control (RBAC) has a low security cost because security is configured based on roles**. For this reason, it is also easier to implement than the other access control models. During the information-gathering stage of deploying an RBAC model, you will most likely need a matrix of job titles and their required access privileges.

RBAC is NOT the most user-friendly option. **Discretionary access control (DAC) is more user friendly than RBAC because it allows the data owner to determine user access rights.** If a user needs access to a file, he only needs to contact the file owner.

RBAC is NOT discretionary in nature. DAC is discretionary, meaning access to objects is determined at the discretion of the owner.

RBAC is NOT a highly secure environment. Mandatory access control (MAC) is considered a highly secure environment because every subject and object is assigned a security label.

**With RBAC, it is easy to enforce least privilege for general users.** You would create the appropriate role, configure its permissions, and then add the users to the role. A role is defined based on the operations and tasks that the role should be granted. Roles are based on the structure of the organization and are usually hierarchical. RBAC is a popular access control model used in commercial applications, especially large networked applications.

Rule-based access control is often confused with RBAC because their names are similar. With rule-based access control, access to resources is based on a set of rules, and the user is given the permissions of the first rule that he matches.
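The role-to-permission matrix described above can be sketched in a few lines. This is a minimal illustration (all role, user, and permission names are hypothetical): permissions attach to roles, users are granted roles, and every access check goes through role membership rather than per-user grants.

```python
# Permissions are configured once per role, not per user --
# this is why RBAC has a low security cost and is easy to administer.
ROLE_PERMISSIONS = {
    "nurse": {"read_chart"},
    "physician": {"read_chart", "write_chart", "prescribe"},
    "billing": {"read_invoice", "write_invoice"},
}

# Users are simply assigned to roles.
USER_ROLES = {
    "alice": {"physician"},
    "bob": {"nurse"},
}

def has_permission(user, permission):
    """A user holds a permission if any of the user's roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "prescribe"))  # → True
print(has_permission("bob", "prescribe"))    # → False
```

Enforcing least privilege then amounts to keeping each role's permission set minimal; onboarding a new user is a single role assignment rather than a list of individual grants.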
32
Your organization has a contract to provide networking services to a government agency. You are required to use certified hardware to build a secure network. Which of the following practices will help you avoid adversarial threats in the supply chain? (Choose all that apply.) A) Have a legally enforceable purchase order with the hardware vendor B) Only purchase hardware from authorized vendors or resellers C) Request proof of equipment certification from hardware vendors D) Integrate supply chain management into the overall risk management framework E) Inspect hardware for signs of tampering F) Source hardware from multiple vendors in case natural disasters disrupt availability
B) Only purchase hardware from authorized vendors or resellers
C) Request proof of equipment certification from hardware vendors
D) Integrate supply chain management into the overall risk management framework
E) Inspect hardware for signs of tampering

NIST defines cybersecurity risks throughout the supply chain as “the potential for harm or compromise that may arise from suppliers, their supply chains, their products, or their services” (NIST SP 800-161r1). Among the recommended techniques for minimizing adversarial threats from hardware purchases, you should:

Only purchase hardware from authorized vendors or resellers
Request proof of equipment certification from hardware vendors
Integrate supply chain management into the overall risk management framework
Inspect hardware for signs of tampering
Conduct a supply chain analysis and audit the existing supply chain
Have an inventory management system that includes asset tracking and secure disposal

Natural disasters are a threat to the supply chain, but they are not an adversarial threat. Adversarial threats arise from a human vector, such as malicious insiders, organized crime, and nation-states.

A legally enforceable purchase order with a hardware vendor will not protect your organization from hardware that has been tampered with once it left the vendor’s direct control. Hardware should be inspected for missing or broken seals, opened boxes, refurbished components, or counterfeit components.
33