Pocket Prep 18 Flashcards
(102 cards)
Client care representatives in your firm are now permitted to access customer accounts. For added protection, you'd like to build a feature that obscures a portion of the data when a representative reviews a customer's account. What type of data protection is your firm attempting to implement?
A. Encryption
B. Obfuscation
C. Tokenization
D. Masking
D. Masking
Explanation:
The organization is trying to deploy masking. Masking obscures data by displaying, for example, only the last four or five digits of a Social Security or credit card number. The data is incomplete without the blocked/removed content: the rest of the value appears to be present, but the user sees only asterisks or dots.
Tokenization is the process of removing data and placing a token in its place. The question describes part of the data remaining visible, so tokenization does not fit.
Encryption makes the data unreadable. It would be unusual to encrypt, for example, only the first part of a credit card number, so this does not fit either.
Obfuscation means to make data confusing or unclear; an attacker looking at obfuscated data would be left unable to interpret it. Encryption is one way to obscure data, and there are others. Obfuscation is not the answer here because the user sees asterisks or dots in place of part of the data, and that is masking, not obfuscation.
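As a minimal illustration (the function name and formatting below are invented for this example, not taken from any product), masking a card number in Python might look like this:

    def mask_card_number(card_number: str, visible_digits: int = 4) -> str:
        """Replace all but the last few digits with asterisks (masking)."""
        digits = card_number.replace(" ", "").replace("-", "")
        return "*" * (len(digits) - visible_digits) + digits[-visible_digits:]

    print(mask_card_number("4111 1111 1111 1234"))  # ************1234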
Which of the following types of testing focuses on software’s interfaces and the experience of the consumer?
A. Integration Testing
B. Regression Testing
C. Usability Testing
D. Unit Testing
C. Usability Testing
Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:
Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users' needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven't introduced bugs or broken functionality.
Non-functional testing evaluates the quality of the software and verifies that it provides necessary functionality not explicitly listed in the requirements. Load and stress testing, or verifying that sensitive data is properly secured and encrypted, are examples of non-functional testing.
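For example, a unit test checks one component in isolation. A minimal Python sketch (the apply_discount function is hypothetical, invented only to have something to test):

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        # Hypothetical component under test.
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_discount(self):
            # A unit test verifies a single function in isolation.
            self.assertEqual(apply_discount(100.0, 10), 90.0)

    if __name__ == "__main__":
        unittest.main()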
Which of the following is NOT one of the critical elements of a management plane?
A. Management
B. Orchestration
C. Scheduling
D. Monitoring
D. Monitoring
Explanation:
According to the CCSP, the three critical elements of a management plane are scheduling, orchestration, and management. Monitoring is not an element of the management plane.
Compliance with which of the following standards is OPTIONAL for cloud consumers and cloud service providers working in the relevant industry?
A. G-Cloud
B. PCI DSS
C. ISO/IEC 27017
D. FedRAMP
C. ISO/IEC 27017
Explanation:
Cloud service providers may have their environments verified against certain standards, including:
ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud. These standards are optional but considered best practices.
PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources. Compliance with these standards is mandatory for working with these governments.
Which of the following types of SOC reports provides high-level information about an organization’s controls intended for public dissemination?
A. SOC 1
B. SOC 3
C. SOC 2 Type II
D. SOC 2 Type I
B. SOC 3
Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:
SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization's financial stability.
SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, or Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.
SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.
Which stage of the IAM process relies heavily on logging and similar processes?
A. Identification
B. Authorization
C. Accountability
D. Authentication
C. Accountability
Explanation:
Identity and Access Management (IAM) services have four main practices, including:
Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user's actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
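As a sketch of the accountability practice, each user action can be written to a log with the actor, action, resource, and timestamp (the field names and resource path below are illustrative):

    import logging

    logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                        level=logging.INFO)

    def record_action(user: str, action: str, resource: str) -> None:
        # Accountability: who did what, to which resource, and when.
        logging.info("user=%s action=%s resource=%s", user, action, resource)

    record_action("jdoe", "READ", "storage://payroll/2024/summary.csv")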
Your organization wants to address baseline monitoring and compliance by limiting how long a host can remain in a non-compliant state. When the application is deployed again, the organization would like to decommission the old host and replace it with a new Virtual Machine (VM) built from the standard baseline image.
What functionality is described here?
A. Blockchain
B. Infrastructure as Code (IaC)
C. Virtual architecture
D. Immutable architecture
D. Immutable architecture
Explanation:
Immutable means unchanging over time or unable to be changed. Immutability is the preferred state for cloud infrastructure. In cloud settings, it is easy to decommission all the virtual infrastructure components used by an older version of the software and deploy new virtual infrastructure in their place. Immutable infrastructure solves the problem of systems drifting from their baseline settings over time: virtual machines are always started from a golden (baseline) image and replaced rather than patched.
With Infrastructure as Code (IaC), the infrastructure is no longer physical routers, switches, and servers; it is virtual routers, switches, and servers defined and deployed from code or templates. That could also be called a virtual architecture, although IaC is the more common term today. IaC can support immutable architecture, but it is not the functionality described in the question.
Blockchain technology has an immutable element: it is, or should be, impossible to alter the record of ownership, as with cryptocurrency. (Even so, the FBI has been able to recover stolen bitcoins and return them to their rightful owners.) Blockchain is not the infrastructure pattern described here.
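A minimal sketch of the immutable "replace, don't patch" idea, assuming hypothetical provision/terminate helpers rather than any real cloud API:

    GOLDEN_IMAGE = "baseline-image-v42"  # the standard baseline image

    def provision_vm(image_id: str) -> str:
        print(f"launching new VM from {image_id}")
        return "vm-new-001"

    def terminate_vm(vm_id: str) -> None:
        print(f"decommissioning {vm_id}")

    def redeploy_if_drifted(vm_id: str, is_compliant: bool) -> str:
        # Non-compliant hosts are never fixed in place; they are replaced
        # with a fresh VM built from the golden image.
        if is_compliant:
            return vm_id
        new_vm = provision_vm(GOLDEN_IMAGE)
        terminate_vm(vm_id)
        return new_vm

    print(redeploy_if_drifted("vm-old-007", is_compliant=False))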
Information Rights Management (IRM) is the tool that a large manufacturing company has decided to use for their classroom books. It is important for them to control who has access to their content. One of the features that they are most interested in is the ability to recall their books and replace them with up-to-date content, since their training is very technical and they want to ensure that only their customers have access to the books.
Which phase of the data lifecycle does IRM fit best into?
A. Archive phase
B. Use phase
C. Share phase
D. Create phase
B. Use phase
Explanation:
IRM fits best into the use phase. It controls who has access to content and allows control of copy, paste, print, and other features. It also allows content to be expired, taken out of use, or replaced. IRM is about controlling how the content is used by the customer.
The create phase does not fit the scenario because this phase is where the content is originally created by the authors. Once the data is created, it can be shared with the customers.
The share phase is not the primary phase of IRM. The exchange between the company and the user of the content is the share phase, but that is not where IRM focuses. The primary phase is the use phase because that is where the features fit.
The archive phase is incorrect because that phase is about long-term storage, whereas IRM controls apply while the user is accessing the content.
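To make the Use-phase controls concrete, an IRM policy attached to a document might capture permissions, expiry, and recall. The structure below is purely illustrative; real IRM products differ:

    from datetime import date

    irm_policy = {
        "document": "maintenance-manual-v3.pdf",
        "allowed_users": ["customer-123", "customer-456"],
        "permissions": {"view": True, "print": False, "copy_paste": False},
        "expires": date(2026, 1, 1).isoformat(),   # access ends on this date
        "recall_on_new_edition": True,             # pull the old book when a new one ships
    }

    print(irm_policy["permissions"])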
A cloud provider needs to ensure that the data of each tenant in their multitenant environment is only visible to authorized parties and not to the other tenants in the environment. Which of the following can the cloud provider implement to ensure this?
A. Network security groups (NSG)
B. Physical network segregation
C. Geofencing
D. Hypervisor tenant isolation
D. Hypervisor tenant isolation
Explanation:
In a cloud environment, physical network segregation is not possible unless it is a private cloud built that way. However, it is important for cloud providers to ensure separation and isolation between tenants in a multitenant cloud. To achieve this, the hypervisor is responsible for isolating tenants that share the same physical machines.
An NSG acts like a virtual Local Area Network (LAN) segment behind a firewall, which is beneficial to use. It is used to control traffic within a tenant or from the internet to that tenant, not between tenants.
Geofencing is used to control where a user can connect from. It does not isolate tenants from each other. Rather, it restricts access from countries that you do not expect access to come from.
Which of the following SOC duties involves coordinating with a team focused on a particular task?
A. Threat Detection
B. Threat Prevention
C. Incident Management
D. Quality Assurance
C. Incident Management
Explanation:
The security operations center (SOC) is responsible for managing an organization’s cybersecurity. Some of the key duties of the SOC include:
Threat Prevention: Threat prevention involves implementing processes and security controls designed to close potential attack vectors and security gaps before they can be exploited by an attacker.
Threat Detection: SOC analysts use Security Information and Event Management (SIEM) solutions and various other security tools to identify, triage, and investigate potential security incidents to detect real threats to the organization.
Incident Management: If an incident has occurred, the SOC may work with the incident response team (IRT) to contain, investigate, remediate, and recover from the identified incident.
Quality Assurance is not a core SOC responsibility.
Ocean is the information security manager for a new company. They work with the company to ensure that it is in compliance with the specific laws that the company is worried about. Which of the following would they and the company be the least concerned with?
A. Data location
B. Containers
C. Type of data
D. Multi-tenancy
B. Containers
Explanation:
Containers are a type of virtualized storage that does not present significant compliance concerns on its own.
For regulated customers, type of data, data location, and multi-tenancy are frequently the primary compliance concerns. GDPR is a good example of this scenario.
A cloud provider has the capability to use a large pool of resources for numerous client hosts and applications. They are able to offer scalability and on-demand self-service. Which technology makes all this possible?
A. Software defined networking
B. Virtual media
C. Virtualization
D. Guest operating systems
C. Virtualization
Explanation:
Without virtualization, cloud environments as we know them would not be possible, because cloud environments are built on virtualization technology. It is virtualization that allows cloud providers to leverage a pool of resources for many customers and to offer such scalability and on-demand self-service.
A Virtual Machine (VM) runs a guest operating system on top of a hypervisor on the host. This is just part of what virtualization allows.
Virtual media, such as virtual Hard Disk Drives (HDD) and virtual Solid State Drives (SSD), is another benefit of what virtualization allows.
Software Defined Networking (SDN) is an advance in networking technology that can be used in a virtualized environment, but it is not, by itself, what makes resource pooling, scalability, and on-demand self-service possible.
Halo, a cloud information security specialist, is working with the cloud data architect to design a secure environment for the corporation’s data in the cloud. They have decided, based on latency issues, that they are going to build a Storage Area Network (SAN) using Fibre Channel. Halo is working to identify the security mechanisms that need to be configured with the SAN.
Which of the following are security features that they should use to protect the storage controllers and all the sensitive data?
A. Authentication, LUN Masking, Transport Layer Security (TLS)
B. Authentication, Internet Protocol Security (IPSec), LUN masking
C. Authentication, LUN Masking, Secure Shell (SSH)
D. Kerberos, Internet Protocol Security (IPSec), LUN zoning
B. Authentication, Internet Protocol Security (IPSec), LUN masking
Explanation:
Security mechanisms that should be added to Fibre Channel include:
LUN Masking: Logical Unit Number (LUN) masking is another mechanism used to control access to storage devices on the SAN. LUN masking ensures that only authorized devices can access a particular LUN, helping to prevent unauthorized access or data theft.
Authentication: Fibre Channel supports several methods of authentication, including Challenge-Handshake Authentication Protocol (CHAP) and Remote Authentication Dial-In User Service (RADIUS). These protocols ensure that only authorized devices are allowed to connect to the SAN.
Encryption: Fibre Channel traffic can be encrypted using IPsec or other encryption protocols. Encryption helps to protect data in transit against eavesdropping or interception by unauthorized parties.
Zoning: Fibre Channel switches support the concept of zoning, which allows administrators to control which devices can communicate with each other on the SAN. Zoning can be based on port, WWN (World Wide Name), or a combination of both.
Auditing and logging: Fibre Channel switches and devices should be configured to generate logs and audit trails of all SAN activity. This can help identify potential security incidents or anomalies and provide a record of activity for compliance purposes.
Kerberos is not commonly used to authenticate to a Fibre Channel SAN; CHAP is more likely.
LUN masking is used, not "LUN zoning." The word zoning by itself would have been sufficient.
TLS and SSH are not commonly used for this encryption. TLS can be used if the traffic is being tunneled over IP, as with Fibre Channel over IP (FCIP).
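Conceptually, LUN masking is a mapping from each LUN to the initiator WWNs allowed to see it. A toy sketch (the WWNs and LUN names are made up):

    masking_view = {
        "lun-10": {"20:00:00:25:b5:aa:00:01"},              # finance host only
        "lun-20": {"20:00:00:25:b5:aa:00:02",
                   "20:00:00:25:b5:aa:00:03"},              # HR cluster
    }

    def can_access(initiator_wwn: str, lun: str) -> bool:
        # The storage controller only exposes a LUN to initiators in its view.
        return initiator_wwn in masking_view.get(lun, set())

    print(can_access("20:00:00:25:b5:aa:00:01", "lun-20"))  # False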
Frankie has been tasked with finding out how certain data keeps being leaked. She is analyzing the circumstances under which the data is being used and transmitted. What type of analysis is she doing as part of Data Loss Prevention (DLP)?
A. Contextual analysis
B. Data permanence
C. Data classification
D. Content analysis
A. Contextual analysis
Explanation:
Data Loss Prevention (DLP) is the set of tools, technologies, and policies that a business can use to protect its sensitive data from being sent to, or used by, the wrong people or in the wrong place.
Content analysis is when the DLP tools look for keywords, patterns, metadata, or anything else that identifies sensitive data, such as Social Security numbers or credit card numbers.
Contextual analysis is the analysis of the circumstances (context) in which data is being used. For example, an email being transmitted or received from an external system versus internal.
Data classification is understanding, labeling, and applying policies based on how sensitive data is.
Data permanence would be simply how long the data exists. It is not a formal term.
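A toy sketch of the difference, assuming a made-up internal domain: content analysis inspects the message body for sensitive patterns, while contextual analysis looks at the circumstances of the transfer (for example, whether the recipient is external).

    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # content: a sensitive pattern

    def content_flag(body: str) -> bool:
        return bool(SSN_PATTERN.search(body))

    def context_flag(recipient: str) -> bool:
        # Context: is the message leaving the organization?
        return not recipient.endswith("@example.com")

    body = "Employee SSN is 123-45-6789"
    print(content_flag(body))                      # True  (content analysis)
    print(context_flag("someone@gmail.com"))       # True  (contextual analysis)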
Mateo is working with the cloud provider that his business has chosen to provide Platform as a Service (PaaS) for their server-based needs. It is necessary to specify the Central Processing Unit (CPU) requirements to ensure that this solution works as they require. In which document would the CPU needs be specified?
A. Business Associate Agreement (BAA)
B. Service Level Agreement (SLA)
C. Master Services Agreement (MSA)
D. Privacy Level Agreement (PLA)
B. Service Level Agreement (SLA)
Explanation:
A Service Level Agreement (SLA) specifies the conditions of service that will be provided by the cloud provider, such as uptime or CPU needs.
The MSA is a document that covers the basic relationship between the two parties, in this case the customer and the cloud provider. It does not specify metrics such as CPU needs.
The BAA is found within HIPAA. It informs the cloud provider of the requirements to protect health data. In Europe, this would be called a Data Processing Agreement (DPA) under GDPR. More generically, this would be called a Privacy Level Agreement (PLA).
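As an illustration only (the field names are invented, not from any provider's SLA), the measurable conditions of service, including the CPU commitment, can be captured as structured data:

    sla = {
        "service": "PaaS application platform",
        "uptime_percent": 99.95,
        "vcpu_guaranteed": 8,                  # the CPU requirement from the scenario
        "support_response_minutes": 30,
    }

    assert sla["vcpu_guaranteed"] >= 8, "SLA CPU commitment not met"
    print(sla["uptime_percent"])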
An organization has a team working on an audit plan. They have just effectively defined all the objectives needed to set the groundwork for the audit plan. What is the NEXT step for this team to complete?
A. Perform the audit
B. Define scope
C. Review previous audits
D. Conduct market research
B. Define scope
Explanation:
Audit planning is made up of four main steps, which occur in the following order:
1. Define objectives
2. Define scope
3. Conduct the audit
4. Lessons learned and analysis
Which of the following events is likely to cause the initiation of a Disaster Recovery (DR) plan?
A. A failure in the supply chain for the manufacturing process
B. The loss of the Chief Executive Officer (CEO) and Chief Financial Officer (CFO) in a plane crash
C. A fire in the primary data center
D. The main Internet Service Provider (ISP) experiences a fiber cut
C. A fire in the primary data center
Explanation:
In NIST SP 800-34, NIST defines a Disaster Recovery Plan as a written plan for recovering one or more information systems at an alternate facility in response to a major hardware or software failure or the destruction of a facility, so the primary data center being destroyed by fire is the correct answer.
NIST defines a Business Continuity Plan as a written document with instructions or procedures that describe how an organization's mission/business processes will be sustained during and after a significant disruption. Because the question asks about a disaster, the fire is the better answer here: it has the potential to destroy the facility or at least damage the hardware and software in the data center.
If the ISP has a fiber cut, that could disrupt communications to the data center. If this happens, it is unlikely to require a move to an alternate site. This is a BC issue, not a disaster, at least according to the NIST definitions.
These NIST definitions work well for this exam. If you disagree with the definitions or use the terms in another way, that is fine; just know that this is how the exam looks at the terms.
If the CEO’s and CFO’s lives are lost, that is a sad event for their families and for the business. A succession plan should be created if this is a concern for a business.
A potential failure in the supply chain is something that needs to be managed. ISO/IEC 28000 is a useful document that begins that work. However, a DR plan is not needed for this. Perhaps a BC plan would be useful though.
While building their virtualized data center in a public cloud Infrastructure as a Service (IaaS), a real estate corporation operating in Canada knows that it must carefully protect all the data and personal information that it will be storing in the cloud. Since it is critical to protect the data in their possession, they are working to control access.
Which of the following is NOT a protection technique that they can use for their systems?
A. Privileged access
B. Standard configurations
C. Separation of duty
D. Least privilege
A. Privileged access
Explanation:
Privileged access must be strictly limited and should enforce least privilege and separation of duty. Therefore, it is not a virtualization system protection mechanism.
Least privilege means that any user should be given as little access as possible: access only to what they need, with only the permissions they require and nothing more. It is a great principle to pursue but difficult to achieve fully in practice.
Separation of duty is the idea of taking a task, breaking it down into specific steps, and dividing those steps between at least two people. Both people must perform their specific steps to accomplish the task. The purpose is to make fraud require collusion: someone would have to convince another person to help them commit fraud rather than being able to do it alone.
Standard configurations are agreed-upon baselines and aid in managing change, which provides protection for virtualization systems.
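A small sketch of the two principles above (the roles, users, and actions are invented for illustration):

    ROLE_PERMISSIONS = {
        "vm-operator": {"vm:start", "vm:stop"},                 # only what the job needs
        "vm-admin": {"vm:start", "vm:stop", "vm:delete"},
    }

    def is_allowed(role: str, action: str) -> bool:
        # Least privilege: an action is permitted only if the role includes it.
        return action in ROLE_PERMISSIONS.get(role, set())

    def approve_deletion(requester: str, approver: str) -> bool:
        # Separation of duty: the requester cannot also be the approver.
        return requester != approver

    print(is_allowed("vm-operator", "vm:delete"))   # False (least privilege)
    print(approve_deletion("alice", "alice"))       # False (requires a second person)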
A large social media company that relies on public Infrastructure as a Service (IaaS) for their virtual Data Center (vDC) had an outage. They were not locatable through Domain Name System (DNS) queries midafternoon one Thursday. In their virtual routers, a configuration had been altered incorrectly. What did they fail to manage properly?
A. Input validation
B. Service level management
C. User training
D. Change enablement practice
D. Change enablement practice
Explanation:
ITIL defines change enablement practice as the practice of ensuring that risks are properly assessed, authorizing changes to proceed, and managing a change schedule to maximize the number of successful service and product changes. This is what happened to Facebook/Instagram/WhatsApp/Meta. They have their own network, but the effect would have been the same using AWS as an IaaS. This is change management.
Service level management is defined in ITIL as the practice of setting clear business-based targets for service performance so that the delivery of a service can be properly assessed, monitored, and managed against these targets.
Input validation needs to be performed by software to ensure that the values entered by the users are correct. The main goal of input validation is to prevent the submission of incorrect or malicious data and ensure that the software functions as intended. By checking for errors or malicious input, input validation helps to increase the security and reliability of software.
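A minimal input-validation sketch in Python (the parse_port function is a made-up example of validating one field before it is used):

    def parse_port(value: str) -> int:
        # Reject anything that is not a well-formed TCP/UDP port number.
        if not value.isdigit():
            raise ValueError("port must be numeric")
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError("port out of range")
        return port

    print(parse_port("8080"))          # 8080
    # parse_port("8080; reboot")       # raises ValueError instead of passing bad input through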
User training can help reduce the likelihood of errors occurring while using the software. By teaching users how to properly use the software, they become more aware of potential mistakes that may occur and can take measures to prevent them. This can help reduce the occurrence of mistakes, leading to less downtime, more accurate work, and improved outcomes.
Software Configuration Management (SCM) is widely used in all software development environments today. There are many practices that are part of a secure SCM environment. What are some of these practices?
A. Version control, build automation, release management, issue tracking
B. Secure Software Development Lifecycle, build automation, release management, issue tracking
C. Version control, build automation, Secure Software Development Lifecycle
D. Version control, release management, testing and tracking tools
A. Version control, build automation, release management, issue tracking
Explanation:
Software Configuration Management (SCM) has many practices. Some common activities include:
Version control: This allows developers to track changes to code, collaborate, and revert changes if needed.
Build automation: This automates the compiling of the source code into executable software. Jenkins and Travis CI are common build tools today.
Release management: This helps to automate the process of deploying software and ensures releases are tested and approved.
Issue tracking: This is used to track and manage bugs, feature requests, and other issues. Jira, Trello, and Asana are common tools.
Secure Software Development Lifecycle (SSDLC) is a related yet distinct process. SSDLC is about developing software with security in mind. SCM can support the SSDLC process. SSDLC would not be a practice of SCM.
Testing and tracking tools are used within SCM, but they are tools rather than the SCM practices themselves.
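As a loose illustration of how these practices fit together (the data structure is invented, not from any SCM tool), a release-management gate might only promote a build when the automated build and tests succeeded and no blocking issues remain open:

    build = {
        "version": "2.4.1",           # from version control tagging
        "build_ok": True,             # from build automation
        "tests_passed": True,
        "open_blocking_issues": 0,    # from issue tracking
    }

    def ready_to_release(b: dict) -> bool:
        return b["build_ok"] and b["tests_passed"] and b["open_blocking_issues"] == 0

    print(ready_to_release(build))    # True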
Kathleen works at a large financial institution that has a growing software development group. They have a desire to "shift left" in their thinking as they build their Platform as a Service (PaaS) environment. Development and Operations (DevOps) are now working together to build and deploy a strong and secure cloud environment that will contain a Software as a Service (SaaS) product to be used by the financial analysts. To ensure the software can withstand the attack attempts that will surely happen, they need to hypothesize what could happen so that they can do their best to prevent it.
What would you recommend that they do?
A. Threat modeling using both Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD) and vulnerability assessment
B. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
C. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and open box testing
D. Threat modeling using both Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD) and penetration testing
B. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
Explanation:
Threat modeling is the process of finding threats and risks that face an application or system once it has gone live. This is an ongoing process that will change as the risk landscape changes and is, therefore, an activity that is never fully completed. DREAD and STRIDE, which were both conceptualized by Microsoft, are two prominent models recommended by OWASP. Together, they look at what could happen (STRIDE) and how bad it could be (DREAD).
Threat modeling techniques include STRIDE, DREAD, PASTA, ATASM, TRIKE, and a few others.
Open box testing is a type of software testing where the code is known to the tester. It is not threat modeling; it is an actual test of existing code. Threat modeling is predictive, trying to understand how threats could be realized in the future.
The same is true with vulnerability assessment and penetration testing. They are looking for problems that exist, not predicting what could happen in the future.
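To show how the two models combine, STRIDE names the kind of threat and DREAD rates how bad it would be. A small sketch of DREAD scoring (the 1-10 scale and the sample scores are illustrative):

    def dread_score(damage: int, reproducibility: int, exploitability: int,
                    affected_users: int, discoverability: int) -> float:
        # Average the five DREAD category scores to rank the threat.
        return (damage + reproducibility + exploitability +
                affected_users + discoverability) / 5

    # Example: a spoofing threat identified with STRIDE, then rated with DREAD.
    print(dread_score(8, 6, 7, 9, 5))   # 7.0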
Which of the following attributes of evidence deals with the fact that it is real and relevant to the investigation?
A. Convincing
B. Admissible
C. Authentic
D. Complete
C. Authentic
Explanation:
Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:
Authentic: The evidence must be real and relevant to the incident being investigated.
Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).
Which of the following risks associated with PaaS environments includes hypervisor attacks and VM escapes?
A. Interoperability Issues
B. Persistent Backdoors
C. Virtualization
D. Resource Sharing
C. Virtualization
Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:
Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer's software may not be compatible with the provider's environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e., backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
At which stage of the process of developing a BCP should an organization plan out personnel and resource requirements?
A. Implementation
B. Testing
C. Auditing
D. Creation
A. Implementation
Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:
Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before them. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.
Auditing is not one of the three stages of developing a BCP/DRP.