Pocket Prep 13 Flashcards

1
Q

Eila works for a large government contractor. As their lead information security professional working on the business case for their potential move to the cloud, she knows that it is critical to define and defend her reasons for moving to the cloud. Of the following statements, which is the MOST accurate?

A. Cloud platforms offer increased scalability and performance
B. There are no security risks associated with moving to a cloud environment
C. Cloud platforms are always less expensive than on-prem solutions
D. Traditional data centers and cloud environments have the exact same risks

A

A. Cloud platforms offer increased scalability and performance

Explanation:
Cloud environments are attractive to organizations because they offer increased scalability and performance.

While it’s possible that moving to the cloud can be less expensive than traditional data centers, that is not always the case. Sometimes cloud platforms can come with hidden costs that weren’t initially expected. Cloud platforms come with their own set of security risks and, while some are the same as the risks you’d see in a traditional data center, some are different as well.

2
Q

Multivendor network connectivity is MOST related to which of the following risk considerations of cloud computing?

A. General Technology Risks
B. Data Center Location
C. Downtime
D. Compliance

A

C. Downtime

Explanation:
Cloud computing risks can depend on the cloud service model used. Some risks common to all cloud services include:

CSP Data Center Location: The location of a CSP’s data center may impact its exposure to natural disasters or the risk of regulatory issues. Cloud customers should verify that a CSP’s locations are resilient against applicable natural disasters and consider potential regulatory issues.
Downtime: If a CSP’s network provider is down, then its services are unavailable to its customers. CSPs should use multivendor network connectivity to improve network resiliency.
Compliance: Certain types of data are protected by law and may have mandatory security controls or jurisdictional limitations. These restrictions may affect the choice of a cloud service model or CSP.
General Technology Risks: CSPs are a big target for attackers, who might exploit vulnerabilities or design flaws to attack CSPs and their customers.
3
Q

Which of the following terms is LEAST related to the others?

A. HA
B. Resiliency
C. IaC
D. Clustering

A

C. IaC

Explanation:
Clustering is commonly used as part of high availability (HA) schemes for resiliency and redundancy. IaC (Infrastructure as Code), by contrast, is used for configuration management.

4
Q

Quinn has been hired as the new information security manager at a regional hospital. He has been reviewing the hospital’s information security policies. In reviewing the data handling policies, he has discovered that it is necessary to redefine what data would be considered sensitive and require protection under the Health Insurance Portability and Accountability Act (HIPAA).

Of the following, which is considered sensitive data that must be protected as Protected Health Information (PHI)?

A. Current street address
B. Political views
C. Passport number
D. Demographic information

A

D. Demographic information

Explanation:
Protected Health Information (PHI) covers items such as demographic information, medical history, physical and mental health information, lab results, physician notes, and other health-related items.

Passport numbers, political views, and current street addresses would be considered Personally Identifiable Information (PII) rather than PHI.

5
Q

You see a value like XXXX XXXX XXXX 1234 in the credit card column of a database table. Which of the following data security techniques was used?

A. Anonymization
B. Encryption
C. Hashing
D. Masking

A

D. Masking

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 (the Secure Hash Standard) is the US government standard that specifies approved hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
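The masking and hashing techniques above can be sketched in a few lines of Python (a minimal illustration, not a production implementation; the card number is a fabricated test value):

```python
import hashlib

def mask_card(card_number: str) -> str:
    """Masking: replace all but the last four digits with non-sensitive characters."""
    digits = card_number.replace(" ", "")
    return "XXXX XXXX XXXX " + digits[-4:]

def hash_value(value: str) -> str:
    """Hashing: a one-way function; the same input always yields the same digest."""
    return hashlib.sha256(value.encode()).hexdigest()

card = "4111 1111 1111 1234"  # fabricated test number
print(mask_card(card))                       # XXXX XXXX XXXX 1234
print(hash_value(card) == hash_value(card))  # True: deterministic, but irreversible
```

Note that masking is display-level protection only; the underlying value must still be stored securely (for example, tokenized or encrypted).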
6
Q

Any information relating to past, present, or future medical status that can be tied to a specific individual is known as which of the following?

A. Gramm Leach Bliley Act (GLBA)
B. Payment Card Industry (PCI) information
C. Protected Health Information (PHI)
D. Health Insurance Portability and Accountability Act

A

C. Protected Health Information (PHI)

Explanation:
Protected Health Information (PHI) is a subset of Personally Identifiable Information (PII). PHI applies to any entity defined under the U.S. Health Insurance Portability and Accountability Act (HIPAA). Any information that can be tied to a unique individual as it relates to their past, current, or future health status is considered PHI.

The Payment Card Industry defines the Data Security Standard, known in full as PCI DSS, which demands that payment card information be protected.

GLBA is a U.S. act that requires the personal data of financial institutions' customers to be protected. It is tied to Sarbanes-Oxley (SOX).

7
Q

Which of the following regulations deals with law enforcement’s access to data that may be located in data centers in other jurisdictions?

A. GLBA
B. SCA
C. US CLOUD Act
D. SOX

A

C. US CLOUD Act

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests made to cloud providers. US law enforcement and its counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens’ data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes Oxley (SOX): SOX is a US regulation that applies to publicly-traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
8
Q

Structured and unstructured storage pertain to which of the three cloud service models?

A. DataBase as a Service (DBaaS)
B. Infrastructure as a Service (IaaS)
C. Platform as a Service (PaaS)
D. Software as a Service (SaaS)

A

C. Platform as a Service (PaaS)

Explanation:
Each cloud service model uses a different method of storage as shown below:

Platform as a Service (PaaS) uses the terms structured and unstructured to refer to different storage types.
Infrastructure as a Service (IaaS) uses the terms volume and object to refer to different storage types.
Software as a Service (SaaS) uses content and file storage and information storage and management to refer to different storage types.

The use of these terms originates with the Cloud Security Alliance (CSA), and its guidance document is worth reading. As of the time this question was written in 2022, the CSA had published version 4, with version 5 expected soon.

DBaaS is not one of the three cloud service models.

9
Q

Which of the following concepts in IAM is MOST relevant if an organization has a close partner that they share access to data, systems, and software with?

A. Multi-Factor Authentication
B. Single Sign-On
C. Federated Identity
D. Identity Providers

A

C. Federated Identity

Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
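The secrets management point above (that secrets should be randomly generated rather than hand-picked) can be illustrated with Python's standard `secrets` module; the key length here is illustrative:

```python
import secrets

# Generate a random API key and a numeric one-time password, as a
# secrets manager might at provisioning time. Values like these should
# be stored securely, never hard-coded in source.
api_key = secrets.token_urlsafe(32)  # 32 random bytes, URL-safe encoded
otp = "".join(secrets.choice("0123456789") for _ in range(6))

print(len(api_key), otp.isdigit())
```

The `secrets` module draws from the operating system's cryptographically secure random source, unlike the general-purpose `random` module, which must never be used for credentials.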
10
Q

HIPAA protects which of the following types of private data?

A. Payment Data
B. Protected Health Information
C. Personally Identifiable Information
D. Contractual Private Data

A

B. Protected Health Information

Explanation:
Private data can be classified into a few different categories, including:

Personally Identifiable Information (PII): PII is data that can be used to uniquely identify an individual. Many laws, such as the GDPR and CCPA/CPRA, provide protection for PII.
Protected Health Information (PHI): PHI includes sensitive medical data collected regarding patients by healthcare providers. In the United States, HIPAA regulates the collection, use, and protection of PHI.
Payment Data: Payment data includes sensitive information used to make payments, including credit and debit card numbers, bank account numbers, etc. This information is protected under the Payment Card Industry Data Security Standard (PCI DSS).
Contractual Private Data: Contractual private data is sensitive data that is protected under a contract rather than a law or regulation. For example, intellectual property (IP) covered under a non-disclosure agreement (NDA) is contractual private data.
11
Q

Rogelio is working with the deployment team to deploy 50 new servers as virtual machines (VMs). The servers that he will be deploying will be a combination of different Operating Systems (OS) and Databases (DB). When deploying these images, it is critical to make sure…

A. That the golden images are always used for each deployment
B. That the VMs are updated and patched as soon as they are deployed
C. That the VM images are pulled from a trusted external source
D. That the golden images are used and then patched as soon as it is deployed

A

A. That the golden images are always used for each deployment

Explanation:
The golden image is the current, up-to-date image that is ready for deployment into production. If an image needs patching, it should be patched offline, and the new version then becomes the current golden image. Patching servers after deployment is not the best idea; patching the image offline is the advised path to take.

The golden image should be built within the business, not pulled from an external source, although there are exceptions. It is critical to know the source of the image (IT or security) and to make sure that it is being maintained and patched on a regular basis.

12
Q

Cloud environments call for high availability and resiliency. What can be done to ensure that there is no downtime?

A. Create backups of the most important servers in the environment
B. Ensure that there are no single points of failure
C. Only perform maintenance a couple times a year
D. Only perform updates and upgrades during non-business hours

A

B. Ensure that there are no single points of failure

Explanation:
Many cloud customers expect their systems to be available at all times. To maintain high availability, it’s critical to ensure that there are not any single points of failure. While it’s good practice to perform updates and upgrades outside a business’ normal operating hours, many organizations today have locations across the globe and operate 24 hours a day. This means that downtime at any time is going to be unacceptable. Cloud providers must find a way to perform updates and upgrades without causing any downtime.

Backing up systems is very important, but all systems must be backed up, not just a select few.

Maintenance can’t be scheduled only a couple times a year. It must be done whenever necessary, so it’s important to be able to do the maintenance without causing any downtime to the customer. Updates and upgrades during non-business hours are a little difficult if this is a global company. There are ways in the cloud to do upgrades in a way that does not cause the customer downtime. Orchestration is a good tool to begin that discussion.

13
Q

A large consulting firm has a hybrid cloud environment. They have their own private cloud that they manage on their premises, and they use a large public cloud provider for some of their Platform and Software as a Service (PaaS & SaaS) needs. Their Security Operations Center (SOC) has been processing a few high priority Indicators of Compromise (IoC) that appear to point to a live incident.

For their response, what should they do?

A. Reconnaissance, Delivery, Exploitation
B. Observe, Orient, Decide, Act
C. Weaponization, Delivery, Exploitation
D. Reconnaissance, Execution, Evasion, Collection

A

B. Observe, Orient, Decide, Act

Explanation:
The OODA loop is Observe, Orient, Decide and Act. This is a common incident response concept. The OODA loop is iterative, meaning that after completing one cycle, individuals continuously loop back to the beginning to gather new information, reassess the situation, and make further decisions and actions. The loop emphasizes the importance of speed, adaptability, and learning from feedback to maintain a competitive advantage and effectively respond to dynamic and uncertain situations.

The other three answer options come from steps in cyber kill chains: the Lockheed Martin Cyber Kill Chain and the MITRE ATT&CK framework. Kill chains describe the path bad actors take in their attacks and are good to be familiar with. In their entirety, they are as follows:

The Lockheed Martin Kill Chain is a comprehensive cybersecurity strategy that helps organizations identify and prevent advanced cyber attacks at various stages of the attack process. The concept is based on the idea of a chain, where each stage represents a link in the chain that can be broken or disrupted, effectively stopping the cyber attack from being successful. The stages are: Reconnaissance, Weaponization, Delivery, Exploitation, Installation, Command & Control, and Actions on Objectives.
The MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework is a comprehensive knowledge base that describes the various Tactics, Techniques, and Procedures (TTPs) used by adversaries during cyberattacks. It provides a structured and standardized way of understanding and categorizing the different stages of an attack. Its enterprise tactics, in order, are: Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact.
14
Q

Piotr is the cloud administrator that is setting up several servers for his corporation’s software development projects. He has been selecting the servers that they need based on the number of Central Processing Units (CPU) and the amount of Random Access Memory (RAM) that they expect these servers to need.

In a cloud environment, these options can be described as which of the following?

A. Storage parameters
B. Reservations
C. Compute parameters
D. Network parameters

A

C. Compute parameters

Explanation:
Three fundamental elements are needed to build a cloud environment: compute, storage, and network. The compute parameters, or processing power, of a cloud environment are made up of the number of CPUs and the amount of RAM in the system or environment.

The storage parameters would include the drive type (HDD or SSD) and the amount of space. They would also include how often the data will be accessed, to ensure that bandwidth and CPU are sufficient.

Network parameters would include the number of bits per second needed to move the data back and forth from the clients to the servers as well as the uptime that the connection needs.

Reservations are the amount of CPU, RAM, network, etc. that the corporation believes is the minimum it will need for this environment.

15
Q

A covert government agency has hired highly skilled software developers to create a tool to infiltrate and control the power grid of an enemy state. The software is designed to slowly cause damage to the programmable logic computers (PLC) that control the physical systems of the power station. The software is also designed to send false information to the monitoring devices to reduce the chance that the damage will be noticed until it is too late.

What type of threat is this?

A. Denial of Service (DoS) attack
B. Command injection attack
C. Malicious insider
D. Advanced Persistent Threat (APT)

A

D. Advanced Persistent Threat (APT)

Explanation:
This scenario describes an Advanced Persistent Threat (APT): a well-resourced, highly skilled attacker, often state-sponsored, that gains long-term, stealthy access to a target. The combination of custom tooling, targeting of PLCs, and falsified monitoring data to delay detection closely mirrors the Stuxnet attack, a classic APT example.

A DoS attack targets availability, command injection is a specific application attack technique, and a malicious insider operates from within the victim organization; none of these fits the scenario as well as an APT.
16
Q

Baird is responsible for vendor management at his office. He works for a large bank that relies on several vendors for different services at different times. This includes a public cloud provider for their Infrastructure and Platform as a Service (IaaS & PaaS) deployments. He has learned that vendor management can be both difficult and fulfilling.

What international standard can he use to possibly make things easier?

A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 17788
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27002
C. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27036
D. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27050

A

C. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27036

Explanation:
ISO/IEC 27036 is a set of international standards that provides guidance on information security for supplier relationships. It focuses on establishing and maintaining secure relationships between organizations and their suppliers, ensuring the protection of information assets throughout the supply chain. It may not make things easier, but then again it might.

ISO/IEC 27002, also known as ISO 27002:2013 or simply ISO 27K2, is an international standard that provides guidelines and best practices for information security management. It is a part of the ISO/IEC 27000 series, which collectively defines the framework for implementing an Information Security Management System (ISMS).

ISO/IEC 17788, also known as ISO 17788:2014, is an international standard that provides guidelines and definitions for cloud computing. It aims to establish a common understanding of cloud computing concepts, terminology, and models, facilitating communication and interoperability among different stakeholders involved in cloud-related activities.

ISO/IEC 27050 is an international standard that provides guidelines and best practices for electronic Discovery (e-Discovery).

17
Q

Which of the following is NOT one of the main risks that needs to be assessed during the Business Impact Assessment (BIA) phase of developing a Disaster Recovery (DR) plan?

A. Migration of services to the alternate site
B. Legal and contractual issues from failures
C. Load capacity at the disaster recovery site
D. Budgetary constraints applied by management

A

D. Budgetary constraints applied by management

Explanation:
As with any new system or plan being implemented, it’s important to assess the risks of the changes. Budgetary constraints are not a main risk when developing a DR plan.

The main risks associated with developing a BCDR plan include the load capacity at the BCDR site, migration of services, and legal or contractual issues.

18
Q

Jada is currently vetting the tokenization process of her organization’s cloud provider. They are using this tokenization process to protect payment card data that will be tied to their own internally created application. What is one risk that Jada should ensure is limited during the tokenization process?

A. Vendor lock-in
B. Service Level Agreement (SLA) modifications
C. Price changes
D. File type changes

A

A. Vendor lock-in

Explanation:
Vendor lock-in is a scenario in which a cloud customer is tied and dependent on one cloud provider without the ability to move to another provider. Cloud customers should ensure that anything done with the cloud provider will not cause this type of vendor lock-in. If there is anything in how the tokenization is performed that locks them into that format after they adapt their internal application, it could prevent them moving easily to a different vendor in the future if needed.

Price changes are annoying, but they are a financial risk, not a security risk. The focus here is information security.

SLA modifications can be annoying or helpful, depending on what is being modified, why, and how. So they are not as critical a risk as vendor lock-in.

File type changes could be a problem somewhere, but not here. The lock-in risk is not a change of the data's file type; it is how the data is converted to a token and back again.

19
Q

Traditional encryption methods may become obsolete as the cloud’s computing power and innovative technology improve optimization issues. What kind of advanced technology is potentially capable of defeating today’s encryption methods?

A. Quantum computing
B. Blockchain
C. Artificial intelligence
D. Machine learning

A

A. Quantum computing

Explanation:
Quantum computing is capable of solving problems that traditional computers are incapable of solving. When quantum computing becomes widely accessible to the general public, it will almost certainly be via the cloud due to the substantial processing resources necessary to do quantum calculations.

A side note: The encryption we have today will likely be broken, especially algorithms such as RSA and Diffie-Hellman. NIST began a competition in 2016 to get ahead of this and design encryption algorithms that can be used safely in the age of quantum computers. For more information, refer to NIST's Computer Security Resource Center (CSRC) website and look for post-quantum cryptography and post-quantum cryptography standardization.

Machine learning gives computers the ability to process large amounts of data and turn it into useful information, for example by helping us verify a hypothesis or by surfacing the issues we need to address.

Machine learning is arguably a subset of Artificial Intelligence (AI). We keep making technological advances that bring us closer to true AI: robots that can navigate terrain on their own, and ChatGPT, which can answer questions as if it were thinking on its own rather than simply citing or quoting a source.

Blockchains give us the ability to track something, such as cryptocurrency, with an immutable or unchangeable record.

20
Q

Dezso and his team are planning on moving to the cloud in a Platform as a Service (PaaS) implementation. As they evaluate the cloud vendors they have to choose from, they are concerned about vendor lock-in. What would cause vendor lock-in?

A. Overly expensive hardware
B. Proprietary requirements
C. Poorly written Service Level Agreements (SLA)
D. Undocumented software

A

B. Proprietary requirements

Explanation:
Vendor lock-in occurs when an organization is unable to leave the vendor. The most common reason for vendor lock-in would be proprietary formats for how data is stored. It is possible that some consider contracts that prevent a customer from leaving to be vendor lock-in as well. The proprietary requirements make it very expensive, difficult, and burdensome to move to a new provider.

Undocumented software occurs all the time. The biggest problem with that is that it is hard to understand how it works.

Poorly written SLAs would not cause lock-in. They are a problem. The SLAs specify the level of service that the customer can and should expect to receive from the cloud provider. If they are not well defined, the customer may not get the service they need, such as enough bandwidth.

Overly expensive hardware does not cause lock-in. It might lock money into the wrong products, but that is not vendor lock-in. That’s poor financial management.

21
Q

An information security manager is weighing their options for protecting the organization’s external-facing applications from SQL injection, cross-site scripting, and cross-site forgery attacks. What type of solution has the IT manager selected to protect the external-facing applications?

A. eXtensible Markup Language (XML) gateway
B. Web Application Firewall (WAF)
C. Intrusion Prevention System (IPS)
D. Application Programming Interface (API) gateway

A

B. Web Application Firewall (WAF)

Explanation:
A Web Application Firewall (WAF) specifically addresses attacks on applications and external services. A WAF can assist in defending against SQL injection, cross-site scripting (XSS), and Cross-Site Request Forgery (CSRF) attacks.

API gateways analyze and monitor SOAP and REST traffic, including XML and JavaScript Object Notation (JSON).

XML gateways focus on XML traffic.

An IPS watches traffic for intrusions. It would not see XSS or CSRF attacks within the web traffic.

22
Q

A cloud information security manager is building the policies and associated documents for handling cloud assets. She is currently detailing how assets will be understood or listed so that access can be controlled, alerts can be created, and billing can be tracked. What tool allows for this?

A. Identifier
B. Key
C. Tags
D. Value

A

C. Tags

Explanation:
Tags are pervasive in cloud deployments. It is crucial that the corporation builds a plan for how to tag assets; if tagging is not done consistently, it is not helpful. A tag is made up of two pieces: a key (or name) and a value. Key here does not mean a cryptographic key for encryption and decryption; it is simply the tag's name.

You can think of the tag as a type of identifier, but the tool needed to manage assets is called a tag.
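The key/value structure of tags can be sketched as follows (the asset names and tag keys are hypothetical; each provider's SDK exposes tagging through its own API):

```python
# Each tag pairs a key (the name) with a value. Consistent keys make it
# possible to control access, create alerts, and track billing per tag.
assets = {
    "vm-web-01": {"Environment": "production", "CostCenter": "1042"},
    "vm-db-01": {"Environment": "staging", "CostCenter": "1042"},
}

def assets_by_tag(inventory: dict, key: str, value: str) -> list:
    """Return the assets whose tag `key` matches `value` (e.g., for a billing report)."""
    return [name for name, tags in inventory.items() if tags.get(key) == value]

print(assets_by_tag(assets, "CostCenter", "1042"))       # both assets share a cost center
print(assets_by_tag(assets, "Environment", "production"))
```

The point of the plan mentioned above is to standardize the set of keys (Environment, CostCenter, Owner, and so on) so that every team tags assets the same way.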

23
Q

Cloud providers that are at tier 3 must have multiple and independent power feeds to ensure redundancy. What else is needed in case of a power failure on one of the power feeds?

A. Third power feed and a generator
B. Generator and second power feed
C. Second power feed and Uninterruptible Power Supply (UPS)
D. Generator and Uninterruptible Power Supply (UPS)

A

D. Generator and Uninterruptible Power Supply (UPS)

Explanation:
Cloud providers will need to have multiple independent power feeds in case a power feed goes down. In addition, they will also typically have a generator or battery backup (UPS) to serve in the meantime when a power feed goes out.

The answers that contain “second power feed” are not correct because the question's word “multiple” already covers that. It is not necessary to have a third power feed; it may not be a bad idea, but it is not required.

24
Q

Simulations and tabletop exercises are part of which stage of developing a BCP?

A. Auditing
B. Implementation
C. Testing
D. Creation

A

C. Testing

Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:

Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before those applications. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.

Auditing is not one of the three stages of developing a BCP/DRP.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

Which of the following relates to an organization’s efforts to operate its cloud infrastructure in a way that complies with applicable laws and regulations?

A. Security
B. Auditability
C. Governance
D. Privacy

A

C. Governance

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

Which of the following organizations publishes security standards applicable to any systems used by the federal government and its contractors?

A. Service Organization Controls (SOC)
B. National Institute of Standards and Technology (NIST)
C. International Standards Organization (ISO)
D. Information Systems Audit and Control Association (ISACA)

A

B. National Institute of Standards and Technology (NIST)

Explanation:
The National Institute of Standards and Technology (NIST) is part of the United States government and is responsible for publishing security standards applicable to any systems used by the federal government and its contractors, although the standards are freely available for anyone to use.

SOC is the type of audit report that results from SSAE 16/18 or ISAE 3402 audits. ISACA is the association behind the CISM and CISA certifications; it is fundamentally an organization of IT auditors, although it has expanded greatly over the years. ISO is the international body that creates standards for the world to use.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

Which of the Trust Services principles must be included in a Service Organization Controls (SOC) 2 audit?

A. Privacy
B. Security
C. Availability
D. Confidentiality

A

B. Security

Explanation:
The Trust Services Criteria from the American Institute of Certified Public Accountants (AICPA) for the Service Organization Controls (SOC) 2 audit report are made up of five key principles: Availability, Confidentiality, Processing integrity, Privacy, and Security. Security is always required as part of a SOC 2 audit; the other four principles are optional.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

Which of the following is a strategy for maintaining operations during a business-disrupting event?

A. Disaster recovery plan
B. Operational continuity plan
C. Business continuity plan
D. Ongoing operations plan

A

C. Business continuity plan

Explanation:
A business continuity plan is a strategy for maintaining operations during a business-disrupting event. A disaster recovery plan is a strategy for restoring normal operations after such an event.

Ongoing operations and operational continuity plans are fabricated terms.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

Rashid has been working with his customer to understand the Indicators of Compromise (IoC) that they have seen within their Security Information and Event Manager (SIEM). The logs show that a bad actor infiltrated the organization through a phishing email. Once in, the bad actor traversed the network until they gained access to a firewall. From the firewall, the bad actor assumed the role that the firewall had to access the database and then copied the database.

This is an example of which type of threat?

A. Data breach
B. Account hijacking
C. Command injection
D. Advanced persistent threat (APT)

A

A. Data breach

Explanation:
A data breach occurs when data is leaked or stolen, either intentionally or unintentionally. This is not an Advanced Persistent Threat (APT); an APT involves a high level of skill and persistence from bad actors, who are often sponsored by one nation state to attack another.

Account hijacking is only a step along the way, occurring when the bad actor assumed the role that the firewall had to access the database. The whole attack was for the purpose of stealing the data, which makes it a data breach.

Command injection occurs when a bad actor types a command into a field that is interpreted by the server. This is similar to an SQL injection.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

A cloud data center is being built by a new Cloud Service Provider (CSP). The CSP wants to build a data center that has a level of resilience that will classify it as a Tier III. At which tier is it expected to add generators to back up the power supply?

A. Tier I
B. Tier III
C. Tier IV
D. Tier II

A

A. Tier I

Explanation:
Generators are part of the requirements from the lowest level, Tier I, upward.

Tier II and above also require generators. Tiers I and II also require Uninterruptible Power Supply (UPS) units.

Tier III requires a redundant distribution path for the data.

Tier IV requires several independent and physically isolated power supplies.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

An organization is looking to balance concerns about data security with the desire to leverage the scalability and cost savings of the cloud. Which of the following cloud models is the BEST choice for this?

A. Private Cloud
B. Hybrid Cloud
C. Public Cloud
D. Community Cloud

A

B. Hybrid Cloud

Explanation:
Cloud services are available under a few different deployment models, including:

Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them. For example, sensitive data can be stored on the private cloud, while less-sensitive applications can take advantage of the benefits of the public cloud.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

A cloud provider needs to ensure that the data of each tenant in their multitenant environment is only visible to authorized parties and not to the other tenants in the environment. Which of the following can the cloud provider implement to ensure this?

A. Network security groups (NSG)
B. Geofencing
C. Physical network segregation
D. Hypervisor tenant isolation

A

D. Hypervisor tenant isolation

Explanation:
In a cloud environment, physical network segregation is generally not possible unless a private cloud is built that way. It is nonetheless important for cloud providers to ensure separation and isolation between tenants in a multitenant cloud. To achieve this, the hypervisor handles tenant isolation within each physical machine.

An NSG acts like a firewall for a virtual Local Area Network (LAN) segment, which is beneficial to use. It is used to control traffic within a tenant or from the internet to that tenant, not between tenants.

Geofencing is used to control where a user can connect from. It does not isolate tenants from each other. Rather, it restricts access from countries that you do not expect access to come from.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

Leodis has been working on the setup of a new application. They have been trying to decide how to determine who the users are and what permissions they should be given, if any. There are several protocols available to make this happen in a cloud environment. Which protocol allows the communication of the users’ permissions?

A. OAuth (Open Authorization)
B. Web Services Federation (WS-Federation)
C. Kerberos
D. Open Identification (OpenID)

A

A. OAuth (Open Authorization)

Explanation:
Open Authorization (OAuth) is an open standard protocol that allows secure authorization and delegation of user permissions between different applications or services. It provides a framework for users to grant limited access to their resources on one website or application to another website or application without sharing their login credentials.

OpenID is an open standard and decentralized authentication protocol that allows users to authenticate themselves on multiple websites or applications using a single set of credentials. It provides a convenient and secure way for users to log in to various websites without the need to create and remember separate usernames and passwords for each site.

Kerberos is a network authentication protocol that provides secure authentication for client-server communication over an insecure network. It was developed by MIT and has become an industry-standard protocol for authentication in many systems and applications. This is used, or has been used, for LAN environments, not the cloud.

Web Services Federation (WS-Federation) is an industry-standard protocol that provides a framework for identity federation and Single Sign-On (SSO) across different web services and security domains. It is based on XML and relies on other web service standards, such as Simple Object Access Protocol (SOAP), to enable secure communication and identity exchange between participating entities.
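As a rough illustration of OAuth-style delegation (not the actual protocol messages), the client holds a token scoped to specific permissions rather than the user's credentials. The token format and scope names below are invented:

```python
# Toy sketch of delegated authorization: the application never sees the
# user's password; it presents a token granted a limited set of scopes.
# Token values and scope strings are hypothetical, not real OAuth wire data.
tokens = {
    "tok-abc": {"user": "leodis", "scopes": {"photos:read"}},
}

def authorize(token, required_scope):
    """Allow the action only if the token exists and carries the scope."""
    grant = tokens.get(token)
    return grant is not None and required_scope in grant["scopes"]

print(authorize("tok-abc", "photos:read"))    # True: permission was delegated
print(authorize("tok-abc", "photos:delete"))  # False: scope never granted
```

The point of the sketch is the permission model: access is decided by the scopes attached to the token, which is exactly the "communication of the users' permissions" the question describes.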

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

Defining clear, measurable, and usable metrics is a core component of which of the following operational controls and standards?

A. Continuity Management
B. Continual Service Improvement Management
C. Information Security Management
D. Change Management

A

B. Continual Service Improvement Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and the potential for improvement.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

If an application accepts XML directly or XML uploads, especially from untrusted sources, or inserts untrusted data into XML documents, which is then parsed by an XML processor, it is susceptible to which attack?

A. Security misconfiguration
B. Cross-site scripting
C. Injection
D. Server-side request forgery

A

A. Security misconfiguration

Explanation:
Security misconfiguration now includes the older XML External Entities (XXE) category. An application is susceptible if it accepts XML directly, among other conditions.

Cross-Site Scripting (XSS) involves unvalidated user-controlled input. There are three types of XSS: reflected, stored, and DOM-based.

Server-Side Request Forgery (SSRF) occurs when a server accepts content in the user-supplied URL. This forces the server to send a crafted request to another site.

Injection includes SQL and command injection. It happens when user input is not validated; a user should not be able to get SQL commands executed through the application's input fields.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Leelo works for a corporation that assists both cloud service providers (CSP) and cloud service customers (CSC). They assist in the negotiation of services as well as the management of those services. They also have some of their own software to help with this management.

What term is used to describe an individual or organization that serves as an intermediary between cloud customers and a cloud service provider?

A. Cloud service partner
B. Cloud service broker
C. Cloud service user
D. Cloud service auditor

A

B. Cloud service broker

Explanation:
A cloud service broker is an individual or organization that serves as the go-between, or intermediary, between cloud customers and cloud service providers. Brokers can negotiate and manage the services between the customer and the provider, and they often have their own software to help with this management.

Cloud service auditors are the third parties who go into the cloud service provider’s data center to verify its controls.

Cloud service users are the customers of the cloud provider. This includes end users as well as the corporations they work for.

A cloud service partner is a company that helps either the customer or the provider. It is the more generic term, which can include brokers and auditors.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

JoAnn has been configuring a server that will handle all network forwarding decisions, which allows the network device to simply perform frame forwarding. This allows for dynamic changes to traffic flows based on customer needs and demands. What is the name of the network approach described here?

A. Virtual Private Cloud (VPC)
B. Dynamic Host Configuration Protocol (DHCP)
C. Domain Name System Security (DNSSec)
D. Software-defined networking (SDN)

A

D. Software-defined networking (SDN)

Explanation:
In software-defined networking (SDN), decisions regarding where traffic is filtered and sent are separated from the actual forwarding of the traffic. This separation allows network administrators to quickly and dynamically adjust network flows based on the needs of customers. Software-defined networking is often referred to as a Software-Defined Wide Area Network (SD-WAN) when it is used as the backbone network.

DNSSec is an extension to DNS. DNS converts domain names, such as pocketprep.com, to IP addresses. DNS is a hierarchically organized set of servers within the internet and corporate networks. DNSSec adds authentication to allow verification of the source of DNS information.

DHCP is used to dynamically allocate IP addresses to devices when they join a network.

VPC is a simulation of a private cloud within a public cloud environment.
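The control-plane/data-plane split behind SDN can be illustrated with a toy sketch; the class names and flow-table format below are invented for illustration:

```python
# Toy SDN sketch: the controller (control plane) computes forwarding rules;
# the switch (data plane) only does table lookups and frame forwarding.
class Controller:
    """Control plane: decides where traffic should go."""
    def compute_rules(self):
        # Hypothetical destination-IP -> output-port rules.
        return {"10.0.0.2": "port1", "10.0.0.3": "port2"}

class Switch:
    """Data plane: forwards frames based on rules pushed down to it."""
    def __init__(self):
        self.flow_table = {}

    def install(self, rules):
        # Rules can change dynamically without reconfiguring the device.
        self.flow_table = rules

    def forward(self, dst_ip):
        return self.flow_table.get(dst_ip, "flood")

controller, switch = Controller(), Switch()
switch.install(controller.compute_rules())
print(switch.forward("10.0.0.2"))  # port1
print(switch.forward("10.0.0.9"))  # flood (no rule installed yet)
```

Because the forwarding logic lives in the controller, changing a traffic flow is just pushing a new table, which is the dynamism the question describes.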

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

Which of the following considerations MOST closely relates to ensuring that customers’ personal data is not accessed by unauthorized users?

A. Regulatory Oversight
B. Privacy
C. Security
D. Governance

A

B. Privacy

Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:

Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability.
Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties.
Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints.
Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations.
Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

Imani is working with their cloud data architect to design a Storage Area Network (SAN) that will provide the cloud storage needed by the users. They want users to be able to have access to mountable volumes within the Fibre Channel (FC) SAN.

Of the following, which term describes the allocated storage space that is presented to the user as a mountable drive?

A. World Wide Port Name (WWPN)
B. Logical Unit Number (LUN)
C. World Wide Names (WWN)
D. World Wide Node Name (WWNN)

A

B. Logical Unit Number (LUN)

Explanation:
A Logical Unit Number (LUN) is the allocated storage space on the SAN that is presented to the host as a mountable drive or volume. The World Wide Name (WWN) family consists of addresses on the Fibre Channel fabric: a World Wide Node Name (WWNN) identifies a device as a whole, and a World Wide Port Name (WWPN) identifies an individual port on that device. Those identify devices and ports; they do not represent the storage presented to the user.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

To access their cloud environment remotely, a cloud administrator sets up a web server in a demilitarized zone (DMZ) that is publicly accessible from the internet. The server has been hardened to prevent attacks. Which of the following did the cloud administrator create?

A. Firewall
B. Virtual Private Cloud (VPC)
C. Micro-segmentation
D. Bastion host

A

D. Bastion host

Explanation:
A bastion host is a hardened and fortified device. To harden, you change the default password, close unnecessary ports, disable unnecessary services, etc.

A VPC is a virtualized environment that is isolated to make it harder for bad actors to interfere with business processes.

Micro-segmentation is when a virtual network is created that has one or just a few virtual machines behind its own firewall.

A firewall is a security device that blocks or allows traffic. It should be a hardened device as well, hopefully by design.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

Shai has been working with the Disaster Recovery (DR) teams to build the DR Plans (DRP) for their critical transaction database, which processes a great deal of commercial transactions every hour. They have determined that they need a configuration that will nearly eliminate the risk of losing any transactions that are performed: a Recovery Point Objective (RPO) of under one second.

What technology should they implement?

A. Redundant servers that are served through multiple data centers
B. A server cluster that spans multiple availability zones with load balancers
C. A set of redundant servers across multiple availability zones
D. Load balancers that span multiple servers in a single data center

A

B. A server cluster that spans multiple availability zones with load balancers

Explanation:
The Recovery Point Objective (RPO) is the maximum amount of data loss, measured in time, that an organization is willing to accept. With server clusters in a cloud environment that span multiple availability zones and are fronted by load balancers, it is unlikely that a single completed transaction would be lost. Incomplete transactions may be lost, but that is probably acceptable for this business.

Redundant servers are not as robust as clusters. In a cluster, all the servers are active all the time. A redundant server does not actively process data unless the primary fails, at which point it takes over. Redundant servers are often described as active-passive, whereas clusters are active-active.

Having multiple servers in a single data center is not as robust as having them in different availability zones. If a massive fire happens in one data center, the customer would be offline for a while (depending on additional configurations) and likely to lose some transactions. There are configurations that help with this, such as data mirroring or database shadows, which are nearly instantaneous backups to another server or drive, but they are missing from the answers, so they cannot be assumed to be present.

42
Q

Vaeda is an information security professional working with the risk management team for a medium-sized manufacturing company. They have spent a great deal of time performing quantitative and qualitative risk assessments. They are now in the risk mitigation phase. What they want to know is how can they ensure that all risk is fully mitigated?

A. Purchasing various insurance policies
B. Risk is never fully mitigated
C. Constantly monitoring all systems
D. Enforcing a strict risk management plan

A

B. Risk is never fully mitigated

Explanation:
There is no way to ensure that risk is fully mitigated in a system or application. Organizations should seek to lower the possibility of risk or lower the impact that a risk could have on systems, but there is no way to completely mitigate it. Any system that has users and access will always maintain some level of risk, even if the risk level is low.

43
Q

What protocol is similar to HTML but is stricter in its formatting requirements and is commonly used for data exchange?

A. eXtensible Markup Language (XML)
B. JavaScript Object Notation (JSON)
C. Binary JSON (BSON)
D. REpresentational State Transfer (REST) API

A

A. eXtensible Markup Language (XML)

Explanation:
eXtensible Markup Language (XML) is a standard information exchange format that employs tags to define data and is similar to HyperText Markup Language (HTML).

JSON is a lightweight data exchange format that is commonly used today, but it is not similar in structure to HTML. BSON is a binary form of JSON. REST APIs typically use JSON to request information and receive responses, although they can also use XML.
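A short sketch using only the Python standard library shows the same kind of record in both formats, and how XML's stricter formatting surfaces as a parse error when tags are mismatched:

```python
# Compare JSON and XML representations of the same record using the stdlib.
import json
import xml.etree.ElementTree as ET

json_text = '{"user": "amal", "role": "admin"}'
xml_text = "<user><name>amal</name><role>admin</role></user>"

record = json.loads(json_text)   # JSON -> Python dict
root = ET.fromstring(xml_text)   # XML  -> element tree

# Unlike much hand-written HTML, XML requires every tag to be properly
# closed and nested; otherwise parsing fails outright.
try:
    ET.fromstring("<user><name>amal</user>")  # mismatched tags
except ET.ParseError as err:
    print("XML parse error:", err)

print(record["user"], root.find("role").text)
```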

44
Q

A DLP solution is inspecting the contents of an employee’s email. What stage of the DLP process is it MOST likely at?

A. Monitoring
B. Enforcement
C. Discovery
D. Mapping

A

A. Monitoring

Explanation:
Data loss prevention (DLP) solutions are designed to prevent sensitive data from being leaked or accessed by unauthorized users. In general, DLP solutions consist of three components:

Discovery: During the Discovery phase, the DLP solution identifies data that needs to be protected. Often, this is accomplished by looking for data stored in formats associated with sensitive data. For example, credit card numbers are usually 16 digits long, and US Social Security Numbers (SSNs) have the format XXX-XX-XXXX. The DLP will identify storage locations containing these types of data that require monitoring and protection.
Monitoring: After completing discovery, the DLP solution will perform ongoing monitoring of these identified locations. This includes inspecting access requests and data flows to identify potential violations. For example, a DLP solution may be integrated into email software to look for data leaks or monitor for sensitive data stored outside of approved locations.
Enforcement: If a DLP solution identifies a violation, it can take action. This may include generating an alert for security personnel to investigate and/or block the unapproved action.

Mapping is not a stage of the DLP process.
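The pattern matching used during Discovery can be sketched with regular expressions; the patterns below are simplified examples, not production DLP rules:

```python
# Minimal sketch of DLP discovery via pattern matching: scan text for
# strings shaped like sensitive data. Real DLP rules are far more robust
# (checksums, separators, context); these regexes are illustrative only.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # XXX-XX-XXXX
CARD_PATTERN = re.compile(r"\b\d{16}\b")            # 16 contiguous digits

def find_sensitive(text):
    """Return (label, match) pairs for SSN- and card-shaped strings."""
    hits = [("ssn", m) for m in SSN_PATTERN.findall(text)]
    hits += [("card", m) for m in CARD_PATTERN.findall(text)]
    return hits

sample = "Order notes: card 4111111111111111, applicant SSN 123-45-6789."
print(find_sensitive(sample))
```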

45
Q

Rachel is in need of a method of confirming the authenticity of data coming into the application that is being created by the software developers. As the information security professional, she is able to recommend several techniques that can be built into the software. If they achieve their goals, they will be able to hold their users accountable for their actions.

What will they have achieved?

A. Public key
B. Nonrepudiation
C. Encryption
D. Hashing

A

B. Nonrepudiation

Explanation:
Nonrepudiation is the ability to confirm the origin or authenticity of data to a high degree of certainty. Nonrepudiation is typically done through methods such as hashing and digital signatures.

A hash, or hashing, verifies only that the bits are correct: all the ones should be ones and all the zeros should be zeros. It does not provide authenticity; there is no way to confirm where the data came from or who created it with just the hash. For that, a digital signature needs to be added by encrypting the hash with a private key.

A digital signature is verified by decrypting it with a public key. This is not the correct answer to the question because the question is about what they will achieve. They will not achieve a public key.

Encryption is altering data to an unreadable format. This does not achieve nonrepudiation unless we specifically talk about asymmetric public and private keys.
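A small sketch shows why a bare hash proves integrity but not origin. HMAC is used here as a keyed stand-in for a private-key digital signature; true nonrepudiation requires asymmetric keys, which the standard library alone does not provide:

```python
# Integrity vs. origin, using only the standard library.
import hashlib
import hmac

message = b"transfer $100 to account 42"

# Integrity only: anyone, including an attacker, can recompute this digest
# over a tampered message, so it says nothing about who produced the data.
digest = hashlib.sha256(message).hexdigest()

# Origin: only a holder of the key can produce a matching tag.
# (Hypothetical key material; a real signature would use a private key.)
key = b"signer-private-key-material"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))           # True: key holder produced it
print(verify(b"wrong-key", message, tag))  # False: origin check fails
```

Because HMAC uses a shared key, both parties could have produced the tag; replacing it with a private-key signature verified by a public key is what makes the origin undeniable, i.e., nonrepudiation.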

46
Q

A SOC report is MOST related to which of the following common contractual terms?

A. Right to Audit
B. Litigation
C. Compliance
D. Metrics

A

A. Right to Audit

Explanation:
A contract between a customer and a vendor can have various terms. Some of the most common include:

Right to Audit: CSPs rarely allow customers to perform their own audits, but contracts commonly include acceptance of a third-party audit in the form of a SOC 2 or ISO 27001 certification.
Metrics: The contract may define metrics used to measure the service provided and assess compliance with service level agreements (SLAs).
Definitions: Contracts will define various relevant terms (security, privacy, breach notification requirements, etc.) to ensure a common understanding between the two parties.
Termination: The contract will define the terms by which it may be ended, including failure to provide service, failure to pay, a set duration, or with a certain amount of notice.
Litigation: Contracts may include litigation terms such as requiring arbitration rather than a trial in court.
Assurance: Assurance requirements set expectations for both parties. For example, the provider may be required to provide an annual SOC 2 audit report to demonstrate the effectiveness of its controls.
Compliance: Cloud providers will need to have controls in place and undergo audits to ensure that their systems meet the compliance requirements of regulations and standards that apply to their customers.
Access to Cloud/Data: Contracts may ensure access to services and data to protect a customer against vendor lock-in.
47
Q

Cloud service providers will have clear requirements for items such as uptime, customer service response time, and availability. Where would these requirements MOST LIKELY be outlined for the client?

A. Privacy Level Agreement (PLA)
B. Service Level Agreement (SLA)
C. Business Associate Agreement (BAA)
D. Data Processing Agreement (DPA)

A

B. Service Level Agreement (SLA)

Explanation:
Requirements such as uptime, customer service response time, and availability should be outlined in a Service Level Agreement (SLA). When a provider doesn’t meet their SLA requirements, it could lead to termination of the contract or financial benefits to the cloud customer.

PLAs, BAAs, and DPAs are all fundamentally similar. Like an SLA, each is an agreement, but these cover the privacy requirements for protecting the personal data or Personally Identifiable Information (PII) that will be stored and processed on the cloud provider’s equipment. DPA is the term used in the European Union under the General Data Protection Regulation (GDPR), HIPAA in the USA requires a BAA, and PLA is the generic term used elsewhere.

48
Q

Which of the following techniques uses context and the meaning of text to identify sensitive data in unstructured data?

A. Pattern Matching
B. Lexical Analysis
C. Hashing
D. Schema Analysis

A

B. Lexical Analysis

Explanation:
When working with unstructured data, there are a few different techniques that a data discovery tool can use:

Pattern Matching: Pattern matching looks for data formats common to sensitive data, often using regular expressions. For example, the tool might look for 16-digit credit card numbers or numbers structured as XXX-XX-XXXX, which are likely US Social Security Numbers (SSNs).
Lexical Analysis: Lexical analysis uses natural language processing (NLP) to analyze the meaning and context of text and identify sensitive data. For example, a discussion of “payment details” or “card numbers” could include a credit card number.
Hashing: Hashing can be used to identify known-sensitive files that change infrequently. For example, a DLP solution may have a database of hashes for files containing corporate trade secrets or company applications.

Schema analysis can’t be used with unstructured data because only structured databases have schemas.
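The pattern-matching technique can be sketched with Python's `re` module; the patterns and sample text below are illustrative, not a production DLP ruleset:

```python
import re

# Illustrative patterns only; real data discovery rulesets are far
# more thorough (Luhn checks, separators, issuer prefixes, etc.).
patterns = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # XXX-XX-XXXX
    "credit_card": re.compile(r"\b\d{16}\b"),         # 16-digit PAN
}

text = "Customer SSN 123-45-6789 paid with card 4111111111111111."

# Scan the unstructured text for every pattern.
hits = {name: rx.findall(text) for name, rx in patterns.items()}
# → {'ssn': ['123-45-6789'], 'credit_card': ['4111111111111111']}
```

Lexical analysis would go further than this, using NLP to flag phrases like "card number" even when no digits match a known format.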

49
Q

Which of the following NIST controls for system and communication protection is MOST closely related to management of tasks such as encryption and logging configurations?

A. Security Function Isolation
B. Cryptographic Key Establishment and Management
C. Boundary Protection
D. Separation of System and User Functionality

A

A. Security Function Isolation

Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations defines 51 security controls for systems and communication protection. Among these are:

Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
50
Q

A security incident occurred within an organization that affected numerous servers and network devices. A security engineer was able to use the Security Information and Event Management (SIEM) system to see all the logs pertaining to that event, even though they occurred on different devices, by using the IP address of the source.

Which function of a SIEM is being described in this scenario?

A. Aggregation
B. Correlation
C. Reporting
D. Analysis

A

B. Correlation

Explanation:
Security Information and Event Management (SIEM) systems are very useful because they can correlate data. This means that not only can the data be stored in one place through aggregation, it can also be searched using specific items such as an IP address or timestamp.

Reporting is the SIEM function that creates alerts and reports regarding suspicious activity and Indicators of Compromise (IoC).

Analysis is a reasonable word here, but correlation is the correct term for this SIEM capability.
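A toy illustration of the aggregation-then-correlation idea described above (the log entries and field names are hypothetical):

```python
from collections import defaultdict

# Aggregation has already placed logs from many devices in one store.
logs = [
    {"device": "fw01", "src_ip": "10.0.0.5", "event": "port scan"},
    {"device": "web01", "src_ip": "10.0.0.5", "event": "login failure"},
    {"device": "db01", "src_ip": "10.0.0.9", "event": "query"},
]

# Correlation: group events from different devices by a shared
# attribute, here the source IP from the incident.
by_ip = defaultdict(list)
for entry in logs:
    by_ip[entry["src_ip"]].append(entry)

incident = by_ip["10.0.0.5"]  # every event tied to the suspect source
```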

51
Q

Baird is responsible for vendor management at his office. He works for a large bank that relies on several vendors for different services at different times. This includes a public cloud provider for their Infrastructure and Platform as a Service (IaaS & PaaS) deployments. He has learned that vendor management can be both difficult and fulfilling.

What international standard can he use to possibly make things easier?

A. International Standards Organization/International Electrotechnical Committee (ISO/IEC) 27036
B. International Standards Organization/International Electrotechnical Committee (ISO/IEC) 17788
C. International Standards Organization/International Electrotechnical Committee (ISO/IEC) 27002
D. International Standards Organization/International Electrotechnical Committee (ISO/IEC) 27050

A

A. International Standards Organization/International Electrotechnical Committee (ISO/IEC) 27036

Explanation:
ISO/IEC 27036, Information security for supplier relationships, provides guidance for managing the security of vendor and supply chain relationships; Part 4 specifically addresses the security of cloud services, which makes it directly useful for Baird's vendor management work.

ISO/IEC 17788 provides a cloud computing overview and vocabulary, ISO/IEC 27002 is a code of practice for information security controls, and ISO/IEC 27050 addresses electronic discovery (eDiscovery).

52
Q

It is necessary to protect personally identifiable information (PII), protected health information (PHI) as well as credit card numbers when stored within a customer database. If the immediate concern is to ensure that the credit card number is not fully visible to someone in customer service, what tool could be used for this?

A. Access control
B. Encryption
C. Masking
D. Least privilege

A

C. Masking

Explanation:
Masking covers, or obscures, data. When you type your password into a system but see only stars on the screen, that is masking. So, if you do not want customer service to fully see the credit card number, because they only need the last four digits for card confirmation, masking is perfect.

Encryption would block visibility of the entire card. The question asks about not having the credit card number fully visible to someone in customer service, so presumably customer service still needs to see the last four digits.

Least privilege is actually a concept within access control. Both would control visibility of the card entirely: customer service would be able to see all of it or none of it.
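A minimal masking sketch (the function name and format are assumptions, not any particular product's API):

```python
def mask_card(pan: str, visible: int = 4) -> str:
    """Replace all but the last `visible` digits with asterisks,
    so customer service can confirm the card without seeing it all."""
    return "*" * (len(pan) - visible) + pan[-visible:]

mask_card("4111111111111111")  # → '************1111'
```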

53
Q

Ines is working with the Disaster Recovery (DR) team. They have been able to determine that they can only tolerate losing the last two hours worth of data at the most critical point in the work day/year. At the least critical, they could tolerate losing 24 hours worth of data. They have settled on the most cost effective backup solution that will ensure that they will not lose more than four hours of data.

What have they defined?

A. Maximum Tolerable Downtime (MTD)
B. Service Delivery Objective (SDO)
C. Recovery Time Objective (RTO)
D. Recovery Point Objective (RPO)

A

D. Recovery Point Objective (RPO)

Explanation:
A Recovery Point Objective (RPO) is defined in terms of data loss, not recovery time; it defines the maximum amount of data loss that can be tolerated during a disaster recovery incident. Scheduling backups more frequently decreases this value.

The RTO is the time window that the team has to do the work of bringing the recovery site online.

The MTD is a combination of the RTO plus time needed for emergency evacuations, life-safety issues, chaos, damage assessment, and so on. That is the total amount of time that a server or service can be offline before causing a great deal of damage to the business.

The SDO is the recovery level. Once a failover has occurred to the back-up systems/cloud/site/etc., it must be functional to a certain level to ensure that it will help the business. It is not normally expected that functionality would be completely normal on the back-up systems. So the SDO would be around 80%. This would mean something like the following: If the server normally processes 100 calls an hour, it must be able to process 80 or the business is likely to still experience a great deal of damage.
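The RPO relationship in the scenario can be expressed as simple arithmetic: the worst-case data loss is the time since the last completed backup, so the backup interval must not exceed the RPO (four hours in the solution Ines's team chose):

```python
# Ines's team chose a solution guaranteeing at most four hours of loss.
rpo_hours = 4

def worst_case_loss(backup_interval_hours: float) -> float:
    # Worst case: the failure happens just before the next backup runs,
    # so everything since the last completed backup is lost.
    return backup_interval_hours

assert worst_case_loss(4) <= rpo_hours   # backups every 4 h meet the RPO
assert worst_case_loss(6) > rpo_hours    # backups every 6 h violate it
```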

54
Q

In a cloud implementation that uses Security Assertion Markup Language (SAML) to authenticate a user, what is the appropriate name for the authenticator?

A. Identity Provider
B. Relying Party
C. Service Provider
D. Relaying Party

A

A. Identity Provider

Explanation:
Usually, the identity provider is able to authenticate the user through the use of User IDentification (User ID) and passwords and software tokens. These are familiar to most people who use, for example, Facebook/Meta, Google, and Microsoft. Corporations can set up their Active Directory as the identity provider for users logging into Software as a Service (SaaS), such as SalesForce.

The service provider is the website that the user is trying to use (e.g., SalesForce).

The relaying party is the user and their computer. The token that the identity provider sends out goes through the user’s computer on its way to the service provider.

The relying party is the service provider. They rely on the identity provider to authenticate the user.

55
Q

Which technology allows an association of organizations to share information while having authentication of users to be done by the home organization?

A. Security Assertion Markup Language (SAML)
B. Zero trust architecture
C. Role Based Access Control (RBAC)
D. Federation

A

D. Federation

Explanation:
Federation allows organizations to share information and services yet have the users authenticate with their home organization because that organization knows them best.

SAML is a technology that allows the assertion of an identity to another organization. This can be used by federation, but with the wording in the question, federation is a better answer.

RBAC is the logic of grouping people based on their role or their job. This facilitates the process of granting access in large organizations but that is not the nature of the question. The question is asking about multiple companies working together somehow.

Zero trust architecture is the logic of essentially not trusting anyone until they are positively identified. This is an essential logic to security everywhere, but it does not have any specific relationship to multiple companies working together.

56
Q

Analese is an information security manager working with the DevOps teams as they build and deploy their Infrastructure as a Service (IaaS) environment. She is describing a problem to them that needs to be addressed. It can be addressed by using two-factor authentication, and because of the severity of the threat, she is advising them that they should use a hard token as the second factor of authentication. She is concerned that if this is not protected, someone could log in and delete all their images, instances, data, systems, etc., which would damage the business.

What is she concerned about that could be compromised?

A. Hypervisor
B. Router
C. Virtual machine
D. Management plane

A

D. Management plane

Explanation:
The management plane provides access to manage all the hosts within a cloud environment. If the management plane were to be compromised, the attacker would then have full control over the cloud environment. The severity of a compromised management plane outweighs that of a compromised hypervisor, virtual machine, or router. For this reason, only a limited and highly vetted group of administrators should have access to the management plane, and access should be audited regularly.

If the hypervisor is compromised, that would be a huge problem for the cloud provider and all their customers. It too should be protected with two-factor authentication. However, the question focuses on a customer and their IaaS environment. The customer needs to protect their management plane; they cannot protect the hypervisor. That is the responsibility of the cloud provider.

A compromise of a virtual machine could be a big problem, and it too should be protected with two-factor authentication. However, this is just one of many different virtual machines within IaaS, so the management plane is a bigger concern.

If a router is compromised, the data flowing through it is at risk. It is even plausible that the bad actor could redirect traffic. But, like the virtual machine, this is just part of the environment, where the management plane gives access to all of the environment (depending on configuration and account setup).

57
Q

Which SIEM feature may rely on the fact that a SIEM is installed on a standalone system?

A. Log Centralization and Aggregation
B. Automated Monitoring
C. Normalization
D. Data Integrity

A

D. Data Integrity

Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:

Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM runs on its own standalone system, making it more difficult for attackers to access and tamper with SIEM log files (which should be append-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats.
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
58
Q

Insecure services such as File Transfer Protocol (FTP) are disallowed on all virtual machines within the Infrastructure as a Service (IaaS) public cloud deployment. However, an FTP client is found on one of the virtual machines. What can the organization do to ensure there are no other virtual machines responding to FTP requests?

A. Penetration test
B. Vulnerability scan
C. Trusted Platform Module (TPM)
D. Audit

A

B. Vulnerability scan

Explanation:
By running a vulnerability scan, the organization can easily identify a virtual machine responding to FTP requests. This vulnerability indicates a system that does not conform to the baseline configuration, which requires immediate remediation action.

A penetration test takes a vulnerability scan a step further. With a penetration test, the security professional attempts to exploit the potential vulnerabilities found within the vulnerability scan.

An audit could have been an answer to this question, especially if a vulnerability scan was not one of the potential answers. An audit is a formal process that is used to determine if the environment is in compliance with a policy or a standard (e.g., ISO/IEC 27001 or SOC 2). The question is asking about one technical configuration, so it is easier and probably cheaper to do the vulnerability scan.

A TPM is a dedicated cryptographic chip built into a computer’s motherboard that securely stores encryption keys; a rackmountable appliance designed to store keys is a Hardware Security Module (HSM).
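One check a vulnerability scanner performs for this finding can be sketched as a TCP probe of port 21 (the helper below is a simplified illustration, not a real scanner):

```python
import socket

def ftp_open(host: str, timeout: float = 1.0) -> bool:
    """Return True if the host accepts TCP connections on port 21 (FTP)."""
    try:
        with socket.create_connection((host, 21), timeout=timeout):
            return True
    except OSError:
        return False

# A scanner would sweep every VM address in the IaaS subnet, e.g.:
# noncompliant = [ip for ip in subnet if ftp_open(ip)]
```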

59
Q

Belle has been working to improve how data is retained and stored within the business. One thing that she is working to determine is the best time in the cloud data lifecycle to classify data. When would be the best time to classify data?

A. Share phase
B. Store phase
C. Use phase
D. Create phase

A

D. Create phase

Explanation:
Data should be classified during the create phase of the data lifecycle. This is the best time to classify data because its value and sensitivity are known by the creator.

It is possible that data will actually be classified at a later time. However, it is best to classify it from the beginning. The cloud data lifecycle is Create, Store, Use, Share, Archive, and Destroy.

60
Q

Joan is working as a contractor for a small business that is building their infrastructure so that they can begin their new business, a coffee shop. They are looking for the right encryption technology to protect their connection between the point of sale technology built into their computer and the server in the cloud. The connection that they are building is connecting through a web interface.

Which of the following would be the most appropriate encryption technology?

A. Transport Layer Security (TLS)
B. Internet Protocol Security (IPSec) transport mode
C. Secure Shell (SSH)
D. Internet Protocol Security (IPSec) tunnel mode

A

A. Transport Layer Security (TLS)

Explanation:
Since this connection is between a computer and a web interface, TLS is the most appropriate. TLS in its original form, SSL, was developed for web connections.

IPSec is a layer 3 (network layer) protocol suite that can be used in transport mode, which encrypts only the payload and provides end-to-end protection between hosts, or tunnel mode, which encapsulates the entire packet. It can carry any traffic. Tunnel mode is commonly used to encrypt traffic between two routers connected across a Wide Area Network (WAN), while transport mode may appear within a user’s remote access technology such as Cisco AnyConnect.

SSH is commonly used by administrators to remotely connect to devices for configuration and management.
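A minimal sketch of the client side of such a TLS connection, using Python's standard `ssl` module (the helper name and usage are illustrative):

```python
import socket
import ssl

# The default context verifies the server's certificate and hostname,
# which is what the point-of-sale client should do before sending
# payment data to the cloud server's web interface.
context = ssl.create_default_context()

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS-protected connection to the given server."""
    sock = socket.create_connection((host, port), timeout=5)
    return context.wrap_socket(sock, server_hostname=host)
```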

62
Q

Which of the following is MOST related to Infrastructure as Code (IaC)?

A. Redundancy
B. Configuration Management and Change Management
C. Logging and Monitoring
D. Scheduled Downtime and Maintenance

A

B. Configuration Management and Change Management

Explanation:
Some best practices for designing, configuring, and securing cloud environments include:

Redundancy: A cloud environment should not include single points of failure (SPOFs) where the outage of a single component brings down a service. High availability and duplicate systems are important to redundancy and resiliency.
Scheduled Downtime and Maintenance: Cloud systems should have scheduled maintenance windows to allow patching and other maintenance to be performed. This may require a rotating maintenance window to avoid downtime.
Isolated Network and Robust Access Controls: Access to the management plane should be isolated using access controls and other solutions. Ideally, this will involve the use of VPNs, encryption, and least privilege access controls.
Configuration Management and Change Management: Systems should have defined, hardened default configurations, ideally using infrastructure as code (IaC). Changes should only be made via a formal change management process.
Logging and Monitoring: Cloud environments should have continuous logging and monitoring, and vulnerability scans should be performed regularly.
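A toy flavor of the IaC idea from the list above: the desired configuration is declared as data and applied idempotently, so the same definition, run through change management, always produces the same state (the keys are made up for illustration):

```python
# Desired state declared as data, as an IaC template would be.
desired = {"tls_min_version": "1.2", "logging": "enabled"}

def apply(current: dict, spec: dict) -> dict:
    # Only drift from the declared baseline is changed.
    return {**current, **spec}

state = apply({"logging": "disabled"}, desired)
assert apply(state, desired) == state  # idempotent: re-applying is a no-op
```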
63
Q

SOAP encapsulates its information in a SOAP:

A. Frame
B. Envelope
C. Packet
D. Cell

A

B. Envelope

Explanation:
SOAP (formerly Simple Object Access Protocol) is used to exchange information between web services. It does this by encapsulating its data in what is called a SOAP envelope and then using common communication protocols such as HTTP for transmission.

The terms cell, frame, and packet are used by different networking protocols to describe the grouping of bits created by adding headers or trailers to an existing unit of data.

Cells are used by ATM. Frames are used by Frame Relay, Ethernet, Token Ring, and the like. Packets are used by IP, while TCP uses segments and UDP uses datagrams. This line of text is not truly needed for the exam and is not something you have to study to be ready for the test. However, the more you know, the better off you are, so this is a bit of bonus information for you.
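The envelope structure can be shown with a minimal SOAP 1.2 message (the payload element and the namespace `http://example.com/stock` are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Minimal SOAP 1.2 message: the payload rides inside the Body of a
# SOAP Envelope, which is then carried over HTTP or another transport.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Item>coffee</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(envelope)  # well-formed XML, Envelope at the root
```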

64
Q

Issues like side-channel attacks and information bleed are MOST closely related to which of the following PaaS environment risks?

A. Interoperability Issues
B. Resource Sharing
C. Persistent Backdoors
D. Virtualization

A

B. Resource Sharing

Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:

Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e. backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
65
Q

An organization may review SOC 2 and ISO 27001 as part of its efforts to
manage concerns about which of the following?

A. Open Source Software
B. Third-Party Software
C. API Security
D. Supply Chain Security

A

D. Supply Chain Security

Explanation:
Some important considerations for secure software development in the cloud include:

API Security: In the cloud, the use of microservices and APIs is common. API security best practices include identifying all APIs, performing regular vulnerability scanning, and implementing access controls to manage access to the APIs.
Supply Chain Security: An attacker may be able to access an organization’s systems via access provided to a partner or vendor, or a failure of a provider’s systems may place an organization’s security at risk. Companies should assess their vendors’ security and ability to provide services via SOC2 and ISO 27001 certifications.
Third-Party Software: Third-party software may contain vulnerabilities or malicious functionality introduced by an attacker. Also, the use of third-party software is often managed via licensing, with whose terms an organization must comply. Visibility into the use of third-party software is essential for security and legal compliance.
Open Source Software: Most software uses third-party and open-source libraries and components, which can include malicious functionality or vulnerabilities. Developers should use software composition analysis (SCA) tools to build a software bill of materials (SBOM) to identify any potential vulnerabilities in components used by their applications.
66
Q

An untrusted server has a value that uniquely represents a particular user. If that value is sent to a certain trusted system, the user’s record can be retrieved. Which of the following does this BEST describe?

A. Tokenization
B. Hashing
C. Anonymization
D. Encryption

A

A. Tokenization

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4, the Secure Hash Standard, is the US government standard for approved hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
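A minimal tokenization sketch: the untrusted system keeps only a random token, while a trusted vault (here just a dict standing in for the secure mapping table) can look up the real value:

```python
import secrets

# The vault lives on a trusted system; untrusted servers hold only tokens.
vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    token = secrets.token_hex(8)  # random value, reveals nothing about pan
    vault[token] = pan            # mapping stored only in the secure vault
    return token

token = tokenize("4111111111111111")  # what the untrusted server stores
```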
67
Q

Celyse is working for a large social media company that wants to expand their offerings to their customers. They want to become an Identity Provider (IdP) that can verify the identification of their customers to other websites. Which of the following is an authentication protocol they would likely use?

A. Security Assertion Markup Language (SAML)
B. WS (Web Services)-Federation
C. Open Identification (OpenID) Connect
D. Open Authorization (OAuth)

A

C. Open Identification (OpenID) Connect

Explanation:
OpenID Connect is an authentication protocol that can be used to authenticate the user of a browser for the web server. OpenID Connect provides an easy and flexible way for developers to support authentication across an organization because it uses JSON Web Tokens. It provides web-based applications with a method for authentication that is not dependent on particular devices or clients to access it.

OAuth can be used with OpenID Connect. It can be used to provide secure delegated access. For example, when a user grants permission to an application to access contacts.

SAML can be used for authentication purposes as well and is commonly used within the workplace to access the intranet or SalesForce, Box, or others.

WS-Federation can be used to authenticate “for your applications (such as a Windows Identity Foundation-based app) and for identity providers (such as Active Directory Federation Services or Azure AppFabric Access Control Service),” according to auth0’s website.
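The JSON Web Token point can be sketched by building and decoding an unsigned, illustrative JWT payload segment; a real identity provider also signs the token:

```python
import base64
import json

# Hypothetical claims; a real ID token also carries exp, aud, etc.
claims = {"sub": "user-123", "iss": "https://idp.example.com"}

# JWTs are three base64url segments: header.payload.signature.
# Here we build just the payload segment, without padding.
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")

# A relying party restores the padding and decodes the JSON claims
# (after verifying the signature, which is omitted in this sketch).
padded = segment + b"=" * (-len(segment) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
```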

68
Q

A bank has established many roles within its business. This includes the roles of the European Union’s (EU) General Data Processing Regulation (GDPR). Which role name applies to anyone who uses or consumes data which is owned by another?

A. Data processor
B. Data custodian
C. Data controller
D. Data owner

A

B. Data custodian

Explanation:
A data custodian is defined by the Cloud Security Alliance (CSA) in the guidance 4.0 document as someone in possession of the data. This could include anyone, including the data owner, steward, controller, processor, etc. The data custodians must adhere to any policies set forth by the data owner regarding the use of the data.

The data owner is also defined by the CSA as someone responsible for a piece or a set of data. The company or CEO would own the data but that does not make them the “data owner.” The data owner has the responsibility of classifying the piece of data, such as a word document, or a set of data, like a database. The owners must be identified and taught about the data policies of the business.

The data controller is defined in the European Union’s (EU) General Data Protection Regulation (GDPR) as the person who “alone or jointly with others, determines the purposes and means of the processing of personal data.” The one who authorizes the processing of data would be at a higher level within the business, such as the CEO.

The data processor is also defined in GDPR as “a person who processes data solely on behalf of the controller, excluding the employees of the data controller.” This could be a payroll processing company, for example. GDPR defines processing to include storage of data. So, cloud providers are data processors if they are storing personal data.

69
Q

Which of the following is TRUE in terms of maintenance and versioning in the cloud?

A. The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of all components in a Software as a Service (SaaS) product. The Cloud Service Provider (CSP) is responsible for the Virtual Machines (VM) and their patches.
B. The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of the network and storage as well as the virtualization software in an Infrastructure as a Service (IaaS) solution. The Cloud Service Provider (CSP) is responsible for the physical security of the Data Center (DC).
C. Updates and patches are scheduled with the customer in the Software as a Service (SaaS) and Platform as a Service (PaaS) model. The Cloud Service Provider (CSP) is responsible for the virtualization software and the underlying infrastructure.
D. The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of the apps they acquire and develop in a Platform as a Service (PaaS) solution. The Cloud Service Provider (CSP) is responsible for the platform, tools, and underlying infrastructure.

A

D. The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of the apps they acquire and develop in a Platform as a Service (PaaS) solution. The Cloud Service Provider (CSP) is responsible for the platform, tools, and underlying infrastructure.

Explanation:
Correct answer: The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of the apps they acquire and develop in a Platform as a Service (PaaS) solution. The Cloud Service Provider (CSP) is responsible for the platform, tools, and underlying infrastructure.

In IaaS, the cloud provider is responsible for the security of the DC, the underlying infrastructure of routers, switches, and servers. The CSC is responsible for the Operating Systems (OS) that make up the virtual infrastructure, the middleware, software, and data.

In PaaS, the cloud provider is responsible for all of the above plus the platform and tools. The CSC is responsible for their data and for any middleware and software they add to the platform (if server-based PaaS).

In SaaS, the cloud provider is responsible for everything mentioned above plus the software that the customer uses. The CSC is responsible for their data.

The answer “The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of all components in a Software as a Service (SaaS) product. The Cloud Service Provider (CSP) is responsible for the Virtual Machines (VM) and their patches” is wrong because the CSC is not responsible for the software.

The answer “The Cloud Service Customer (CSC) is responsible for the maintenance and versioning of the network and storage as well as the virtualization software in an Infrastructure as a Service (IaaS) solution. The Cloud Service Provider (CSP) is responsible for the physical security of the Data Center (DC)” is wrong because the CSC is not responsible for the virtualization software. If the network and storage here are physical, they are not responsible. If they are virtual, the CSC is responsible.

The answer “Updates and patches are scheduled with the customer in the Software as a Service (SaaS) and Platform as a Service (PaaS) model. The Cloud Service Provider (CSP) is responsible for the virtualization software and the underlying infrastructure” is wrong because the provider does not schedule software updates with the CSC in a SaaS. In PaaS, the CSC is responsible for the software updates, and the provider may or may not be responsible for the OS updates.
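
The division of responsibility described above can be sketched as a simple lookup table. This is a study aid, not an official matrix; real CSP responsibility matrices are more granular and vary by provider:

```python
# Illustrative shared-responsibility lookup (simplified study aid).
# Layer names and assignments follow the explanation above; real
# CSP matrices are more granular and vary by provider.
RESPONSIBILITY = {
    "IaaS": {
        "data": "CSC", "software": "CSC", "middleware": "CSC",
        "os": "CSC", "virtualization": "CSP",
        "infrastructure": "CSP", "physical": "CSP",
    },
    "PaaS": {
        "data": "CSC", "software": "CSC", "middleware": "CSC",
        "os": "CSP", "virtualization": "CSP",
        "infrastructure": "CSP", "physical": "CSP",
    },
    "SaaS": {
        "data": "CSC", "software": "CSP", "middleware": "CSP",
        "os": "CSP", "virtualization": "CSP",
        "infrastructure": "CSP", "physical": "CSP",
    },
}

def responsible_party(model: str, layer: str) -> str:
    """Return which party maintains and versions the given layer."""
    return RESPONSIBILITY[model][layer]
```

For example, `responsible_party("PaaS", "software")` returns `"CSC"`, matching the correct answer: the customer maintains and versions the apps they acquire and develop on a PaaS.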

70
Q

Which of the following risk treatment strategies carries opportunity costs due to missed opportunities?

A. Acceptance
B. Mitigation
C. Avoidance
D. Transference

A

C. Avoidance

Explanation:
Risk treatment refers to the ways that an organization manages potential risks. There are a few different risk treatment strategies, including:

Avoidance: The organization chooses not to engage in risky activity. This creates potential opportunity costs for the organization.
Mitigation: The organization places controls in place that reduce or eliminate the likelihood or impact of the risk. Any risk that is left over after the security controls are in place is called residual risk.
Transference: The organization transfers the risk to a third party. Insurance is a prime example of risk transference.
Acceptance: The organization takes no action to manage the risk. Risk acceptance depends on the organization’s risk appetite or the amount of risk that it is willing to accept.
71
Q

Alika is working for a multinational bank as one of their cloud operators. He is managing some virtual servers within their Infrastructure as a Service (IaaS) environment. What protocol is he likely using for this access?

A. Dynamic Host Configuration Protocol (DHCP)
B. Advanced Encryption Standard (AES)
C. Internet Protocol Security (IPSec)
D. Remote Desktop Protocol (RDP)

A

D. Remote Desktop Protocol (RDP)

Explanation:
RDP is a Microsoft-created protocol that can be used for remote access when managing machines such as virtual servers. There are alternatives, such as Secure Shell (SSH), but of the options listed, RDP is the best answer.

IPSec is commonly used for site-to-site connections. For example, to connect a corporate worksite to the cloud local area network within the IaaS.

DHCP is used by connected devices to request an Internet Protocol (IP) address.

AES could be used to encrypt the RDP session, but it is RDP that allows for the remote access.

72
Q

Daniel is working at a relatively new software company that has succeeded in building an application that served a critical need in his market vertical. In determining how their customers are going to access this application, which is being offered as a Software as a Service (SaaS) product, they have determined that the customer needs to verify their users and then communicate the levels of privileges each should have within the individual accounts.

What solution would work the best here?

A. Open Identification (OpenID) and Open Authorization (OAuth) together
B. Web Services Federation (WS-Federation) combined with Security Assertion Markup Language (SAML)
C. Open Identification (OpenID) alone will handle what the customer needs
D. Security Assertion Markup Language (SAML) with Open Identification (OpenID)

A

A. Open Identification (OpenID) and Open Authorization (OAuth) together

Explanation:
OpenID can be used to identify and authenticate each user. Then OAuth can be used to specify the level of privileges each user has. The full procedure is Identification, Authentication, Authorization, Accountability (IAAA). OpenID handles identification and authentication; OAuth then handles authorization.

SAML and WS-Federation are two more protocols that also perform identification and authentication. So, combining SAML with OpenID or with WS-Federation does not provide a complete solution, and OpenID by itself does not either.

To accomplish the needs presented in the scenario, both authentication and authorization are needed.
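
The division of labor between the two protocols can be sketched as follows. The token fields and scope names are hypothetical stand-ins; real deployments use OpenID Connect ID tokens and OAuth 2.0 access tokens:

```python
# Minimal sketch of the authentication/authorization split described
# above. Token fields and scope names are hypothetical; real systems
# use OpenID Connect ID tokens and OAuth 2.0 access tokens.

def authenticate(id_token: dict) -> str:
    """OpenID side: establish WHO the user is (identification and
    authentication). Returns the verified subject identifier."""
    if not id_token.get("verified"):
        raise PermissionError("authentication failed")
    return id_token["sub"]

def authorize(subject: str, granted_scopes: dict, action: str) -> bool:
    """OAuth side: establish WHAT the authenticated user may do,
    based on the scopes granted to them."""
    return action in granted_scopes.get(subject, set())

# A SaaS request needs both steps: first authenticate, then authorize.
scopes = {"alice": {"reports:read"}}
user = authenticate({"sub": "alice", "verified": True})
allowed = authorize(user, scopes, "reports:read")
```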

73
Q

Which of the following is a privacy framework with ten principles that can be included as part of a SOC audit?

A. PCI DSS
B. GAPP
C. PIA
D. ISO 27018

A

B. GAPP

Explanation:
The Generally Accepted Privacy Principles (GAPP) is a framework for privacy that can optionally be included in a SOC2 audit. The ten main privacy principles include:

Management: The organization has defined and documented privacy policies and procedures and assigns accountability to them.
Notice: The organization notifies data subjects of its privacy policies and procedures and how their data is collected, used, and stored.
Choice and Consent: The organization obtains explicit or implicit consent for data collection, processing, and storage after informing the customer of available options.
Collection: Data collection is restricted to what the user has consented to.
Use, Retention, and Disposal: The organization only uses data for approved purposes and securely disposes of it after it is no longer needed for that purpose.
Access: Users can review and update their personal data on request.
Disclosure to Third Parties: Third-party data sharing is only done with explicit or implicit consent and only for purposes that the user has consented to.
Security for Privacy: The organization protects private data against physical and logical unauthorized access.
Quality: The organization ensures that data is accurate, complete, and relevant before using it for an approved purpose.
Monitoring and Enforcement: The organization has processes in place to monitor and enforce its privacy policies.
74
Q

Demi is a cloud architect working for a manufacturing company. They are preparing to move into the public cloud. They will need full access to and control over the operating systems, storage, and applications. They are not interested in managing physical data centers anymore since it is not their core business. They are willing to relinquish control over the physical infrastructure as a trade-off.

What cloud category would suit their needs the best?

A. Communication as a Service (CaaS)
B. Platform as a Service (PaaS)
C. Software as a Service (SaaS)
D. Infrastructure as a Service (IaaS)

A

D. Infrastructure as a Service (IaaS)

Explanation:
Infrastructure as a Service (IaaS) is often also referred to as “data center as a service.” This is because the cloud provider is the one to provide and maintain all of the infrastructure devices. However, the cloud customer is responsible for the management of all virtual devices created, including routers, switches, servers, firewalls, etc., from the storage to the operating systems and applications installed.

In PaaS, the customer is effectively renting a virtual server at the most. The customer does not control the virtual routers, switches, or security devices in this category. The customer can build software, deploy software, manage their software, but not the operating systems.

In SaaS, the customer can only utilize the software. They are responsible for controlling access and protecting their data.

In CaaS, the customer will have access at either a PaaS or SaaS level to software that enables real-time communication (ISO/IEC 17788).

75
Q

A large corporation has made the decision to use a variety of Software as a Service (SaaS) options as part of their software infrastructure for their employees. They want to make sure that the identities are controlled and only the employees will be able to access the systems and the data. They also want to ensure that the users have the correct permission on each of the SaaS offerings.

What type of tool would they need for this?

A. Single Sign-On (SSO) using Security Assertion Markup Language (SAML)
B. Federated Identity Management with Retinal Scan/Multi-Factor Authentication (MFA)
C. Federated identity using Transport Layer Security (TLS)
D. Cloud Access Security Broker using Web Services Federation (WS-Federation)

A

A. Single Sign-On (SSO) using Security Assertion Markup Language (SAML)

Explanation:
Through the usage of Single Sign-On (SSO), an organization’s users can authenticate once and share their identity attributes across numerous cloud services. This enables users to submit their credentials only once rather than each time an application is accessed. Configuration of single sign-on occurs at two levels: the identity provider and the application. Applications are configured to have a high level of trust in identity providers. A successful authentication results in the issuance of a digital security token signed with the identity provider’s private key. This means that the application does not receive the user’s actual credentials. Instead, apps will use the public key of the identity provider to authenticate the digitally signed token. This is commonly done through the use of SAML.
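
The issue-and-verify token flow described above can be sketched as follows. Real SSO tokens (such as SAML assertions) are signed with the IdP's private key and verified with its public key; Python's standard library has no asymmetric crypto, so a shared HMAC key stands in for the signature in this illustration:

```python
import base64
import hashlib
import hmac
import json

# Sketch of the SSO token flow described above. Real SAML assertions
# use asymmetric XML signatures (private key signs, public key
# verifies); an HMAC shared key stands in for the signature here.

KEY = b"idp-signing-key"  # hypothetical key material

def issue_token(subject: str) -> str:
    """IdP side: after successful authentication, issue a signed
    token. The application never sees the user's actual credentials."""
    body = base64.urlsafe_b64encode(json.dumps({"sub": subject}).encode())
    sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str) -> str:
    """Application side: trust the token only if the signature is valid."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("token signature invalid")
    return json.loads(base64.urlsafe_b64decode(body))["sub"]
```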

Federated identity using Transport Layer Security (TLS) is close to a good answer but does not fit. Federated identity means the user is authenticated by their own organization, and it is good to encrypt that transmission with TLS. However, the focus of the question is identification, authentication, and authorization, not encryption.

Cloud Access Security Broker using Web Services Federation (WS-Federation) is not a good answer, mainly because of the Cloud Access Security Broker (CASB). A CASB does not manage user permissions; it provides visibility into what users are accessing and can block access or perform other functions, such as Data Loss Prevention (DLP). It does not answer the question of how to control user access and permission levels.

Federated Identity Management with Retinal Scan/Multi-Factor Authentication (MFA) is not correct because of the retinal scan. Recommending MFA is a good idea, but to narrow the focus to retinal scan without anything in the question driving us to that need is oddly specific.

76
Q

A cloud administrator is going through an old file server and moving data to a repository where it can be preserved for the next couple of years in case it’s needed again. Which step of the cloud secure data lifecycle is this?

A. Archive
B. Use
C. Store
D. Destroy

A

A. Archive

Explanation:
Step 5 of the cloud secure data lifecycle is archive. At this step, the data is taken from a location of active access and moved to a static repository. Here, the data can be preserved for a long period of time in case it is needed in the future.

The first step is create. This is the creation of a new file or data of some type, including voice and video. When a file is altered and saved, it is effectively saving a new file. So, according to the Cloud Security Alliance, this phase includes modification or alteration of the data.

As soon as the data is created, it should be stored somewhere. Ephemeral storage in a virtual machine is temporary. It needs to be moved to persistent storage somewhere.

The use phase is when data is accessed and utilized by a user of some kind.

The share phase is when it is exchanged or sent to someone else.

The last phase is destroy. Data can be destroyed through overwriting, degaussing a drive, shredding a drive, cryptographic erasure, and similar techniques. No single technique is perfect, so the method chosen should ensure the data is truly unrecoverable, which is the point of this phase.
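
Cryptographic erasure, one of the destroy techniques mentioned above, can be sketched as: encrypt the data under a key, then "destroy" the data by discarding the key. The keystream below is a toy SHA-256 counter construction for illustration only; production systems use vetted ciphers such as AES-GCM:

```python
import hashlib
import secrets

# Toy crypto-shredding sketch. The SHA-256 counter keystream below is
# for illustration only; real systems use vetted ciphers (e.g. AES-GCM).

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in
                 zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are symmetric

key = secrets.token_bytes(32)
stored = encrypt(key, b"customer record")
# Cryptographic erasure: delete the key. The ciphertext still exists
# on disk, but it is now computationally unrecoverable.
key = None
```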

77
Q

A cloud administrator has just implemented a new hypervisor that is completely dependent on the host operating system for all operations. What type of hypervisor has this administrator implemented?

A. Type 1 hypervisor
B. Full-service hypervisor
C. Bare metal hypervisor
D. Type 2 hypervisor

A

D. Type 2 hypervisor

Explanation:
Type 2 hypervisors depend on and run on top of the host operating system, rather than being tied directly into the hardware the way Type 1 hypervisors are.

Bare metal hypervisors are another name used for Type 1 hypervisors. Full-service hypervisors are not an actual type of hypervisor.

78
Q

Alix has been working with the Business Continuity and Disaster Recovery (BC/DR) teams for a couple of years now. Working for a financial institution brings specific challenges with it. There is a need to ensure that the systems continue to operate in a variety of different disaster level scenarios.

When developing a Business Continuity Plan (BCP) or a Disaster Recovery Plan (DRP), which of the following can be done to identify which systems are the most important?

A. Remediation recommendations
B. Vulnerability assessment
C. Business Impact Analysis (BIA)
D. Penetration testing

A

C. Business Impact Analysis (BIA)

Explanation:
Business Impact Analysis (BIA) is a process that organizations use to identify and assess the potential impact of disruptions or incidents on their business operations. BIA is a crucial component of business continuity planning and helps organizations prioritize their recovery efforts and allocate resources effectively. The first step of a BIA is to identify the critical business functions.

Vulnerability assessment is a systematic process of identifying and evaluating vulnerabilities within a system, network, or application. It involves the use of various tools, techniques, and methodologies to discover and analyze potential weaknesses that could be exploited by attackers.

Penetration testing, also known as ethical hacking or pen testing, is a proactive security assessment technique that involves simulating real-world attacks on a system, network, or application. The objective of penetration testing is to identify vulnerabilities and weaknesses in the target environment that could be exploited by malicious actors.

Remediation recommendations is one of the key steps in a vulnerability assessment. Along with identifying vulnerabilities, a vulnerability assessment should provide recommendations for remediation. This may involve suggesting patches, configuration changes, security best practices, or other mitigation measures to reduce the risk associated with each vulnerability.

79
Q

Which of the following regulations protects the data of EU citizens and restricts it from being moved outside of certain jurisdictional areas?

A. PCI DSS
B. SOX
C. HIPAA
D. GDPR

A

D. GDPR

Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:

General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. The US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield was a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US; it was invalidated by the EU Court of Justice's Schrems II ruling in 2020. The main reason that the US is not GDPR compliant is that federal agencies have broad access to non-citizens' data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers’ personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes Oxley (SOX): SOX is a US regulation that applies to publicly-traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
80
Q

Yun is working with the application developers as they move through development and into operations with their new application. They are looking to add something to the application that can allow the application to protect itself.

Which of the following is a security mechanism that allows an application to protect itself by responding and reacting to ongoing events and threats?

A. Dynamic Application Security Testing (DAST)
B. Vulnerability scanning
C. Runtime Application Self-Protection (RASP)
D. Static Application Security Testing (SAST)

A

C. Runtime Application Self-Protection (RASP)

Explanation:
Runtime Application Self-Protection (RASP) is a security mechanism that runs on the server and starts when the application starts. RASP allows an application to protect itself by responding and reacting to ongoing events and threats in real time. RASP can monitor the application, continuously looking at its own behavior. This allows the application to detect malicious input or behavior and respond accordingly.

Dynamic Application Security Testing (DAST) is a type of security test that looks at the application in a dynamic or running state. This means that the tester can only use the application. They do not have the source code for the application. This can be used to test if the application behaves as needed or if it can be used maliciously by a bad actor.

Static Application Security Testing (SAST) is a type of test where the application is static or still. That means the application is not in a running state, so what the test has knowledge of and access to is the source code.

Vulnerability scanning is a test that is run on systems to ensure that the systems are properly hardened, and there are not any known vulnerabilities in the system.

81
Q

Rafferty is a cloud information security manager who is working with the Incident Response Team (IRT). They have just detected a possible compromise of one of their systems. An Indicator of Compromise (IoC) has been reported by the Security Information and Event Manager (SIEM). What the SIEM has seen indicates that a user has clicked on a Uniform Resource Locator (URL) that contains malicious script.

What type of attack is this?

A. Identification and authentication failures
B. Cross-site scripting
C. Security misconfiguration
D. Insecure design

A

B. Cross-site scripting

Explanation:
This is a Cross-Site Scripting attack (XSS). XSS was merged into the injection category in the OWASP 2021 list of threats. Cross-site scripting is a type of injection attack in which a malicious actor can send data to a user’s browser without going through proper validation. There are three different types of XSS attacks. This is specifically a reflected XSS. The three types are:

Reflected XSS: The injected script is embedded in a URL or input field and then reflected back in the response from the server. The attacker typically tricks the victim into clicking a malicious link containing the injected script.
DOM-based XSS: This type of XSS occurs when the vulnerability exists in the client-side code (typically JavaScript) that manipulates the Document Object Model (DOM) of the webpage. The attack targets the client-side code directly, modifying the DOM to execute malicious actions.
Stored XSS: The injected malicious script is permanently stored on the target server and served to multiple users when they access the affected page or view the compromised content.
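
The reflected pattern above can be illustrated in a few lines: attacker-supplied input echoed into a page unescaped versus escaped. The page template is hypothetical; output encoding via `html.escape` is one standard mitigation:

```python
import html

# Minimal illustration of reflected XSS: user input echoed back into
# the response. Escaping on output neutralizes injected script. The
# page template here is a hypothetical stand-in.

def render_search_page(query: str, escape: bool = True) -> str:
    """Reflect the user's query back into the page, as a search
    results page might. With escape=True the script is rendered inert."""
    shown = html.escape(query) if escape else query
    return f"<p>Results for: {shown}</p>"

malicious = "<script>steal(document.cookie)</script>"
vulnerable = render_search_page(malicious, escape=False)  # script reflected verbatim
safe = render_search_page(malicious)                      # script escaped, not executed
```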

Insecure design is from the beginning of the software development lifecycle. It involves performing threat modeling and other actions, not an actual attack as described in the question.

Security misconfiguration is when software is deployed with insecure or default configurations. It describes how developers or operations configured the software, not an attack like the one in the question.

Identification and authentication failures include things like not adhering to the least privilege principle or leaving default accounts and passwords on systems.

82
Q

Many cloud customers have legal requirements to protect data that they place on the cloud provider’s servers. There are some legal responsibilities for the cloud provider to protect that data. Therefore, it is normal for the cloud provider to have their data centers audited using which of the following?

A. Internal auditor
B. Cloud operators
C. External auditor
D. Cloud architect

A

C. External auditor

Explanation:
An external auditor is not employed by the company being audited. An external auditor will often use industry standards such as ISO 27001 and SOC2 to perform an audit of a cloud provider. Due to the legal requirements, this work needs to be done by an independent party. Therefore, internal auditors are not the correct answer here.

Cloud architects design cloud structures, and cloud operators do the daily maintenance and monitoring of the cloud, according to the Cloud Security Alliance (CSA).

83
Q

Which framework, developed by the International Data Center Authority (IDCA), covers all aspects of data center design, including cabling, location, connectivity, and security?

A. OCTAVE
B. Risk Management Framework
C. HITRUST
D. Infinity Paradigm

A

D. Infinity Paradigm

Explanation:
The International Data Center Authority (IDCA) is responsible for developing the Infinity Paradigm, which is a framework intended to be used for operations and data center design. The Infinity Paradigm covers aspects of data center design including location, cabling, security, connectivity, and much more.

Risk Management Framework (RMF) is defined by NIST as “a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The risk-based approach to control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, Executive Orders, policies, standards, or regulations.”

The Health Information Trust Alliance (HITRUST) is a non-profit organization. They are best known for developing the HITRUST Common Security Framework (CSF), in collaboration with healthcare, technology, and information security organizations around the world. It aligns standards from ISO, NIST, PCI, and regulations like HIPAA.

The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a software threat modeling technique by Carnegie Mellon University that was developed for the US Department of Defense (DoD).

84
Q

Which of the following is the LEAST vulnerable to attack when a resource is not in use?

A. Hypervisors
B. Serverless
C. Ephemeral Computing
D. Containers

A

C. Ephemeral Computing

Explanation:
Some important security considerations related to virtualization include:

Hypervisor Security: The primary virtualization security concern is isolation or ensuring that different VMs can’t affect each other or read each other’s data. VM escape attacks occur when a malicious VM exploits a vulnerability in the hypervisor or virtualization platform to accomplish this.
Container Security: Containers are self-contained packages that include an application and all of the dependencies that it needs to run. Containers improve portability but have security concerns around poor access control and container misconfigurations.
Ephemeral Computing: Ephemeral computing is a major benefit of virtualization, where resources can be spun up and destroyed at need. This enables greater agility and reduces the risk that sensitive data or resources will be vulnerable to attack when not in use. However, these systems can be difficult to monitor and secure since they only exist briefly when they are needed, so their security depends on correctly configuring them.
Serverless Technology: Serverless applications are deployed in environments managed by the cloud service provider. Outsourcing server management can make serverless systems more secure, but it also means that organizations can’t deploy traditional security solutions that require an underlying OS to operate.
85
Q

Rogelio is working with the deployment team to deploy 50 new servers as virtual machines (VMs). The servers that he will be deploying will be a combination of different Operating Systems (OS) and Databases (DB). When deploying these images, it is critical to make sure…

A. That the golden images are used and then patched as soon as it is deployed
B. That the VM images are pulled from a trusted external source
C. That the VMs are updated and patched as soon as they are deployed
D. That the golden images are always used for each deployment

A

D. That the golden images are always used for each deployment

Explanation:
The golden image is the current, up-to-date image that is ready for deployment into production. If an image needs patching, it should be patched offline, and the updated version then becomes the new golden image. Patching servers after deployment is not the best approach; patching the image offline is the advised path.

The golden image should be built within a business, not pulled from an external source, although there are exceptions. It is critical to know the source of the image (IT or security) and to make sure that it is being maintained and patched on a regular basis.
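
One control implied by the advice above can be sketched as follows: record the image's hash when it is approved as golden, and verify every deployment against it so only the current golden image is ever deployed. The image names and byte strings here are hypothetical stand-ins for real image artifacts:

```python
import hashlib

# Hedged sketch of a golden-image integrity check. Image names and
# contents below are hypothetical stand-ins for real VM image files.

GOLDEN_HASHES = {}  # image name -> approved SHA-256 digest

def approve_golden_image(name: str, image_bytes: bytes) -> None:
    """Record the hash of the approved golden image."""
    GOLDEN_HASHES[name] = hashlib.sha256(image_bytes).hexdigest()

def verify_before_deploy(name: str, image_bytes: bytes) -> bool:
    """Refuse deployment unless the image matches the approved golden hash."""
    return GOLDEN_HASHES.get(name) == hashlib.sha256(image_bytes).hexdigest()
```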

86
Q

Which of the following solutions is designed to improve security and decrease account takeover attack risk by making it harder to use stolen/compromised passwords?

A. Multi-Factor Authentication
B. Single Sign-On
C. Federated Identity
D. Secrets Management

A

A. Multi-Factor Authentication

Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
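
The one-time passwords mentioned under MFA above follow the HOTP/TOTP construction (RFC 4226/RFC 6238). A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time

# Sketch of how an authenticator app generates one-time passwords,
# following the HOTP/TOTP construction (RFC 4226 / RFC 6238).

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password with dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """Time-based variant: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step)
```

Because the code changes every time window, a stolen password alone is not enough to take over the account; the attacker would also need the current OTP.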
87
Q

An application uses application-specific access control, and users must authenticate with their own credentials to gain their allowed level of access to the application. A bad actor accessed corporate data after having stolen credentials. According to the STRIDE threat model, what type of threat is this?

A. Insufficient due diligence
B. Spoofing identity
C. Broken authentication
D. Tampering with data

A

B. Spoofing identity

Explanation:
The STRIDE threat model has six threat categories: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privileges (STRIDE). A bad actor logging in as a user is known as identity spoofing. Ensuring that credentials are protected in transmission and when stored by any system is critical. Using Multi-Factor Authentication (MFA) is also essential to prevent this. If you have any internet-accessible account (your bank, Amazon, etc.), you should enable MFA. The same advice is true in the cloud.

Broken authentication is the entry on the OWASP top 10 list that includes identity spoofing (and more). The question is about STRIDE.

Insufficient due diligence is a cloud problem (and elsewhere) when corporations do not think carefully before putting their systems and data into the cloud and ensuring all the right controls are in place.

Tampering with data could occur once the bad actor is logged in as a user, but the question does not go that far. It is not necessary for someone to log in to tamper with data.

88
Q

Abigail is designing the infrastructure of Identity and Access Management (IAM) for their future Platform as a Service (PaaS) environment. As she is setting up identities, she knows that which of the following is true of roles?

A. Roles are assigned to specific users permanently and occasionally assumed
B. Roles are the same as user identities
C. Roles are temporarily assumed by another identity
D. Roles are permanently assumed by a user or group

A

C. Roles are temporarily assumed by another identity

Explanation:
Roles in the cloud are not the same as roles in traditional data centers, although they are similar in that they grant a user or group a certain amount of access. The group is closer to what we traditionally called roles in Role-Based Access Control (RBAC). In the cloud, roles are assumed temporarily. You can assume roles in a variety of ways, but, again, they are temporary.

The user is not permanently assigned a specific role. A user will log in as their user identity, then assume a role. This is temporary (e.g., for 15 hours or only the life of that session).

Note the distinction between assigning and assuming roles — you might have access to certain permissions, but you only use the role and those permissions occasionally.

An additional resource for your review/study is on the AWS website. Look for the user guide regarding roles.
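The idea of temporarily assuming a role, rather than being permanently assigned one, can be sketched in a few lines of Python. This is a toy model for study purposes, not a real cloud IAM API; the names `Role`, `Session`, and `assume_role` are illustrative only.

```python
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class Role:
    """A named bundle of permissions; identities assume it temporarily."""
    name: str
    permissions: frozenset


@dataclass
class Session:
    """Temporary credentials that are only valid until they expire."""
    role: Role
    expires_at: float

    def allows(self, action, now=None):
        now = time.time() if now is None else now
        return now < self.expires_at and action in self.role.permissions


def assume_role(role, duration_seconds=3600):
    """Return a short-lived session; the role is never permanently assigned."""
    return Session(role=role, expires_at=time.time() + duration_seconds)


db_admin = Role("db-admin", frozenset({"read", "write"}))
session = assume_role(db_admin, duration_seconds=3600)  # valid for one hour
```

In a real platform such as AWS, the equivalent call (STS AssumeRole) returns temporary credentials that expire after a configured session duration, after which the identity must assume the role again.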

89
Q

Amelia is responsible for the e-commerce server within her company. She needs to move the server somewhere that can support the server's workload through the slower times of the year while also ensuring there is enough capacity to handle the busier times of the year. She has been told the cloud is the place to be. What cloud feature would make this possible?

A. Dynamic optimization
B. Resource pooling
C. Remediation
D. Distributed resource scheduling

A

B. Resource pooling

Explanation:
Resource pooling occurs when a cloud service provider groups its resources for shared use among multiple cloud customers. This allows the Cloud Service Provider (CSP) to scale resources up and down on a per-customer basis and ensures that resources are available whenever a cloud customer needs them.

Dynamic Optimization (DO) allows for dynamic placement and provisioning of cloud services and dynamic service and resource optimization.

Distributed Resource Scheduling (DRS) is very similar to DO; however, it is more specific to resource scheduling and can be used with clusters and load balancers.

Remediation is the repair and fixing of something.

90
Q

Mandatory MFA and increased monitoring might be part of an organization’s security strategy for managing which type of access in the cloud?

A. User Access
B. Service Access
C. Physical Access
D. Privilege Access

A

D. Privilege Access

Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:

User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.

Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.

91
Q

Zepher is working with the cloud development team, and they are trying to figure out which of the software development models would work best for their new project. The customer has defined a few requirements, such as that this will be on a Platform as a Service (PaaS) architecture, and they know that users will use this software to interrogate databases to determine how their products should move from store to store as the seasons change.

Which model would you recommend?

A. Agile
B. V model
C. Waterfall
D. Big bang

A

A. Agile

Explanation:
Agile is the best of the models to select. Agile is an iterative development methodology that allows the developers to work with the customer collaboratively as the software evolves.

The waterfall method requires the requirements to be completely defined before the software is developed, which does not sound possible given the scenario in this question.

The V model is a variation of the waterfall model, so it is problematic for the same reason. It differs from traditional waterfall in that testing is paired with each of the development phases.

Big bang works great for individual developers who just want to get to the coding phase and evolve as their requirements do. Since this sounds like a more formal process in the question, this is not the best answer.

92
Q

Nyofu is working with the application developers on the migration of their applications from their data center to a Platform as a Service (PaaS) deployment. They are concerned with their ability to migrate their applications from their traditional data center over to the cloud. The term that they have heard from the Cloud Security Alliance is “cloud ready.”

What concern do they have?

A. It is unlikely that controls or configurations work without reengineering or changing them to work in the cloud
B. Transitioning between a traditional data center model and a cloud environment is typically a seamless, simple, and transparent process
C. It is unlikely that an application from a traditional data center model can simply be picked up and dropped into a cloud environment
D. Even legacy systems from traditional data centers are typically programmed to work within a cloud environment

A

C. It is unlikely that an application from a traditional data center model can simply be picked up and dropped into a cloud environment

Explanation:
Many applications have trouble moving into the cloud without some modifications. That would be a concern for them.

Controls and configurations might need changing as well, but the topic of the question is the application. Controls could be firewalls and other devices.

The other two answers are not concerns. If transitioning were seamless and simple, or if legacy systems were programmed to work in a cloud environment, those would be good things, not concerns.

93
Q

Which of the following types of testing would NOT be performed during the initial release of a piece of software?

A. Regression Testing
B. Integration Testing
C. Unit Testing
D. Usability Testing

A

A. Regression Testing

Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:

Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.

Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.

94
Q

Joel is working with the DevOps teams on a new piece of software. This software will analyze medical information looking for patterns to facilitate the development of a new vaccine. The users of this project are researchers from many different businesses around the world. Due to all the privacy laws around the world, they are sure that they need to protect the patients and their personal information. They are debating the use of de-identification versus anonymization.

When data is anonymized, what information is removed?

A. Direct identifiers only
B. Direct identifiers and all information about their gender
C. Indirect identifiers only
D. Direct and indirect identifiers
E. Indirect identifiers and their phone numbers

A

D. Direct and indirect identifiers

Explanation:
When data is anonymized, both the direct and indirect identifiers are removed. Direct identifiers include things like name, account number, and phone number: any single piece of information that would allow the identification of one person on its own. Indirect identifiers are defined differently in different places/books/laws; a good general description is information that, when combined with other data, would eventually allow a single person to be identified. This could be race, gender, political views, religious affiliation, and so on.
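The contrast with de-identification can be sketched in a few lines of Python. The field names and identifier lists below are made up for illustration; no single standard defines exactly these sets.

```python
# Illustrative identifier lists; real classifications vary by law and context.
DIRECT_IDENTIFIERS = {"name", "account_number", "phone"}
INDIRECT_IDENTIFIERS = {"race", "gender", "political_views", "religion"}


def de_identify(record):
    """De-identification: removes only the direct identifiers."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}


def anonymize(record):
    """Anonymization: removes direct AND indirect identifiers."""
    drop = DIRECT_IDENTIFIERS | INDIRECT_IDENTIFIERS
    return {k: v for k, v in record.items() if k not in drop}


patient = {"name": "A. Doe", "phone": "555-0100", "gender": "F", "diagnosis": "flu"}
```

De-identification leaves the indirect identifiers (here, gender) in place, which is why de-identified data can sometimes be re-identified by combining it with other data sets.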

95
Q

Which of the following NIST-defined methods of media sanitization might involve the operating system’s Recycle Bin?

A. Destroy
B. Purge
C. Clear
D. Wipe

A

C. Clear

Explanation:
When data is no longer needed, it should be disposed of using an approved and appropriate mechanism. NIST SP 800-88, Guidelines for Media Sanitization, defines three levels of data destruction:

Clear: Clearing is the least secure method of data destruction and involves using mechanisms like deleting files from the system and the Recycle Bin. The files still exist on the disk but are no longer visible to the user. This form of data destruction is inappropriate for sensitive information.
Purge: Purging destroys data by overwriting it with random or dummy data or performing cryptographic erasure (cryptoshredding). Often, purging is the only available option for sensitive data stored in the cloud, since an organization doesn’t have the ability to physically destroy the disks where their data is stored. However, in some cases, data can be recovered from media where sensitive data has just been overwritten with other data.
Destroy: Destroying damages the physical media in a way that makes it unusable and the data on it unreadable. The media could be pulverized, incinerated, shredded, dipped in acid, or undergo similar methods.

Wipe is not a NIST-defined method of media sanitization.
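Cryptographic erasure (cryptoshredding), mentioned under Purge, can be illustrated with a short sketch: data at rest is stored only in encrypted form, so destroying the key renders the ciphertext unrecoverable. The cipher below is a toy SHA-256 keystream used purely for illustration; a real implementation would use a vetted cipher such as AES-GCM.

```python
import hashlib
import os


def keystream(key, length):
    """Toy SHA-256 counter-mode keystream. Illustration ONLY; use a
    vetted cipher (e.g., AES-GCM) for real cryptographic erasure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor_cipher(key, data):
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


key = os.urandom(32)                           # kept separately from the data
stored = xor_cipher(key, b"sensitive record")  # only ciphertext is at rest
# Cryptoshredding: destroying the key makes the stored ciphertext
# permanently unreadable, even though the media itself was never touched.
key = None
```

This is why cryptoshredding works in the cloud: the customer never needs physical access to the CSP's disks, only control over the encryption keys.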

96
Q

Encryption of data at rest and implementing access controls are an important part of which stage of the cloud data lifecycle?

A. Destroy
B. Store
C. Use
D. Create

A

B. Store

Explanation:
The cloud data lifecycle has six phases, including:

Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data.
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure/crypto shredding.

97
Q

The Chief Executive Officer (CEO) has tasked their Chief Information Security Officer (CISO) with updating the policy that informs a user regarding what they are allowed to use cloud storage for regarding both corporate and personal uses. This acceptable use policy is an example of what type of policy?

A. Functional
B. Procedure
C. Organization
D. Baselines

A

A. Functional

Explanation:
Acceptable use policies are an example of functional policies. Functional policies set guiding principles for individual business functions and activities. NIST suggests having fewer than 20 of these, and ISACA suggests 24 or fewer. Examples include data security, Identity and Access Management (IAM), acceptable use, and Business Continuity Management (BCM).

The organizational-level policy should show management's commitment to security, while the functional policies then address specific topics.

Standards, baselines, procedures, and guidelines are then used to detail how the corporation will fulfill the functional policies. Baselines detail technical configurations, and the procedures say how to do something step by step.

98
Q

Amelia is responsible for the e-commerce server within her company. She needs to move the server somewhere that can support the server's workload through the slower times of the year while also ensuring there is enough capacity to handle the busier times of the year. She has been told the cloud is the place to be. What cloud feature would make this possible?

A. Distributed resource scheduling
B. Dynamic optimization
C. Remediation
D. Resource pooling

A

D. Resource pooling

Explanation:
Resource pooling occurs when a cloud service provider groups its resources for shared use among multiple cloud customers. This allows the Cloud Service Provider (CSP) to scale resources up and down on a per-customer basis and ensures that resources are available whenever a cloud customer needs them.

Dynamic Optimization (DO) allows for dynamic placement and provisioning of cloud services and dynamic service and resource optimization.

Distributed Resource Scheduling (DRS) is very similar to DO; however, it is more specific to resource scheduling and can be used with clusters and load balancers.

Remediation is the repair and fixing of something.

99
Q

Which of the following is NOT an example of a valid MFA scheme?

A. Facial Recognition and OTP from Authenticator App
B. Password and OTP from Authenticator App
C. Facial Recognition and PIN
D. Password and PIN

A

D. Password and PIN

Explanation:
Multi-factor authentication requires a user to provide multiple authentication factors to gain access to their account. These factors must come from two or more of the following categories:

Something You Know: Passwords, security questions, and PINs are examples of knowledge-based factors.
Something You Have: These factors include hardware tokens, smart cards, or smartphones that can receive or generate a one-time password (OTP).
Something You Are: Biometric factors include fingerprints, facial recognition, and similar technologies.

Password and PIN are both knowledge-based factors, so this is not a valid MFA scheme.
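The rule that valid MFA needs factors from two or more distinct categories can be captured in a few lines. The factor-to-category mapping below is illustrative, not an exhaustive taxonomy.

```python
# Illustrative mapping of authentication factors to their categories.
FACTOR_CATEGORY = {
    "password": "something_you_know",
    "pin": "something_you_know",
    "security_question": "something_you_know",
    "otp_app": "something_you_have",
    "hardware_token": "something_you_have",
    "smart_card": "something_you_have",
    "fingerprint": "something_you_are",
    "facial_recognition": "something_you_are",
}


def is_valid_mfa(factors):
    """Valid MFA requires factors from two or more DISTINCT categories,
    not merely two factors."""
    return len({FACTOR_CATEGORY[f] for f in factors}) >= 2
```

Running this against the answer options shows why password plus PIN fails: both map to the same "something you know" category, so the set of categories has size one.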

100
Q

Alphonse is an information security manager working for a training corporation located in France. Alphonse is responsible for the Security Operations Center (SOC). He has just been advised by one of the technicians that they received an alert that is an Indicator of Compromise (IoC). At first glance, it appears that a bad actor has been able to gain access to their database of student information. Under the European Union's (EU) General Data Protection Regulation (GDPR) there is a notification requirement.

How long does a data controller have to notify the applicable government agency after a data breach or leak of personal or private information and when does the clock start?

A. 72 hours from the moment the manager was informed of the IoC
B. 48 hours from the moment the manager was informed of the IoC
C. 48 hours from the moment the technician saw the IoC
D. 72 hours from the moment the technician saw the IoC

A

D. 72 hours from the moment the technician saw the IoC

Explanation:
Under GDPR, data controllers must notify the applicable government agencies within 72 hours of becoming aware of a data breach or leak of personal or private information. That means 72 hours from the moment the technician realized that something had happened. However, there are some exemptions for law enforcement and national security agencies. GDPR is mostly focused on scenarios where the data is viewable by a malicious party rather than instances where the data is erased or encrypted.
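A quick sanity check of the 72-hour clock in Python. The timestamps are made up; the point is that the clock starts at the moment of awareness, i.e., when the technician saw the IoC, not when management was later informed.

```python
from datetime import datetime, timedelta, timezone


def notification_deadline(awareness_time):
    """GDPR Article 33: notify the supervisory authority within 72 hours
    of becoming aware of the breach."""
    return awareness_time + timedelta(hours=72)


# The technician sees the IoC; awareness starts the clock.
seen = datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(seen)  # 2024-05-04 09:00 UTC
```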