AWS Certified Cloud Practitioner Practice Test 2 (Bosos) Flashcards

1
Q
What is the best type of instance purchasing option to choose if you will run an EC2 instance for 3 months to perform a job that is uninterruptible?
A.Spot
B.Dedicated
C.On-Demand
D.Reserved
A

C.On-Demand

Explanation:
With On-Demand instances you only pay for EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.

This type of instance lets you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments.

There is a limit to the number of running On-Demand Instances per AWS account per Region. You can determine whether your On-Demand Instance limits are count-based or vCPU-based. With vCPU-based instance limits, your limits are managed in terms of the number of virtual central processing units (vCPUs) that your running On-Demand Instances are using, regardless of the instance type. You can use the vCPU limits calculator to determine the number of vCPUs that you require for your application needs.

On-Demand Instances are the best purchasing option when you need instances for short periods of time and for uninterruptible workloads, since they require no long-term commitment and cannot be interrupted by AWS, making them the most cost-effective choice for this kind of job.

Reserved instance is incorrect because although it does offer discounts on the hourly cost, you need to commit to at least a one-year term to fully maximize those discounts. Since your workload will run for only 3 months, this option is not suitable.

Spot instance is incorrect because this can be terminated by Amazon EC2 based on the long-term supply of and demand for Spot Instances. Hence, this is not recommended for uninterruptible workloads.

Dedicated Instance is incorrect because this is just a type of Amazon EC2 instance that runs in a VPC on hardware that’s dedicated to a single customer. This option is not relevant to the question so this is incorrect.

2
Q
In the AWS Shared Responsibility Model, whose responsibility is it to patch the host operating system of an Amazon EC2 instance?
A.Customer
B.AWS
C.Neither AWS nor the customer
D.Both AWS and the customer
A

B.AWS

Explanation:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.

The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security OF the Cloud versus Security IN the Cloud.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.

Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.

Inherited Controls: Controls which a customer fully inherits from AWS.

  • Physical and Environmental controls

Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

  • Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
  • Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
  • Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.

Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.

Examples include:

  • Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.

The host operating system, which is managed by AWS, runs the hypervisor that hosts the several guest operating systems that can be managed by different customers. Amazon EC2 uses a technology commonly known as virtualization to run multiple operating systems on a single physical machine. Virtualization ensures that each guest operating system receives its fair share of CPU time, memory, and I/O bandwidth to the local disk and to the network using a host operating system, sometimes known as a hypervisor. The hypervisor also isolates the guest operating systems from each other so that one guest cannot modify or otherwise interfere with another one on the same machine.

Hence, the correct answer is AWS.

Customer is incorrect because their responsibility is to patch the guest operating system of their EC2 instance and not the host operating system.

Both AWS and the customer is incorrect because patching the host operating system of the Amazon EC2 instance is the responsibility of AWS. Take note that if you are using a fully-managed service like Amazon DynamoDB or Redshift, AWS will also be responsible for the underlying guest operating system.

Neither AWS nor the customer is incorrect as this task falls under the responsibilities of AWS.

3
Q

In AWS, which of the following is a design principle that you should implement when designing your cloud architecture?
A.Always use large servers to anticipate increased usage
B.Use Multiple Availability Zones
C.Utilize free or open-source software
D.Tightly couple your components

A

B.Use Multiple Availability Zones

Explanation:
There are various best practices that you can follow which can help you build an application in the cloud. The notable ones are:

  1. Design for failure
  2. Decouple your components
  3. Implement elasticity
  4. Think parallel

The Design for failure principle encourages you to be a pessimist when designing architectures in the cloud: assume things will fail. In other words, always design, implement, and deploy for automated recovery from failure.

In particular, assume that your hardware will fail. Assume that outages will occur. Assume that some disaster will strike your application. Assume that you will be slammed with more than the expected number of requests per second someday. Assume that with time your application software will fail too. By being a pessimist, you end up thinking about recovery strategies during design time, which helps in designing an overall system better.

Designing with an assumption that underlying hardware will fail, will prepare you for the future when it actually fails. This design principle will help you design operations-friendly applications. If you can extend this principle to pro-actively measure and balance load dynamically, you might be able to deal with variance in network and disk performance that exists due to the multi-tenant nature of the cloud.

AWS specific tactics for implementing this best practice are as follows:

  1. Failover gracefully using Elastic IPs: Elastic IP is a static IP that is dynamically re-mappable. You can quickly remap and failover to another set of servers so that your traffic is routed to the new servers. It works great when you want to upgrade from old to new versions or in case of hardware failures.
  2. Utilize multiple Availability Zones: Availability Zones are conceptually like logical datacenters. By deploying your architecture to multiple availability zones, you can ensure high availability. Utilize Amazon RDS Multi-AZ deployment functionality to automatically replicate database updates across multiple Availability Zones.
  3. Maintain an Amazon Machine Image so that you can restore and clone environments very easily in a different Availability Zone; Maintain multiple Database slaves across Availability Zones and setup hot replication.
  4. Utilize Amazon CloudWatch (or various real-time open source monitoring tools) to get more visibility and take appropriate actions in case of hardware failure or performance degradation. Setup an Auto Scaling group to maintain a fixed fleet size so that it replaces unhealthy Amazon EC2 instances by new ones.
  5. Utilize Amazon EBS and set up cron jobs so that incremental snapshots are automatically uploaded to Amazon S3 and data is persisted independent of your instances (see the sketch after this list).
  6. Utilize Amazon RDS and set the retention period for backups, so that it can perform automated backups.
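
As an illustration of tactic 5 above, here is a minimal sketch in Python using boto3; the volume ID is a placeholder, and in practice the call would run on a schedule (for example, from a cron job or a scheduled rule):

```python
import boto3

ec2 = boto3.client("ec2")

# Take an incremental snapshot of an EBS volume. Snapshots are stored
# durably in Amazon S3 behind the scenes, so the data persists
# independently of the instance the volume is attached to.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Scheduled incremental backup",
)
print("Started snapshot:", snapshot["SnapshotId"])
```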

By focusing on concepts and best practices - like designing for failure, decoupling the application components, understanding and implementing elasticity, combining it with parallelization, and integrating security in every aspect of the application architecture - cloud architects can understand the design considerations necessary for building highly scalable cloud applications.

Hence, the correct answer is: Use multiple Availability Zones.

The option that says: Tightly couple your components is incorrect because this is exactly the opposite of the “Decouple your components” cloud design principle.

The option that says: Always use large servers to anticipate increased usage is incorrect because this action doesn’t follow the concept of implementing elasticity in your cloud architecture. In this case, it is better to use Auto Scaling to automatically increase or decrease the number of your servers based on the application demand.

The option that says: Utilize free or open-source software is incorrect because this is not considered one of the cloud design principles, nor is it a best practice.

4
Q
Which of the following tasks fall under the sole responsibility of AWS based on the shared responsibility model?
A.Physical and environmental controls
B.Patch Management
C.Applying Amazon S3 bucket policies
D.Implementing IAM policies
A

A.Physical and environmental controls

Explanation:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.

The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security OF the Cloud versus Security IN the Cloud.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.

Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.

Inherited Controls: Controls which a customer fully inherits from AWS.

  • Physical and Environmental controls

Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

  • Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
  • Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
  • Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.

Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.

Examples include:

  • Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.

Hence, the correct answer is Physical and environmental controls.

Both Implementing IAM policies and Applying Amazon S3 bucket policies are incorrect because these are the responsibilities of the customer and not AWS.

Patch Management is incorrect because this is actually a shared control between AWS and the customer.

5
Q
__________ lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.
A.Amazon Lightsail
B.Virtual Private Gateway
C.Amazon WorkSpaces
D.Amazon VPC
A

D.Amazon VPC

Explanation:
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.

You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.
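
As a rough sketch of that kind of customization, the following Python (boto3) snippet creates a VPC with one public-facing and one private subnet; the CIDR ranges are arbitrary examples, and route tables and gateways would still need to be configured:

```python
import boto3

ec2 = boto3.client("ec2")

# Provision a logically isolated network with an IP range you define.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# A public-facing subnet (e.g., for web servers) and a private subnet
# (e.g., for databases). Internet access is granted later via route
# tables and an internet gateway, not by the subnet itself.
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
```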

Hence, the correct answer is Amazon VPC.

Amazon Lightsail is incorrect because this service is just a virtual private server (VPS) solution which provides developers with compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud.

Virtual Private Gateway is incorrect because this is primarily used for connecting your on-premises network to your VPC.

Amazon WorkSpaces is incorrect because this is just a Desktop-as-a-Service (DaaS) solution in AWS which allows you to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

6
Q

Which of the following provides software solutions that are either hosted on or integrated with the AWS platform which may include Independent Software Vendors (ISVs), SaaS, PaaS, developer tools, management, and security vendors?
A.AWS Partner Network Consulting Partners
B.AWS Partner Network Technology Partners
C.Concierge Support
D.Technical Account management

A

B.AWS Partner Network Technology Partners

Explanation:

The AWS Partner Network (APN) is focused on helping partners build successful AWS-based businesses to drive superb customer experiences. This is accomplished by developing a global ecosystem of Partners with specialties unique to each customer’s needs.

There are two types of APN Partners:

  1. APN Consulting Partners
  2. APN Technology Partners

APN Consulting Partners are professional services firms that help customers of all sizes design, architect, migrate, or build new applications on AWS. Consulting Partners include System Integrators (SIs), Strategic Consultancies, Resellers, Digital Agencies, Managed Service Providers (MSPs), and Value-Added Resellers (VARs).

APN Technology Partners provide software solutions that are either hosted on, or integrated with, the AWS platform. Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, developer tools, management and security vendors.

Hence, the correct answer in this scenario is APN Technology Partners.

APN Consulting Partners is incorrect because this program only helps customers to design, architect, migrate, or build new applications on AWS. You have to use APN Technology Partners instead.

Concierge Support is incorrect because this is a team composed of AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on running your business.

Technical Account Management is incorrect because this is just a part of AWS Enterprise Support which provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.

7
Q
Which of the following policies grant the necessary permissions required to access your Amazon S3 resources? (Select TWO.)
A.Object policies
B.Routing policies
C.User policies
D.Bucket policies
E.Network access control policies
A

C.User policies
D.Bucket policies

Explanation:
When granting permissions, you decide who is getting them, which Amazon S3 resources they are getting permissions for, and specific actions you want to allow on those resources. Buckets and objects are Amazon S3 resources. By default, only the resource owner can access these resources. The resource owner refers to the AWS account that creates the resource.

Bucket policy and user policy are two of the access policy options available for you to grant permission to your Amazon S3 resources. Both use JSON-based access policy language. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. User policies are policies that allow an IAM User access to one of your buckets.
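
As a hedged illustration, this Python (boto3) sketch attaches a JSON bucket policy that grants an IAM user read access to objects; the bucket name, account ID, and user are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

# A JSON-based access policy granting s3:GetObject on all objects in
# the bucket to one IAM user (illustrative; scope down in real use).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowObjectRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/example-user"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(bucket_policy))
```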

Hence, the correct answers are bucket policy and user policy.

All other options (routing policies, network access control policies, and object policies) are incorrect as these are not the correct features that will grant permissions to your Amazon S3 bucket.

8
Q
Which of the following are the pillars of the AWS Well-Architected Framework? (Select TWO.)
A.Performance Efficiency
B.Agility
C.High Availability
D.Operational Excellence
E.Scalability
A

A.Performance Efficiency
D.Operational Excellence

Explanation:
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. This is based on five pillars namely:

  1. Operational Excellence
  2. Security
  3. Reliability
  4. Performance Efficiency
  5. Cost Optimization

This Framework provides a consistent approach for customers and partners to evaluate architectures, and implement designs that will scale over time.

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using this Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions and is not an audit mechanism. Having well-architected systems greatly increases the likelihood of business success.

AWS Solutions Architects have years of experience architecting solutions across a wide variety of business verticals and use cases. AWS has helped design and review thousands of customers’ architectures on AWS. From this experience, AWS has identified best practices and core strategies for architecting systems in the cloud that you can also implement.

You can also use the AWS Well-Architected Tool; it helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure.

Hence, the correct answers are Operational Excellence and Performance Efficiency.

High Availability, Scalability and Agility are all incorrect because these are not part of the 5 AWS Well-Architected Framework pillars.

9
Q
Which of the following will allow you to create a data warehouse in AWS for your business intelligence needs?
A.Amazon RDS
B.Amazon DynamoDB
C.Amazon Redshift
D.Amazon S3
A

C.Amazon Redshift

Explanation:
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional solutions.

Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to directly run SQL queries against exabytes of unstructured data in Amazon S3. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast, regardless of data set size.
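
One way to submit such a standard SQL query from code (an assumption for illustration, not something the passage prescribes) is the Redshift Data API via Python (boto3); the cluster, database, user, and table names are placeholders:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Run an analytic SQL query against a provisioned Redshift cluster.
response = redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",  # placeholder cluster name
    Database="dev",
    DbUser="awsuser",
    Sql="SELECT region, SUM(amount) FROM sales GROUP BY region;",
)
print("Statement submitted:", response["Id"])
```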

Traditional data warehouses require significant time and resources to administer, especially for large datasets. In addition, the financial cost associated with building, maintaining, and growing self-managed, on-premises data warehouses is very high. As your data grows, you have to constantly trade off what data to load into your data warehouse and what data to archive in storage so you can manage costs, keep ETL complexity low, and deliver good performance. Amazon Redshift not only significantly lowers the cost and operational overhead of a data warehouse, but with Redshift Spectrum, also makes it easy to analyze large amounts of data in its native format without requiring you to load the data.

Hence, the correct answer is Amazon Redshift.

Amazon Relational Database Service (Amazon RDS) is incorrect since this is a relational (SQL) database in the cloud, not a data warehouse.

Amazon DynamoDB is incorrect since this service is a non-relational (noSQL) database in the cloud, not a data warehouse.

Amazon S3 is incorrect since this service is a durable cloud storage for objects and files, and not a data warehouse.

10
Q

A company plans to migrate their on-premises MySQL database to Amazon RDS. Which AWS service should they use for this task?
A.AWS Server Migration Service
B.AWS Direct Connect
C.AWS Schema Conversion Tool (AWS SCT)
D.AWS Database Migration Service (AWS DMS)

A

D.AWS Database Migration Service (AWS DMS)

Explanation:
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
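
A minimal sketch of such a migration in Python (boto3) follows; every ARN is a placeholder, and the source/target endpoints and replication instance are assumed to exist already:

```python
import json
import boto3

dms = boto3.client("dms")

# Migrate every table in the "appdb" schema, with ongoing change data
# capture (CDC) so the source database stays fully operational.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-appdb",
        "object-locator": {"schema-name": "appdb", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-mysql-to-rds",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)

# In practice, wait for the task to reach the "ready" state first.
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```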

The AWS Schema Conversion Tool makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. Any objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration. SCT can also scan your application source code for embedded SQL statements and convert them as part of a database schema conversion project.

During this process, SCT performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their equivalent AWS service, thus helping you modernize the applications at the same time as the database migration. Once schema conversion is complete, SCT can help migrate data from a range of data warehouses to Amazon Redshift using built-in data migration agents. For example, it can convert PostgreSQL to MySQL or an Oracle data warehouse to Amazon Redshift.

Hence, the correct answer is AWS Database Migration Service (AWS DMS).

AWS Schema Conversion Tool (AWS SCT) is incorrect because this is primarily used to convert your existing database schema from one database engine to another. The scenario didn’t mention anything about migrating the MySQL database to another database type. Since the task is to just migrate their on-premises MySQL database to Amazon RDS, you simply need to use the AWS Database Migration Service (AWS DMS).

AWS Server Migration Service is incorrect because this is just an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. This is not the appropriate service to use in migrating your on-premises database.

AWS Direct Connect is incorrect because this is just a cloud service solution that makes it easier for you to establish a dedicated network connection from your premises to AWS.

11
Q

Which of the following best describes the concept of the loose coupling design principle?
A.Increase the number of resources by adding more hard drives to a storage array or adding more servers
B.Increase the specifications of an individual resource by upgrading a server with a larger hard drive or a faster CPU
C.A change or a failure in one component must be cascaded to other components
D.A change or a failure in one component should not cascade to other components

A

D.A change or a failure in one component should not cascade to other components

Explanation:
The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases. Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.

As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies — a change or a failure in one component should not cascade to other components.

A way to reduce interdependencies in a system is to allow the various components to interact with each other only through specific, technology-agnostic interfaces, such as RESTful APIs. In that way, technical implementation detail is hidden so that teams can modify the underlying implementation without affecting other components. As long as those interfaces maintain backward compatibility, deployments of different components are decoupled. This granular design pattern is commonly referred to as a microservices architecture.

Hence, the correct answer is: A change or a failure in one component should not cascade to other components.

The option that says: Increase the specifications of an individual resource by upgrading a server with a larger hard drive or a faster CPU is incorrect because this refers to Vertical Scaling.

The option that says: A change or a failure in one component must be cascaded to other components is incorrect because it should be the other way around. IT systems should be designed in a way that reduces interdependencies, in which a change or a failure in one component should not cascade to other components.

The option that says: Increase the number of resources by adding more hard drives to a storage array or adding more servers is incorrect because this refers to Horizontal Scaling.

12
Q
Which of the following services allows you to easily migrate petabyte-scale data to AWS?
A.AWS Transit Gateway
B.AWS Data Pipeline
C.AWS Snowball
D.Amazon SQS
A

C.AWS Snowball

Explanation:
AWS Snowball is a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns.

With Snowball, you don’t need to write any code or purchase any hardware to transfer your data. Simply create a job in the AWS Management Console (“Console”) and a Snowball device will be automatically shipped to you. Once it arrives, attach the device to your local network, download and run the Snowball Client (“Client”) to establish a connection, and then use the Client to select the file directories that you want to transfer to the device. The Client will then encrypt and transfer the files to the device at high speed. Once the transfer is complete and the device is ready to be returned, the E Ink shipping label will automatically update and you can track the job status via Amazon Simple Notification Service (SNS), text messages, or directly in the Console.
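
The same job creation can be scripted; here is a hedged Python (boto3) sketch in which the bucket ARN, address ID, and IAM role are all placeholders:

```python
import boto3

snowball = boto3.client("snowball")

# Create an import job: AWS ships a Snowball device to the address,
# you load data onto it locally, and AWS imports it into S3 on return.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-bucket"}]},
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # placeholder
    RoleARN="arn:aws:iam::111122223333:role/SnowballImportRole",
    SnowballType="EDGE",
    ShippingOption="SECOND_DAY",
)
print("Snowball job created:", job["JobId"])
```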

Hence, the correct answer is AWS Snowball.

AWS Data Pipeline is incorrect since this service does not offer an easy solution for transporting petabyte-scale data from data centers to AWS.

Amazon SQS is incorrect because this is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. This service is not meant for petabyte-scale data migration.

AWS Transit Gateway is incorrect since this is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway.

13
Q

The IT Security team of your company needs to conduct a vulnerability analysis on your application servers to ensure that the EC2 instances comply with the annual security IT audit. You need to set up an automated security assessment service to improve the security and compliance of your applications. The solution should automatically assess applications for exposure, vulnerabilities, and deviations from the AWS best practices.

Which of the following options would you implement to satisfy this requirement?
A.Amazon Inspector
B.AWS Snowball
C.AWS WAF
D.Amazon CloudFront
A

A.Amazon Inspector

Explanation:
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.

Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-defined rules packages mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for access to your EC2 instances from the internet, remote root login being enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security researchers.
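
A rough sketch of setting up and starting such an assessment with Python (boto3) against the Inspector Classic API is shown below; the target name, duration, and rules package ARN are placeholders (the newer Amazon Inspector uses a different API):

```python
import boto3

inspector = boto3.client("inspector")

# Define which EC2 instances to assess (all instances in the account
# and Region when no resource group is supplied).
target = inspector.create_assessment_target(assessmentTargetName="prod-ec2")

# Bind the target to one or more rules packages (e.g., CVE checks or
# security best practices); the ARN below is a placeholder.
template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="annual-audit",
    durationInSeconds=3600,
    rulesPackageArns=["arn:aws:inspector:us-east-1:111122223333:rulespackage/0-EXAMPLE"],
)

# Kick off the automated security assessment run.
run = inspector.start_assessment_run(
    assessmentTemplateArn=template["assessmentTemplateArn"],
)
print("Assessment run started:", run["assessmentRunArn"])
```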

Hence, the correct answer is: Amazon Inspector.

AWS WAF is incorrect because this is a web application firewall that helps protect your web applications against common exploits such as SQL injection and cross-site scripting; it is not an automated security assessment service.

AWS Snowball is incorrect because Snowball is mainly used to transfer data from your on-premises network to AWS.

Amazon CloudFront is incorrect because CloudFront is used as a content distribution service.

14
Q
In compliance with the Sarbanes-Oxley Act (SOX) federal law, a US-based company is required to provide SOC 1 and SOC 2 reports of their cloud resources. Where are these AWS compliance documents located?
A.AWS Certificate Manager
B.AWS Artifact
C.AWS Organizations
D.AWS GovCloud
A

B.AWS Artifact

Explanation:
The Service Organization Controls (SOC) Reports are used to evaluate the effectiveness of AWS controls that might affect your internal controls over financial reporting (ICOFR). The audit is performed according to the SSAE 18 and ISAE 3402 standards. Many AWS customers use this report as an integral part of their Sarbanes-Oxley (SOX) efforts.

AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).

All AWS Accounts have access to AWS Artifact. Root users and IAM users with admin permissions can download all audit artifacts available to their account by agreeing to the associated terms and conditions. You will need to grant IAM users with non-admin permissions access to AWS Artifact using IAM permissions. This allows you to grant a user access to AWS Artifact, while restricting access to other services and resources within your AWS Account.

Hence, the correct answer in this scenario is AWS Artifact.

AWS GovCloud is incorrect because this is just an isolated AWS Region designed to allow U.S. government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements; it is not a compliance document repository like AWS Artifact.

AWS Organizations is incorrect because this just helps you centrally govern your environment as you grow and scale your workloads in AWS.

AWS Certificate Manager is incorrect because this is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. This service does not store certifications or compliance-related documents.

15
Q
Which AWS service is commonly used for streaming data in real-time?
A.Amazon EMR
B.AWS Data Pipeline
C.Amazon Kinesis
D.Amazon Elasticsearch
A

C.Amazon Kinesis

Explanation:
With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.

Amazon Kinesis can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latencies. It enables you to ingest, buffer, and process streaming data in real-time, so you can derive insights in seconds or minutes instead of hours or days.
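
For instance, a producer can push a clickstream event into a stream with Python (boto3); the stream name and payload below are illustrative only:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Ingest one clickstream event. The partition key determines which
# shard receives the record, so events for one user stay ordered.
kinesis.put_record(
    StreamName="clickstream",  # assumed to already exist
    Data=json.dumps({"user": "u-123", "page": "/home"}).encode("utf-8"),
    PartitionKey="u-123",
)
```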

Hence, the correct answer is Amazon Kinesis.

Amazon Elasticsearch is incorrect because it’s just a fully managed service that allows you to deploy, secure, and operate Elasticsearch at scale with zero downtime.

Amazon EMR is incorrect since this is just a big data service that gives analytical teams the engines and elasticity to run petabyte-scale analysis for a fraction of the cost of traditional on-premises clusters, using open-source Apache tools.

AWS Data Pipeline is incorrect because this is simply a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.

16
Q
Which service allows you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available?
A.Amazon SES
B.Amazon SWF
C.Amazon SQS
D.Amazon Route 53
A

C.Amazon SQS

Explanation:
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

You can get started with SQS in minutes using the AWS console, Command Line Interface or SDK of your choice, and three simple commands. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
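
A minimal producer/consumer round trip in Python (boto3) looks roughly like this; the queue name and message body are examples:

```python
import boto3

sqs = boto3.client("sqs")

# Create (or look up) a standard queue and send one message.
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="order-1001")

# Receive with long polling, process, then delete the message so it
# is not redelivered once its visibility timeout expires.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5
)
for message in response.get("Messages", []):
    print("Processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```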

Hence, the correct answer is Amazon SQS.

Amazon SWF is incorrect since this service is meant for coordinating work across distributed application components, not messaging. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.

Amazon Route 53 is incorrect since this is a DNS web service in AWS.

Amazon SES is incorrect since this is a cloud-based email sending service.

17
Q

Which of the following is the most cost-effective instance purchasing option for hosting an application which will run non-interruptible workloads for a period of three years?
A.Amazon EC2 Scheduled Reserved Instances
B.Amazon EC2 Standard Reserved Instances
C.Amazon EC2 On-Demand Instances
D.Amazon EC2 Spot Instances

A

B.Amazon EC2 Standard Reserved Instances

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Standard Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.

Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

Hence, the correct answer is Amazon EC2 Standard Reserved Instances.

The Amazon EC2 Spot Instances option is incorrect because although this is the most cost-effective type, this instance can be interrupted by Amazon EC2 for capacity requirements making it not suitable for non-interruptible workloads.

The Amazon EC2 Scheduled Reserved Instances option is incorrect because this just enables you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term only and not for a three-year term. This is more suitable for non-continuous workloads that run only at specific times of the day.

The Amazon EC2 On-Demand Instances option is incorrect because although it is suitable to run non-interruptible workloads for a period of three years, it entails a higher running cost compared to Reserved or Spot instances. In fact, this is actually the most expensive type of EC2 instance and not the cheapest one.

18
Q
Which of the following is not required when launching an EBS-backed EC2 instance?
A.EBS Root Volume
B.VPC and subnet specifications
C.Security group
D.Elastic IP address
A

D.Elastic IP address

Explanation:
Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached. When you launch an Amazon EBS-backed instance, we create an Amazon EBS volume for each Amazon EBS snapshot referenced by the AMI you use. You can optionally use other Amazon EBS volumes or instance store volumes, depending on the instance type.

An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes. There are various instance/volume-related tasks you can do when an Amazon EBS-backed instance is in a stopped state. For example, you can modify the properties of the instance, change its size, or update the kernel it is using, or you can attach your root volume to a different running instance for debugging or any other purpose.

When launching an EC2 instance, you are not required to provide an Elastic IP address. If the instance is a public web server, then you can optionally choose to have an AWS-provided public IP address assigned to it. This IP address will depend on the setting of the subnet where you launched the instance.
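
To make this concrete, here is a hedged boto3 sketch of launching an EBS-backed instance: the subnet and security group are supplied, the root EBS volume comes from the AMI, and no Elastic IP appears anywhere (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # AMI defines the EBS root volume
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",        # VPC and subnet placement
    SecurityGroupIds=["sg-0123456789abcdef0"],  # security group
)
# No Elastic IP is required; a public IP may be auto-assigned
# depending on the subnet's settings.
print(response["Instances"][0]["InstanceId"])
```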

Hence, Elastic IP address is the correct answer.

Security groups, EBS root volumes, and VPC and subnet values are all required when launching an EC2 instance.

19
Q

Which of the following is one of the benefits of migrating your systems from an on-premises data center to AWS Cloud?
A.Eliminates the need for the customer to implement client-side or server-side encryption for their data
B.Completely eliminates the administrative overhead of patching the guest operating system of their EC2 instances
C.Enables the customer to eliminate high IT infrastructure costs since cloud computing is absolutely free
D.Enables the customer to focus on business activities rather than on the heavy lifting of racking, stacking, and powering servers

A

D.Enables the customer to focus on business activities rather than on the heavy lifting of racking, stacking, and powering servers

Explanation:
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.

Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

There are six advantages of using Cloud Computing:

  1. Trade capital expense for variable expense

– Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

  2. Benefit from massive economies of scale

– By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.

  3. Stop guessing capacity

– Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.

  4. Increase speed and agility

– In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

  5. Stop spending money running and maintaining data centers

– Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

  6. Go global in minutes

– Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

Hence, the correct answer is: Enables the customer to focus on business activities rather than on the heavy lifting of racking, stacking, and powering servers.

The option that says: Enables the customer to eliminate high IT infrastructure costs since cloud computing is absolutely free is incorrect because although it is true that cloud computing can lessen or eliminate exorbitant IT infrastructure costs, the customers will still be charged based on their usage in AWS. You can opt to use the AWS Free Tier (which has limited capabilities) for testing but this is not considered a benefit of using AWS over your traditional data center.

The option that says: Completely eliminates the administrative overhead of patching the guest operating system of their EC2 instances is incorrect because based on the Shared Responsibility Model, the customer is the one responsible for patching the guest OS while AWS is responsible for the underlying host OS of the EC2 instance.

The option that says: Eliminates the need for the customer to implement client-side or server-side encryption for their data is incorrect because based on the Shared Responsibility Model, the customer is responsible for applying the encryption of their data.

20
Q
Which of the following is the most cost-effective option when you purchase either a Standard or Convertible Reserved Instance for a 1-year term?
A.No Upfront
B.All Upfront
C.Partial Upfront
D.Deferred
A

B.All Upfront

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Standard Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.

Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

You can choose between three payment options when you purchase a Standard or Convertible Reserved Instance:

All Upfront option: You pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand instance pricing.

Partial Upfront option: You make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.

No Upfront option: Does not require any upfront payment and provides a discounted hourly rate for the duration of the term.

To see the price difference among the payment options, consider a sample calculation for a 1-year Standard Reserved Instance term.
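
A minimal sketch in Python, using purely hypothetical rates (actual discounts vary by instance type, Region, and term):

```python
HOURS_PER_YEAR = 8760

# Hypothetical rates for a single instance (not real AWS prices).
on_demand_hourly = 0.10
no_upfront_hourly = 0.067
partial_upfront_fee, partial_hourly = 280.00, 0.032
all_upfront_fee = 525.00

costs = {
    "On-Demand":          on_demand_hourly * HOURS_PER_YEAR,
    "No Upfront RI":      no_upfront_hourly * HOURS_PER_YEAR,
    "Partial Upfront RI": partial_upfront_fee + partial_hourly * HOURS_PER_YEAR,
    "All Upfront RI":     all_upfront_fee,
}

# Prints cheapest first: All Upfront < Partial Upfront < No Upfront < On-Demand.
for option, total in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{option}: ${total:,.2f} per year")
```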

As a general rule, a Standard RI provides more savings than a Convertible RI, which makes the former the more cost-effective option. The All Upfront option provides you with the largest discount compared with the other payment types. Opting for a longer compute reservation, such as a 3-year term, gives a greater discount than a shorter, renewable 1-year term.

Hence, the correct answer is All Upfront.

Partial Upfront is incorrect because although it is more cost-effective than No Upfront option, its cost is higher compared with the All Upfront option.

No Upfront is incorrect because although it does not require any upfront payment and provides a discounted hourly rate for the duration of the term, it still costs higher than both the All Upfront and Partial Upfront options.

Deferred is incorrect because this is not an available option for Reserved Instance pricing.

21
Q
Which of the following is used to enable instances in the public subnet to connect to the public Internet?
A.NAT Gateway
B.API Gateway
C.Internet Gateway
D.NAT instance
A

C.Internet Gateway

Explanation:
An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.

To enable communication over the internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet.

The Internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.
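
A hedged boto3 sketch of enabling internet access for a public subnet follows; the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create an internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")

# Route all internet-bound traffic from the public subnet's route
# table to the internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```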

Both NAT Gateways and NAT Instances are incorrect because these are simply used to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating connections with the instances.

API Gateway is incorrect since this is a service meant for creating, publishing, maintaining, monitoring, and securing APIs.

22
Q

You are permitted to conduct security assessments and penetration testing without prior approval against which AWS resources? (Select TWO.)
A.Amazon S3
B.AWS Identity and Access Management (IAM)
C.AWS Security Token Service (STS)
D.Amazon Aurora
E.Amazon RDS

A

D.Amazon Aurora
E.Amazon RDS

Explanation:

Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. An advantage of the AWS cloud is that it allows customers to scale and innovate, while maintaining a secure environment. Customers pay only for the services they use, meaning that you can have the security you need, but without the upfront expenses, and at a lower cost than in an on-premises environment.

AWS customers are welcome to carry out security assessments or penetration tests against their AWS infrastructure without prior approval, but only for a limited set of services.

Permitted Services – You’re welcome to conduct security assessments against AWS resources that you own if they make use of the services listed below. Take note that AWS is constantly updating this list:

  • Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers
  • Amazon RDS
  • Amazon CloudFront
  • Amazon Aurora
  • Amazon API Gateways
  • AWS Lambda and Lambda Edge functions
  • Amazon Lightsail resources
  • AWS Elastic Beanstalk environments

Prohibited Activities – The following activities are prohibited at this time:

  • DNS zone walking via Amazon Route 53 Hosted Zones
  • Denial of Service (DoS), Distributed Denial of Service (DDoS), Simulated DoS, Simulated DDoS
  • Port flooding
  • Protocol flooding
  • Request flooding (login request flooding, API request flooding)

Hence, the correct answers are: Amazon RDS and Amazon Aurora.

All other options are incorrect since they are not included in the list shown above.

23
Q

Which of the following is the benefit of using Amazon Relational Database Service (Amazon RDS) over traditional database management?
A.It is five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases
B.Automatically scales up the instance type of your RDS cluster based on demand
C.Lower administrative burden through automatic software patching and maintenance of the underlying operating system
D.Automatically apply both client-side and server-side encryption to your data by default

A

C.Lower administrative burden through automatic software patching and maintenance of the underlying operating system

Explanation:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

Amazon RDS is available on several database instance types - optimized for memory, performance or I/O - and provides you with several database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

You can use the AWS Management Console, the Amazon RDS Command Line Interface, or simple API calls to access the capabilities of a production-ready relational database in minutes. Amazon RDS database instances are pre-configured with parameters and settings appropriate for the engine and class you have selected. You can launch a database instance and connect your application within minutes. DB Parameter Groups provide granular control and fine-tuning of your database.

Amazon RDS will make sure that the relational database software powering your deployment stays up-to-date with the latest patches. You can exert optional control over when and if your database instance is patched.
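
To illustrate, a minimal boto3 (Python) sketch of launching an RDS instance with automatic minor version patching enabled; all identifiers and credentials shown are hypothetical placeholders:

  import boto3

  rds = boto3.client("rds")

  # AutoMinorVersionUpgrade lets RDS apply minor engine patches during the
  # maintenance window, reducing the administrative burden described above
  rds.create_db_instance(
      DBInstanceIdentifier="app-db",          # hypothetical identifier
      DBInstanceClass="db.t3.micro",
      Engine="mysql",
      MasterUsername="admin",
      MasterUserPassword="change-me-please",  # hypothetical credential
      AllocatedStorage=20,
      AutoMinorVersionUpgrade=True,
      PreferredMaintenanceWindow="sun:05:00-sun:06:00",
  )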

Hence, the correct answer is: Lower administrative burden through automatic software patching and maintenance of the underlying operating system.

The option that says: Automatically apply both client-side and server-side encryption to your data by default is incorrect because this is not done by RDS at all. In RDS, you can manually configure your database cluster in order to secure your data at rest or in transit but this is not done automatically by default.

The option that says: Automatically scales up the instance type of your RDS cluster based on demand is incorrect because in RDS, you still have to manually upgrade the underlying instance type of your database cluster in order to scale it up.

The option that says: It is five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases is incorrect because this is not a feature of Amazon RDS but of Amazon Aurora.

24
Q
Which of the following should you use if you need to provide temporary AWS credentials for users who have been authenticated via their social media logins as well as for guest users who do not require any authentication?
A.Amazon Cognito User Pool
B.Amazon Cognito Sync
C.Amazon Cognito Identity Pool
D.AWS Single Sign-On
A

C.Amazon Cognito Identity Pool

Explanation:
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.

Amazon Cognito identity pools enable you to create unique identities and assign permissions for users. Your identity pool can include:

  • Users in an Amazon Cognito user pool
  • Users who authenticate with external identity providers such as Facebook, Google, or a SAML-based identity provider
  • Users authenticated via your own existing authentication process

With an identity pool, you can obtain temporary AWS credentials with permissions you define to directly access other AWS services or to access resources through Amazon API Gateway.
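
For illustration, a minimal boto3 (Python) sketch of obtaining temporary AWS credentials for a guest (unauthenticated) user, assuming an identity pool configured to allow unauthenticated identities; the pool ID is a hypothetical placeholder:

  import boto3

  cognito = boto3.client("cognito-identity")

  # Guest access: no Logins map is passed, so this is an unauthenticated identity
  identity_id = cognito.get_id(
      IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555"  # hypothetical pool ID
  )["IdentityId"]

  # Exchange the identity for temporary, limited-privilege AWS credentials
  creds = cognito.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]
  print(creds["AccessKeyId"], creds["Expiration"])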

Hence, the correct answer is to use Amazon Cognito Identity Pool.

Amazon Cognito User Pool is incorrect because a user pool is just a user directory in Amazon Cognito. In addition, it doesn’t enable access to unauthenticated identities. You have to use an Identity Pool instead.

Amazon Cognito Sync is incorrect because this is just a client library that enables cross-device syncing of application-related user data.

AWS Single Sign-On is incorrect because this service just makes it easy for you to centrally manage SSO access to multiple AWS accounts. It also does not allow any “guest” or unauthenticated access, unlike Amazon Cognito.

25
Q

Which of the following is a valid characteristic of an IAM Group?
A.There’s no limit to the number of groups you can have
B.A group can contain many users, and a user can belong to multiple groups
C.There is a default group that automatically includes all users in the AWS account
D.Groups can be nested

A

B.A group can contain many users, and a user can belong to multiple groups

Explanation:
An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group.

If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.

Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy. It is simply a way to attach policies to multiple users at one time.

Following are some important characteristics of groups:

  • A group can contain many users, and a user can belong to multiple groups.
  • Groups can’t be nested; they can contain only users, not other groups.
  • There’s no default group that automatically includes all users in the AWS account. If you want to have a group like that, you need to create it and assign each new user to it.
  • There’s a limit to the number of groups you can have, and a limit to how many groups a user can be in.
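
To illustrate the first characteristic, a minimal boto3 (Python) sketch that places one user in two groups at the same time; the group and user names are hypothetical:

  import boto3

  iam = boto3.client("iam")

  # One user can belong to multiple groups simultaneously
  iam.create_group(GroupName="Admins")        # hypothetical group names
  iam.create_group(GroupName="Developers")
  iam.create_user(UserName="bob")             # hypothetical user
  iam.add_user_to_group(GroupName="Admins", UserName="bob")
  iam.add_user_to_group(GroupName="Developers", UserName="bob")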

Based on the above paragraph, the correct answer is: A group can contain many users, and a user can belong to multiple groups.

The option that says: Groups can be nested is incorrect since this is not allowed in IAM Groups.

The option that says: There’s no limit to the number of groups you can have is incorrect because there is actually a certain limit to the number of groups you can have as well as a limit to how many groups a user can be in.

The option that says: There is a default group that automatically includes all users in the AWS account is incorrect because there is no such thing as this in IAM Group.

26
Q
Which is a machine learning-powered security service that discovers, classifies, and protects sensitive data such as personally identifiable information (PII) or intellectual property?
A.Amazon Rekognition
B.Amazon Cognito
C.Amazon GuardDuty
D.Amazon Macie
A

D.Amazon Macie

Explanation:
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies, and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks.

You can use Amazon Macie to protect against security threats by continuously monitoring your data and account credentials. Amazon Macie gives you an automated and low touch way to discover and classify your business data and detect sensitive information such as personally identifiable information (PII) and credential data. When alerts are generated, you can use Amazon Macie for incident response, using Amazon CloudWatch Events to swiftly take action to protect your data.

Hence, the correct answer is Amazon Macie.

Amazon Rekognition is incorrect because although it is also a machine learning-based service like Amazon Macie, it is primarily used for image and video analysis. You can’t use this to protect your sensitive data in AWS.

Amazon GuardDuty is incorrect because this is just a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.

Amazon Cognito is incorrect because this is primarily used if you want to add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily.

27
Q

Which of the following are the best practices that can help secure your AWS resources using the AWS Identity and Access Management (IAM) service? (Select TWO.)
A.Use Bastion Hosts
B.Grant Least Privilege
C.Lock away your AWS Account root user access keys
D.Use Inline Policies instead of Customer Managed Policies
E.Grant most privilege

A

B.Grant Least Privilege
C.Lock away your AWS Account root user access keys

Explanation:
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

To help secure your AWS resources, below are the best practices in using your AWS Identity and Access Management (IAM) service:

- Lock Away Your AWS Account Root User Access Keys
- Create Individual IAM Users
- Use Groups to Assign Permissions to IAM Users
- Grant Least Privilege
- Get Started Using Permissions with AWS Managed Policies
- Use Customer Managed Policies Instead of Inline Policies
- Use Access Levels to Review IAM Permissions
- Configure a Strong Password Policy for Your Users
- Enable MFA
- Use Roles for Applications That Run on Amazon EC2 Instances
- Use Roles to Delegate Permissions
- Do Not Share Access Keys
- Rotate Credentials Regularly
- Remove Unnecessary Credentials
- Use Policy Conditions for Extra Security
- Monitor Activity in Your AWS Account

Hence, the correct answers in this scenario are Grant Least Privilege and Lock away your AWS account root user access keys.

The option that says: Grant Most Privilege is incorrect because it should be Grant Least Privilege.

The option that says: Use Inline Policies instead of Customer Managed Policies is incorrect because it should be the other way around. It is recommended to use Customer Managed Policies instead of Inline Policies.

The option that says: Use Bastion Hosts is incorrect because this relates more to your VPC Security than IAM. A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet.

28
Q
Which of the following services are part of the AWS serverless platform that does not require provisioning, maintaining, and administering servers for backend components? (Select TWO.)
A.Amazon EMR
B.Lambda@Edge
C.Amazon ElastiCache
D.Amazon CloudSearch
E.Amazon API Gateway
A

B.Lambda@Edge
E.Amazon API Gateway

Explanation:
Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. You can build them for nearly any type of application or backend service, and everything required to run and scale your application with high availability is handled for you.

Serverless enables you to build modern applications with increased agility and lower total cost of ownership. Building serverless applications means that your developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products which scale and that are reliable.

AWS provides a set of fully managed services that you can use to build and run serverless applications. Serverless applications don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you. This allows you to focus on product innovation while enjoying faster time-to-market.

AWS Lambda, Lambda@Edge, and AWS Fargate are the services that you can use for serverless computing. For your API Proxy, you can leverage the power of the Amazon API Gateway service.

Hence, the correct answers are Amazon API Gateway and Lambda@Edge.

All of the other options (Amazon CloudSearch, Amazon ElastiCache, and Amazon EMR) are incorrect because you still need to choose the EC2 instance types that will run these services, as well as manage their scaling.

29
Q

Which of the following is true regarding the Business support plan in AWS?
A.Provides a 1-hour response time support if your production system goes down
B.Provides a 15-minute response time support if your business-critical system goes down
C.Provides a 15-minute response time support if your production system goes down
D.Provides a 1-hour response time support if your production system is impaired

A

A.Provides a 1-hour response time support if your production system goes down

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

In addition, customers with a Business or Enterprise support plan have access to these features:

  • Use-case guidance: what AWS products, features, and services to use to best support your specific needs.
  • AWS Trusted Advisor, which inspects customer environments. Then, Trusted Advisor identifies opportunities to save money, close security gaps, and improve system reliability and performance.
  • An API for interacting with Support Center and Trusted Advisor. This API allows for automated support case management and Trusted Advisor operations.
  • Third-party software support: help with Amazon Elastic Compute Cloud (EC2) instance operating systems and configuration. Also, help with the performance of the most popular third-party software components on AWS.

The AWS Support API provides access to some of the features of the AWS Support Center. This API allows programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status. AWS provides this access for AWS Support customers who have a Business or Enterprise support plan.
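
As a brief illustration of the Support API mentioned above, a boto3 (Python) sketch, assuming the account has a Business or Enterprise plan (the API is not available on Basic or Developer plans):

  import boto3

  # The AWS Support API endpoint lives in us-east-1
  support = boto3.client("support", region_name="us-east-1")

  # Programmatically list support cases, including resolved ones
  cases = support.describe_cases(includeResolvedCases=True)["cases"]

  # Enumerate the full set of Trusted Advisor checks
  checks = support.describe_trusted_advisor_checks(language="en")["checks"]
  print(len(cases), len(checks))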

Hence, the correct answer in this scenario is: provides a 1-hour response time support if your production system goes down.

The option that says: provides a 15-minute response time support if your production system goes down is incorrect because the Business support plan only provides a 1-hour response time and not 15 minutes.

The option that says: provides a 15-minute response time support if your business-critical system goes down is incorrect because this high level of support is only available for Enterprise support plan.

The option that says: provides a 1-hour response time support if your production system is impaired is incorrect because the Business support plan gives you a 4-hour response time, not one hour, in the event that your production system is impaired.

30
Q
A website is experiencing varying levels of traffic throughout the day and is not fully consuming server capacity all the time. Which advantage does AWS Cloud provide over traditional data centers when it comes to handling traffic load?
A.High Availability
B.Elasticity
C.Durability
D.Quick capacity provisioning
A

B.Elasticity

Explanation:
Elasticity is the ability to acquire resources as you need them and release resources when you no longer need them. In the cloud, you want to do this automatically.

Hence, Elasticity is the correct answer.

Durability is incorrect since this characteristic concerns data durability rather than handling server load.

High availability is incorrect since the scenario is not experiencing downtime issues.

Quick capacity provisioning is incorrect since the scenario is more concerned with handling the traffic load than with the agility of provisioning resources.

31
Q
A customer needs to retrieve the instance ID, public keys, and public IP address of their EC2 instance. Which of the following should they use to get these details?
A.Instance user data
B.Instance metadata
C.Amazon Machine Image
D.Resource Tag
A

B.Instance metadata

Explanation:
Instance metadata is the data about your instance that you can use to configure or manage the running instance. You can get the instance ID, public keys, public IP address, and other information from the instance metadata by querying the following URL from within your instance:

http://169.254.169.254/latest/meta-data/
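
For illustration, a short Python sketch that queries the metadata endpoint from within an instance, using the IMDSv2 token flow (assumed here; instances that still allow IMDSv1 can omit the token):

  import urllib.request

  BASE = "http://169.254.169.254/latest"

  # IMDSv2: obtain a session token first (required on instances that enforce IMDSv2)
  token_req = urllib.request.Request(
      f"{BASE}/api/token",
      method="PUT",
      headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
  )
  token = urllib.request.urlopen(token_req).read().decode()

  def meta(path):
      req = urllib.request.Request(
          f"{BASE}/meta-data/{path}",
          headers={"X-aws-ec2-metadata-token": token},
      )
      return urllib.request.urlopen(req).read().decode()

  print(meta("instance-id"))
  print(meta("public-ipv4"))
  print(meta("public-keys/"))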

Hence, the correct answer is: Instance metadata.

Instance user data is incorrect because this is primarily used to perform common automated configuration tasks and run custom scripts after the instance starts. It doesn’t contain any information about the instance ID, public keys or the public IP address of your EC2 instance.

Resource tag is incorrect because this is just a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define.

Amazon Machine Image is incorrect because this mainly provides the information required to launch an instance, which is a virtual server in the cloud.

32
Q
Which of the following are defined as global services in AWS? (Select TWO.)
A.Amazon DynamoDB
B.Amazon CloudFront
C.AWS Batch
D.AWS Identity and Access Management
E.Amazon RDS
A

B.Amazon CloudFront
D.AWS Identity and Access Management

Explanation:
CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs). If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you’ve identified as the source for the definitive version of the content.

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

An AWS resource can be a Global, Regional or a Zonal service. A Global service means that it covers all of the AWS Regions across the globe, while a regional service means that a resource is only applicable to one specific region at a time. A regional service may or may not have the ability to replicate the same resource to another region. Lastly, a Zonal service can only exist in one Availability Zone.

You don’t need to memorize the scope of all of the AWS services as long as you know the pattern. There are actually only a handful of services that are considered global, such as IAM, STS, Route 53, CloudFront, and WAF. For Zonal services, the examples are EC2 instances and EBS volumes, which are tied to the Availability Zone where they were launched. Take note that although EBS volumes are considered a zonal service, EBS snapshots are considered regional since they are not tied to a specific Availability Zone. The rest of the services are regional in scope.

Hence, the correct answers are: AWS Identity and Access Management and Amazon CloudFront.

AWS Batch, Amazon RDS, and Amazon DynamoDB are all incorrect because these are considered as regional services and not global.

33
Q

A space agency is using Amazon S3 to store their high-resolution satellite images and videos every day. Which of the following should they do to minimize the upload time?
A.Shift to S3 Intelligent-Tiering storage class
B.Use the Multipart upload API
C.Upload the images and videos using the BatchWriteItem API
D.Enable Cross-Origin Resource Sharing (CORS)

A

B.Use the Multipart upload API

Explanation:
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

Using multipart upload provides the following advantages:

  • Improved throughput - You can upload parts in parallel to improve throughput.
  • Quick recovery from any network issues - Smaller part size minimizes the impact of restarting a failed upload due to a network error.
  • Pause and resume object uploads - You can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.
  • Begin an upload before you know the final object size - You can upload an object as you are creating it.
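
To illustrate, a minimal boto3 (Python) sketch that forces multipart uploads for large objects via the transfer configuration; the file name and bucket are hypothetical placeholders:

  import boto3
  from boto3.s3.transfer import TransferConfig

  s3 = boto3.client("s3")

  # Use multipart for anything over 100 MB and upload parts in parallel
  config = TransferConfig(multipart_threshold=100 * 1024 * 1024, max_concurrency=8)

  s3.upload_file(
      Filename="satellite-capture.tif",   # hypothetical local file
      Bucket="space-agency-imagery",      # hypothetical bucket
      Key="2024/satellite-capture.tif",
      Config=config,
  )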

Hence, the correct answer in this scenario is: Use the Multipart Upload API.

The option that says: Use the BatchWriteItem API is incorrect because this is a DynamoDB API action and not S3.

The option that says: Shift to S3 Intelligent-Tiering storage class is incorrect because this is primarily used to optimize your storage costs automatically based on your data access patterns without performance impact or operational overhead.

The option that says: Enable Cross-Origin Resource Sharing (CORS) is incorrect because this is only applicable for client web applications that are loaded in one domain to interact with resources in a different domain.

34
Q
Which service does AWS use to notify you when AWS is experiencing events that may impact you?
A.AWS Support Center
B.Amazon SNS
C.AWS Service Health Dashboard
D.AWS Personal Health Dashboard
A

D.AWS Personal Health Dashboard

Explanation:
AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.

The AWS Personal Health Dashboard provides information about AWS Health events that can affect your account. The information is presented in two ways: a dashboard that shows recent and upcoming events organized by category, and a full event log that shows all events from the past 90 days.

Hence, the correct answer is: AWS Personal Health Dashboard.

AWS Support Center is incorrect because this is where you can check the support package you are subscribed to, and where you can file cases if you need assistance from the AWS support team. This option is incorrect since it does not provide the information stated in the scenario.

Service Health Dashboard is incorrect because this just provides access to current status and historical data about each and every Amazon Web Service. If there’s a problem with a service, you’ll be able to expand the appropriate line in the Details section. You can even subscribe to the RSS feed for any service. You can use the “Report an Issue” link to make sure that AWS is aware of any system-wide service issues. However, AWS does not use this dashboard to notify you of events that may impact your own resources.

Amazon SNS is incorrect because this is simply a messaging service used to deliver push notifications to recipients. It is not the service that AWS uses to notify you of events that may impact you.

35
Q
Which of the following allows you to create and deploy infrastructure-as-code templates in AWS?
A.Systems Manager
B.CloudFormation
C.Lightsail
D.Elastic Beanstalk
A

B.CloudFormation

Explanation:
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS resources. AWS CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications.

AWS CloudFormation allows you to model your entire infrastructure with either a text file or programming languages. This provides a single source of truth for your AWS resources and helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting.
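
For illustration, a minimal boto3 (Python) sketch that deploys a small infrastructure-as-code template; the stack name and the single-bucket template are hypothetical examples:

  import textwrap
  import boto3

  # A deliberately tiny infrastructure-as-code template: one S3 bucket (hypothetical)
  template = textwrap.dedent("""
      AWSTemplateFormatVersion: '2010-09-09'
      Resources:
        AppBucket:
          Type: AWS::S3::Bucket
      """)

  cfn = boto3.client("cloudformation")
  cfn.create_stack(StackName="demo-stack", TemplateBody=template)  # hypothetical stack name
  cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")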

Hence, AWS CloudFormation is the correct choice.

Elastic Beanstalk is incorrect because it does not support infrastructure as code.

Lightsail is incorrect. This service is just a packaged solution for the fast deployment of websites and other applications.

Systems Manager is incorrect since this service is used for automation of your EC2 instances.

36
Q

A customer has a number of on-demand instances running simultaneously to serve customer transactions. Occasionally, most of these instances do not perform any tasks when demand is low. What is a good cost optimization strategy to implement for this case?
A.Create a script that would automatically shut down an instance when utilization is low
B.Scale up the instances to a higher instance type to reduce the number of running instances at a time
C.Use spot instances instead of on-demand instances
D.Implement an auto scaling group to control the number of running instances at a time

A

D.Implement an auto scaling group to control the number of running instances at a time

Explanation:
Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic scaling and predictive scaling can be used together to scale faster.

When your instances are experiencing varying levels of traffic, it is best to use an auto scaling group to scale your instances based on the workload. So that when demand is low, the auto scaling group can adjust the number of running instances to a bare minimum.
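
To illustrate, a minimal boto3 (Python) sketch of a target tracking scaling policy that lets the group shrink automatically when demand is low; the Auto Scaling group name is a hypothetical placeholder:

  import boto3

  autoscaling = boto3.client("autoscaling")

  # Keep average CPU around 50%; the group scales in on its own when demand drops
  autoscaling.put_scaling_policy(
      AutoScalingGroupName="web-asg",  # hypothetical ASG name
      PolicyName="target-50-cpu",
      PolicyType="TargetTrackingScaling",
      TargetTrackingConfiguration={
          "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
          "TargetValue": 50.0,
      },
  )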

Hence, the correct answer is: Implement an auto scaling group to control the number of running instances at a time.

Scaling up your instances to a higher instance type is incorrect since this will increase your AWS costs.

Using spot instances is incorrect since this might affect your application. Spot Instances can be interrupted by Amazon EC2 whenever the Spot price exceeds your maximum price or when EC2 needs the capacity back. Although this might be cost-effective for you, it also affects your operations, which should not happen.

Creating a script to shut down an instance is incorrect because this is unnecessary. You would have to know when to start it up again, and it would be more work than simply using an Auto Scaling group.

37
Q

A customer currently has a Basic support plan and they are planning to use the Infrastructure Event Management, Well-Architected Reviews and Operations Reviews features in AWS. What should they do in order to access these features in the most cost-effective manner?
A.Upgrade to Enterprise Support plan
B.Upgrade to Developer support plan
C.None since these features are already included in their Basic support plan
D.Upgrade to Business support plan

A

A.Upgrade to Enterprise Support plan

Explanation:
AWS Enterprise Support provides you with concierge-like service where the main focus is helping you achieve your outcomes and find success in the cloud.

With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive / preventative programs and AWS subject matter experts.

In addition to what is available with Basic Support, Enterprise Support provides:

AWS Trusted Advisor - Access to the full set of Trusted Advisor checks and guidance to provision your resources following best practices to help reduce costs, increase performance and fault tolerance, and improve security.

AWS Personal Health Dashboard - A personalized view of the health of AWS services, and alerts when your resources are impacted. Also includes the Health API for integration with your existing management systems.

AWS Support API - Programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.

Proactive Technical Account Management - A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.

Architecture Support – Contextual guidance on how services fit together to meet your specific use-case, workload, or application.

Third-Party Software Support - Guidance, configuration, and troubleshooting of AWS interoperability with many common operating systems, platforms, and application stack components.

Proactive Support Programs – Includes access to Well-Architected Reviews, Operations Reviews, and Infrastructure Event Management.

Support Concierge - the Concierge Team are AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on what matters: running your business.

For companies with a Business support plan, you can have access to the Infrastructure Event Management for an additional fee. However, all other proactive support programs such as Well-Architected Reviews and Operations Reviews are exclusively available for companies who opted for Enterprise support. Hence, the correct answer in this scenario is: Upgrade to Enterprise support plan.

The option that says: None since these features are already included in their Basic support plan is incorrect because the Basic plan does not have any Proactive Support Programs such as Well-Architected Reviews, Operations Reviews, and Infrastructure Event Management.

The option that says: Upgrade to Developer support plan is incorrect because just like the Basic plan, this one does not have access to any of the Proactive Support Programs.

The option that says: Upgrade to Business support plan is incorrect because although this has access to the Infrastructure Event Management feature for an additional fee, it doesn’t have access to the Well-Architected Reviews nor Operations Reviews. In this scenario, you must upgrade to the Enterprise support plan.

38
Q

A customer is building a cloud architecture in AWS which should scale horizontally or vertically in order to automatically adjust capacity and maintain steady, predictable performance at the lowest possible cost. Which of the following statements are true regarding horizontal and vertical scaling? (Select TWO.)
A.Upgrading to a higher EC2 instance type and adding more EC2 instances to your resource pool are both examples of Horizontal Scaling
B.Adding more EC2 instances to your resource pool is an example of Horizontal Scaling
C.Adding more EC2 instances to your resource pool is an example of Vertical Scaling
D.Upgrading to a higher EC2 instance type is an example of Vertical Scaling
E.Upgrading to a higher EC2 instance type is an example of Horizontal Scaling

A

B.Adding more EC2 instances to your resource pool is an example of Horizontal Scaling
D.Upgrading to a higher EC2 instance type is an example of Vertical Scaling

Explanation:
Systems that are expected to grow over time need to be built on top of a scalable architecture. Such an architecture can support growth in users, traffic, or data size with no drop in performance. It should provide that scale in a linear manner where adding extra resources results in at least a proportional increase in ability to serve additional load. Growth should introduce economies of scale, and cost should follow the same dimension that generates business value out of that system. While cloud computing provides virtually unlimited on-demand capacity, your design needs to be able to take advantage of those resources seamlessly.

There are generally two ways to scale an IT architecture: vertically and horizontally.

Vertical Scaling

  • Scaling vertically takes place through an increase in the specifications of an individual resource, such as upgrading a server with a larger hard drive or a faster CPU. With Amazon EC2, you can stop an instance and resize it to an instance type that has more RAM, CPU, I/O, or networking capabilities. This way of scaling can eventually reach a limit, and it is not always a cost-efficient or highly available approach. However, it is very easy to implement and can be sufficient for many use cases especially in the short term.

Horizontal Scaling

  • Scaling horizontally takes place through an increase in the number of resources, such as adding more hard drives to a storage array or adding more servers to support an application. This is a great way to build internet-scale applications that leverage the elasticity of cloud computing. Take note that not all architectures are designed to distribute their workload to multiple resources.
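
To make the distinction concrete, a minimal boto3 (Python) sketch of both approaches; the instance ID, instance type, and group name are hypothetical placeholders (note that an instance must be stopped before it can be resized):

  import boto3

  ec2 = boto3.client("ec2")
  autoscaling = boto3.client("autoscaling")

  # Vertical scaling: stop the instance, then resize it to a larger type
  ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])  # hypothetical instance ID
  ec2.get_waiter("instance_stopped").wait(InstanceIds=["i-0123456789abcdef0"])
  ec2.modify_instance_attribute(
      InstanceId="i-0123456789abcdef0",
      InstanceType={"Value": "m5.2xlarge"},
  )

  # Horizontal scaling: add more instances to the resource pool
  autoscaling.set_desired_capacity(AutoScalingGroupName="web-asg", DesiredCapacity=6)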

Hence, the correct statements are Upgrading to a higher EC2 instance type is an example of Vertical Scaling and Adding more EC2 instances to your resource pool is an example of Horizontal Scaling.

The option that says: Upgrading to a higher EC2 instance type is an example of Horizontal Scaling is incorrect because this is actually an example of Vertical Scaling.

The option that says: Adding more EC2 instances to your resource pool is an example of Vertical Scaling is incorrect because this is actually an example of Horizontal Scaling.

The option that says: Upgrading to a higher EC2 instance type and adding more EC2 instances to your resource pool are both examples of Horizontal Scaling is incorrect. Only the act of adding more EC2 instances to your resource pool is an example of Horizontal Scaling.

39
Q

Agility is one of the benefits of using cloud computing. It provides customers with which advantage?
A.Easily deploy your application in multiple physical locations around the world with just a few clicks
B.Focus your valuable IT resources on developing applications that differentiate your business rather than managing infrastructure and data centers
C.Avoid overprovisioning of your infrastructure to ensure you have enough capacity to handle your business operations at the peak level of activity
D.Allows you to trade capital expenses for variable expense

A

B.Focus your valuable IT resources on developing applications that differentiate your business rather than managing infrastructure and data centers

Explanation:
Cloud computing gives you access to servers, storage, databases, and a broad set of application services over the Internet. A cloud services provider such as Amazon Web Services, owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application.

There are many benefits of using Cloud Computing such as:

  1. Agility
  2. Deploy globally in minutes
  3. Elasticity
  4. Cost savings

Agility

The cloud allows you to innovate faster because you can focus your valuable IT resources on developing applications that differentiate your business and transform customer experiences rather than managing infrastructure and data centers. With the cloud, you can quickly spin up resources as you need them, deploying hundreds or even thousands of servers in minutes. The cloud also makes it easy and fast to access a broad range of technologies such as compute, storage, databases, analytics, machine learning, and many other services on an as-needed basis. As a result, you can very quickly develop and roll out new applications, and your teams can experiment and innovate more quickly and frequently. If an experiment fails, you can always de-provision resources without risk.

Deploy globally in minutes

With the cloud, you can easily deploy your application in multiple physical locations around the world with just a few clicks. This means you can provide a lower latency and better experience for your customers simply and at minimal cost.

Elasticity

Before cloud computing, you had to overprovision infrastructure to ensure you had enough capacity to handle your business operations at the peak level of activity. Now, you can provision the amount of resources that you actually need, knowing you can instantly scale up or down with the needs of your business. This reduces costs and improves your ability to meet your users’ demands.

Cost savings

The cloud allows you to trade capital expense (data centers, physical servers, etc.) for variable expense and only pay for IT as you consume it. Plus, the variable expense is much lower than what you can do for yourself because of the larger economies of scale.

Hence, the correct answer is: Focus your valuable IT resources on developing applications that differentiate your business rather than managing infrastructure and data centers.

The option that says: Avoid overprovisioning of your infrastructure to ensure you have enough capacity to handle your business operations at the peak level of activity is incorrect because this is an advantage of Elasticity and not Agility.

The option that says: Allows you to trade capital expense for variable expense is incorrect because this is actually referring to the Cost savings benefit of cloud computing rather than Agility.

The option that says: Easily deploy your application in multiple physical locations around the world with just a few clicks is incorrect because this refers to the concept of Deploy globally in minutes.

40
Q

Which of the following actions below will allow you to take advantage of volume discounts in AWS?
A.Opt for an All upfront Convertible Reserved Instance pricing for a 3-year term
B.Use AWS Organizations and enable the consolidated billing feature
C.Move all of your AWS Resources from multiple accounts to a single global account
D.Upgrade to an AWS Enterprise support plan

A

B.Use AWS Organizations and enable the consolidated billing feature

Explanation:
For billing purposes, AWS treats all the accounts in the organization as if they were one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give you lower prices the more you use the service.

With consolidated billing, AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible. AWS then allocates each linked account a portion of the overall volume discount based on the account’s usage.

The Bills page for each linked account displays an average tiered rate that is calculated across all the accounts on the consolidated bill for the organization. For example, let’s say that Bob’s consolidated bill includes both Bob’s own account and Susan’s account. Bob’s account is the payer account, so he pays the charges for both himself and Susan.

As shown in the following illustration, Bob transfers 8 TB of data during the month and Susan transfers 4 TB.

For the purposes of this example, AWS charges $0.17 per GB for the first 10 TB of data transferred and $0.13 per GB for the next 40 TB. This translates into $174.08 per TB (= $0.17 × 1024) for the first 10 TB, and $133.12 per TB (= $0.13 × 1024) for the next 40 TB. Remember that 1 TB = 1024 GB.

For the 12 TB that Bob and Susan used, Bob’s payer account is charged:

= ($174.08 * 10 TB) + ($133.12 * 2 TB)

= $1740.80 + $266.24

= $2,007.04

The average cost-per-unit of data transfer out for the month is therefore $2,007.04 / 12 TB = $167.25 per TB. That is the average tiered rate that is shown on the Bills page and in the downloadable cost report for each linked account on the consolidated bill.

Without the benefit of tiering across the consolidated bill, AWS would have charged Bob and Susan each $174.08 per TB for their usage, for a total of $2,088.96.

Hence, the correct answer in this scenario is to use AWS Organizations and enable the consolidated billing feature.

The option that says: Move all of your AWS resources from multiple accounts to a single global account is incorrect because you don’t need to do this since you can simply use AWS Organizations to consolidate your resources.

The option that says: Opt for an All upfront Convertible Reserved Instance pricing for a 3-year term is incorrect because this type of discount is only applicable for Reserved Instances and is not related to Volume Pricing.

The option that says: Upgrade to an AWS Enterprise support plan is incorrect because doing this will only give you more access to various AWS support services like a Technical Account Manager, access to Concierge Support and many others. In order to avail volume pricing, you have to use AWS Organizations and enable Consolidated Billing.

41
Q
Which among the services below can you use to test and troubleshoot IAM and resource-based policies?
A.Systems Manager
B.Amazon Inspector
C.AWS Config
D.IAM Policy Simulator
A

D.IAM Policy Simulator

Explanation:
The IAM policy simulator evaluates the policies that you choose and determines the effective permissions for each of the actions that you specify. The simulator uses the same policy evaluation engine that is used during real requests to AWS services. But the simulator differs from the live AWS environment in the following ways:

  • The simulator does not make an actual AWS service request, so you can safely test requests that might make unwanted changes to your live AWS environment.
  • Because the simulator does not simulate running the selected actions, it cannot report any response to the simulated request. The only result returned is whether the requested action would be allowed or denied.
  • If you edit a policy inside the simulator, these changes affect only the simulator. The corresponding policy in your AWS account remains unchanged.

With the IAM policy simulator, you can test and troubleshoot IAM and resource-based policies in the following ways:

  • Test policies that are attached to IAM users, groups, or roles in your AWS account. If more than one policy is attached to the user, group, or role, you can test all the policies, or select individual policies to test. You can test which actions are allowed or denied by the selected policies for specific resources.
  • Test policies that are attached to AWS resources, such as Amazon S3 buckets, Amazon SQS queues, Amazon SNS topics, or Amazon S3 Glacier vaults.
  • If your AWS account is a member of an organization in AWS Organizations, then you can test the impact of service control policies (SCPs) on your IAM policies and resource policies.
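
For illustration, a minimal boto3 (Python) sketch that simulates a custom policy against two actions; the bucket ARN is a hypothetical placeholder:

  import boto3, json

  iam = boto3.client("iam")

  policy = json.dumps({
      "Version": "2012-10-17",
      "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                     "Resource": "arn:aws:s3:::example-bucket/*"}],  # hypothetical bucket
  })

  # No real request is made; the simulator only reports allow/deny decisions
  result = iam.simulate_custom_policy(
      PolicyInputList=[policy],
      ActionNames=["s3:GetObject", "s3:PutObject"],
      ResourceArns=["arn:aws:s3:::example-bucket/report.csv"],
  )
  for r in result["EvaluationResults"]:
      print(r["EvalActionName"], "->", r["EvalDecision"])  # allowed / implicitDeny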

Hence, the correct answer is IAM Policy Simulator.

AWS Config is incorrect because this is just a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.

Systems Manager is incorrect because this service just provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. Unlike IAM Policy Simulator, it can’t be used to simulate your policies.

Amazon Inspector is incorrect because it is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

42
Q
Which of the following is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads?
A.Amazon Macie
B.Amazon GuardDuty
C.AWS WAF
D.AWS Shield
A

B.Amazon GuardDuty

Explanation:
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in the AWS Cloud.

This service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon VPC Flow Logs, and DNS logs. With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with AWS CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.
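
As an illustration, a minimal boto3 (Python) sketch that enables GuardDuty in the current Region and lists any finding IDs it has generated:

  import boto3

  guardduty = boto3.client("guardduty")

  # Enable continuous threat detection; no agents or hardware to deploy
  detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

  # Findings accumulate as GuardDuty analyzes CloudTrail, VPC Flow Logs, and DNS logs
  finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
  print(finding_ids)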

Hence, the correct answer is Amazon GuardDuty.

Amazon Macie is incorrect because this is just a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.

AWS Shield is incorrect because this is simply a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.

AWS WAF is incorrect because this is basically a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

43
Q

What is the most secure way to provide applications temporary access to your AWS resources?
A.Create an IAM policy that allows the application to access the resources and attach the policy to the applications
B.Create an IAM user with access keys and assign it to the application
C.Create an IAM role and have the application assume the role
D.Create an IAM group that has access to the resources and add the application there

A

C.Create an IAM role and have the application assume the role

Explanation:
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.
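
To illustrate, a minimal boto3 (Python) sketch of assuming a role and using the returned temporary credentials; the role ARN is a hypothetical placeholder:

  import boto3

  sts = boto3.client("sts")

  # Assume the role and receive short-lived credentials (no long-term keys involved)
  resp = sts.assume_role(
      RoleArn="arn:aws:iam::111122223333:role/app-role",  # hypothetical role ARN
      RoleSessionName="app-session",
      DurationSeconds=3600,
  )
  creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

  # Use the temporary credentials for subsequent calls
  s3 = boto3.client(
      "s3",
      aws_access_key_id=creds["AccessKeyId"],
      aws_secret_access_key=creds["SecretAccessKey"],
      aws_session_token=creds["SessionToken"],
  )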

Hence, the correct answer is: Create an IAM role and have the application assume the role.

The option that says: Create an IAM user with access keys and assign it to the application is incorrect because an IAM User is primarily used for long-term credentials, not for temporary access.

The option that says: Create an IAM group that has access to the resources, and add the application there is incorrect because an IAM Group does not provide temporary access credentials.

The option that says: Create an IAM policy that allows the application to access the resources, and attach the policy to the application is incorrect because IAM policies are not entities which have credentials in AWS.

44
Q
Which service allows you to add powerful visual analysis features to your applications, enabling you to search, verify, and organize millions of images?
A.Amazon SageMaker
B.Amazon Macie
C.Amazon CloudSearch
D.Amazon Rekognition
A

D.Amazon Rekognition

Explanation:
Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition on images and video that you provide. You can detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily, and requires no machine learning expertise to use. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
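
For illustration, a minimal boto3 (Python) sketch that detects labels in an image stored in Amazon S3; the bucket and object name are hypothetical placeholders:

  import boto3

  rekognition = boto3.client("rekognition")

  resp = rekognition.detect_labels(
      Image={"S3Object": {"Bucket": "example-images", "Name": "photo.jpg"}},  # hypothetical object
      MaxLabels=10,
      MinConfidence=80,
  )
  for label in resp["Labels"]:
      print(label["Name"], label["Confidence"])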

Hence, the correct answer is Amazon Rekognition.

Amazon Macie is incorrect because it is just a security service and not suitable for visual analysis. It uses machine learning to automatically discover, classify, and protect sensitive data in AWS.

Amazon SageMaker is incorrect because this is simply a service that provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly in AWS.

Amazon CloudSearch is incorrect because this is primarily used to set up, manage, and scale a search solution for your website or application in AWS.

45
Q
In Amazon EC2, which pricing construct adjusts its price based on supply and demand of EC2 instances?
A.On-Demand Instance
B.Standard Reserved Instance
C.Convertible Reserved Instance
D.Spot Instance
A

D.Spot Instance

Explanation:
Amazon EC2 simplified the Amazon EC2 Spot instance pricing by moving to a model that delivers low, predictable prices that adjust gradually based on long-term trends in supply and demand. You will continue to save up to 90% off the On-Demand instance price and you will continue to pay the Spot price that’s in effect at the beginning of each instance-hour for your running instance.

Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you at steep discounts compared to On-Demand prices. A Spot Instance can be interrupted by Amazon EC2 with a two-minute notification when EC2 needs the capacity back.

To use Spot Instances, you create a Spot Instance request that includes the number of instances, the instance type, the Availability Zone, and the maximum price that you are willing to pay per instance hour. If your maximum price exceeds the current Spot price, Amazon EC2 fulfills your request immediately if capacity is available. Otherwise, Amazon EC2 waits until your request can be fulfilled or until you cancel the request.
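
To illustrate, a minimal boto3 (Python) sketch that launches a one-time Spot Instance through the regular run_instances call; the AMI ID is a hypothetical placeholder (omitting MaxPrice caps your bid at the On-Demand price):

  import boto3

  ec2 = boto3.client("ec2")

  # Launch a one-time Spot Instance at the current Spot price
  ec2.run_instances(
      ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
      InstanceType="t3.large",
      MinCount=1,
      MaxCount=1,
      InstanceMarketOptions={
          "MarketType": "spot",
          "SpotOptions": {"SpotInstanceType": "one-time"},
      },
  )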

Hence, the correct answer is Spot Instance.

Both the Standard Reserved Instance and Convertible Reserved Instance are incorrect because these are types of Reserved Instance purchase option. This provides a capacity reservation that gives you additional confidence in your ability to launch instances when you need them.

On-Demand Instance is incorrect as this is the type where you pay for compute capacity by the hour or the second, depending on which instances you run.

46
Q
Which of the following cloud best practices reinforces the use of the Service-Oriented Architecture (SOA) design principle?
A.Think parallel
B.Implement elasticity
C.Decouple your components
D.Design for failure
A

C.Decouple your components

Explanation:
There are various best practices that you can follow which can help you build an application in the AWS cloud. The notable ones are:

  1. Design for failure
  2. Decouple your components
  3. Implement elasticity
  4. Think parallel

In Decouple your components, the key is to build components that do not have tight dependencies on each other, so that if one component were to die (fail), sleep (not respond) or remain busy (slow to respond) for some reason, the other components in the system are built so as to continue to work as if no failure is happening. The cloud reinforces the Service-Oriented Architecture (SOA) design principle that the more loosely coupled the components of the system, the bigger and better it scales.

You can build a loosely coupled system using messaging queues such as SQS. If a queue/buffer is used to connect any two components together, it can support concurrency, high availability and load spikes. As a result, the overall system continues to perform even if parts of the components are momentarily unavailable. If one component dies or becomes temporarily unavailable, the system will buffer the messages and get them processed when the component comes back up.
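
For illustration, a minimal boto3 (Python) sketch of decoupling a producer and a consumer with an SQS queue; the queue name and message body are hypothetical:

  import boto3

  sqs = boto3.client("sqs")
  queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]  # hypothetical queue name

  # Producer side: the web tier only knows about the queue, not the worker tier
  sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

  # Consumer side: the worker tier drains the buffer at its own pace
  resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
  for msg in resp.get("Messages", []):
      print(msg["Body"])  # application-specific processing would go here
      sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])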

In essence, loose coupling isolates the various layers and components of your application so that each component interacts asynchronously with the others and treats them as a “black box”. For example, in the case of web application architecture, you can isolate the app server from the web server and from the database. The app server does not know about your web server and vice versa, this gives decoupling between these layers and there are no dependencies code-wise or functional perspectives. In the case of batch-processing architecture, you can create asynchronous components that are independent of each other.

The AWS specific tactics for implementing this best practice are:

  1. Use Amazon SQS to isolate components
  2. Use Amazon SQS as buffers between components
  3. Design every component such that it exposes a service interface and is responsible for its own scalability in all appropriate dimensions and interacts with other components asynchronously
  4. Bundle the logical construct of a component into an Amazon Machine Image so that it can be deployed more often
  5. Make your applications as stateless as possible. Store session state outside of component (in Amazon SimpleDB, if appropriate)

Hence, the correct answer in this scenario is: Decouple your components.

Think parallel is incorrect because this just internalizes the concept of parallelization when designing architectures in the cloud. It advocates not only implementing parallelization wherever possible but also automating it, because the cloud allows you to create a repeatable process very easily.

Implement elasticity is incorrect because this principle is primarily implemented by automating your deployment process and streamlining the configuration and build process of your architecture. This ensures that the system can scale without any human intervention.

Design for failure is incorrect because it only encourages you to be a pessimist when designing architectures in the cloud; assume things will fail. In other words, you should always design, implement and deploy for automated recovery from failure.

47
Q
A company needs to troubleshoot an issue on their serverless application which is composed of an API Gateway, Lambda function, and a DynamoDB database. Which service should they use to trace user requests as they travel through their entire application?
A.Amazon Inspector
B.AWS X-Ray
C.Amazon CloudWatch
D.AWS CloudTrail
A

B.AWS X-Ray

Explanation:
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.

AWS X-Ray works with Amazon EC2, Amazon EC2 Container Service (Amazon ECS), AWS Lambda, and AWS Elastic Beanstalk. You can use X-Ray with applications written in Java, Node.js, and .NET that are deployed on these services.
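
As a hedged illustration (not from the official explanation), the X-Ray SDK for Python can instrument a Lambda function so that downstream DynamoDB calls appear in the trace; this assumes active tracing is enabled on the function, and the table name and key are hypothetical placeholders.

  import boto3
  from aws_xray_sdk.core import xray_recorder, patch_all

  patch_all()  # patch supported libraries (including boto3) so AWS calls are recorded as subsegments

  dynamodb = boto3.resource("dynamodb")

  @xray_recorder.capture("query_user")  # record this function as a custom subsegment
  def query_user(user_id):
      # hypothetical table name; the call to DynamoDB shows up in the X-Ray service map
      return dynamodb.Table("Users").get_item(Key={"UserId": user_id})

  def lambda_handler(event, context):
      return query_user(event["user_id"])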

Hence, the correct answer is AWS X-Ray.

Amazon CloudWatch is incorrect because although you can troubleshoot the issue by checking the logs, it is still better to use AWS X-Ray as it enables you to analyze and debug your serverless application more effectively.

Amazon Inspector is incorrect because this is a security assessment service primarily used for EC2 instances, not for tracing requests through a serverless application.

AWS CloudTrail is incorrect because this will only enable you to track the API calls made to your API Gateway, Lambda, and DynamoDB resources. It is still better to use AWS X-Ray to debug your application.

48
Q

A new AWS customer needs to deploy up to 100 t3a.large EC2 instances on their recently launched VPC, which is way beyond the default service limit. What should they do before launching their instances?
A.Enable Enhanced Networking
B.Create a case in the AWS Support Center page and request a service limit increase
C.Use AWS Trusted Advisor to increase the default service limits for EC2 instances
D.Do nothing. You can directly launch 100 t3a.large EC2 instances at the same time since AWS will automatically increase your service limit for you

A

B.Create a case in the AWS Support Center page and request a service limit increase

Explanation:

AWS maintains service limits for each account to help guarantee the availability of AWS resources, as well as to minimize billing risks for new customers. Some service limits are raised automatically over time as you use AWS, though most AWS services require that you request limit increases manually.

By default, there is a limit imposed on the number of EC2 instances you can launch in your VPC. You are limited to running up to a total of 20 On-Demand Instances across the instance family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per Region. Limit increases are not granted immediately, so it may take a couple of days for your increase to become effective.

To request a limit increase:

  1. Open the AWS Support Center page, sign in if necessary, and choose Create case.
  2. For the “Regarding” field, choose: Service Limit Increase.
  3. Complete the form. If this request is urgent, choose Phone as the method of contact instead of Web.
  4. Choose Submit.
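
The same request can also be opened programmatically through the AWS Support API, though note that the Support API itself requires a Business or Enterprise support plan. A hedged boto3 sketch follows; the subject, body, and the service/category codes are placeholders (valid codes come from support.describe_services()).

  import boto3

  support = boto3.client("support")  # Support API requires a Business or Enterprise plan

  # Open a service limit increase case (all field values are placeholders).
  case = support.create_case(
      subject="Service limit increase: EC2 instances",
      serviceCode="amazon-elastic-compute-cloud-linux",  # placeholder code
      severityCode="low",
      categoryCode="general-guidance",                   # placeholder code
      communicationBody="Please raise our t3a.large On-Demand instance limit to 100.",
      issueType="customer-service",
  )
  print(case["caseId"])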

Hence, the correct answer is: Create a case in the AWS Support Center page and request a service limit increase.

The option that says: Enable Enhanced Networking is incorrect because this has nothing to do with service limits.

The option that says: Use AWS Trusted Advisor to increase the default service limits for EC2 instances is incorrect because you can’t use AWS Trusted Advisor to increase your service limit. You have to raise a request to the AWS Support Center instead.

The option that says: Do nothing. You can directly launch 100 t3a.large EC2 instances at the same time since AWS will automatically increase your service limit for you is incorrect because AWS doesn’t do this for you. It is your responsibility to raise a request to increase your service limit as this is not automatically done by AWS.

49
Q

_______ is a cloud design principle which supports growth in users, traffic, or data size with no drop in performance.
A.Decouple your components
B.Go Serverless to reduce compute footprint
C.Scalability
D.Design for failure

A

C.Scalability

Explanation:
The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases. Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.

Systems that are expected to grow over time need to be built on top of a scalable architecture. Such an architecture can support growth in users, traffic, or data size with no drop in performance. It should provide that scale in a linear manner, where adding extra resources results in at least a proportional increase in the ability to serve additional load.

Growth should introduce economies of scale, and cost should follow the same dimension that generates business value out of that system. While cloud computing provides virtually unlimited on-demand capacity, your design needs to be able to take advantage of those resources seamlessly.

Hence, the correct answer is Scalability.

Decouple your components is incorrect because this principle focuses on reducing interdependencies between components so that the failure of one component does not cascade to the others. It does not specifically describe supporting growth in users, traffic, or data size.

Design for failure is incorrect because it only encourages you to be a pessimist when designing architectures in the cloud; assume things will fail. In other words, you should always design, implement and deploy for automated recovery from failure.

Go Serverless to reduce compute footprint is incorrect because this is not considered one of the key design principles of the AWS Cloud. Although it is true that using a serverless architecture can reduce your compute footprint, this is not applicable in every case. AWS Lambda is an example of a serverless service.

50
Q

Which of the following are the characteristics of Amazon EC2 Convertible Reserved Instances? (Select TWO.)
A.Has the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value
B.Allows the change of instance family, operating system, tenancy and payment option
C.Provides the most significant discount (up to 75% off On-Demand) and are best suited for steady-state usage
D.Allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week or month
E.Allows you to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or lesser value

A

A.Has the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value
B.Allows the change of instance family, operating system, tenancy and payment option

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

With Reserved Instances (RIs), you can choose the type that best fits the needs of your application:

  • Standard RIs: These provide the most significant discount (up to 75% off On-Demand) and are best suited for steady-state usage.
  • Convertible RIs: These provide a discount (up to 54% off On-Demand) and the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value. Like Standard RIs, Convertible RIs are best suited for steady-state usage.
  • Scheduled RIs: These are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.
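
For Convertible RIs in particular, the exchange described above is performed through the EC2 API. A hedged boto3 sketch follows; the RI ID and offering ID are hypothetical placeholders.

  import boto3

  ec2 = boto3.client("ec2")

  # Ask EC2 what an exchange of an existing Convertible RI for a different
  # offering would look like; both IDs below are hypothetical placeholders.
  target = [{"OfferingId": "abc12345-abcd-1234-abcd-123456789012", "InstanceCount": 1}]
  quote = ec2.get_reserved_instances_exchange_quote(
      ReservedInstanceIds=["ri-0123456789abcdef0"],
      TargetConfigurations=target,
  )

  # The exchange is only allowed when the target is of equal or greater value.
  if quote["IsValidExchange"]:
      ec2.accept_reserved_instances_exchange_quote(
          ReservedInstanceIds=["ri-0123456789abcdef0"],
          TargetConfigurations=target,
      )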

Hence, the correct answers are:

  • Allows the change of instance family, operating system, tenancy, and payment option
  • Has the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value.

The option that says: Allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month is incorrect because this is a description for Scheduled RIs and not for Convertible RIs.

The option that says: Allows you to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or lesser value is incorrect because it should be “equal or greater value” and not “equal or lesser value”.

The option that says: Provides the most significant discount (up to 75% off On-Demand) and are best suited for steady-state usage is incorrect because this is a characteristic of Standard RIs and not Convertible RIs.

51
Q
What service will allow you to sell your catalog of custom AMIs in AWS?
A.AWS Service Catalog
B.Amazon Mechanical Turk
C.Amazon CloudSearch
D.AWS Marketplace
A

D.AWS Marketplace

Explanation:
AWS Marketplace enables qualified partners to market and sell their software to AWS Customers. AWS Marketplace is an online software store that helps customers find, buy, and immediately start using the software and services that run on AWS.

Hence, AWS Marketplace is the correct answer.

Amazon Mechanical Turk is incorrect because this is simply a web service that provides an on-demand, scalable, human workforce to complete jobs that humans can do better than computers, such as recognizing objects in photographs. It is not a place where you can buy and sell custom software like AWS Marketplace.

AWS Service Catalog is incorrect because this just allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

Amazon CloudSearch is incorrect because this is primarily used as a search solution for your website or application.

52
Q
Which service would you use to speed up content delivery to your customers?
A.AWS CloudTrail
B.Amazon S3 Transfer Acceleration
C.Amazon CloudWatch
D.Amazon CloudFront
A

D.Amazon CloudFront

Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.

You can get started with the Content Delivery Network in minutes, using the same AWS tools that you’re already familiar with: APIs, AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon’s CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.

Hence, the correct answer is: Amazon CloudFront.

Amazon S3 Transfer Acceleration is incorrect because it only applies to transfers to and from S3 buckets and does not guarantee faster speeds. This service also does not integrate with Amazon EC2 and the like.

Amazon CloudWatch is incorrect because it’s just a monitoring tool used to gather metrics on your AWS resources.

AWS CloudTrail is incorrect because it is just another monitoring tool that captures API events in your account and lists them down in a history trail.

53
Q
A customer needs to establish a dedicated connection between their on-premises network and their AWS VPC that provides a more consistent network experience than Internet-based connections. Which of the following network services should they use?
A.AWS Direct Connect
B.VPN Connection
C.VPC Peering
D.AWS VPN CloudHub
A

A.AWS Direct Connect

Explanation:
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

Hence, the correct answer for this scenario is AWS Direct Connect.

VPN Connection and AWS VPN CloudHub are incorrect because a VPN is an Internet-based connection, unlike Direct Connect, which provides a dedicated connection. An Internet-based connection means that the traffic between the VPC and the on-premises network traverses the public Internet, which is why its performance is less consistent. You should use Direct Connect instead.

VPC Peering is incorrect because this is mainly used to connect two or more VPCs and not your on-premises data center.

54
Q
What services will help you create a highly available and scalable web app in the cloud? (Select TWO.)
A.Amazon AppStream 2.0
B.AWS ELB
C.Amazon EC2 Auto Scaling
D.Amazon CloudFront
A

B.AWS ELB
C.Amazon EC2 Auto Scaling

Explanation:
The purpose of automatic scaling is to automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As capacity is increased or decreased, the Amazon EC2 instances being added or removed must be registered or deregistered with a load balancer. This enables your application to automatically distribute incoming web traffic across such a dynamically changing number of instances.

Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group. When an instance is added to your Auto Scaling group, it needs to register with the load balancer or no traffic is routed to it. When an instance is removed from your Auto Scaling group, it must deregister from the load balancer or traffic continues to be routed to it.
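
As an illustrative sketch only (every name, subnet ID, and ARN below is a hypothetical placeholder), attaching a target group when creating an Auto Scaling group wires this registration up automatically:

  import boto3

  autoscaling = boto3.client("autoscaling")

  # Create an Auto Scaling group whose instances register with an ELB
  # target group; all names and ARNs below are placeholders.
  autoscaling.create_auto_scaling_group(
      AutoScalingGroupName="web-asg",
      LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
      MinSize=2,
      MaxSize=10,
      DesiredCapacity=2,
      VPCZoneIdentifier="subnet-0abc,subnet-0def",  # spread across Availability Zones
      TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"],
      HealthCheckType="ELB",  # replace instances the load balancer reports as unhealthy
  )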

Hence, the correct answers are: AWS ELB and Amazon EC2 Auto Scaling.

Amazon CloudFront is incorrect because this is a content delivery network service; it is not one of the primary services used to achieve a highly available and scalable web application in the Cloud.

Amazon AppStream 2.0 is incorrect because this is just a fully managed application streaming service which you can use to centrally manage your desktop applications.

55
Q

Which of the following actions will AWS charge you for?
A.Transfer of EC2 files between two AWS Regions
B.Transfer of data from your data center to S3 through a VPN
C.Setting up additional VPCs in your account
D.Provisioning elastic IPs and attaching them to EC2 instances

A

A.Transfer of EC2 files between two AWS Regions

Explanation:
AWS charges you for data transferred between two different Regions. This is similar to the costs incurred from data transfer between the AWS network and the public internet.

Hence, the correct answer is Transfer of EC2 files between two AWS Regions.

The option that says: Transfer of data from your data center to S3 through a VPN is incorrect because data coming in from your data center to AWS does not incur charges; inbound data transfer is free.

The option that says: Provisioning Elastic IPs and attaching them to EC2 instances is incorrect because Elastic IPs are only charged if they are not attached to instances or NAT gateways.

The option that says: Setting up additional VPCs in your account is incorrect because VPCs are free to use in AWS.

56
Q
Which of the following is a fully managed database in AWS that can be used to store JSON documents?
A.Amazon DynamoDB
B.Amazon RedShift
C.Amazon ElastiCache
D.Amazon Aurora
A

A.Amazon DynamoDB

Explanation:
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multiregion, multimaster, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. This is the perfect database to use for JSON type documents.

DynamoDB supports some of the world’s largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.
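
For illustration (the table, partition key, and attributes are hypothetical), a nested JSON-style document maps directly onto a DynamoDB item using boto3:

  import boto3

  dynamodb = boto3.resource("dynamodb")
  table = dynamodb.Table("Products")  # hypothetical table with "ProductId" as partition key

  # A nested JSON-style document is stored as a single DynamoDB item.
  table.put_item(
      Item={
          "ProductId": "p-100",
          "Name": "Widget",
          "Dimensions": {"Length": 10, "Width": 5},  # nested map
          "Tags": ["hardware", "sale"],              # list attribute
      }
  )

  item = table.get_item(Key={"ProductId": "p-100"})["Item"]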

Hence, the correct answer is Amazon DynamoDB.

Amazon Aurora is incorrect because this is a MySQL- and PostgreSQL-compatible relational database. Although relational engines can store JSON in columns, Aurora is not a purpose-built document database, so it is not the best fit for JSON documents.

Amazon ElastiCache is incorrect because this is just a service that offers a fully managed Redis and Memcached. This is a caching service and is not used for directly storing database entries.

Amazon Redshift is incorrect because this is a data warehousing service that uses columnar storage. This is not the best option compared to using Amazon DynamoDB.

57
Q

Your team had an incident where an S3 object was deleted using an account without the owner’s knowledge. What can be done to prevent unauthorized deletion of your S3 objects?
A.Set your S3 buckets to private so that objects are not publicly readable/writable
B.Set up stricter IAM policies that will prevent users from deleting S3 objects
C.Create access control policies so that only you can perform S3-related actions
D.Have each account enable multi-factor authentication

A

D.Have each account enable multi-factor authentication

Explanation:
By setting up MFA, you add an extra layer of protection for your AWS accounts. This is very useful for preventing unwanted access to your AWS resources. In S3, once versioning is enabled for your bucket, you can also set up MFA Delete so that deleting objects requires an additional MFA authentication.
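
A hedged sketch of enabling MFA Delete (which must be done with the bucket owner’s root credentials and requires versioning) could look like this; the bucket name, MFA device ARN, and one-time code are placeholders.

  import boto3

  s3 = boto3.client("s3")  # must be called with the root account's credentials

  # Enable versioning together with MFA Delete; the MFA parameter is the
  # device serial/ARN followed by the current one-time code (placeholders here).
  s3.put_bucket_versioning(
      Bucket="my-important-bucket",
      MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
      VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
  )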

Hence, the correct answer is: Have each account enable multi-factor authentication.

The option that says: Set up stricter IAM policies that will prevent users from deleting S3 objects is incorrect because although you can prevent unwanted deletion by removing the permission from IAM users, the issue in this case was caused by unauthorized access to an account that legitimately had the capability of deleting objects. Stricter policies would also prevent authorized users from deleting objects when necessary.

The option that says: Create access control policies so that only you can perform S3-related actions is incorrect because this will not prevent unauthorized access to AWS accounts.

The option that says: Set your S3 buckets to private so that objects are not publicly readable/writable is incorrect because this is unrelated to the issue in this case.

58
Q
Which is a fully-managed source control service that allows you to host Git-based repositories and enable code collaboration for your team via pull requests, branching, and merging?
A.AWS CodeCommit
B.AWS CodeStar
C.AWS CodeDeploy
D.AWS CodeBuild
A

A.AWS CodeCommit

Explanation:
AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. AWS CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use AWS CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.

AWS CodeCommit helps you collaborate on code with teammates via pull requests, branching, and merging. You can implement workflows that include code reviews and feedback by default, and control who can make changes to specific branches.

AWS CodeCommit keeps your repositories close to your build, staging, and production environments in the AWS cloud. You can transfer incremental changes instead of the entire application. This allows you to increase the speed and frequency of your development lifecycle.
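
As a brief, hedged sketch (the repository name and description are placeholders), a repository can be created with boto3 and then cloned with your existing Git tools:

  import boto3

  codecommit = boto3.client("codecommit")

  # Create a private Git repository; the name and description are placeholders.
  repo = codecommit.create_repository(
      repositoryName="my-app",
      repositoryDescription="Application source code",
  )

  # The response includes HTTPS/SSH clone URLs for use with standard Git tooling.
  print(repo["repositoryMetadata"]["cloneUrlHttp"])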

Hence, the correct answer is AWS CodeCommit.

AWS CodeStar is incorrect because this simply enables you to quickly develop, build, and deploy applications on AWS.

AWS CodeBuild is incorrect because this is just a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.

AWS CodeDeploy is incorrect because this is primarily used to automate code deployments to any instance, including EC2 instances and instances running on-premises.

59
Q
Which of the following is typically used to secure your VPC subnets?
A.Network ACL
B.Security Group
C.AWS Config
D.AWS IAM
A

A.Network ACL

Explanation:
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
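
As an illustrative sketch (the network ACL ID is a hypothetical placeholder), a rule allowing inbound HTTPS from anywhere can be added to a network ACL with boto3:

  import boto3

  ec2 = boto3.client("ec2")

  # Add an inbound rule to a network ACL allowing HTTPS from anywhere;
  # the ACL ID below is a hypothetical placeholder.
  ec2.create_network_acl_entry(
      NetworkAclId="acl-0123456789abcdef0",
      RuleNumber=100,          # rules are evaluated in ascending order
      Protocol="6",            # 6 = TCP
      RuleAction="allow",
      Egress=False,            # False = inbound rule
      CidrBlock="0.0.0.0/0",
      PortRange={"From": 443, "To": 443},
  )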

Hence, Network ACL is the correct answer.

Security group is incorrect because this is just used to secure your EC2 instances and RDS databases in a similar way to how network ACLs work. However, security groups are not used for subnet security.

AWS IAM is incorrect because it is a service used for account and user management.

AWS Config is incorrect because it is a tool that checks for resource compliance in your account.

60
Q
Which service will allow you to group together users who perform a similar function and apply function-specific privileges?
A.Tagging
B.Directory Service
C.Resource Groups
D.AWS IAM
A

D.AWS IAM

Explanation:
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. IAM has various identities such as IAM Users, IAM Groups, and IAM Roles.

An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.

Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy. It is simply a way to attach policies to multiple users at one time.
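
Following the Admins example above, a minimal boto3 sketch of this pattern (the group name, policy ARN, and user name are placeholders) might be:

  import boto3

  iam = boto3.client("iam")

  # Create a group for a job function and attach a managed policy to it;
  # the group and user names below are hypothetical placeholders.
  iam.create_group(GroupName="Admins")
  iam.attach_group_policy(
      GroupName="Admins",
      PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",  # AWS managed policy
  )

  # Any user added to the group automatically inherits the group's permissions.
  iam.add_user_to_group(GroupName="Admins", UserName="alice")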

Hence, the correct answer is AWS IAM.

Resource group is incorrect since it is used to organize your AWS resources.

Tagging is incorrect as well since it has no capability to perform what is asked in the scenario.

AWS Directory Service is incorrect because it simply provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services; it is not used to group users together and apply function-specific privileges.

61
Q
Which of the following is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using appliances with on-board storage and compute capabilities?
A.AWS Snowball Edge
B.AWS Snowcone
C.Lambda@Edge
D.AWS Snowmobile
A

A.AWS Snowball Edge

Explanation:
AWS Snowball Edge is a data migration and edge computing device that comes in two options. Snowball Edge Storage Optimized provides both block storage and Amazon S3-compatible object storage, and 24 vCPUs. It is well suited for local storage and large scale data transfer. Snowball Edge Compute Optimized provides 52 vCPUs, block and object storage, and an optional GPU for use cases such as advanced machine learning and full-motion video analysis in disconnected environments. Customers can use these two options for data collection, machine learning and processing, and storage in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before shipping it back to AWS. These devices may also be rack mounted and clustered together to build larger, temporary installations.

Snowball Edge supports specific Amazon EC2 instance types as well as AWS Lambda functions, so customers may develop and test in AWS then deploy applications on devices in remote locations to collect, pre-process, and return the data. Common use cases include data migration, data transport, image collation, IoT sensor stream capture, and machine learning.

Hence, the correct answer is AWS Snowball Edge.

AWS Snowmobile is incorrect because this is primarily used to migrate tens of petabytes to exabytes of data in batches to the cloud.

AWS Snowcone is incorrect because although it is a data transport solution like Snowball Edge, it is not suitable for moving terabytes to petabytes of data. Take note that the usable storage for Snowcone is only 8 TB.

Lambda@Edge is incorrect because this is just a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency.

62
Q
Which of the following is best suited for load balancing Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Transport Layer Security (TLS) traffic including the capability of handling millions of requests per second while maintaining ultra-low latencies?
A.Network Load Balancer
B.None of the above
C.Application Load Balancer
D.Classic Load Balancer
A

A.Network Load Balancer

Explanation:
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones.

Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant. They are:

Application Load Balancer - This is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

Network Load Balancer - This is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.

Classic Load Balancer - This provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.
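
As a hedged sketch (the load balancer name, subnet IDs, and target group ARN are all hypothetical placeholders), a Network Load Balancer with a TCP listener can be created via boto3:

  import boto3

  elbv2 = boto3.client("elbv2")

  # Create a Network Load Balancer (Layer 4); subnet IDs are placeholders.
  nlb = elbv2.create_load_balancer(
      Name="my-nlb",
      Type="network",
      Scheme="internet-facing",
      Subnets=["subnet-0abc", "subnet-0def"],
  )

  # Forward TCP traffic on port 443 to a (placeholder) target group.
  elbv2.create_listener(
      LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
      Protocol="TCP",
      Port=443,
      DefaultActions=[{
          "Type": "forward",
          "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tcp-targets/abc123",
      }],
  )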

Hence, the correct type of elastic load balancer to use is the Network Load Balancer.

Application Load Balancer and Classic Load Balancer are incorrect because these are not best suited for load balancing TCP, UDP, and TLS traffic where extreme performance is required. Moreover, these two do not have the capability of handling millions of requests per second while maintaining ultra-low latencies.

The option that says: None of the above is incorrect because this requirement can be fulfilled by simply using a Network Load Balancer.

63
Q

When a company uses AWS and decouples from their on-premises data center, they will be able to have which of the following benefits? (Select TWO.)
A.Decrease your TCO
B.Reduce time to market
C.Deferred payments to their operational expenditures
D.Replace low variable costs with upfront capital expenses (CAPEX)
E.Massive discounts for bare metal servers from Amazon.com

A

A.Decrease your TCO
B.Reduce time to market

Explanation:
As the technology has matured over the last decade, companies are moving to the cloud to lower costs, reduce complexity, and increase flexibility. The cloud provides scalable and powerful compute solutions, low-cost, reliable storage, and database technologies that meet the most demanding workload requirements. In addition, cloud technologies can be used to deploy solutions quickly and cost-effectively around the world and on any device.

When you decouple from the data center, you’ll be able to:

  • Decrease your TCO: Eliminate many of the costs related to building and maintaining a data center or colocation deployment. Pay for only the resources you consume.
  • Reduce complexity: Reduce the need to manage infrastructure, investigate licensing issues, or divert resources.
  • Adjust capacity on the fly: Add or reduce resources, depending on seasonal business needs, using infrastructure that is secure, reliable, and broadly accessible.
  • Reduce time to market: Design and develop new IT projects faster.
  • Deploy quickly, even worldwide: Deploy applications across multiple geographic areas.
  • Increase efficiencies: Use automation to reduce or eliminate IT management activities that waste time and resources.
  • Innovate more: Spin up a new server and try out an idea. Each project moves through the funnel more quickly because the cloud makes it faster (and cheaper) to deploy, test, and launch new products and services.
  • Spend your resources strategically: Switch to a DevOps model to free your IT staff from operations and maintenance that can be handled by the cloud services provider.
  • Enhance security: Spend less time conducting security reviews on infrastructure. Mature cloud providers have teams of people who focus on security, offering best practices to ensure you’re compliant, no matter what your industry.

Hence, the correct answers are: Decrease your TCO and Reduce time to market.

Deferred payments to their operational expenditures is incorrect because this type of payment is not supported when you move to AWS; you pay for the resources you consume as you use them.

Replace low variable costs with upfront capital expenses (CAPEX) is incorrect because it is actually the other way around: Using AWS, companies will have the opportunity to replace upfront capital expenses (CAPEX) with low variable costs.

Massive discounts for bare metal servers from Amazon.com is incorrect because this is not an advantage of using AWS. By moving from a traditional data center to the AWS cloud, you can reduce or eliminate the overhead related to managing, operating, and maintaining your data center. Hence, the reduction of your TCO is not from the massive discounts from the Amazon e-commerce website.

64
Q
Which of the following AWS Cost Management tools enable you to forecast future costs and usage of your AWS resources based on your past consumption?
A.AWS Pricing Calculator
B.Amazon Forecast
C.AWS Cost and Usage Report
D.Cost Explorer
A

D.Cost Explorer

Explanation:
Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 13 months, forecast how much you’re likely to spend for the next three months if you set the detail level to at least daily and next twelve months if you set the detail level to at least monthly, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs.

A forecast is a prediction of how much you will use AWS services over the forecast time period that you selected, based on your past usage. Forecasting provides an estimate of what your AWS bill will be and enables you to use alarms and budgets for amounts that you’re predicted to use. Because forecasts are predictions, the forecasted billing amounts are estimated and might differ from your actual charges for each statement period.

When you first sign up for Cost Explorer, AWS prepares the data about your costs for the current month and the last three months and then calculates the forecast for the next three or twelve months, depending on the level of detail of the forecast (hourly, daily, or monthly). The current month’s data is available for viewing in about 24 hours. The rest of your data takes a few days longer. Cost Explorer updates your cost data at least once every 24 hours. After you sign up, Cost Explorer can display up to 12 months of historical data (if you have that much), the current month, and the forecasted costs. The first time that you use Cost Explorer, it walks you through the main parts of the console with an explanation for each section. You can trigger this walkthrough at a later time as well.
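
For illustration (the date range is a placeholder; forecasts can only cover future dates), the same forecast can be retrieved through the Cost Explorer API:

  import boto3

  ce = boto3.client("ce")  # Cost Explorer API

  # Forecast the coming quarter's unblended cost based on past usage;
  # the date range below is a hypothetical placeholder.
  forecast = ce.get_cost_forecast(
      TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},
      Metric="UNBLENDED_COST",
      Granularity="MONTHLY",
  )
  print(forecast["Total"]["Amount"], forecast["Total"]["Unit"])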

Hence, the correct answer is AWS Cost Explorer.

AWS Pricing Calculator is incorrect because this just allows you to estimate your AWS bill by manually entering your planned resources by service. It does not forecast future costs and usage of your AWS resources based on your past consumption, unlike the AWS Cost Explorer.

AWS Cost and Usage report is incorrect because this tool doesn’t forecast your future costs. It just lists your AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes.

Amazon Forecast is incorrect because this is actually not considered as one of the AWS Cost Management tools. Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts of any time-series data, such as retail demand, manufacturing demand, travel demand, revenue, IT capacity, logistics, and web traffic.

65
Q
Which of the following should you use to automatically transfer your infrequently accessed data in your S3 bucket to a more cost-effective storage class?
A.Lifecycle policy
B.Cross-Origin Resource Sharing (CORS)
C.Cross-Region replication
D.S3 access control list
A

A.Lifecycle policy

Explanation:
You can use lifecycle policies in S3 to automatically move your infrequently accessed data to a more cost-effective storage class such as S3-IA or Glacier.

Lifecycle configuration in Amazon S3 enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation or archive objects to the GLACIER storage class one year after creation.

Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
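
To make this concrete, here is a minimal boto3 sketch combining both action types in one lifecycle rule; the bucket name, prefix, and day counts are hypothetical:

  import boto3

  s3 = boto3.client("s3")

  # One rule that transitions objects under a prefix to cheaper storage
  # classes over time and eventually expires them (all values are placeholders).
  s3.put_bucket_lifecycle_configuration(
      Bucket="my-log-bucket",
      LifecycleConfiguration={
          "Rules": [
              {
                  "ID": "archive-logs",
                  "Filter": {"Prefix": "logs/"},
                  "Status": "Enabled",
                  "Transitions": [
                      {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after 30 days
                      {"Days": 365, "StorageClass": "GLACIER"},     # archive after one year
                  ],
                  "Expiration": {"Days": 730},  # delete after two years
              }
          ]
      },
  )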

Hence, the correct answer is Lifecycle policy.

Cross-Region replication is incorrect because this just enables automatic, asynchronous copying of objects across Amazon S3 buckets in different AWS Regions. You can copy objects between different AWS Regions but the Versioning feature should be enabled first in your S3 bucket.

S3 access control list is incorrect because this feature is primarily used to manage access to your buckets and objects. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.

Cross-Origin Resource Sharing (CORS) is incorrect because this is only applicable to client web applications that are loaded in one domain to interact with resources in a different domain. You cannot use this feature to automatically transition your objects to a more cost-effective storage class.