AWS Certified Cloud Practitioner Practice Test 1(Bonso) Flashcards

1
Q
Which of the following options can you use to launch a new Amazon RDS database cluster in your VPC? (Select TWO.)
A. AWS Management Console
B. AWS CodePipeline
C. AWS CloudFormation
D. AWS Concierge
E. AWS Systems Manager
A

A. AWS Management Console
C. AWS CloudFormation

Explanation:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

You can launch a new RDS database cluster using the AWS Management Console, AWS CLI, and AWS CloudFormation. The AWS Management Console provides a web-based way to administer AWS services. You can sign in to the console and create, list, and perform other tasks with AWS services for your account. These tasks might include starting and stopping Amazon EC2 instances and Amazon RDS databases, creating Amazon DynamoDB tables, creating IAM users, and so on. The AWS Command Line Interface (CLI), on the other hand, is a unified tool to manage your AWS services.

AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
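As a rough illustration of the CloudFormation approach, a template can declare an Aurora MySQL cluster as a resource. This is a minimal sketch, not a complete production template; the subnet group name and security group ID below are placeholders that must already exist in your VPC:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - launch an Aurora MySQL cluster into an existing VPC
Parameters:
  DBPassword:
    Type: String
    NoEcho: true            # keep the master password out of console output
Resources:
  DemoAuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
      DBSubnetGroupName: demo-db-subnet-group   # hypothetical; must reference subnets in your VPC
      VpcSecurityGroupIds:
        - sg-0123456789abcdef0                  # hypothetical security group in your VPC
```

Deploying the template (for example, with `aws cloudformation create-stack`) provisions the cluster automatically, which is what distinguishes CloudFormation from the manual, web-based workflow of the Management Console.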

Hence, the correct answers are: AWS Management Console and AWS CloudFormation.

AWS Concierge is incorrect because this is actually a senior customer service agent who is assigned to your account when you subscribe to an Enterprise or qualified Reseller Support plan. This customer service agent is not authorized to launch an RDS cluster on your behalf.

AWS CodePipeline is incorrect because this is just a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

AWS Systems Manager is incorrect because this is just a unified user interface so you can view operational data from multiple AWS services, and allows you to automate operational tasks across your AWS resources.

2
Q

In AWS, _______ is one of the advantages of Consolidated Billing.
A. Go global in minutes
B. Volume pricing
C. The ability to have one member account to pay the charges of all the master accounts
D. Consolidation of both AWS and AISPL accounts into one billing

A

B. Volume pricing

Explanation
For billing purposes, AWS treats all the accounts in the organization as if they were one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give you lower prices the more you use the service. With consolidated billing, AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible. AWS then allocates each linked account a portion of the overall volume discount based on the account’s usage.

The Bills page for each linked account displays an average tiered rate that is calculated across all the accounts on the consolidated bill for the organization. For example, let’s say that Bob’s consolidated bill includes both Bob’s own account and Susan’s account. Bob’s account is the payer account, so he pays the charges for both himself and Susan.

In this example, Bob transfers 8 TB of data during the month and Susan transfers 4 TB.

For the purposes of this example, AWS charges $0.17 per GB for the first 10 TB of data transferred and $0.13 for the next 40 TB. This translates into $174.08 per TB (= 0.17 × 1024) for the first 10 TB, and $133.12 per TB (= 0.13 × 1024) for the next 40 TB. Remember that 1 TB = 1024 GB.

For the 12 TB that Bob and Susan used, Bob’s payer account is charged:

= ($174.08 * 10 TB) + ($133.12 * 2 TB)

= $1740.80 + $266.24

= $2,007.04

The average cost-per-unit of data transfer out for the month is therefore $2,007.04 / 12 TB = $167.25 per TB. That is the average tiered rate that is shown on the Bills page and in the downloadable cost report for each linked account on the consolidated bill.

Without the benefit of tiering across the consolidated bill, AWS would have charged Bob and Susan each $174.08 per TB for their usage, for a total of $2,088.96.
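The tiered calculation above can be sketched in a few lines of Python. The rates and tier sizes are those of the example, not current AWS pricing:

```python
# Reproduces the consolidated-billing example: 12 TB of combined data
# transfer billed across two pricing tiers (example rates, not real pricing).
TB = 1024  # GB per TB, as in the example


def tiered_cost(total_tb, tiers=((10, 0.17), (40, 0.13))):
    """Charge the per-GB rate tier by tier and return the total in dollars."""
    cost, remaining = 0.0, total_tb
    for size_tb, rate_per_gb in tiers:
        used = min(remaining, size_tb)
        cost += used * TB * rate_per_gb
        remaining -= used
        if remaining <= 0:
            break
    return cost


total = tiered_cost(8 + 4)      # Bob's 8 TB + Susan's 4 TB combined
print(round(total, 2))          # 2007.04
print(round(total / 12, 2))     # 167.25 -> the average tiered rate per TB
```

Without consolidation, each TB would have been billed at the first-tier rate of $174.08, giving the higher $2,088.96 total mentioned above.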

Hence, the correct answer in this scenario is Volume pricing.

The option that says: consolidation of both AWS and AISPL accounts into one billing is incorrect because AWS and AISPL (Amazon Internet Services Private Limited) accounts are considered two different entities and hence can’t be consolidated together.

The option that says: Go global in minutes is incorrect because this is one of the advantages of Cloud Computing and not Consolidated Billing.

The option that says: The ability to have one member account to pay the charges of all the master accounts is incorrect because it is actually the other way around. Every organization in AWS Organizations has a master account that pays the charges of all the member accounts.

3
Q

_________ is one of the components of AWS Global Infrastructure which consists of one or more discrete data centers each with redundant power, networking, and connectivity, and housed in separate facilities.

A. AWS Region
B. VPC
C. Edge Location
D. Availability Zone

A

D. Availability Zone

Explanation:
The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities.

These Availability Zones offer you the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. The AWS Cloud operates in over 60 Availability Zones within over 20 geographic Regions around the world, with announced plans for more Availability Zones and Regions.

Each Amazon Region is designed to be completely isolated from the other Amazon Regions. This achieves the greatest possible fault tolerance and stability. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each AWS Region.

Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by AWS Region). In addition to discrete uninterruptable power supply (UPS) and onsite backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability Zones are all redundantly connected to multiple tier-1 transit providers.

Hence, the correct answer is Availability Zone.

Edge location is incorrect because this is just a site that CloudFront uses to cache copies of your content for faster delivery to users at any location.

AWS Region is incorrect because this consists of multiple Availability Zones (AZ).

VPC is incorrect because it is just a service that lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.

4
Q

Which of the following are benefits of using consolidated billing in AWS? (Select TWO.)
A. Allows one member account to pay the charges of all the master accounts
B. Share the volume pricing and Reserved Instance discounts by combining the usage across all accounts in the organization
C. You get one bill for multiple accounts
D. Consolidate all the bills from multiple AWS accounts for only $1 every month
E. Consolidate together the billing and payment of both AWS accounts and Amazon Internet Services Pvt. Ltd (AISPL) accounts

A

B. Share the volume pricing and Reserved Instance discounts by combining the usage across all accounts in the organization
C. You get one bill for multiple accounts

Explanation:
You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. Every organization in AWS Organizations has a master account that pays the charges of all the member accounts. The master account is also called a payer account, and the member account is also known as a linked account.

Consolidated billing has the following benefits:

One bill – You get one bill for multiple accounts.

Easy tracking – You can track the charges across multiple accounts and download the combined cost and usage data.

Combined usage – You can combine the usage across all accounts in the organization to share the volume pricing discounts and Reserved Instance discounts. This can result in a lower charge for your project, department, or company than with individual standalone accounts.

No extra fee – Consolidated billing is offered at no additional cost.

If you have access to the payer account, you can see a combined view of the AWS charges that the linked accounts incur. You also can get a cost report for each linked account. AWS and AISPL accounts can’t be consolidated together. If your contact address is in India, you can use AWS Organizations to consolidate AISPL accounts within your organization.

When a linked account leaves an organization, the linked account can no longer access Cost Explorer data that was generated when the account was in the organization. The data isn’t deleted, and the payer account in the organization can still access the data. If the linked account rejoins the organization, the linked account can access the data again.

Hence, the correct answers for this scenario are:

  • You get one bill for multiple accounts
  • Share the volume pricing and Reserved Instance discounts by combining the usage across all accounts in the organization

The option that says: Consolidate all the bills from multiple AWS accounts for only $1 every month is incorrect because this feature is offered at no additional cost.

The option that says: Allows one member account to pay the charges of all the master accounts is incorrect because it should be the other way around. The master account pays the charges of all the member accounts.

The option that says: Consolidate together the billing and payment of both AWS accounts and Amazon Internet Services Pvt. Ltd (AISPL) accounts is incorrect because these two can’t be consolidated together.

5
Q

Which service should you use if you need to launch a customized self-hosted database that requires a scheduled shutdown every night to save on cost?

A. Amazon Redshift
B. Amazon EC2 instance with an Instance Store volume
C. Amazon DynamoDB
D. Amazon EC2 instance with an EBS volume

A

D. Amazon EC2 instance with an EBS volume

Explanation:
Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on an instance.

An EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance. The volume persists independently from the running life of an instance. After an EBS volume is attached to an instance, you can use it like any other physical hard drive. Multiple volumes can be attached to an instance. You can also detach an EBS volume from one instance and attach it to another instance. You can dynamically change the configuration of a volume attached to an instance. EBS volumes can also be created as encrypted volumes using the Amazon EBS encryption feature.

Hence, the correct answer for this scenario is Amazon EC2 instance with an EBS volume.

Amazon DynamoDB is incorrect because this is a non-relational database service that is fully-managed by AWS. This means that you have no control over its underlying server.

Amazon EC2 instance with an Instance Store volume is incorrect because if you use this for your self-hosted database, all of your data will be lost after you shut down the instance. You have to use an EBS Volume instead in order to persist the data for the scheduled nightly shutdown.

Amazon Redshift is incorrect because just like Amazon DynamoDB, you don’t have control over its underlying server and hence, you won’t be able to schedule a nightly shutdown. This is a fully managed, petabyte-scale data warehouse service in the cloud.

6
Q

In the Shared Responsibility Model, which of the following options below is a shared control between AWS and the customer?
A. Server-side data encryption
B. Awareness and training
C. Client-side data encryption
D. Physical and environmental controls of the AWS data centers

A

B. Awareness and training

Explanation:
Deploying workloads on Amazon Web Services (AWS) helps streamline time-to-market, increase business efficiency, and enhance user performance for many organizations. But as you capitalize on this strategy, it is important to understand your role in securing your AWS environment.

Based on the AWS Shared Responsibility Model, AWS provides a data center and network architecture built to meet the requirements of the most security-sensitive organizations, while you are responsible for securing services built on top of this infrastructure, notably including network traffic from remote networks.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.

Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.

Inherited Controls: Controls which a customer fully inherits from AWS.

- Physical and Environmental controls

Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

- Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
- Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
- Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.

Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.

Examples include:

- Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.

Hence, the correct answer is Awareness and training.

The options that say: Client-side data encryption and Server-side data encryption are incorrect because these items fall under the responsibilities of the customer.

The option that says: Physical and environmental controls of the AWS data centers is incorrect because this is the sole responsibility of AWS.

7
Q

Which of the following IAM identities is associated with the access keys that are used in managing your cloud resources via the AWS Command Line Interface (AWS CLI)?

A. IAM Group
B. IAM Role
C. IAM User
D. IAM Policy

A

C. IAM User

Explanation:
Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

Access keys consist of two parts:

  1. Access key ID (for example: AKIAIOSTUTORIALSDOJO)
  2. Secret access key (for example: wJalrXUtnFEMI/K7MDENG/bTutorialsDojoKEY).

Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. Manage your access keys as securely as you do your user name and password. Do not provide your access keys to a third party, even to help find your canonical user ID; doing so might give someone permanent access to your account.
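For reference, the AWS CLI typically reads an IAM user's access keys from a local credentials file. The key values below are the placeholder examples from above, not real credentials:

```ini
; ~/.aws/credentials -- placeholder values; never commit real keys to source control
[default]
aws_access_key_id     = AKIAIOSTUTORIALSDOJO
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bTutorialsDojoKEY
```

Running `aws configure` writes this file for you, after which CLI commands are signed with the IAM user's keys automatically.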

As a best practice, use temporary security credentials (IAM roles) instead of access keys, and disable any AWS account root user access keys. When you create an access key pair, save the access key ID and secret access key in a secure location. The secret access key is available only at the time you create it. If you lose your secret access key, you must delete the access key and create a new one.

An IAM user is an entity that you create in AWS. The IAM user represents the person or service who uses the IAM user to interact with AWS. The primary use for IAM users is to give people the ability to sign in to the AWS Management Console for interactive tasks and to make programmatic requests to AWS services using the API or CLI.

A user in AWS consists of a name, a password to sign into the AWS Management Console, and up to two access keys that can be used with the API or CLI. When you create an IAM user, you grant it permissions by making it a member of a group that has appropriate permission policies attached, or by directly attaching policies to the user. You can also clone the permissions of an existing IAM user, which automatically makes the new user a member of the same groups and attaches all the same policies.

Hence, the correct answer is IAM User.

IAM Role is incorrect because although you can use this IAM identity with the AWS CLI, a role provides temporary security credentials rather than the long-term access keys described in the scenario.

IAM Group is incorrect because this is just a collection of IAM users and is not used for the AWS CLI tool. You can use IAM groups to specify permissions for a collection of users, which can make those permissions easier to manage for those users.

IAM Policy is incorrect because this is actually not considered as one of the IAM identities and it is not associated with the access keys used for the AWS CLI. You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.

8
Q

Which statement below is correct regarding the components of the AWS Global Infrastructure?
A. An edge location contains multiple AWS Regions
B. An AWS Region contains multiple Availability Zones
C. An Availability Zone contains multiple AWS Regions
D. An Availability Zone contains edge locations

A

B. An AWS Region contains multiple Availability Zones

Explanation:
The AWS Global Infrastructure delivers a cloud infrastructure companies can depend on—no matter their size, changing needs, or challenges. The AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest quality global network performance available today. Every component of the AWS infrastructure is designed and built for redundancy and reliability, from regions to networking links to load balancers to routers and firmware.

AWS provides a more extensive global footprint than any other cloud provider, and it opens up new Regions faster than other providers. To support its global footprint and ensure customers are served across the world, AWS maintains multiple geographic regions, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East.

Each AWS Region provides full redundancy and connectivity to the network. Unlike other cloud providers, who define a region as a single data center, AWS Regions consist of multiple Availability Zones, each of which is a fully isolated partition of the AWS infrastructure that consists of discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

An Availability Zone gives customers the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. The network performance is sufficient to accomplish synchronous replication between AZs.

Hence, the correct answer is: An AWS Region contains multiple Availability Zones.

The option that says: An Availability Zone contains multiple AWS Regions is incorrect because it is actually the other way around. It is the AWS Region which contains multiple Availability Zones.

The option that says: An Availability Zone contains edge locations is incorrect because this is a false description of the relationship between these two components. An edge location is simply a site that CloudFront uses to cache copies of your content for faster delivery to users in any location.

The option that says: An edge location contains multiple AWS Regions is incorrect because an edge location and an AWS Region are not geographically related. Hence, it is important to note that an edge location does not contain multiple AWS Regions.

9
Q

Which AWS service should you use if you need to launch a highly scalable MySQL database?

A. Amazon Aurora
B. Amazon ElastiCache
C. Amazon Redshift
D. Amazon DynamoDB

A

A. Amazon Aurora

Explanation:
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.

Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64 TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).

Hence, the correct answer is Amazon Aurora.

Amazon Redshift is incorrect because this is just a data warehousing service and doesn’t support MySQL.

Amazon DynamoDB is incorrect because although this service is highly scalable, this is primarily used for nonrelational databases only.

Amazon ElastiCache is incorrect because this service just makes it easy for you to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud.

10
Q

Which AWS service allows your EC2 compute capacity to automatically scale based on the incoming traffic?

A. Amazon Macie
B. AWS Auto Scaling
C. Amazon Lightsail
D. AWS CloudTrail

A

B. AWS Auto Scaling

Explanation:
AWS Auto Scaling enables you to configure automatic scaling for the AWS resources that are part of your application in a matter of minutes. The AWS Auto Scaling console provides a single user interface to use the automatic scaling features of multiple AWS services. You can configure automatic scaling for individual resources or for whole applications.

With AWS Auto Scaling, you configure and manage scaling for your resources through a scaling plan. The scaling plan uses dynamic scaling and predictive scaling to automatically scale your application’s resources. This ensures that you add the required computing power to handle the load on your application and then remove it when it’s no longer required. The scaling plan lets you choose scaling strategies to define how to optimize your resource utilization. You can optimize for availability, for cost, or a balance of both. Alternatively, you can create custom scaling strategies.
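As a concrete illustration of dynamic scaling, a target-tracking policy for an EC2 Auto Scaling group is described by a small JSON configuration; the service then adds or removes instances to keep the tracked metric near the target. This is a sketch of the configuration shape, with the 50% CPU target chosen arbitrarily:

```json
{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  }
}
```

A configuration like this is attached to an Auto Scaling group (for example, via `aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling`), so that incoming traffic driving average CPU above 50% triggers a scale-out, and sustained low CPU triggers a scale-in.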

Hence, the correct answer is: AWS Auto Scaling.

Amazon Macie is incorrect because this is just a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.

AWS CloudTrail is incorrect because this service is primarily used for governance, compliance, operational auditing, and risk auditing of your AWS account.

Amazon Lightsail is incorrect because this service is just a virtual private server (VPS) solution and is not used for Amazon EC2 scaling. This service provides developers compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud.

11
Q

A company which has a basic support plan needs resources to deploy, test, and improve their AWS environment. Which of the following can they use for free?
A. Technical Account Manager consultation
B. AWS Support API for programmatic case management
C. AWS online documentation, whitepapers, blogs, and support forums
D. In-person classes with an accredited AWS instructor

A

C. AWS online documentation, whitepapers, blogs, and support forums

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

In addition, customers with a Business or Enterprise support plan have access to these features:

  • Use-case guidance: what AWS products, features, and services to use to best support your specific needs.
  • AWS Trusted Advisor, which inspects customer environments. Then, Trusted Advisor identifies opportunities to save money, close security gaps, and improve system reliability and performance.
  • An API for interacting with Support Center and Trusted Advisor. This API allows for automated support case management and Trusted Advisor operations.
  • Third-party software support: help with Amazon Elastic Compute Cloud (EC2) instance operating systems and configuration. Also, help with the performance of the most popular third-party software components on AWS.

The AWS Support API provides access to some of the features of the AWS Support Center. This API allows programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status. AWS provides this access for AWS Support customers who have a Business or Enterprise support plan.

Hence, the correct answer is: AWS online documentation, whitepapers, blogs and support forums.

The option that says: AWS Support API for programmatic case management is incorrect because the AWS Support API is only accessible to customers who have a Business or Enterprise support plan.

The option that says: Technical Account Manager consultation is incorrect because this feature only applies to customers with an Enterprise Support plan.

The option that says: In-person classes with an accredited AWS instructor is incorrect because this activity is not free.

12
Q
Which of the following can a developer use to interact with your AWS services? (Select TWO.)
A. AWS Artifact
B. AWS Organizations
C. AWS Command Line Interface
D. Elastic Network Interface
E. AWS SDKs
A

C. AWS Command Line Interface
E. AWS SDKs

Explanation:
The AWS Command Line Interface (AWS CLI) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell. With minimal configuration, you can start using functionality equivalent to that provided by the browser-based AWS Management Console from the command prompt in your favorite terminal program such as Linux shell or the Windows command line.

You can also use Software Development Kits (SDKs) to interact with your AWS services. SDKs take the complexity out of coding by providing language-specific APIs for AWS services to enable you to develop cloud applications much faster.

In addition, you can also utilize aws-shell which is an integrated shell program for working with the AWS CLI. Take note that this is just an interactive productivity booster for the AWS CLI which is why you have to install the CLI first before you can use this.

You need to have access keys in order to use the AWS CLI. Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. If you don’t have access keys, you can create them from the AWS Management Console. As a best practice, do not use the AWS account root user access keys for any task where it’s not required.

Hence, the correct answers are AWS Command Line Interface and AWS SDKs.

Elastic Network Interface is incorrect because this is just a logical networking component in a VPC that represents a virtual network card.

AWS Artifact is incorrect because it simply provides on-demand access to AWS’ security and compliance reports and select online agreements. The compliance reports include Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls.

AWS Organizations is incorrect because this is just an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.

13
Q

For security audit, a company needs to download the compliance-related documents in AWS such as ISO certifications, Payment Card Industry (PCI), and Service Organization Control (SOC) reports. Which of the following should they use to retrieve these files?

A.AWS Artifact
B.AWS Trusted Advisor
C.AWS CloudTrail
D.AWS Certificate Manager

A

A.AWS Artifact

Explanation:
AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).

All AWS Accounts have access to AWS Artifact. Root users and IAM users with admin permissions can download all audit artifacts available to their account by agreeing to the associated terms and conditions. You will need to grant IAM users with non-admin permissions access to AWS Artifact using IAM permissions. This allows you to grant a user access to AWS Artifact, while restricting access to other services and resources within your AWS Account.

Hence, the correct answer in this scenario is AWS Artifact.

AWS Trusted Advisor is incorrect because this is just an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.

AWS Certificate Manager is incorrect because this is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. This service does not store certifications or compliance-related documents.

AWS CloudTrail is incorrect because although this service is helpful for auditing your AWS resources, it doesn’t store any compliance-related documents which are mentioned in the scenario. This simply is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.

14
Q

Which of the following allows you to categorize and track your AWS costs on a detailed level?

A.AWS Budgets
B.Cost allocation tags
C.Consolidated Billing
D.Amazon Aurora Backtrack

A

B.Cost allocation tags

Explanation:
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique and can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level.

After you activate cost allocation tags, AWS uses these tags to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs. AWS provides two types of cost allocation tags: AWS-generated tags and user-defined tags. AWS defines, creates, and applies the AWS-generated tags for you, and you define, create, and apply the user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.

Hence, the correct answer is Cost Allocation Tags.
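To illustrate, here is a minimal sketch of how activated cost allocation tags let you categorize costs on a detailed level. The line items, costs, and the `user:Project` tag key are hypothetical (user-defined tags do receive a `user:` prefix in cost reports):

```python
from collections import defaultdict

# Hypothetical billing line items. "user:Project" stands in for a
# user-defined cost allocation tag key; all values are illustrative.
line_items = [
    {"service": "AmazonEC2", "cost": 120.0, "tags": {"user:Project": "web"}},
    {"service": "AmazonRDS", "cost": 80.0,  "tags": {"user:Project": "web"}},
    {"service": "AmazonS3",  "cost": 15.0,  "tags": {"user:Project": "analytics"}},
    {"service": "AmazonEC2", "cost": 40.0,  "tags": {}},  # untagged usage
]

def costs_by_tag(items, tag_key):
    """Categorize costs by the value of one cost allocation tag key."""
    totals = defaultdict(float)
    for item in items:
        value = item["tags"].get(tag_key, "(untagged)")
        totals[value] += item["cost"]
    return dict(totals)

print(costs_by_tag(line_items, "user:Project"))
# {'web': 200.0, 'analytics': 15.0, '(untagged)': 40.0}
```

Activating the tag key in the Billing console is what makes it available for a breakdown like this in Cost Explorer or a cost allocation report.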

Consolidated Billing is incorrect because this is just a feature in AWS Organizations that consolidates the billing and payments for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts.

AWS Budgets is incorrect because this just gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

Amazon Aurora Backtrack is incorrect because this is simply one of the features of Amazon Aurora that allows you to easily undo mistakes on your database. If you mistakenly perform a destructive action, such as a DELETE without a WHERE clause, you can backtrack the DB cluster to a time before the destructive action with minimal interruption of service.

15
Q

What service provides the lowest-cost storage option for retaining database backups which also allows occasional data retrieval in minutes?

A.Amazon EBS
B.Amazon Glacier
C.Amazon EFS
D.Amazon S3

A

B.Amazon Glacier

Explanation:
Amazon S3 Glacier and S3 Glacier Deep Archive are designed to be the lowest-cost Amazon S3 storage classes, allowing you to archive large amounts of data at a very low cost. This makes it feasible to retain all the data you want for use cases like data lakes, analytics, IoT, machine learning, compliance, and media asset archiving. You pay only for what you need, with no minimum commitments or up-front fees.

Amazon S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup. They are designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.

Customers can store data for as little as $1 per terabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon S3 Glacier provides three options for access to archives, from a few minutes to several hours, and S3 Glacier Deep Archive provides two access options ranging from 12 to 48 hours.

Hence, the correct answer is Amazon Glacier.
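A rough cost comparison makes the point. The per-GB-month prices below are illustrative assumptions, not current AWS pricing, and retrieval fees are ignored:

```python
# Illustrative per-GB-month prices (assumptions, not current AWS pricing):
PRICES = {
    "S3 Standard": 0.023,
    "S3 Glacier": 0.004,                 # roughly $4 per TB-month
    "S3 Glacier Deep Archive": 0.00099,  # roughly $1 per TB-month
}

def monthly_cost(storage_gb, storage_class):
    """Monthly storage cost for a given amount of data, ignoring retrieval fees."""
    return storage_gb * PRICES[storage_class]

backup_gb = 5_000  # 5 TB of database backups
for storage_class in PRICES:
    print(f"{storage_class}: ${monthly_cost(backup_gb, storage_class):,.2f}/month")
```

Even with approximate numbers, the archival classes come out far cheaper than S3 Standard for backups that are only occasionally retrieved.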

Amazon S3 is incorrect because its standard storage classes cost more than Glacier for retaining backups.

Amazon EBS is incorrect because this is a type of block storage that is not suitable to be used for database backups. It is also more expensive than Glacier.

Amazon EFS is incorrect because this is a POSIX-compliant file storage service suitable for use as a shared file system, not for storing backups.

16
Q

Which of the following Cost Management Tools allows you to track your Amazon EC2 Reserved Instance (RI) usage and view the discounted RI rate that was charged to your resources?

A.AWS Cost Explorer
B.AWS Cost and Usage report
C.AWS Systems Manager
D.AWS Budgets

A

B.AWS Cost and Usage report

Explanation:
The Cost and Usage Report is your one-stop-shop for accessing the most granular data about your AWS costs and usage. You can also load your cost and usage information into Amazon Athena, Amazon Redshift, Amazon QuickSight, or a tool of your choice.

It lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or hourly level.

With the AWS Cost & Usage Report, you can do the following:

Access comprehensive AWS cost and usage information

  • The AWS Cost & Usage Report gives you the ability to delve deeply into your AWS cost and usage data, understand how you are using your AWS implementation, and identify opportunities for optimization.

Track your Amazon EC2 Reserved Instance (RI) usage

  • Each line item of usage that receives an RI discount contains information about where the discount was allocated. This makes it easier to trace which instances are benefitting from specific reservations.

Leverage strategic data integrations

  • Using the Amazon Athena data integration feature, you can quickly query your cost and usage information using standard SQL queries. You can also upload your data directly into Amazon Redshift or Amazon QuickSight.

One of the core benefits of the AWS Cost & Usage Report is the wealth of RI-related data that is made available to you. It can be customized to collect cost and usage data at the daily and monthly levels of detail and is updated at least once per day. Each line item of usage that receives an RI discount contains information about where the discount came from. This makes it easier to trace which instances are benefitting from specific reservations. If desired, the AWS Cost & Usage Report can even be ingested directly into Amazon Athena, Amazon QuickSight, or your Amazon Redshift cluster.

Hence, the correct answer is AWS Cost and Usage report.
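The RI-discount tracing described above can be sketched with simulated line items. The field names are simplified stand-ins for the much richer real CUR schema, and all values are hypothetical:

```python
# Simulated Cost & Usage Report line items; field names are simplified
# from the real CUR schema, and all values are hypothetical.
cur_lines = [
    {"instance_id": "i-0aaa", "pricing_term": "Reserved",
     "reservation_arn": "arn:aws:ec2:us-east-1:111122223333:reserved-instances/ri-111",
     "effective_cost": 1.20, "ondemand_cost": 2.40},
    {"instance_id": "i-0bbb", "pricing_term": "OnDemand",
     "reservation_arn": None,
     "effective_cost": 2.40, "ondemand_cost": 2.40},
]

def ri_savings_by_reservation(lines):
    """Trace which reservation discounted which usage, and by how much."""
    savings = {}
    for line in lines:
        if line["pricing_term"] == "Reserved":
            arn = line["reservation_arn"]
            saved = line["ondemand_cost"] - line["effective_cost"]
            savings[arn] = savings.get(arn, 0.0) + saved
    return savings

print(ri_savings_by_reservation(cur_lines))
```

In the real report you would run this kind of aggregation as a SQL query in Amazon Athena rather than in application code.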

AWS Cost Explorer is incorrect because although it has a Reserved Instance Utilization and Coverage report, it doesn't show the discounted RI rate that was charged to your resources, unlike the AWS Cost and Usage report.

AWS Budgets is incorrect because it simply gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

AWS Systems Manager is incorrect because this is not a cost management tool. The Systems Manager simply provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

17
Q

A customer is planning to migrate some of their web applications that are hosted on-premises to AWS. Which of the following is a benefit of using AWS over virtualized data centers?

A.Higher variable costs and higher upfront costs
B.Lower variable costs and lower upfront costs
C.Higher variable costs and lower upfront costs
D.Lower variable costs and higher upfront costs

A

B.Lower variable costs and lower upfront costs

Explanation:
AWS helps customers reduce large capital investments with lower variable costs. AWS also gives customers the opportunity to work on their own terms without long-term lock-in, reducing the risks from unplanned capacity and demand. AWS helps finance teams plan and forecast more effectively, while giving IT teams the capacity and resources they need, even during peak periods.

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Hence, the correct answer is lower variable costs and lower upfront costs.

The option that says: Higher variable costs and higher upfront costs is incorrect because AWS actually provides the opposite: lower variable costs and lower upfront costs.

The option that says: Higher variable costs and lower upfront costs is incorrect because although it is true that AWS provides lower upfront costs, it does not have higher variable costs.

The option that says: Lower variable costs and higher upfront costs is incorrect because although AWS provides lower variable costs, it also offers lower upfront costs as well.

18
Q

A company is currently using an On-Demand EC2 instance for their application which they plan to migrate to a Reserved EC2 Instance to save on cost. Which of the following is the most cost-effective option if the application being hosted would be used for more than 3 years?

A.All Upfront Convertible Reserved Instance pricing for a 1-year term
B.No Upfront Convertible Reserved Instance pricing for a 3-year term
C.All Upfront, Standard Reserved Instance pricing for a 3-year term
D.No Upfront Standard Reserved Instance pricing for a 1-year term that is renewed every year

A

C.All Upfront, Standard Reserved Instance pricing for a 3-year term

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Standard Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.

Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

You can choose between three payment options when you purchase a Standard or Convertible Reserved Instance:

All Upfront option: You pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand instance pricing.

Partial Upfront option: You make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.

No Upfront option: Does not require any upfront payment and provides a discounted hourly rate for the duration of the term.

Comparing the prices of Standard and Convertible RIs across the various payment options for 1-year and 3-year terms shows a clear pattern.

As a general rule, Standard RI provides more savings than Convertible RI, which means that the former is the cost-effective option. The All Upfront option provides you with the largest discount compared with the other types. Opting for a longer compute reservation, such as the 3-year term, gives us greater discount as opposed to a shorter 1-year renewable term.
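That ordering can be sketched numerically with hypothetical rates. The figures below are assumptions chosen only to illustrate the comparison, not real AWS prices:

```python
HOURS_PER_YEAR = 8760

# Hypothetical rates for one instance type (assumptions, not real AWS prices).
options = {
    "On-Demand":                        {"upfront": 0,    "hourly": 0.10,  "years": 1},
    "Standard RI, 1-yr, No Upfront":    {"upfront": 0,    "hourly": 0.062, "years": 1},
    "Standard RI, 3-yr, All Upfront":   {"upfront": 1052, "hourly": 0.0,   "years": 3},
    "Convertible RI, 3-yr, No Upfront": {"upfront": 0,    "hourly": 0.048, "years": 3},
}

def three_year_cost(opt):
    """Total cost of running 24/7 for 3 years, renewing shorter terms as needed."""
    renewals = 3 // opt["years"]
    return renewals * (opt["upfront"] + opt["hourly"] * HOURS_PER_YEAR * opt["years"])

for name, opt in options.items():
    print(f"{name}: ${three_year_cost(opt):,.2f} over 3 years")
```

With these assumed rates the All Upfront, 3-year Standard RI comes out cheapest, matching the general rule above: Standard beats Convertible, All Upfront beats No Upfront, and a 3-year term beats renewed 1-year terms.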

Therefore, using an All Upfront, Standard Reserved Instance pricing for a 3-year term is the most cost-effective option in this scenario.

No Upfront Standard Reserved Instance pricing for a 1-year term that is renewed every year is incorrect because although using a Standard RI is more affordable than a Convertible RI, it is still much more cost-efficient if you use the All Upfront payment option for a longer 3-year term period.

All Upfront Convertible Reserved Instance pricing for a 1-year term is incorrect because although an All Upfront payment option provides you with the largest discount compared to On-Demand instance pricing, a Standard RI is still much more affordable to use than a Convertible RI.

No Upfront Convertible Reserved Instance pricing for a 3-year term is incorrect because although opting for a 3-year term is more affordable than a 1-year term, using a No Upfront Convertible Reserved Instance pricing option costs more money than using an All Upfront Standard RI.

19
Q

Users from different parts of the globe are complaining about the slow performance of the newly launched photo-sharing website in loading their high-resolution images. Which combination of AWS services should you use to serve the files with lowest possible latency? (Select TWO.)

A.Amazon Glacier
B.Amazon S3
C.Amazon Elastic File System
D.Amazon CloudFront
E.AWS Storage Gateway
A

B.Amazon S3
D.Amazon CloudFront

Explanation:
You can configure your application to deliver static content and decrease the end-user latency using Amazon S3 and Amazon CloudFront. High-resolution images, videos, and other static files can be stored in Amazon S3. CloudFront speeds up content delivery by leveraging its global network of data centers, known as edge locations, to reduce delivery time by caching your content close to your end-users.

CloudFront fetches your content from an origin, such as an Amazon S3 bucket, an Amazon EC2 instance, an Amazon Elastic Load Balancing load balancer or your own web server, when it’s not already in an edge location. CloudFront can be used to deliver your entire website or application, including dynamic, static, streaming, and interactive content. You can set your Amazon S3 bucket as the origin of your CloudFront web distribution.

Hence, the correct answers are Amazon S3 and Amazon CloudFront.
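The edge-caching behavior described above can be modeled with a small sketch. The latency numbers are illustrative assumptions, not measured CloudFront figures:

```python
# Toy model of a CloudFront edge location in front of an S3 origin.
# Latency numbers are illustrative assumptions, not measured figures.
ORIGIN_FETCH_MS = 250  # long round trip to a distant origin
EDGE_HIT_MS = 20       # nearby edge location

class EdgeLocation:
    def __init__(self, origin):
        self.origin = origin  # dict standing in for an S3 bucket
        self.cache = {}

    def get(self, key):
        """Return (object, latency_ms); a cache hit skips the origin fetch."""
        if key in self.cache:
            return self.cache[key], EDGE_HIT_MS
        obj = self.origin[key]  # cache miss: fetch from the origin
        self.cache[key] = obj   # keep a copy close to the end user
        return obj, ORIGIN_FETCH_MS + EDGE_HIT_MS

edge = EdgeLocation(origin={"photo.jpg": b"<high-res image bytes>"})
_, first = edge.get("photo.jpg")   # miss: origin fetch plus edge hop
_, second = edge.get("photo.jpg")  # hit: served from the edge cache
print(first, second)  # 270 20
```

Every user after the first one in that region gets the cached copy, which is why a global photo-sharing site sees much lower latency with CloudFront in front of S3.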

AWS Storage Gateway is incorrect because this is just a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage in AWS.

Amazon Elastic File System is incorrect because this is not a suitable service to use to store static content unlike S3. It is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. In addition, you can’t directly connect it to CloudFront, unlike S3.

Amazon Glacier is incorrect because this is primarily used for data archival with usually a long data retrieval time. Like EFS, you can’t directly connect it to CloudFront too, unlike Amazon S3.

20
Q

Among the following services, which is the most suitable one to use to store the results of I/O-intensive SQL database queries to improve application performance?

A.Amazon DynamoDB Accelerator (DAX)
B.AWS Greengrass
C.Amazon CloudFront
D.Amazon ElastiCache

A

D.Amazon ElastiCache

Explanation:
Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, run, and scale popular open source compatible in-memory data stores. With this service, you can build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores.

The in-memory caching provided by Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine).

In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally-intensive calculations.

Hence, the correct answer in this scenario is Amazon ElastiCache.
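The cache-aside pattern ElastiCache enables can be sketched with a plain dictionary standing in for the Redis or Memcached node; the query and its results are hypothetical:

```python
import time

cache = {}  # a dict stands in for a Redis/Memcached node in this sketch

def expensive_query(sql):
    """Stand-in for an I/O-intensive SQL query against a relational database."""
    time.sleep(0.05)  # simulate slow disk/network I/O
    return [("row", 1), ("row", 2)]

def cached_query(sql):
    """Cache-aside pattern: check the in-memory store before the database."""
    if sql in cache:
        return cache[sql]          # fast path: served from memory
    result = expensive_query(sql)  # slow path: hit the database
    cache[sql] = result            # populate the cache for next time
    return result

q = "SELECT * FROM photos WHERE likes > 100"
first = cached_query(q)    # slow: goes to the database
second = cached_query(q)   # fast: served from the cache
print(first == second)  # True
```

In production the dictionary would be a shared ElastiCache cluster, so every application server benefits from results cached by any of them.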

AWS Greengrass is incorrect because this is just a software that lets you run local compute, messaging, data caching, sync, and ML inference capabilities on connected devices in a secure way.

Amazon CloudFront is incorrect because this is a global CDN service that accelerates delivery of your websites, APIs, video content or other web assets to your customers around the world. A CDN provides you the ability to utilize its global network of edge locations to deliver a cached copy of web content such as videos, webpages, and images, but not the results of I/O-intensive SQL database queries. The more suitable service to use here is Amazon ElastiCache.

Amazon DynamoDB Accelerator (DAX) is incorrect because although this is a caching feature, it is only applicable to DynamoDB which is a NoSQL database. Remember that the requirement says that you need to store the results of I/O-intensive SQL database queries.

21
Q
Which of the following provides you the most granular data about your AWS costs and usage and also lets you load that information into Amazon Athena, Amazon Redshift, Amazon QuickSight, or a tool of your choice?
A.AWS Cost and Usage report
B.AWS Cost Explorer
C.Consolidated Billing
D.AWS Budgets
A

A.AWS Cost and Usage report

Explanation:
The Cost and Usage Report is your one-stop-shop for accessing the most granular data about your AWS costs and usage. You can also load your cost and usage information into Amazon Athena, Amazon Redshift, Amazon QuickSight, or a tool of your choice.

It lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or hourly level.

With the AWS Cost & Usage Report, you can do the following:

Access comprehensive AWS cost and usage information

  • The AWS Cost & Usage Report gives you the ability to delve deeply into your AWS cost and usage data, understand how you are using your AWS implementation, and identify opportunities for optimization.

Track your Amazon EC2 Reserved Instance (RI) usage

  • Each line item of usage that receives an RI discount contains information about where the discount was allocated. This makes it easier to trace which instances are benefitting from specific reservations.

Leverage strategic data integrations

  • Using the Amazon Athena data integration feature, you can quickly query your cost and usage information using standard SQL queries. You can also upload your data directly into Amazon Redshift or Amazon QuickSight.

Hence, the correct answer is AWS Cost and Usage report.

Consolidated Billing is incorrect because this just allows you to track the combined costs of all the linked AWS accounts in your organization. This feature does not provide the most granular data about your AWS costs and usage.

AWS Cost Explorer is incorrect because this is just a tool that enables you to view and analyze your costs and usage, but not at a granular level like the AWS Cost and Usage report. It also does not provide a way to load your cost and usage information into Amazon Athena, Amazon Redshift, Amazon QuickSight, or a tool of your choice.

AWS Budgets is incorrect because it simply gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

22
Q

There is a requirement to launch a new database in AWS where the customer assumes the responsibility and management of the guest operating system, including updates and security patches. Which of the following services should the customer use?

A.Amazon DynamoDB
B.Amazon EC2
C.Amazon Aurora
D.Amazon DocumentDB

A

B.Amazon EC2

Explanation:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as the Security OF the Cloud versus Security IN the Cloud.

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.

Since you have more control over your EC2 instance, you can install any database that you prefer and manage its guest operating system, including the required updates and security patches. You can also choose an AMI with a pre-installed database (such as PostgreSQL or MySQL) in the Amazon EC2 Dashboard to save your time. Hence, the correct answer is Amazon EC2.

Amazon Aurora is incorrect because this is a fully-managed service that automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups without any manual intervention from you.

Amazon DocumentDB is incorrect because this is a fully-managed document database service that supports MongoDB workloads. Just like Amazon Aurora, you don’t need to handle or manage the guest operating system of this service since it is already managed by AWS.

Amazon DynamoDB is incorrect because just like the other two options above, this is also a fully-managed database service which means that you won’t be able to manage the underlying guest operating system or apply the required updates and security patches.

23
Q

Which of the following best describes what CloudWatch is?
A.A metric repository
B.An automated security assessment service
C.A rule repository
D.An audit service that records all API calls made to your AWS account

A

A.A metric repository

Explanation:
Amazon CloudWatch is basically a metrics repository. An AWS service, such as Amazon EC2, puts metrics into the repository, and you retrieve statistics based on those metrics. If you put your own custom metrics into the repository, you can retrieve statistics on these metrics as well.

You can use metrics to calculate statistics and then present the data graphically in the CloudWatch console. You can configure alarm actions to stop, start, or terminate an Amazon EC2 instance when certain criteria are met. In addition, you can create alarms that initiate Amazon EC2 Auto Scaling and Amazon Simple Notification Service (Amazon SNS) actions on your behalf.

Hence, a metric repository is the option that best describes CloudWatch.
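The put-metrics/retrieve-statistics model can be sketched locally. The two functions below are simplified stand-ins for the real CloudWatch API calls of the same names, not the actual boto3 signatures:

```python
from collections import defaultdict
from statistics import mean

# Minimal sketch of CloudWatch's model: services *put* datapoints into a
# metrics repository, and you *retrieve statistics* computed over them.
repository = defaultdict(list)

def put_metric_data(namespace, metric_name, value):
    repository[(namespace, metric_name)].append(value)

def get_metric_statistics(namespace, metric_name):
    datapoints = repository[(namespace, metric_name)]
    return {"Average": mean(datapoints),
            "Maximum": max(datapoints),
            "SampleCount": len(datapoints)}

for value in [35.0, 60.0, 91.0]:  # e.g. CPUUtilization readings from EC2
    put_metric_data("AWS/EC2", "CPUUtilization", value)

stats = get_metric_statistics("AWS/EC2", "CPUUtilization")
print(stats)
```

An alarm is then just a rule evaluated against such statistics, e.g. trigger an Auto Scaling or SNS action when the average crosses a threshold.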

The option that says: an audit service that records all API calls made to your AWS account is incorrect because this describes CloudTrail and not CloudWatch.

The option that says: a rule repository is incorrect because this is more closely related to AWS Config.

The option that says: an automated security assessment service is incorrect because this description fits the Amazon Inspector service and not CloudWatch.

24
Q

A company is planning to launch a new system in AWS but they do not have an employee who has an AWS-related expertise. Which of the following can help the company to design, architect, build, migrate, and manage their workloads and applications on AWS?
A.AWS Marketplace
B.Technical Account Manager
C.AWS Partner Network Technology Partners
D.AWS Partner Network Consulting Partners

A

D.AWS Partner Network Consulting Partners

Explanation:
The AWS Partner Network (APN) is focused on helping partners build successful AWS-based businesses to drive superb customer experiences. This is accomplished by developing a global ecosystem of Partners with specialties unique to each customer’s needs.

There are two types of APN Partners:

  1. APN Consulting Partners
  2. APN Technology Partners

APN Consulting Partners are professional services firms that help customers of all sizes design, architect, migrate, or build new applications on AWS. Consulting Partners include System Integrators (SIs), Strategic Consultancies, Resellers, Digital Agencies, Managed Service Providers (MSPs), and Value-Added Resellers (VARs).

APN Technology Partners provide software solutions that are either hosted on, or integrated with, the AWS platform. Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, developer tools, management and security vendors.

Hence, the correct answer in this scenario is APN Consulting Partners.

APN Technology Partners is incorrect because this only provides software solutions that are either hosted on, or integrated with, the AWS platform. You should use APN Consulting Partners instead as this program helps customers to design, architect, migrate, or build new applications on AWS which is what is needed in the scenario.

AWS Marketplace is incorrect because this just provides a new sales channel for independent software vendors (ISVs) and Consulting Partners to sell their solutions to AWS customers. This makes it easy for customers to find, buy, deploy, and manage software solutions, including SaaS, in a matter of minutes.

Technical Account Manager is incorrect because this is just a part of AWS Enterprise Support, which provides advocacy and guidance to help plan and build solutions using best practices, coordinates access to subject matter experts and product teams, and proactively keeps your AWS environment operationally healthy.

25
Q

A company is designing a new cloud architecture for its mission-critical application in AWS which must be highly-available. Which of the following is the recommended pattern to meet this requirement?
A.Make sure that each component of the application has high bandwidth and low-latency network connectivity using ENIs
B.Adopt a monolithic application architecture
C.Deploy an Amazon EC2 Spot Fleet with a diversified allocation strategy
D.Use multiple Availability Zones to ensure that the application can handle the failure of any single component

A

D.Use multiple Availability Zones to ensure that the application can handle the failure of any single component

Explanation:
At AWS, Availability Zones are the core of their infrastructure architecture and they form the foundation of AWS’s and customers’ reliability and operations. Availability Zones are designed for physical redundancy and provide resilience, enabling uninterrupted performance, even in the event of power outages, Internet downtime, floods, and other natural disasters.

Amazon EC2 is hosted in multiple locations worldwide. These locations are composed of Regions and Availability Zones. Each Region is a separate geographic area. Each Region has multiple, isolated locations known as Availability Zones. Amazon EC2 provides you the ability to place resources, such as instances, and data in multiple locations. Resources aren’t replicated across AWS Regions unless you do so specifically.

Amazon operates state-of-the-art, highly-available data centers. Although rare, failures can occur that affect the availability of instances that are in the same location. If you host all your instances in a single location that is affected by such a failure, none of your instances would be available.

Hence, the correct answer is: Use multiple Availability Zones to ensure that the application can handle the failure of any single component.

The option that says: Make sure that each component of the application has high bandwidth and low-latency network connectivity using ENIs is incorrect because improving the network connectivity through the use of Elastic Network Interfaces (ENIs) is not enough to make your architecture highly available. You still need to deploy your application to multiple Availability Zones.

The option that says: Deploy an Amazon EC2 Spot Fleet with a diversified allocation strategy is incorrect because although using a diversified allocation strategy for your EC2 Spot Fleet can improve the availability of your compute capacity, this solution is still inappropriate since Spot Instances can be interrupted by AWS.

The option that says: Adopt a monolithic application architecture is incorrect because this type of architecture is already obsolete and should be replaced with modern, microservices architecture.
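To make the idea concrete, here is a minimal CloudFormation sketch (not a complete, deployable template) of the multi-AZ pattern the answer describes: an Auto Scaling group spread across subnets in two Availability Zones, plus an RDS instance with Multi-AZ enabled. The subnet IDs, launch template ID, and credentials are placeholders.

```yaml
Resources:
  AppServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      VPCZoneIdentifier:            # subnets in two different Availability Zones
        - subnet-aaa11111           # placeholder, subnet in AZ 1
        - subnet-bbb22222           # placeholder, subnet in AZ 2
      LaunchTemplate:
        LaunchTemplateId: lt-0abc1234   # placeholder launch template
        Version: "1"
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MultiAZ: true                 # maintains a standby replica in a second AZ
      MasterUsername: admin
      MasterUserPassword: CHANGE_ME # placeholder; use Secrets Manager in practice
```

With `MultiAZ: true`, RDS fails over to the standby automatically, and the Auto Scaling group keeps serving traffic from the surviving AZ if one zone goes down.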

26
Q

Which of the following shares a collection of offerings to help you achieve specific business outcomes related to enterprise cloud adoption through paid engagements in several specialty practice areas?

A.AWS Technical Account Manager
B.AWS Professional Services
C.Concierge Support
D.AWS Enterprise Support

A

B.AWS Professional Services

Explanation:
AWS Professional Services shares a collection of offerings to help you achieve specific outcomes related to enterprise cloud adoption. Each offering delivers a set of activities, best practices, and documentation reflecting our experience supporting hundreds of customers in their journey to the AWS Cloud. AWS Professional Services’ offerings use a unique methodology based on Amazon’s internal best practices to help you complete projects faster and more reliably while accounting for evolving expectations and dynamic team structures along the way.
AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. The guidance and best practices provided by the framework help you build a comprehensive approach to cloud computing across your organization, and throughout your IT lifecycle. Using the AWS CAF helps you realize measurable business benefits from cloud adoption faster and with less risk.

Hence, the correct answer in this scenario is AWS Professional Services.

AWS Enterprise Support is incorrect because this plan provides 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.

Concierge Support is incorrect because this is a team composed of AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on running your business.

AWS Technical Account Manager is incorrect because this is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.

27
Q

Which of the following is an advantage of using managed services like RDS, ElastiCache, and CloudSearch in AWS?

A.Frees up the customer from the task of choosing and optimizing the underlying instance type and size of the service
B.Automatically scales the capacity without any customer intervention
C.Better performance than customer-managed services such as EC2 instances
D.Simplifies all of your OS patching and backup activities to help keep your resources current and secure

A

D.Simplifies all of your OS patching and backup activities to help keep your resources current and secure

Explanation
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

Periodically, Amazon RDS performs maintenance on Amazon RDS resources. Maintenance most often involves updates to the DB instance’s underlying hardware, underlying operating system (OS), or database engine version. Updates to the operating system most often occur for security issues and should be done as soon as possible.

For better performance, you can upgrade and customize the underlying instance type of your Amazon Relational Database Service (Amazon RDS) instance. The same goes for other managed services like Amazon ElastiCache and Amazon CloudSearch. You can select the right instance class, type, and size based on your use case.

Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, run, and scale popular open-source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high-throughput and low-latency in-memory data stores. Amazon ElastiCache is a popular choice for gaming, ad-tech, financial services, healthcare, and IoT apps. You no longer need to perform management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, failure recovery, and backups. ElastiCache continuously monitors your clusters to keep your workloads up and running so that you can focus on higher-value application development.

Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. Amazon CloudSearch is a fully managed custom search service. Hardware and software provisioning, setup and configuration, software patching, data partitioning, node monitoring, scaling, and data durability are handled for you.

Hence, the advantage of using managed services like RDS, ElastiCache, and CloudSearch in AWS is that it simplifies all of your OS patching and backup activities to help keep your resources current and secure.

The option that says: Frees up the customer from the task of choosing and optimizing the underlying instance type and size of the service is incorrect because customers can still choose and optimize the underlying instance being used for their RDS, ElastiCache, and CloudSearch service.

The option that says: Automatically scales the capacity without any customer intervention is incorrect because you are still responsible for scaling your managed service by optimizing the underlying instance size and type. These AWS services will not automatically scale by default.

The option that says: Better performance than customer-managed services such as EC2 instances is incorrect because this is not always the case for AWS-managed services. In fact, it is possible that the customer-managed services can outperform the AWS-managed services based upon the class, type or size of EC2 instance being used.

28
Q

Which Amazon EC2 instance purchasing option lets you take advantage of unused EC2 capacity in the AWS Cloud and provides up to a 90% discount compared to On-Demand prices?

A.Convertible Reserved Instance
B.Standard Reserved Instance
C.Spot Instance
D.Dedicated host

A

C.Spot Instance

Explanation:
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads. Because Spot Instances are tightly integrated with AWS services such as Auto Scaling, EMR, ECS, CloudFormation, Data Pipeline and AWS Batch, you can choose how to launch and maintain your applications running on Spot Instances.

Moreover, you can easily combine Spot Instances with On-Demand Instances and RIs to further optimize workload cost with performance. Due to the operating scale of AWS, Spot Instances can offer the scale and cost savings to run hyper-scale workloads. You also have the option to hibernate, stop, or terminate your Spot Instances when EC2 reclaims the capacity back with two minutes of notice. Only on AWS do you have easy access to unused compute capacity at such massive scale, all at up to a 90% discount.
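A back-of-the-envelope calculation shows what "up to 90% off" means in practice. The On-Demand hourly rate below is a made-up placeholder; real Spot discounts vary by instance type, Region, and Availability Zone.

```python
# Cost comparison sketch: assumed On-Demand rate vs. a 90% Spot discount.
ON_DEMAND_HOURLY = 0.10      # assumed On-Demand rate, USD/hour (placeholder)
SPOT_DISCOUNT = 0.90         # the "up to 90%" discount from the passage
HOURS_PER_MONTH = 730        # average hours in a month

spot_hourly = ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT)
on_demand_monthly = ON_DEMAND_HOURLY * HOURS_PER_MONTH
spot_monthly = spot_hourly * HOURS_PER_MONTH

print(f"On-Demand: ${on_demand_monthly:.2f}/month")  # $73.00/month
print(f"Spot:      ${spot_monthly:.2f}/month")       # $7.30/month
```

At the maximum discount, a workload that would cost $73/month On-Demand runs for about $7.30 on Spot, which is why interruption-tolerant batch and CI/CD jobs favor it.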

Hence, the correct answer is Spot Instance.

Standard Reserved Instance is incorrect because although it provides a significant discount, it is only up to 75% and not 90%, compared to On-Demand instance pricing.

Convertible Reserved Instance is incorrect because this type doesn’t utilize unused EC2 capacity and actually costs more than a Standard Reserved Instance. This one only provides you with a discount of up to 54% compared to On-Demand Instances and can be purchased for a 1-year or 3-year term.

Dedicated Host is incorrect because this is actually a physical EC2 server dedicated for your use, not just unused EC2 capacity in AWS.

29
Q

Which of the following are the things that Amazon CloudWatch Logs can accomplish? (Select TWO.)
A.Record AWS Management Console actions and API calls
B.Monitor application logs from Amazon EC2 instances
C.Store your log data at absolutely no charge
D.Create alarms that automatically stop, terminate, reboot, or recover your EC2 instances
E.Adjust the retention policy for each log

A

B.Monitor application logs from Amazon EC2 instances
E.Adjust the retention policy for each log

Explanation:
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources.

CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.

Monitor Logs from Amazon EC2 Instances – You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. CloudWatch Logs uses your log data for monitoring; so, no code changes are required. For example, you can monitor application logs for specific literal terms (such as “NullReferenceException”) or count the number of occurrences of a literal term at a particular position in log data (such as “404” status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Log data is encrypted while in transit and while it is at rest.
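The metric-filter behavior described above can be illustrated locally. This sketch counts "404" status codes at a fixed position in Apache-style access log lines; the sample lines are fabricated, and a real metric filter runs inside CloudWatch Logs rather than in your code.

```python
# Local illustration of a CloudWatch Logs metric filter: count occurrences
# of a term at a particular position in log data.
sample_logs = [
    '10.0.0.1 - - [10/Oct/2025:13:55:36] "GET /index.html HTTP/1.1" 200 2326',
    '10.0.0.2 - - [10/Oct/2025:13:55:40] "GET /missing.png HTTP/1.1" 404 209',
    '10.0.0.3 - - [10/Oct/2025:13:55:41] "GET /also-gone HTTP/1.1" 404 209',
]

def count_status(lines, status):
    """Count lines whose HTTP status field (second-to-last token) matches."""
    return sum(1 for line in lines if line.split()[-2] == status)

print(count_status(sample_logs, "404"))  # 2
```

In CloudWatch, the matched count would be published to a metric you specify, and an alarm on that metric can notify you when the error rate exceeds a threshold.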

Monitor AWS CloudTrail Logged Events – You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail and use the notification to perform troubleshooting.

Log Retention – By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping indefinite retention or choosing a retention period between one day and 10 years.

Archive Log Data – You can use CloudWatch Logs to store your log data in highly durable storage. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it.

Log Route 53 DNS Queries – You can use CloudWatch Logs to log information about the DNS queries that Route 53 receives.

Hence, the correct answers are: monitor application logs from Amazon EC2 Instances and adjust the retention policy for each log group.

The option that says: record AWS Management Console actions and API calls is incorrect because this refers to CloudTrail and not CloudWatch Logs.

The option that says: create alarms that automatically stop, terminate, reboot, or recover your EC2 instances is incorrect because this is actually a task that can be accomplished by CloudWatch Alarms.

The option that says: store your log data at absolutely no charge is incorrect because this service is not entirely free and you still have to pay for your usage.

30
Q
Which service provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services?
A.AWS CloudTrail
B.AWS Config
C.Amazon CloudWatch
D.AWS Infrastructure Event Management
A

A.AWS CloudTrail

Explanation
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

With AWS CloudTrail, you can simplify your compliance audits by automatically recording and storing event logs for actions made within your AWS account. Integration with Amazon CloudWatch Logs provides a convenient way to search through log data, identify out-of-compliance events, accelerate incident investigations, and expedite responses to auditor requests.

It also increases visibility into your user and resource activity by recording AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.
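A CloudTrail event record is JSON, and the fields named above (`eventName`, `userIdentity`, `sourceIPAddress`, `eventTime`) are real field names in the record format. The sample record below is fabricated for illustration; real records contain many more fields.

```python
import json

# Parse a (fabricated) CloudTrail event record and summarize who did what.
record = json.loads("""
{
  "eventTime": "2025-01-15T09:12:33Z",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.42",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
""")

summary = (f"{record['userIdentity']['userName']} called "
           f"{record['eventName']} from {record['sourceIPAddress']} "
           f"at {record['eventTime']}")
print(summary)  # alice called RunInstances from 203.0.113.42 at 2025-01-15T09:12:33Z
```

This is exactly the kind of question event history answers: which user made which API call, from which IP, and when.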

Hence, the correct answer in this scenario is AWS CloudTrail.

Amazon CloudWatch is incorrect because this service is primarily used to collect monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.

AWS Config is incorrect because this is just a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It doesn’t provide you with an event history of your AWS account activity, unlike CloudTrail.

AWS Infrastructure Event Management is incorrect because this is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events. The type of “events” that this program tracks relates to business operations such as an application launch, data center migration, or marketing event, which is quite different from the type of “event” that CloudTrail tracks.

31
Q

What are the benefits of using Edge locations in AWS? (Select TWO.)

A.Seamlessly extends AWS to edge devices so they can act locally on the data they generate, while using the cloud for management, analytics, and durable storage
B.Offers an easy-to-use edge computing device that is helpful for data migration
C.Improves application performance by delivering content close to your users
D.Provides highly scalable object storage for your static content
E.Provides caching which reduces the load on your origin servers

A

C.Improves application performance by delivering content close to your users
E.Provides caching which reduces the load on your origin servers

Explanation:
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. Basically, an Edge location is just a site that CloudFront uses to cache copies of your content for faster delivery to users at any location.

When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Regional edge caches are CloudFront locations that are deployed globally, close to your viewers. They’re located between your origin server and the points of presence (POPs), the global edge locations that serve content directly to viewers. As objects become less popular, individual POPs might remove those objects to make room for more popular content. Regional edge caches have a larger cache than an individual POP, so objects remain in the cache longer at the nearest regional edge cache location. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin server, and improving overall performance for viewers.

Hence, the correct answers are:

  • Improves application performance by delivering content closer to your users
  • Provides caching which reduces the load on your origin servers

The option that says: Offers an easy-to-use edge computing device that is helpful for data migration is incorrect because this is the description for AWS Snowball Edge. An edge location is not commonly used for data migration and it is not related to edge computing devices.

The option that says: Seamlessly extends AWS to edge devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage is incorrect because this describes the AWS IoT Greengrass service which is quite different from Edge locations.

The option that says: Provides highly scalable object storage for your static content is incorrect because this refers to Amazon S3 and not Edge location.
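The "caching reduces origin load" benefit can be shown with a toy sketch. This is not CloudFront's actual implementation, just a dictionary standing in for an edge cache: only cache misses reach the origin, so repeated requests for the same object cost the origin nothing.

```python
# Toy edge cache: only misses go back to the origin server.
origin_fetches = 0

def fetch_from_origin(key):
    """Simulated origin request (fabricated content)."""
    global origin_fetches
    origin_fetches += 1
    return f"content-for-{key}"

edge_cache = {}

def edge_get(key):
    if key not in edge_cache:          # cache miss: fetch from origin
        edge_cache[key] = fetch_from_origin(key)
    return edge_cache[key]             # cache hit: served from the edge

for req in ["logo.png", "app.js", "logo.png", "logo.png", "app.js"]:
    edge_get(req)

print(origin_fetches)  # 2 (five viewer requests, but only two origin fetches)
```

Five viewer requests produce only two origin fetches; at CloudFront's scale, this is the mechanism behind both of the correct answers: lower latency for viewers and less load on the origin.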

32
Q
Which of the following is the most cost-effective AWS Support Plan to use if you need access to AWS Support API for programmatic case management?
A.Basic
B.Enterprise
C.Developer
D.Business
A

D.Business

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

In addition, customers with a Business or Enterprise support plan have access to these features:

  • Use-case guidance: what AWS products, features, and services to use to best support your specific needs.
  • AWS Trusted Advisor, which inspects customer environments. Then, Trusted Advisor identifies opportunities to save money, close security gaps, and improve system reliability and performance.
  • An API for interacting with Support Center and Trusted Advisor. This API allows for automated support case management and Trusted Advisor operations.
  • Third-party software support: help with Amazon Elastic Compute Cloud (EC2) instance operating systems and configuration. Also, help with the performance of the most popular third-party software components on AWS.

The AWS Support API provides access to some of the features of the AWS Support Center. This API allows programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status. AWS provides this access for AWS Support customers who have a Business or Enterprise support plan.

Since the Business plan is more affordable than the Enterprise, the correct answer is the Business support plan.

Both Basic and Developer support plans are incorrect since these types do not have access to the AWS Support API.

Enterprise support plan is incorrect because although this one has access to the AWS Support API, it is still more expensive compared with the Business plan. Remember that the scenario says to choose the most cost-effective AWS Support Plan.

33
Q
Which of the following can you use to connect your on-premises data center and your cloud architecture in AWS? (Select TWO.)
A.Amazon Route 53
B.VPC Peering
C.Virtual Private Gateway
D.NAT Gateway
E.Egress-Only Internet Gateway
A

A.Amazon Route 53
C.Virtual Private Gateway

Explanation:
Enterprise environments are often a mix of cloud, on-premises data centers, and edge locations. Hybrid cloud architectures help organizations integrate their on-premises and cloud operations to support a broad spectrum of use cases using a common set of cloud services, tools, and APIs across on-premises and cloud environments.

An Amazon VPC VPN connection can link your data center (or network) to your Amazon Virtual Private Cloud (VPC). A customer gateway is the anchor on your side of that connection. It can be a physical or software appliance. The anchor on the AWS side of the VPN connection is called a virtual private gateway.

In this setup, your network connects through the customer gateway over the VPN connection to the virtual private gateway, and then to the VPC. The VPN connection between the customer gateway and the virtual private gateway consists of two tunnels to provide increased availability for the Amazon VPC service. If there’s a device failure within AWS, your VPN connection automatically fails over to the second tunnel so that your access isn’t interrupted.

From time to time, AWS also performs routine maintenance on the virtual private gateway, which may briefly disable one of the two tunnels of your VPN connection. Your VPN connection automatically fails over to the second tunnel while this maintenance is performed. When you configure your customer gateway, it’s therefore important that you configure both tunnels.
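The pieces described above map directly onto CloudFormation resource types. This is a minimal sketch, not a complete template; the on-premises device IP, BGP ASN, and VPC ID are placeholders.

```yaml
Resources:
  CustomerGateway:                  # anchor on your (on-premises) side
    Type: AWS::EC2::CustomerGateway
    Properties:
      Type: ipsec.1
      IpAddress: 198.51.100.10      # placeholder public IP of your VPN device
      BgpAsn: 65000                 # placeholder ASN
  VirtualPrivateGateway:            # anchor on the AWS side
    Type: AWS::EC2::VPNGateway
    Properties:
      Type: ipsec.1
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: vpc-0123abcd           # placeholder VPC ID
      VpnGatewayId: !Ref VirtualPrivateGateway
  VPNConnection:                    # the two-tunnel IPsec connection
    Type: AWS::EC2::VPNConnection
    Properties:
      Type: ipsec.1
      CustomerGatewayId: !Ref CustomerGateway
      VpnGatewayId: !Ref VirtualPrivateGateway
```

Note how the template mirrors the explanation: the customer gateway and virtual private gateway are the two anchors, and the `VPNConnection` resource joins them.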

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.tutorialsdojo.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

This service can also help you create a hybrid cloud architecture using the Amazon Route 53 Resolver which provides recursive DNS for your Amazon VPC and on-premises networks over AWS Direct Connect or AWS Managed VPN.

NAT Gateway is incorrect because this just enables EC2 instances in a private subnet to connect to the Internet or other AWS services, but prevents the Internet from initiating a connection with those instances.

Egress-Only Internet Gateway is incorrect because this works like a NAT Gateway but for IPv6 traffic only. An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

VPC Peering is incorrect because this is just a networking connection between two VPCs, and not between your on-premises data center and VPC. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

34
Q

In AWS Trusted Advisor, which of the following options are included among the five categories being considered to analyze your AWS environment and provide the best practice recommendations? (Select TWO.)

A.Fault Tolerance
B.Storage Capacity
C.Performance
D.Infrastructure
E.Instance Usage
A

A.Fault Tolerance
C.Performance

Explanation:
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.

Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.

Trusted Advisor includes an ever-expanding list of checks in the following five categories:

Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.

Security – identification of security settings that could make your AWS solution less secure.

Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.

Performance – recommendations that can help to improve the speed and responsiveness of your applications.

Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.

Hence, the correct answers in this scenario are: Performance and Fault Tolerance.

All other options (Instance Usage, Infrastructure and Storage Capacity) are incorrect since these are not valid categories in Trusted Advisor.

35
Q

How can you apply and easily manage the common access permissions to a large number of IAM users in AWS?

A.Apply permissions to multiple IAM Users by using a cross-account role
B.Attach the exact same IAM Policy to all of the IAM Users
C.Attach the necessary policies or permissions required to a new IAM Group then afterwards, add the IAM Users to the IAM group
D.Attach the IAM Policy to an IAM Role then afterwards, associate that role to all of the IAM Users

A

C.Attach the necessary policies or permissions required to a new IAM Group then afterwards, add the IAM Users to the IAM group

Explanation:
An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group.

If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.

Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy. It is simply a way to attach policies to multiple users at one time.

Hence, the correct solution for this requirement is to: Attach the necessary policies or permissions required to a new IAM Group then afterwards, add the IAM Users to the IAM group.
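The group-based approach can be sketched in CloudFormation. The group name, managed policy ARN, and user names below are illustrative placeholders; `AdministratorAccess` is a real AWS managed policy.

```yaml
Resources:
  AdminsGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: Admins             # placeholder group name
      ManagedPolicyArns:            # permissions attached once, to the group
        - arn:aws:iam::aws:policy/AdministratorAccess
  AddUsersToAdmins:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref AdminsGroup
      Users:                        # placeholder IAM user names
        - alice
        - bob
```

Granting or revoking a user's permissions then becomes a membership change on the group rather than an edit to every user's attached policies.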

The option that says: Attach the exact same IAM Policy to all of the IAM Users is incorrect because this entails a high administrative overhead. Attaching the policy to each and every IAM User individually will take a lot of time compared to simply using an IAM Group.

The option that says: Attach the IAM Policy to an IAM Role then afterwards, associate that role to all of the IAM Users is incorrect because this is also not an effective way of applying the permissions to a large number of IAM Users. It is better to use IAM Groups to apply and easily manage the common access permissions to a large number of IAM users in AWS.

The option that says: Apply permissions to multiple IAM Users by using a cross-account role is incorrect because this is only applicable if you want to delegate access to resources that are in different AWS accounts that you own.

36
Q

Which of the following is a key financial benefit of migrating systems hosted on your on-premises data center to AWS?

A.Opportunity to replace variable capital expenses (CAPEX) with low upfront costs
B.Opportunity to replace upfront operational expenses (OPEX) with low variable operational expenses (OPEX)
C.Opportunity to replace upfront capital expenses (CAPEX) with low variable costs
D.Opportunity to replace variable operational expenses (OPEX) with low upfront capital expenses (CAPEX)

A

C.Opportunity to replace upfront capital expenses (CAPEX) with low variable costs

Explanation:
Amazon Web Services offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to deployment tools, directories to content delivery, over 140 AWS services are available.

New services can be provisioned quickly, without the upfront capital expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements.

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Hence, the correct answer in this scenario is: Opportunity to replace upfront capital expenses (CAPEX) with low variable costs.

The option that says: Opportunity to replace upfront operational expenses (OPEX) with low variable operational expenses (OPEX) is incorrect because although moving to AWS provides an opportunity for low variable expenditures, the main benefit is actually the opportunity to replace upfront capital expenses (CAPEX) and not the operational expenses (OPEX).

The option that says: Opportunity to replace variable operational expenses (OPEX) with low upfront capital expenses (CAPEX) is incorrect because the primary benefit is the opportunity to replace upfront capital expenses (CAPEX) and not the OPEX.

The option that says: Opportunity to replace variable capital expenses (CAPEX) with low upfront costs is incorrect because it is actually the other way around. AWS provides the opportunity to replace upfront capital expenses (CAPEX) of your on-premises data center with low variable cost.

37
Q

Which of the following cloud architecture principles below is followed if you distribute your workloads across multiple Availability Zones in AWS as well as using Amazon RDS Multi-AZ?

A.Think parallel
B.Implement elasticity
C.Decouple your components
D.Design for failure

A

D.Design for failure

Explanation:
There are various best practices that you can follow which can help you build an application in the cloud. The notable ones are:

  1. Design for failure
  2. Decouple your components
  3. Implement elasticity
  4. Think parallel

The Design for failure principle encourages you to be a pessimist when designing architectures in the cloud: assume things will fail. In other words, always design, implement, and deploy for automated recovery from failure.

In particular, assume that your hardware will fail. Assume that outages will occur. Assume that some disaster will strike your application. Assume that you will be slammed with more than the expected number of requests per second someday. Assume that with time your application software will fail too. By being a pessimist, you end up thinking about recovery strategies during design time, which helps in designing an overall system better.

Designing with the assumption that the underlying hardware will fail prepares you for the moment when it actually does. This design principle helps you build operations-friendly applications. If you extend this principle to proactively measure and balance load dynamically, you may be able to deal with the variance in network and disk performance that exists due to the multi-tenant nature of the cloud.

AWS specific tactics for implementing this best practice are as follows:

  1. Failover gracefully using Elastic IPs: An Elastic IP is a static IP address that is dynamically re-mappable. You can quickly remap it and fail over to another set of servers so that your traffic is routed to the new servers. This works well when you want to upgrade from old to new versions or in case of hardware failures.
  2. Utilize multiple Availability Zones: Availability Zones are conceptually like logical datacenters. By deploying your architecture to multiple availability zones, you can ensure high availability. Utilize Amazon RDS Multi-AZ deployment functionality to automatically replicate database updates across multiple Availability Zones.
  3. Maintain an Amazon Machine Image so that you can restore and clone environments very easily in a different Availability Zone. Maintain multiple database replicas across Availability Zones and set up hot replication.
  4. Utilize Amazon CloudWatch (or various real-time open source monitoring tools) to get more visibility and take appropriate actions in case of hardware failure or performance degradation. Set up an Auto Scaling group to maintain a fixed fleet size so that it replaces unhealthy Amazon EC2 instances with new ones.
  5. Utilize Amazon EBS and set up cron jobs so that incremental snapshots are automatically uploaded to Amazon S3 and data is persisted independent of your instances.
  6. Utilize Amazon RDS and set the retention period for backups, so that it can perform automated backups.
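Tactic 2 above (Multi-AZ replication) and tactic 6 (automated backups) can be sketched as boto3 parameters. This is a minimal sketch, not a production configuration: the identifier, engine, size, and password below are hypothetical placeholders, and the actual API call is shown only in a comment so no AWS account is touched.

```python
# Sketch: parameters for an RDS Multi-AZ deployment (all names/sizes are
# illustrative placeholders, not recommendations).
multi_az_params = {
    "DBInstanceIdentifier": "example-db",   # hypothetical identifier
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,                 # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",     # never hard-code real credentials
    "MultiAZ": True,                        # synchronous standby in another AZ
    "BackupRetentionPeriod": 7,             # days of automated backups (tactic 6)
}

# In a real environment these would be passed to the RDS client:
# import boto3
# boto3.client("rds").create_db_instance(**multi_az_params)
```

With `MultiAZ` set to `True`, RDS maintains a synchronous standby in a different Availability Zone and fails over to it automatically, which is exactly the "design for failure" behavior the tactic describes.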

By focusing on concepts and best practices - like designing for failure, decoupling the application components, understanding and implementing elasticity, combining it with parallelization, and integrating security in every aspect of the application architecture - cloud architects can understand the design considerations necessary for building highly scalable cloud applications.

Hence, the correct answer is: Design for failure

Decouple your components is incorrect because this principle simply reinforces the Service-Oriented Architecture (SOA) design principle that the more loosely coupled the components of the system, the bigger and better it scales. This can be implemented by using Amazon SQS to isolate components and act as a buffer between them.

Think parallel is incorrect because this principle simply internalizes the concept of parallelization when designing architectures in the cloud. It advocates not only implementing parallelization wherever possible but also automating it, because the cloud allows you to create a repeatable process very easily.

Implement elasticity is incorrect because this principle is primarily implemented by automating your deployment process and streamlining the configuration and build process of your architecture. This ensures that the system can scale without any human intervention.

38
Q

“Increase speed and ______” is one of the six advantages of Cloud Computing which refers to the reduction of acquisition time for making new compute resources available to your developers from weeks to just minutes.

A.High Availability
B.Reliability
C.Elasticity
D.Agility

A

D.Agility

Explanation
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.

Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

There are six advantages of using Cloud Computing:

  1. Trade capital expense for variable expense

– Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

  2. Benefit from massive economies of scale

– By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

  3. Stop guessing capacity

– Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.

  4. Increase speed and agility

– In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

  5. Stop spending money running and maintaining data centers

– Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

  6. Go global in minutes

– Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

Hence, the correct answer is Agility.

Elasticity is incorrect because this refers to the ability to provision the right amount of resources that you actually need, knowing you can instantly scale up or down with the needs of your business.

Reliability is incorrect because this is actually one of the pillars of the AWS Well-Architected Framework and not part of the six advantages of using Cloud Computing. This is the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

High Availability is incorrect because this is just a measurement of a system’s ability to provide its designed functionality even in the event of a system outage.

39
Q

Which of the following Amazon EC2 instance purchasing options can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses?

A.Reserved Instance
B.Dedicated Host
C.Dedicated Instance
D.On-Demand Instance

A

B.Dedicated Host

Explanation:
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses.

Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Microsoft Windows Server, Microsoft SQL Server, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, or other software licenses that are bound to VMs, sockets, or physical cores, subject to your license terms.

You can use Dedicated Hosts and Dedicated instances to launch Amazon EC2 instances on physical servers that are dedicated to your use. An important difference between a Dedicated Host and a Dedicated instance is that a Dedicated Host gives you additional visibility and control over how instances are placed on a physical server, and you can consistently deploy your instances to the same physical server over time. As a result, Dedicated Hosts enable you to use your existing server-bound software licenses and address corporate compliance and regulatory requirements.

The following table highlights the key similarities and differences in the features available to you when using Dedicated Hosts and Dedicated instances:

You have the option to launch instances onto a specific Dedicated Host, or you can let Amazon EC2 place the instances automatically. Controlling instance placement allows you to deploy applications to address licensing, corporate compliance, and regulatory requirements.
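The two steps described above, allocating a Dedicated Host and then controlling instance placement onto it, can be sketched as boto3-style parameters. The AZ, instance type, AMI ID, and host ID are hypothetical placeholders, and the API calls are only shown in comments.

```python
# Sketch: allocating a Dedicated Host, then pinning an instance to it
# (all resource IDs below are illustrative placeholders).
allocate_params = {
    "AvailabilityZone": "us-east-1a",
    "InstanceType": "m5.large",
    "Quantity": 1,
}

launch_params = {
    "ImageId": "ami-0123456789abcdef0",     # placeholder AMI
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "Placement": {
        "Tenancy": "host",                  # run on dedicated hardware
        "HostId": "h-0123456789abcdef0",    # pin to a specific Dedicated Host
    },
}

# Real calls would look like:
# ec2 = boto3.client("ec2")
# host_id = ec2.allocate_hosts(**allocate_params)["HostIds"][0]
# ec2.run_instances(**launch_params)
```

The `HostId` in `Placement` is what gives Dedicated Hosts their licensing advantage: by consistently targeting the same physical server, you can keep server-bound licenses tied to known sockets and cores.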

On-Demand Instances purchasing option is incorrect because this only enables you to pay for compute capacity per hour or per second depending on which instances you run. You cannot use your existing server-bound software licenses with this option.

Dedicated Instances purchasing option is incorrect because although Dedicated instances also run on dedicated hardware, Dedicated Hosts provide further visibility and control by allowing you to place your instances on a specific, physical server.

Reserved Instances purchasing option is incorrect as you would not be able to use your existing server-bound software licenses with this one. You have to use a Dedicated Host instead.

40
Q

What are the things that you can implement to improve the security of your Identity and Access Management (IAM) users? (Select TWO.)

A.Block incoming traffic via Network ACL
B.Block incoming traffic via Security Groups
C.Configure a strong password policy for your users
D.Enable AWS Mobile Push Notifications
E.Enable Multi-Factor Authentication (MFA)

A

C.Configure a strong password policy for your users
E.Enable Multi-Factor Authentication (MFA)

Explanation:
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

You can improve the security of your Identity and Access Management (IAM) users by applying the following IAM best practices:

Rotate credentials regularly: Change your own passwords and access keys regularly, and make sure that all IAM users in your account do as well. That way, if a password or access key is compromised without your knowledge, you limit how long the credentials can be used to access your resources. You can apply a password policy to your account to require all your IAM users to rotate their passwords. You can also choose how often they must do so.

Configure a strong password policy for your users: If you allow users to change their own passwords, require that they create strong passwords and that they rotate their passwords periodically. On the Account Settings page of the IAM console, you can create a password policy for your account. You can use the password policy to define password requirements, such as minimum length, whether it requires non-alphabetic characters, how frequently it must be rotated, and so on.

Enable MFA: For extra security, we recommend that you require multi-factor authentication (MFA) for all users in your account. With MFA, users have a device that generates a response to an authentication challenge. Both the user’s credentials and the device-generated response are required to complete the sign-in process. If a user’s password or access keys are compromised, your account resources are still secure because of the additional authentication requirement.
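The password-policy best practice above can be sketched as boto3 parameters for the account password policy. The specific thresholds (length, rotation age, reuse count) are illustrative choices, not AWS recommendations, and the API call is shown only as a comment.

```python
# Sketch: a strong account password policy (thresholds are illustrative).
password_policy = {
    "MinimumPasswordLength": 14,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "MaxPasswordAge": 90,            # force rotation every 90 days
    "PasswordReusePrevention": 5,    # disallow reusing the last 5 passwords
}

# Applied with:
# boto3.client("iam").update_account_password_policy(**password_policy)
```

Note how the policy covers both best practices at once: `MaxPasswordAge` enforces the "rotate credentials regularly" guidance, while the complexity flags implement the "strong password" requirements.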

Hence, the correct answers in this scenario are: Enable Multi-Factor Authentication (MFA) and Configure a strong password policy for your users.

The options that say: Block incoming traffic via Network ACL and Block incoming traffic via Security Groups are incorrect because these are related to VPC networking, not IAM.

The option that says: Enable AWS Mobile Push Notification is incorrect because this is just a feature of Amazon SNS and is not related to IAM.

41
Q

Which service should you use if you need a scalable, fast, and flexible nonrelational database service?

A.Amazon RDS
B.Amazon RedShift
C.Amazon DynamoDB
D.Amazon S3

A

C.Amazon DynamoDB

Explanation:
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. DynamoDB also offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data.
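The "no cluster scaling or capacity to manage" point above can be sketched as a table definition. The table and attribute names are hypothetical, and `PAY_PER_REQUEST` billing is used so there is no read/write capacity to provision; the API call appears only as a comment.

```python
# Sketch: defining a DynamoDB table with no servers or capacity to manage
# (table/attribute names are illustrative placeholders).
table_spec = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "OrderId", "AttributeType": "S"},  # S = string
    ],
    "KeySchema": [
        {"AttributeName": "OrderId", "KeyType": "HASH"},     # partition key
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand: no read/write capacity units
}

# Created with:
# boto3.client("dynamodb").create_table(**table_spec)
```

The schema declares only the key attributes; every other attribute is schemaless, which is the "flexible nonrelational" property the question is testing.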

For decades, the predominant data model that was used for application development was the relational data model used by relational databases such as Oracle, DB2, SQL Server, MySQL, and PostgreSQL. It wasn’t until the mid to late 2000s that other data models began to gain significant adoption and usage. To differentiate and categorize these new classes of databases and data models, the term ‘NoSQL’ was coined. Often the term ‘NoSQL’ is used interchangeably with ‘nonrelational’.

Hence, the correct answer is Amazon DynamoDB.

Amazon Redshift is incorrect because this is a data warehousing service that is specifically designed for online analytic processing (OLAP) and business intelligence (BI) applications which require complex queries against large datasets.

Amazon RDS or Amazon Relational Database Service is incorrect because it is primarily used as a relational database just as its name implies.

Amazon S3 is incorrect because this is commonly used as scalable object storage and not as a nonrelational database.

42
Q

Which among the options below is a highly available and scalable cloud Domain Name System (DNS) web service in AWS?

A.Rekognition
B.Active Directory Domain Service
C.Route 53
D.Lightsail

A

C.Route 53

Explanation:
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.tutorialsdojo.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.

Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a variety of routing types, including Latency Based Routing, Geo DNS, Geoproximity, and Weighted Round Robin—all of which can be combined with DNS Failover in order to enable a variety of low-latency, fault-tolerant architectures.

Using Amazon Route 53 Traffic Flow’s simple visual editor, you can easily manage how your end-users are routed to your application’s endpoints—whether in a single AWS region or distributed around the globe. Amazon Route 53 also offers Domain Name Registration – you can purchase and manage domain names such as example.com and Amazon Route 53 will automatically configure DNS settings for your domains.
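The name-to-IP translation described above (www.tutorialsdojo.com to 192.0.2.1) can be sketched as a Route 53 change batch. The zone ID, domain, and address are placeholders, and the API call is only a comment.

```python
# Sketch: a Route 53 change batch mapping a hostname to an IPv4 address
# (domain, IP, and zone ID below are illustrative placeholders).
change_batch = {
    "Comment": "Point www at the web tier",
    "Changes": [{
        "Action": "UPSERT",              # create the record, or update it
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",                 # A record: name -> IPv4 address
            "TTL": 300,                  # seconds resolvers may cache it
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }],
}

# Applied with:
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000000000000000",  # placeholder hosted zone ID
#     ChangeBatch=change_batch,
# )
```

The routing policies mentioned above (latency-based, geoproximity, weighted) are expressed on the same `ResourceRecordSet` structure via additional fields, so this record is the building block for all of them.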

Hence, the correct answer is Route 53.

Rekognition is incorrect because this is just a service that makes it easier for you to add powerful visual analysis to your applications.

Active Directory Domain Service is incorrect because this is just a core Windows service that provides the foundation for many enterprise-class Microsoft-based solutions, including Microsoft SharePoint, Microsoft Exchange, and .NET applications.

Lightsail is incorrect because this is just an easy-to-use cloud platform that offers everything you need to build an application or website, plus a cost-effective, monthly plan.

43
Q

Which of the following allows you to set coverage targets and receive alerts when your utilization drops below the threshold you define?

A.AWS Trusted Advisor
B.AWS Cost Explorer
C.AWS Budgets
D.Amazon CloudWatch Billing Alarm

A

C.AWS Budgets

Explanation:
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.

The AWS Budgets Dashboard is your hub for creating, tracking, and inspecting your budgets. From the AWS Budgets Dashboard, you can create, edit, and manage your budgets, as well as view the status of each of your budgets. You can also view additional details about your budgets, such as a high-level variance analysis and a budget criteria summary.

Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic.
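The reservation-utilization alerting described above can be sketched as an AWS Budgets definition. The budget name, account ID, and 80% threshold are hypothetical; a real RI utilization budget may also require a service cost filter, which is why one is included here, and the API call is only a comment.

```python
# Sketch: an RI utilization budget that alerts when utilization drops below
# a threshold (name, threshold, and account ID are illustrative).
budget = {
    "BudgetName": "ri-utilization-floor",
    "BudgetType": "RI_UTILIZATION",        # track reservation utilization
    "TimeUnit": "MONTHLY",
    "BudgetLimit": {"Amount": "80", "Unit": "PERCENTAGE"},  # utilization target
    "CostFilters": {
        # RI utilization budgets are scoped to a service (assumption: EC2 here)
        "Service": ["Amazon Elastic Compute Cloud - Compute"],
    },
}

# Created with:
# boto3.client("budgets").create_budget(
#     AccountId="123456789012",   # placeholder account ID
#     Budget=budget,
# )
```

Note the `Unit` of `PERCENTAGE`: unlike a cost budget in dollars, a utilization budget's limit is the coverage/utilization target the question refers to.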

Hence, the correct answer is AWS Budgets.

AWS Trusted Advisor is incorrect because this is just an online tool that provides you real-time guidance to help you provision your resources following AWS best practices.

Amazon CloudWatch Billing Alarm is incorrect because although you can use this to monitor your estimated AWS charges, this service still does not allow you to set coverage targets and receive alerts when your utilization drops below the threshold you define.

AWS Cost Explorer is incorrect because it only lets you visualize, understand, and manage your AWS costs and usage over time. You cannot define any threshold using this service, unlike AWS Budgets.

44
Q

A company is in the process of choosing the most suitable AWS Region to migrate their applications. Which of the following factors should they consider? (Select TWO.)

A.Proximity to your end-users for on-site visits to your on-premises data center
B.Zone Security
C.Potential volume discounts for the specific AWS Region
D.Enhance customer experiences by reducing latency to users
E.Support country-specific data sovereignty compliance requirements

A

D.Enhance customer experiences by reducing latency to users
E.Support country-specific data sovereignty compliance requirements
Explanation:
Companies around the world are moving to a cloud-based infrastructure to increase IT agility, gain unlimited scalability, improve reliability, and lower costs. They want the flexibility to expand their operations at a rapid pace without worrying about setting up new IT infrastructure. They want to enhance their end-user and customer experiences by minimizing latency, the time it takes for their data packets to travel, so they can avoid delays and interruptions.

As well, customers want to be able to easily support any country-specific data sovereignty requirements, which means they need the flexibility to have a wide selection of geographic regions of data centers from which to choose to deploy their application workloads.

The AWS Global Infrastructure delivers a cloud infrastructure companies can depend on—no matter their size, changing needs, or challenges. The AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest quality global network performance available today. Every component of the AWS infrastructure is designed and built for redundancy and reliability, from regions to networking links to load balancers to routers and firmware.

Hence, the correct answers in this scenario are: enhance customer experiences by reducing latency to users and support country-specific data sovereignty compliance requirements.

The option that says: proximity to your end-users for on-site visits to your on-premises data center is incorrect because an AWS Region is separate from your on-premises data center. When choosing an AWS Region, the factor that you should consider would be the proximity to your end-users in order to minimize latency and not for on-site visits.

The option that says: potential volume discounts for the specific AWS Region is incorrect because volume discounts can be attained through the use of AWS Organizations and Consolidated Billing.

The option that says: zone security is incorrect because this is just a customer-specific control based on the AWS Shared Responsibility Model. This is not a valid factor to consider in choosing the right AWS Region to deploy your applications.

45
Q

Which of the following characteristics correctly describes the Amazon Simple Storage Service? (Select TWO.)

A.An object storage service
B.A hybrid cloud storage service
C.A high-performance block storage service
D.A highly durable storage infrastructure
E.A durable, high throughput file system

A

A.An object storage service
D.A highly durable storage infrastructure

Explanation:
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world. Amazon S3 gives any developer access to the same highly scalable, highly available, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.

Amazon S3 provides customers with a highly durable storage infrastructure. It has a Versioning feature that offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving.
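The Versioning feature described above is enabled per bucket. A minimal sketch of the request follows; the bucket name is a placeholder and the API call appears only as a comment.

```python
# Sketch: enabling S3 Versioning on a bucket so accidental overwrites and
# deletes are recoverable (bucket name is an illustrative placeholder).
versioning_request = {
    "Bucket": "example-bucket",
    "VersioningConfiguration": {"Status": "Enabled"},
}

# Applied with:
# boto3.client("s3").put_bucket_versioning(**versioning_request)
```

Once `Status` is `Enabled`, deleting an object inserts a delete marker instead of destroying the data, which is the recovery behavior the explanation describes.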

Hence, the correct options that correctly describe Amazon S3 are:

  • An object storage service
  • A highly durable storage infrastructure

The option that says: A durable, high throughput file system is incorrect because this describes the Amazon Elastic File System (EFS) instead of Amazon S3. Amazon EFS is a fully-managed service that makes it easy to set up, scale, and cost-optimize file storage in the Amazon Cloud.

The option that says: A high-performance block storage service is incorrect because this describes Amazon Elastic Block Storage (EBS) instead of Amazon S3. Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.

The option that says: A hybrid cloud storage service is incorrect because this describes AWS Storage Gateway instead of Amazon S3. AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. The term “hybrid” refers to the connection of your on-premises data center to AWS.

46
Q

Which of the following options below is solely the responsibility of the customer in accordance with the AWS shared responsibility model?

A.Zone Security
B.Patching of the host operating system
C.Awareness and Training
D.Configuration Management

A

A.Zone Security

Explanation:
Deploying workloads on Amazon Web Services (AWS) helps streamline time-to-market, increase business efficiency, and enhance user performance for many organizations. But as you capitalize on this strategy, it is important to understand your role in securing your AWS environment. Based on the AWS Shared Responsibility Model, AWS provides a data center and network architecture built to meet the requirements of the most security-sensitive organizations, while you are responsible for securing services built on top of this infrastructure, notably including network traffic from remote networks.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.

Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.

Inherited Controls: Controls which a customer fully inherits from AWS.

  • Physical and Environmental controls

Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

  • Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
  • Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
  • Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.

Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.

Examples include:

  • Service and Communications Protection or Zone Security, which may require a customer to route or zone data within specific security environments.

Hence, the correct answer is Zone Security.

Both Configuration Management and Awareness & Training are incorrect because they are considered as shared controls between AWS and the customer.

Patching of the host operating system is incorrect because this is the responsibility of AWS. Take note that the customer is responsible for managing and patching the guest, and not the host, operating system.

47
Q
A company has web servers running on Amazon EC2 instances that access a RESTful API hosted on their on-premises data center. What kind of architecture is the company using?
A.Hybrid architecture
B.Serverless architecture
C.Platform as a Service (PaaS)
D.Software as a Service (SaaS)
A

A.Hybrid architecture

Explanation:
Enterprise environments are often a mix of cloud, on-premises data centers, and edge locations. Hybrid cloud architectures help organizations integrate their on-premises and cloud operations to support a broad spectrum of use cases using a common set of cloud services, tools, and APIs across on-premises and cloud environments.

Customers can seamlessly integrate their on-premises and cloud storage, networking, identity management, and security policies to enable use cases such as data center extension to the cloud, backup, and disaster recovery to the cloud, and hybrid data processing.

Since the company has web servers running on Amazon EC2 instances that access a RESTful API hosted on their on-premises data center, they are considered to be using a hybrid cloud computing deployment model. Hence, the correct answer is Hybrid architecture.

Serverless architecture is incorrect because the company is using EC2 instances for its web servers instead of Lambda or S3 as a static web hosting service. The serverless architecture enables you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning which you usually do if you have EC2 instances.

Platform as a Service (PaaS) is incorrect because this is not a type of architecture but rather a type of Cloud Computing Model which removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows customers to focus on the deployment and management of their applications.

Software as a Service (SaaS) is incorrect because this is just a type of Cloud Computing Model which provides you with a complete product that is run and managed by a specific service provider.

48
Q

Which of the following is true on how AWS lessens the time to provision your IT resources?
A.It provides an automated system of requesting and fulfilling IT resources from third-party vendors
B.It provides express service to deliver your servers to your data centers fast
C.It provides an AI-powered IT ticketing platform for fulfilling resource requests
D.It provides various ways to programmatically provision IT resources

A

D.It provides various ways to programmatically provision IT resources

Explanation:
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.

Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

With Cloud Computing, you can stop spending money running and maintaining data centers. You can then focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster. AWS provides you with various ways and tools to programmatically provision IT resources, such as the AWS CLI, the AWS APIs, and the web-based AWS Management Console.
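As a rough illustration of what "programmatically provision" means, the sketch below builds the parameters for an EC2 RunInstances API call; the AMI ID is a placeholder, and the actual SDK call is shown only as a comment since it requires AWS credentials.

```python
# Sketch: building the parameters for an EC2 RunInstances API call.
# The AMI ID below is a placeholder, not a real image.
def run_instances_params(ami_id, instance_type, count=1):
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

params = run_instances_params("ami-0123456789abcdef0", "t3.micro")
# With credentials configured, the AWS SDK (boto3) would send the request:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

The same request could equally be made from the AWS CLI (`aws ec2 run-instances`) or clicked through in the Management Console; all three paths hit the same API.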

Hence, the correct answer is: It provides various ways to programmatically provision IT resources.

The option that says: It provides an AI-powered IT ticketing platform for fulfilling resource requests is incorrect because AWS doesn’t have this kind of ticketing platform. What AWS actually does is it allows you to programmatically provision IT resources using AWS CLI, AWS API, and the web-based AWS Management Console.

The option that says: It provides an automated system of requesting and fulfilling IT resources from third-party vendors is incorrect because AWS itself is the cloud vendor and doesn't rely on third-party vendors to provision your resources.

The option that says: It provides express service to deliver your servers to your data centers fast is incorrect because AWS actually handles the underlying servers needed to run the cloud resources you requested. Remember that Cloud Computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the Internet and not from your on-premises data centers.

49
Q

What is the minimum number of Availability Zones that you should set up for your Application Load Balancer in order to create a highly available architecture?

A.3
B.1
C.4
D.2

A

D.2

Explanation:
Suppose that you start out running your app or website on a single EC2 instance, and over time, traffic increases to the point that you require more than one instance to meet the demand. You can launch multiple EC2 instances from your AMI and then use Elastic Load Balancing to distribute incoming traffic for your application across these EC2 instances. This increases the availability of your application. Placing your instances in multiple Availability Zones also improves the fault tolerance in your application. If one Availability Zone experiences an outage, traffic is routed to the other Availability Zone.

A load balancer serves as the single point of contact for clients. Clients send requests to the load balancer, and the load balancer sends them to targets, such as EC2 instances, in two or more Availability Zones. At the very minimum, you have to select at least two Availability Zones from your VPC. To configure your load balancer, you have to create target groups and then register targets with your target groups. You also create listeners to check for connection requests from clients, and listener rules to route requests from clients to the targets in one or more target groups.
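The two-AZ minimum can be made concrete with a small sketch that validates the inputs in the shape the ELBv2 CreateLoadBalancer API expects; the subnet IDs and name below are placeholders, not values from the question.

```python
# Sketch: an Application Load Balancer requires subnets in at least two
# Availability Zones; the API rejects a single-subnet request.
def load_balancer_params(name, subnet_ids):
    if len(subnet_ids) < 2:
        raise ValueError("An ALB needs subnets in at least two Availability Zones")
    return {"Name": name, "Subnets": list(subnet_ids), "Type": "application"}

# Two subnets in different AZs satisfy the minimum:
params = load_balancer_params("web-alb", ["subnet-0aaa1111", "subnet-0bbb2222"])
```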

Hence, the correct answer is 2 Availability Zones.

1 Availability Zone is incorrect because if there is an AZ outage then your application will be completely unavailable. You need to have at least 2 AZs to make your application highly available.

Both 3 and 4 Availability Zones are incorrect because although these will certainly provide a higher level of availability, you only need a minimum of 2 AZs to build a highly available architecture.

50
Q

In the VPC dashboard of your AWS Management Console, which of the following services or features below can you manage? (Select TWO.)

A.Route 53
B.Security Groups
C.CloudFront
D.Lambda
E.Network ACLs
A

B.Security Groups
E.Network ACLs

Explanation:
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.

You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.

In your VPC dashboard, you can manage all of the components of your VPCs such as the Subnets, Internet Gateways, NAT Gateways, Elastic IPs and many more. You can also control the security of your VPC by configuring the Network ACLs and Security Groups.

Hence, the correct answers are Network ACLs and Security Groups.

The other options (CloudFront, Lambda, and Route 53) are incorrect as these services have their own respective dashboards.

51
Q

A company is planning to adopt a hybrid cloud architecture with AWS. Which of the following can they use to assist them in estimating their costs? (Select TWO.)

A.AWS Pricing Calculator
B.Cost allocation tag
C.Consolidated Billing
D.AWS Total Cost of Ownership (TCO) Calculator
E.AWS Cost Explorer
A

A.AWS Pricing Calculator
D.AWS Total Cost of Ownership (TCO) Calculator

Explanation:
AWS offers you a pay-as-you-go approach for pricing of more than 165 cloud services. With AWS you pay only for the individual services you use, for as long as you use them, and without requiring long-term contracts or complex licensing. AWS pricing is similar to how you pay for utilities, such as water and electricity. You only pay for the services you consume, and when you stop using them, there are no additional costs or termination fees.

To estimate the costs of migrating on-premises infrastructure to AWS, you can use the AWS Total Cost of Ownership (TCO) Calculator. AWS helps you reduce Total Cost of Ownership (TCO) by reducing the need to invest in large capital expenditures and providing a pay-as-you-go model that empowers you to invest in the capacity you need and use it only when the business requires it.

The AWS Total Cost of Ownership (TCO) Calculator allows you to estimate the cost savings of using AWS and provides a detailed set of reports that can be used in executive presentations. The calculator also gives you the option to modify assumptions to best meet your business needs.

To estimate a bill, use the AWS Pricing Calculator. You can enter your planned resources by service, and the Pricing Calculator provides an estimated cost per month. The AWS Pricing Calculator is an easy-to-use online tool that enables you to estimate the monthly cost of AWS services for your use case based on your expected usage. It is continuously updated with the latest pricing for all AWS services in all Regions.

To forecast your costs, use the AWS Cost Explorer. Use cost allocation tags to divide your resources into groups, and then estimate the costs for each group.

Hence, the correct answers are AWS Pricing Calculator and AWS Total Cost of Ownership (TCO) Calculator.

AWS Cost Explorer is incorrect because this service can only forecast your costs based on your previous usage. Remember that the scenario says that the company is just planning to adopt a hybrid cloud architecture with AWS, which means that they can't use the Cost Explorer yet to forecast or estimate their costs.

Cost allocation tag is incorrect because this is primarily used to make it easier for you to categorize and track your AWS costs by tagging your resources.

Consolidated Billing is incorrect because this just allows you to track the combined costs of all the linked AWS accounts in your organization. This will not help you estimate your upcoming AWS costs.

52
Q

Which of the following is true regarding the Developer support plan in AWS? (Select TWO.)
A.Limited access to the 7 Core Trusted Advisor checks
B.No access to the AWS Support API
C.Full access to the AWS Support API
D.Has access to the full set of Trusted Advisor checks
E.Recommended if you have business and/or mission critical workloads in AWS

A

A.Limited access to the 7 Core Trusted Advisor checks
B.No access to the AWS Support API

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

In addition, customers with a Business or Enterprise support plan have access to these features:

  • Use-case guidance: what AWS products, features, and services to use to best support your specific needs.
  • AWS Trusted Advisor, which inspects customer environments. Then, Trusted Advisor identifies opportunities to save money, close security gaps, and improve system reliability and performance.
  • An API for interacting with Support Center and Trusted Advisor. This API allows for automated support case management and Trusted Advisor operations.
  • Third-party software support: help with Amazon Elastic Compute Cloud (EC2) instance operating systems and configuration. Also, help with the performance of the most popular third-party software components on AWS.

The AWS Support API provides access to some of the features of the AWS Support Center. This API allows programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status. AWS provides this access for AWS Support customers who have a Business or Enterprise support plan.

Hence, the correct answers are limited access to the 7 Core Trusted Advisor checks and no access to the AWS Support API.

The option that says: full access to the AWS Support API is incorrect because only the Business and Enterprise support plan have access to this feature.

The option that says: recommended if you have business and/or mission critical workloads in AWS is incorrect because the Developer support plan is recommended if you are only experimenting or testing in AWS.

The option that says: has access to the full set of Trusted Advisor checks is incorrect because a Developer support plan only has limited access to the 7 Core Trusted Advisor checks.

53
Q
Which service should a company use to centrally manage policies and consolidate billing across multiple AWS accounts?
A.AWS Cost Explorer
B.AWS Trusted Advisor
C.AWS Organizations
D.AWS Budgets
A

C.AWS Organizations

Explanation:
AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts.

Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. Through integrations with other AWS services, you can use Organizations to define central configurations and resource sharing across accounts in your organization. AWS Organizations is available to all AWS customers at no additional charge.

AWS Budgets is incorrect because it only gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define.

AWS Trusted Advisor is incorrect because this is just an online tool that provides you real-time guidance to help you provision your resources following AWS best practices.

AWS Cost Explorer is incorrect because it only lets you visualize, understand, and manage your AWS costs and usage over time. You cannot define any threshold using this service, unlike AWS Budgets.

54
Q

You need to host a new Microsoft SQL Server database in AWS for an urgent project. Which AWS services should you use to meet this requirement? (Select TWO.)

A.Amazon EC2
B.Amazon Relational Database Service (Amazon RDS)
C.Amazon Aurora
D.Amazon RedShift
E.Amazon Aurora Backtrack
A

A.Amazon EC2
B.Amazon Relational Database Service (Amazon RDS)

Explanation:
Amazon Web Services offers you the flexibility to run Microsoft SQL Server for as much or as little time as you need and select from a number of versions and editions. SQL Server on Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS) gives you complete control over every setting, just like when it’s installed on-premises. Amazon Relational Database Service (Amazon RDS) is a managed service that takes care of all the maintenance, backups, and patching for you.

Hence, the correct answers in this scenario are: Amazon EC2 and Amazon Relational Database Service (Amazon RDS).

Amazon Aurora is incorrect because this is primarily used as a MySQL or PostgreSQL-compatible relational database. Although you can use the AWS Schema Conversion Tool to migrate your existing Microsoft SQL Server to Amazon Aurora, this service is still not applicable in this scenario since the requirement is urgent and you will be hosting a brand new database, not an already existing one.

Amazon Redshift is incorrect because this is just a fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. This service can’t be used to host a relational database like Microsoft SQL Server.

Amazon Aurora Backtrack is incorrect because this is just a feature of Amazon Aurora which allows you to restore or “backtrack” a DB cluster to a specific time, without restoring data from a backup. Hence, this is not a suitable option to host a Microsoft SQL Server database. This feature somewhat rewinds the DB cluster to the time you specify. Backtracking is not a replacement for backing up your DB cluster so that you can restore it to a point in time.

55
Q
A company has a hybrid cloud architecture where its on-premises data center interacts with its cloud resources in AWS. Which of the following services in AWS could you use to deploy a web application to servers running on-premises? (Select TWO.)
A.AWS CloudFormation
B.AWS Elastic Beanstalk
C.AWS CodeDeploy
D.AWS Batch
E.AWS OpsWorks
A

C.AWS CodeDeploy
E.AWS OpsWorks

Explanation:
Enterprise environments are often a mix of cloud, on-premises data centers, and edge locations. Hybrid cloud architectures help organizations integrate their on-premises and cloud operations to support a broad spectrum of use cases using a common set of cloud services, tools, and APIs across on-premises and cloud environments.

Customers can seamlessly integrate their on-premises and cloud storage, networking, identity management, and security policies to enable use cases such as data center extension to the cloud, backup, and disaster recovery to the cloud, and hybrid data processing.

AWS offers services that integrate application deployment and management across on-premises and cloud environments for a robust hybrid architecture. Below are the following services that you can use to manage or deploy applications to your servers running on-premises:

OpsWorks – AWS OpsWorks is a configuration management service that helps customers configure and operate applications, both on-premises and in the AWS Cloud, using Chef and Puppet.

CodeDeploy – AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier to rapidly release new features, avoids downtime during application deployment, and handles the complexity of updating applications.

Hence, the correct answers in this scenario are AWS OpsWorks and AWS CodeDeploy.

Both AWS CloudFormation and AWS Elastic Beanstalk are incorrect because these services can only deploy applications to your AWS resources and not to the servers located in your on-premises data center.

AWS Batch is incorrect because this service simply has a set of batch management capabilities that enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It doesn’t have the capability to deploy applications to your on-premises servers.

56
Q

What should you provide to your developers to allow them to access your AWS services through the AWS CLI?

A.SSH Keys
B.IAM Username and passwords
C.API Keys
D.Access Keys

A

D.Access Keys

Explanation:
The AWS Access Key ID and AWS Secret Access Key are your AWS credentials. They are associated with an AWS Identity and Access Management (IAM) user or role that determines what permissions you have.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). If you don’t have access keys, you can create them from the AWS Management Console. As a best practice, do not use the AWS account root user access keys for any task where it’s not required. Instead, create a new administrator IAM user with access keys for yourself.
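As an illustration, the AWS CLI and SDKs read these access keys from the `~/.aws/credentials` file (which `aws configure` writes for you). The sketch below renders that INI format in memory; the key values are AWS's documented example keys, not real credentials.

```python
import configparser
import io

# The AWS CLI reads long-term access keys from ~/.aws/credentials in this
# INI format; the values below are AWS's documentation examples, not real keys.
creds = configparser.ConfigParser()
creds["default"] = {
    "aws_access_key_id": "AKIAIOSFODNN7EXAMPLE",
    "aws_secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
}
buf = io.StringIO()
creds.write(buf)
print(buf.getvalue())
```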

Hence, the correct answer is that you should provide your developers with access keys to allow them to access your AWS services through the AWS CLI.

Providing IAM usernames and passwords is incorrect because these are the credentials you use to manage your AWS services through the web-based Management Console.

Providing API keys is incorrect because this is primarily provisioned in Amazon API Gateway and not for AWS CLI.

Providing SSH keys is incorrect because this is only useful if you want to connect and control your EC2 instances by establishing an SSH connection.

57
Q

A company is using Amazon S3 to store their static media contents such as photos and videos. Which of the following should you use to provide specific users access to the bucket?

A.Network Access Control List
B.Bucket Policy
C.Security Group
D.SSH Key

A

B.Bucket Policy

Explanation:
Bucket policy and user policy are two of the access policy options available for you to grant permission to your Amazon S3 resources. Both use JSON-based access policy language.

For your bucket, you can add a bucket policy to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. Any object permissions apply only to the objects that the bucket owner creates. Bucket policies supplement, and in many cases, replace ACL-based access policies.

You express bucket policy (and user policy) using a JSON file. You can create a policy that grants anonymous read or write permission on all objects in a bucket. By specifying the principal with a wildcard (*), the policy grants anonymous access and should be used carefully.
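As a sketch of what such a JSON policy might look like, the snippet below builds a bucket policy granting one IAM user read-only access to all objects in a bucket; the account ID, user name, and bucket name are placeholders, not values from the scenario.

```python
import json

# Sketch: a bucket policy granting a specific IAM user read access to all
# objects in a bucket (account ID, user, and bucket name are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMediaRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/media-reader"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-media-bucket/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```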

Hence, the correct answer in this scenario is Bucket policy.

Security Group is incorrect because this is primarily used as a virtual firewall for your EC2 instances, and not S3 buckets, to control inbound and outbound traffic.

SSH key is incorrect because this is only used if you want to establish an SSH connection to your EC2 instances and not for S3 buckets.

Network Access Control List is incorrect because this is just an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. This has nothing to do with providing users access to your S3 bucket.

58
Q

Which AWS services should you use to store rapidly changing data with low read and write latencies? (Select TWO.)

A.Amazon EBS
B.Amazon AppStream 2.0
C.Amazon RDS
D.AWS Snowball
E.Amazon S3
A

A.Amazon EBS
C.Amazon RDS

Explanation:
AWS offers multiple cloud-based storage options that you can use for your infrastructure. Each has a unique combination of performance, durability, availability, cost, and interface, as well as other characteristics such as scalability and elasticity. These additional characteristics are critical for web-scale cloud-based solutions. As with traditional on-premises applications, you can use multiple cloud storage options together to form a comprehensive data storage hierarchy.

Amazon S3 is optimal for storing numerous classes of information that are relatively static and benefit from its durability, availability, and elasticity features. However, in a number of situations, Amazon S3 is not the optimal solution. It has the following anti-patterns:

File system - Amazon S3 uses a flat namespace and isn't meant to serve as a standalone, POSIX-compliant file system. However, by using delimiters (commonly either the '/' or '\' character), you are able to construct your keys to emulate the hierarchical folder structure of a file system within a given bucket. Alternatively, you can simply use Amazon EFS.

Structured data with query - Amazon S3 doesn’t offer query capabilities: to retrieve a specific object you need to already know the bucket name and key. Thus, you can’t use Amazon S3 as a database by itself. Instead, you need to pair Amazon S3 with a database, such as DynamoDB, to index and query metadata about Amazon S3 buckets and objects.

Rapidly changing data - Data that must be updated very frequently might be better served by a storage solution with lower read / write latencies, such as Amazon EBS volumes, Amazon RDS or other relational databases, or Amazon DynamoDB.

Hence, the correct answers are Amazon EBS and Amazon RDS just as mentioned above. These services are suitable to use in storing rapidly changing data with low read and write latencies.

AWS Snowball is incorrect because this is just a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud.

Amazon S3 is incorrect because this service is optimal for storing numerous classes of information that are relatively static and not rapidly changing data just as what was mentioned in this scenario.

Amazon AppStream 2.0 is incorrect because this is just a fully managed application streaming service which you can use to centrally manage your desktop applications.

59
Q

Which of the following is true regarding Elastic Load Balancing?

A.It is a virtual server that allows you to run your applications in the AWS Cloud
B.It translates domain names (such as www.tutorialsdojo.com) into numeric IP addresses such as 192.0.2.1 that Amazon EC2 instances use to connect to each other
C.It distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, in multiple Availability Zones
D.It automatically increases or decreases the number of instances as the demand of your application changes

A

C.It distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, in multiple Availability Zones

Explanation:
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones.

Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant. They are:

Application Load Balancer - This is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

Network Load Balancer - This is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.

Classic Load Balancer - This provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

Hence, the correct answer is: It distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, in multiple Availability Zones.

The option that says: It automatically increases or decreases the number of instances as the demand of your application changes is incorrect because this refers to Auto Scaling.

The option that says: It translates domain names (such as www.tutorialsdojo.com) into numeric IP addresses (such as 192.0.2.1) that Amazon EC2 instances use to connect to each other is incorrect because this refers to Route 53.

The option that says: It is a virtual server that allows you to run your applications in the AWS Cloud is incorrect because this refers to Amazon EC2.

60
Q

Which of the following is capable of inspecting your AWS environment and making recommendations for saving money, improving system performance and reliability, or closing security gaps?

A.AWS Inspector
B.AWS Cost Explorer
C.AWS Trusted Advisor
D.AWS Budgets

A

C.AWS Trusted Advisor

Explanation:
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.

Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.

Trusted Advisor includes an ever-expanding list of checks in the following five categories:

Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.

Security – identification of security settings that could make your AWS solution less secure.

Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.

Performance – recommendations that can help to improve the speed and responsiveness of your applications.

Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.

Hence, the correct answer in this scenario is AWS Trusted Advisor.

AWS Cost Explorer is incorrect because this is just a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. It has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time.

AWS Budgets is incorrect because it simply gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define.

AWS Inspector is incorrect because it is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.

61
Q

Which type of Elastic Load Balancer supports path-based routing, host-based routing, and bi-directional communication channels using WebSockets?

A.Network Load Balancer
B.Classic Load Balancer
C.Both Application Load Balancer and Network Load Balancer
D.Application Load Balancer

A

D.Application Load Balancer

Explanation:
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones.

Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant. They are:

Application Load Balancer - This is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

Network Load Balancer - This is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.

Classic Load Balancer - This provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

Application Load Balancers support path-based routing, host-based routing, WebSockets and support for containerized applications. For path-based routing, you can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL. For host-based routing, you can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
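To make the two routing styles concrete, the sketch below shows listener rules in the shape the ELBv2 CreateRule API expects; the paths, hostnames, and target group identifiers are placeholders for illustration only.

```python
# Sketch: two ALB listener rules, one path-based and one host-based
# (paths, hostnames, and target group identifiers are placeholders).
path_rule = {
    "Conditions": [{"Field": "path-pattern", "Values": ["/api/*"]}],
    "Actions": [{"Type": "forward", "TargetGroupArn": "api-target-group"}],
}
host_rule = {
    "Conditions": [{"Field": "host-header", "Values": ["img.example.com"]}],
    "Actions": [{"Type": "forward", "TargetGroupArn": "images-target-group"}],
}
```

With rules like these on a single load balancer, `/api/*` requests land on one target group while requests for `img.example.com` land on another.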

Hence, the correct type of elastic load balancer to use is the Application Load Balancer.

Network Load Balancer and Classic Load Balancer are incorrect because they support neither path-based nor host-based routing.

The option that says: Both Application Load Balancer and Network Load Balancer is incorrect because although the Application Load Balancer supports path-based routing and host-based routing, the Network Load Balancer does not.

62
Q
You need to launch a new EC2 instance for a beta program that is scheduled to change its instance family, operating system, and tenancy exactly 3 months after its trial period. Which type of Reserved Instance (RI) should you use?
A.Scheduled RI
B.Convertible RI
C.Zonal RI
D.Standard RI
A

B.Convertible RI

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Standard Reserved Instances (RI) provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.

Convertible Reserved Instances (RI) provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

For Convertible Reserved Instance (RI), it can be exchanged during the term for another Convertible Reserved Instance with new attributes including instance family, instance type, platform, scope, or tenancy. You can also opt for a 3-year term to avail of more discounts.
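The discount figures above can be made concrete with a quick back-of-the-envelope calculation. The sketch below compares a 3-year On-Demand spend against a Convertible RI at the "up to 54%" discount mentioned in the text; the hourly rate is a hypothetical example value, not an actual AWS price.

```python
# Back-of-the-envelope comparison of On-Demand vs. Convertible RI pricing,
# using the "up to 54%" discount figure from the text. The hourly rate is
# a made-up example value, not an actual AWS price.
HOURS_PER_YEAR = 8760
on_demand_hourly = 0.10          # hypothetical On-Demand rate, USD/hour
convertible_discount = 0.54      # up to 54% off On-Demand (3-year term)

on_demand_3yr = on_demand_hourly * HOURS_PER_YEAR * 3
convertible_3yr = on_demand_3yr * (1 - convertible_discount)

print(f"On-Demand, 3 years:      ${on_demand_3yr:,.2f}")    # $2,628.00
print(f"Convertible RI, 3 years: ${convertible_3yr:,.2f}")  # $1,208.88
print(f"Savings:                 ${on_demand_3yr - convertible_3yr:,.2f}")
```

The trade-off: a Convertible RI discounts less deeply than a Standard RI (up to 54% vs. up to 75%), but buys the exchange flexibility this scenario requires.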

Hence, the correct answer in this scenario is Convertible RI.

Standard RI is incorrect because although some of its attributes (such as the instance size) can be modified during the term, the instance family cannot be modified, which is what the scenario requires. You cannot exchange a Standard Reserved Instance, only modify it.

Scheduled RI is incorrect because this type only enables you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. Unlike the Convertible RI, this cannot be exchanged during the term for another Reserved Instance with new attributes including instance family, instance type, platform, scope, or tenancy.

Zonal RI is incorrect because this only refers to a Standard Reserved Instance that you purchase for a specific Availability Zone and, most importantly, a Standard RI will not allow you to change the instance family. The scope of a Reserved Instance can be either Regional or Zonal. A Zonal RI has no instance size flexibility, which means the Reserved Instance discount applies only to instance usage of the specified instance type and size.

63
Q

Which of the following are true regarding Amazon Relational Database Service (Amazon RDS)? (Select TWO.)

A.It is a fully managed relational database service
B.Simplifies the management of time-consuming database administration tasks
C.Makes it easy to set up, operate and scale a relational database
D.Automatically scales up the relational database instance size based on the incoming workload
E.Provides 99.999999999% reliability and durability

A

B.Simplifies the management of time-consuming database administration tasks
C.Makes it easy to set up, operate and scale a relational database

Explanation:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

Amazon RDS is available on several database instance types such as optimized for memory, performance or I/O, and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

Hence, the correct answers are:

  • Simplifies the management of time-consuming database administration tasks
  • Makes it easy to set up, operate, and scale a relational database

The option that says: Provides 99.999999999% reliability and durability is incorrect because this refers to Amazon S3, which is designed for 99.999999999% (11 9's) of data durability.

The option that says: Automatically scales up the relational database instance size based on the incoming workload is incorrect because RDS does not automatically scale your instance size, but it can automatically scale your storage capacity if you enable the storage auto scaling feature. You can use Read Replicas or upgrade your database instance type for scaling but these are manual tasks and not done automatically.

The option that says: It is a fully managed relational database service is incorrect because RDS is just a managed database service. Since the customer still retains some control over the RDS instance, this service is considered a managed service and not a fully managed service.

64
Q

Which of the following is a key design principle when running an application in AWS?

A.Logical coupling
B.Tight coupling
C.Semantic coupling
D.Loose coupling

A

D.Loose coupling

Explanation:
The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases. Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.

As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies — a change or a failure in one component should not cascade to other components.

A way to reduce interdependencies in a system is to allow the various components to interact with each other only through specific, technology-agnostic interfaces, such as RESTful APIs. In that way, technical implementation detail is hidden so that teams can modify the underlying implementation without affecting other components. As long as those interfaces maintain backward compatibility, deployments of different components are decoupled. This granular design pattern is commonly referred to as a microservices architecture.
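The idea of hiding implementation detail behind a stable interface can be sketched in a few lines of code. In the Python example below, an `OrderService` depends only on a small, technology-agnostic interface, so the concrete notification mechanism can be swapped without touching the service itself; all class names are illustrative, not AWS APIs.

```python
# Sketch of loose coupling: OrderService depends only on a small, stable
# interface (NotificationSender), not on a concrete implementation.
# Swapping email for SMS requires no change to OrderService at all.
from abc import ABC, abstractmethod

class NotificationSender(ABC):
    """Stable interface hiding the implementation detail."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailSender(NotificationSender):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsSender(NotificationSender):
    def send(self, message: str) -> str:
        return f"sms: {message}"

class OrderService:
    # Depends on the interface only -- the implementations are decoupled.
    def __init__(self, sender: NotificationSender):
        self.sender = sender

    def place_order(self, item: str) -> str:
        return self.sender.send(f"order placed for {item}")

print(OrderService(EmailSender()).place_order("book"))  # email: order placed for book
print(OrderService(SmsSender()).place_order("book"))    # sms: order placed for book
```

In a distributed system, the same principle applies across network boundaries: as long as a service's RESTful API stays backward compatible, its internals can change and be redeployed independently of its consumers.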

Hence, the correct answer is: Loose coupling

Tight coupling is incorrect because it should be the opposite kind of coupling. The components of an IT system should be loosely, and not tightly, coupled to each other to reduce interdependencies.

Both Logical coupling and Semantic coupling are incorrect because these concepts relate to object-oriented programming and are not key design principles in AWS.

65
Q

Which of the following are regarded as regional services in AWS? (Select TWO.)

A.Amazon EFS
B.AWS Security Token Service
C.AWS Batch
D.Amazon Route 53
E.Amazon EC2
A

A.Amazon EFS
C.AWS Batch

Explanation:
AWS Batch is a regional service that simplifies running batch jobs across multiple Availability Zones within a region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs.

Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

An AWS resource can be a Global, Regional, or Zonal service. A Global service covers all AWS Regions across the globe, while a Regional service applies to only one specific Region at a time. A Regional service may or may not be able to replicate the same resource to another Region. Lastly, a Zonal service can only exist in one Availability Zone.