50-99 Flashcards

(55 cards)

1
Q

Q50. An internal security audit of AWS resources within a company found that a number of Amazon EC2 instances running Microsoft Windows workloads were missing several important operating-system-level patches. A Solutions Architect has been asked to fix the existing patch deficiencies and to develop a workflow to ensure that future patching requirements are identified and taken care of quickly. The Solutions Architect has decided to use AWS Systems Manager. It is important that EC2 instance reboots do not occur at the same time on all Windows workloads, to meet organizational up-time requirements. Which workflow will meet these requirements in an automated manner?

A) Add a Patch Group tag with a value of Windows Servers to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-DefaultPatchBaseline with the Windows Servers patch group. Define an AWS Systems Manager maintenance window, conduct patching within it, and associate it with the Windows Servers patch group. Register instances with the maintenance window using associated subnet IDs. Assign the AWS-RunPatchBaseline document as a task within each maintenance window.

B) Add a Patch Group tag with a value of Windows Servers to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-WindowsPatchBaseline with the Windows Servers patch group. Create an Amazon CloudWatch Events rule configured to use a cron expression to schedule the execution of patching using the AWS Systems Manager Run Command. Assign the AWS-RunWindowsPatchBaseline document as a task associated with the Windows Servers patch group. Create an AWS Systems Manager State Manager document to define commands to be executed during patch execution.

C) Add a Patch Group tag with a value of either Windows Servers 1 or Windows Servers 2 to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-DefaultPatchBaseline with both Windows Servers patch groups. Define two non-overlapping AWS Systems Manager maintenance windows, conduct patching within them, and associate each with a different patch group. Register targets with specific maintenance windows using the patch group tags. Assign the AWS-RunPatchBaseline document as a task within each maintenance window.

D) Add a Patch Group tag with a value of either Windows Servers 1 or Windows Servers 2 to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-WindowsPatchBaseline with both Windows Servers patch groups. Define two non-overlapping AWS Systems Manager maintenance windows, conduct patching within them, and associate each with a different patch group. Assign the AWS-RunWindowsPatchBaseline document as a task within each maintenance window. Create an AWS Systems Manager State Manager document to define commands to be executed during patch execution.

A

C) Add a Patch Group tag with a value of either Windows Servers 1 or Windows Servers 2 to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-DefaultPatchBaseline with both Windows Servers patch groups. Define two non-overlapping AWS Systems Manager maintenance windows, conduct patching within them, and associate each with a different patch group. Register targets with specific maintenance windows using the patch group tags. Assign the AWS-RunPatchBaseline document as a task within each maintenance window.
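As a rough illustration of the moving parts in this answer, here is a minimal boto3 sketch (the instance ID, window name, schedule, and concurrency values are hypothetical): it tags an instance into a patch group, creates one of the maintenance windows, registers targets by the Patch Group tag, and registers AWS-RunPatchBaseline as the window's Run Command task.

```python
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# Tag an instance so it joins the "Windows Servers 1" patch group (hypothetical instance ID).
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Patch Group", "Value": "Windows Servers 1"}],
)

# Define one of the two non-overlapping maintenance windows (cron schedule is illustrative).
window = ssm.create_maintenance_window(
    Name="windows-servers-1-patching",
    Schedule="cron(0 2 ? * SUN *)",   # 02:00 UTC every Sunday
    Duration=3,                        # hours
    Cutoff=1,                          # stop starting new tasks 1 hour before the end
    AllowUnassociatedTargets=False,
)

# Register targets by the Patch Group tag rather than by subnet ID.
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["Windows Servers 1"]}],
)

# Run the AWS-RunPatchBaseline document as the window's task.
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
    MaxConcurrency="10%",
    MaxErrors="5%",
)
```

Repeating the same steps with a different schedule for the second patch group keeps the reboots from happening on all Windows workloads at once.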

2
Q

Q51. A company must deploy multiple independent instances of an application. The front-end application is internet accessible; however, corporate policy stipulates that the backends are to be isolated. The application setup should be automated to minimize the opportunity for mistakes as new instances are deployed. Which option meets the requirements and MINIMIZES costs?

A) Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize for each instance, and use security groups to isolate each instance while permitting access to the central server.

B) Create each instance of the application's IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a VPN connection to the VPN gateway of the central administration server.

C) Duplicate the application IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC peering to connect the VPC of each application instance to a central VPC.

D) Use the parameters of the AWS CloudFormation template to customize the deployment into separate accounts. Include a NAT gateway to allow communication back to the central administration server.

A

A) Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize for each instance, and use security groups to isolate each instance while permitting access to the central server.
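A minimal boto3 sketch of the StackSets part of this answer (the stack set name, template URL, account ID, regions, and parameter names are all hypothetical): every application instance is deployed as a stack instance of the same template, customized only through parameter overrides.

```python
import boto3

cfn = boto3.client("cloudformation")

# One stack set holds the common template (hypothetical S3 URL).
cfn.create_stack_set(
    StackSetName="app-instance",
    TemplateURL="https://s3.amazonaws.com/example-bucket/app-instance.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Parameters=[{"ParameterKey": "InstanceName", "ParameterValue": "default"}],
)

# Each deployment is a stack instance customized through parameter overrides.
cfn.create_stack_instances(
    StackSetName="app-instance",
    Accounts=["111111111111"],            # hypothetical account
    Regions=["us-east-1", "eu-west-1"],
    ParameterOverrides=[
        {"ParameterKey": "InstanceName", "ParameterValue": "customer-a"},
    ],
)
```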

3
Q

Q52. A group of Amazon EC2 instances has been configured as a high-performance computing (HPC) cluster. The instances are running in a placement group and are able to communicate with each other at network speeds of up to 20 Gbps. The cluster needs to communicate with a control EC2 instance outside of the placement group. The control instance has the same instance type and AMI as the other instances. How can the Solutions Architect improve the network speeds between the control instance and the instances in the placement group?

A) Terminate the control instance and relaunch it in the placement group

B) Ensure that the instances are communicating using their private IP addresses

C) Ensure that the control instance is using an Elastic Network Adapter

D) Move the control instance inside the placement group

A

C) Ensure that the control instance is using an Elastic Network Adapter

4
Q

Q53. A company runs a dynamic, mission-critical web application that has an SLA of 99.99%. Global application users access the application 24/7. The application is currently hosted on premises and routinely fails to meet its SLA, especially when millions of users access the application concurrently. Remote users complain of latency.

How should this application be redesigned to be scalable and allow for automatic failover at the lowest cost?

A) Use Amazon Route 53 failover routing with geolocation-based routing. Host the website on automatically scaled Amazon EC2 instances behind an Application Load Balancer, with an additional Application Load Balancer and EC2 instances for the application layer in each region. Use a Multi-AZ deployment with MySQL as the data layer.

B) Use Amazon Route 53 round robin routing to distribute the load evenly to several regions with health checks. Host the website on automatically scaled Amazon ECS with AWS Fargate technology containers behind a Network Load Balancer, with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora Replicas for the data layer.

C) Use Amazon Route 53 latency-based routing to route to the nearest region with health checks. Host the website in Amazon S3 in each region and use Amazon API Gateway with AWS Lambda for the application layer. Use Amazon DynamoDB global tables as the data layer with Amazon DynamoDB Accelerator (DAX) for caching.

D) Use Amazon Route 53 geolocation-based routing. Host the website on automatically scaled AWS Fargate containers behind a Network Load Balancer, with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora multi-master for Aurora MySQL as the data layer.

A

C) Use Amazon Route 53 latency-based routing to route to the nearest region with health checks. Host the website in Amazon S3 in each region and use Amazon API Gateway with AWS Lambda for the application layer. Use Amazon DynamoDB global tables as the data layer with Amazon DynamoDB Accelerator (DAX) for caching.

5
Q

Q54. A Solutions Architect has created an AWS CloudFormation template for a three-tier application that contains an Auto Scaling group of Amazon EC2 instances running a custom AMI.

The Solutions Architect wants to ensure that future updates to the custom AMI can be deployed to a running stack by first updating the template to refer to the new AMI, and then invoking UpdateStack to replace the EC2 instances with instances launched from the new AMI.

How can updates to the AMI be deployed to meet these requirements?

A) Create a change set for the new version of the template, view the changes to the running EC2 instances to ensure that the AMI is correctly updated, and then execute the change set.

B) Edit the AWS Auto Scaling launch configuration resource in the template, changing its deletion policy to replace.

C) Edit the AWS Auto Scaling AutoScalingGroup resource in the template, inserting an UpdatePolicy attribute.

D) Create a new stack from the updated template. Once it is successfully deployed, modify the DNS records to point to the new stack, and delete the old stack.

A

A) Create a change set for the new version of the template, view the changes to the running EC2 instances to ensure that the AMI is correctly updated, and then execute the change set.
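The change-set workflow in this answer maps to a handful of CloudFormation API calls. A minimal boto3 sketch (the stack name, change set name, and template file are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

# Create a change set from the updated template that references the new AMI.
with open("template-new-ami.yaml") as f:
    cfn.create_change_set(
        StackName="three-tier-app",
        ChangeSetName="new-ami",
        TemplateBody=f.read(),
        Capabilities=["CAPABILITY_IAM"],
    )

# Wait until the change set is ready, then review the proposed resource changes.
cfn.get_waiter("change_set_create_complete").wait(
    StackName="three-tier-app", ChangeSetName="new-ami"
)
changes = cfn.describe_change_set(StackName="three-tier-app", ChangeSetName="new-ami")
for change in changes["Changes"]:
    print(change["ResourceChange"]["LogicalResourceId"],
          change["ResourceChange"]["Action"])

# Execute the change set once the review confirms the AMI update looks correct.
cfn.execute_change_set(StackName="three-tier-app", ChangeSetName="new-ami")
```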

6
Q

Q55. A Solutions Architect is designing a multi-account structure that has 10 existing accounts. The design must meet the following requirements:
- Consolidate all accounts into one organization
- Allow full access to the Amazon EC2 service from the master account and the secondary accounts
- Minimize the effort required to add additional secondary accounts
Which combination of steps should be included in the solution? (Select TWO)

A) Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the invitations and create an OU.

B) Create an organization from the master account. Send a join request to the master account from each secondary account. Accept the requests and create an OU.

C) Create a VPC peering connection between the master account and the secondary accounts. Accept the request for the VPC peering connection.

D) Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.

E) Create a full EC2 access policy and map the policy to a role in each account. Trust every other account to assume the role.

A

A) Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the invitations and create an OU.

D) Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.
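As an illustration of the SCP step, a minimal boto3 sketch (the OU ID is hypothetical) that creates a service control policy allowing full EC2 access and attaches it to the OU:

```python
import json
import boto3

org = boto3.client("organizations")

# SCPs set the maximum available permissions; this one permits all EC2 actions.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="FullEC2Access",
    Description="Allow full Amazon EC2 access",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the policy to the OU that contains the secondary accounts (hypothetical OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-ple12345",
)
```

Note that an SCP only filters what IAM can grant; identities inside each account still need their own IAM policies to actually use EC2.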

7
Q

Q56. A company's application is increasingly popular and experiencing latency because of high-volume reads on the database server. The service has the following properties:
- A highly available REST API hosted in one region using an Application Load Balancer (ALB) with auto scaling
- A MySQL database hosted on an Amazon EC2 instance in a single Availability Zone

The company wants to reduce latency, increase in-region database read performance, and have multi-region disaster recovery capabilities that can perform a live recovery automatically without any data or performance loss (HA/DR). Which deployment strategy will meet these requirements?

A) Use AWS CloudFormation StackSets to deploy the API layer in two regions. Migrate the database to an Amazon Aurora with MySQL database cluster with multiple read replicas in one region, and a read replica in a different region than the source database cluster. Use Amazon Route 53 health checks to trigger a DNS failover to the standby region if the health checks to the primary load balancer fail. In the event of Route 53 failover, promote the cross-region database replica to be the master and build out new read replicas in the standby region.

B) Use Amazon ElastiCache for Redis Multi-AZ with automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. In the event of failure, use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.

C) Use AWS CloudFormation StackSets to deploy the API layer in two regions. Add the database to an Auto Scaling group. Add a read replica to the database in the second region. Use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Promote the cross-region database replica to be the master and build out new read replicas in the standby region.

D) Use Amazon ElastiCache for Redis Multi-AZ with automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. Use Amazon Route 53 health checks on the ALB to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.

A

A) Use AWS CloudFormation StackSets to deploy the API layer in two regions. Migrate the database to an Amazon Aurora with MySQL database cluster with multiple read replicas in one region, and a read replica in a different region than the source database cluster. Use Amazon Route 53 health checks to trigger a DNS failover to the standby region if the health checks to the primary load balancer fail. In the event of Route 53 failover, promote the cross-region database replica to be the master and build out new read replicas in the standby region.

8
Q

Q57. A company currently uses a single 1 Gbps AWS Direct Connect connection to establish connectivity between an AWS Region and its data center. The company has five Amazon VPCs, all of which are connected to the data center using the same Direct Connect connection. The network team is worried about the single point of failure and is interested in improving the redundancy of the connection to AWS while keeping costs to a minimum. Which solution would improve the redundancy of the connection to AWS while meeting the cost requirements?

A) Provision another 1 Gbps Direct Connect connection and create new VIFs to each of the VPCs. Configure the VIFs in a load-balancing fashion using BGP.

B) Set up VPN tunnels from the data center to each VPC. Terminate each VPN tunnel at the virtual private gateway (VGW) of the respective VPC, and set up BGP for route management.

C) Set up a new point-to-point Multiprotocol Label Switching (MPLS) connection to the AWS Region that is being used. Configure BGP to use this new circuit as passive, so that no traffic flows through it unless the AWS Direct Connect connection fails.

D) Create a public VIF on the Direct Connect connection and set up a VPN tunnel that will terminate on the virtual private gateway (VGW) of the respective VPC using the public VIF. Use BGP to handle the failover to the VPN connection.

A

D) Create a public VIF on the Direct Connect connection and set up a VPN tunnel that will terminate on the virtual private gateway (VGW) of the respective VPC using the public VIF. Use BGP to handle the failover to the VPN connection.

9
Q

Q58. AnyCompany has acquired numerous companies over the past few years. The CIO of AnyCompany would like to keep the resources for each acquired company separate. The CIO also would like to enforce a chargeback model where each company pays for the AWS services it uses. The Solutions Architect is tasked with designing an AWS architecture that allows AnyCompany to achieve the following:

  • Implement a detailed chargeback mechanism to ensure that each company pays for the resources it uses
  • AnyCompany can pay for AWS services for all its companies through a single invoice
  • Developers in each acquired company have access to resources in their company only
  • Developers in an acquired company should not be able to affect resources in any other company
  • A single identity store is used to authenticate developers across all companies

Which of the following approaches would meet these requirements? (Select TWO)

A) Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single bill only.

B) Create a single-account strategy with a virtual private cloud (VPC) for each company. Reduce impact across companies by not creating any VPC peering links. As everything is in a single account, there will be a single invoice. Use tagging to create a detailed bill for each company.

C) Create IAM users for each developer in the account to which they require access. Create policies that allow the users access to all resources in that account. Attach the policies to the IAM users.

D) Create a federated identity store against the company's Active Directory. Create IAM roles with appropriate permissions, and set the trust relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store.

E) Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the company that creates each resource.

A

A) Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single bill only.

E) Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the company that creates each resource.

10
Q

Q59. A company has a standard three-tier architecture using two Availability Zones. During the company's off-season, users report that the website is not working. The Solutions Architect finds that no changes have been made to the environment recently, the website is reachable, and it is possible to log in.

However, when the Solutions Architect selects the "Find a store near you" function, the maps provided on the site by a third-party RESTful API call do not work about 50% of the time after refreshing the page.

The outbound API calls are made through Amazon EC2 NAT instances. What is the MOST likely reason for this failure, and how can it be mitigated in the future?

A) The network ACL for one subnet is blocking outbound web traffic. Open the network ACL, and prevent administrators from making future changes through IAM.

B) The fault is in the third-party environment. Contact the third party that provides the maps and request a fix that will provide better uptime.

C) One NAT instance has become overloaded. Replace both EC2 NAT instances with a larger-sized instance, and make sure to account for growth when choosing the new instance size.

D) One of the NAT instances failed. Recommend replacing the EC2 NAT instances with a NAT gateway.

A

D) One of the NAT instances failed. Recommend replacing the EC2 NAT instances with a NAT gateway.

11
Q

Q60. A company deployed a three-tier web application in two regions: us-east-1 and eu-west-1. The application must be active in both regions at the same time. The database tier of the application uses a single Amazon RDS Aurora database globally, with a master in us-east-1 and a read replica in eu-west-1.

Both regions are connected by a VPN. The company wants to ensure that the application remains available even in the event of a region-level failure of all of the application's components. It is acceptable for the application to be in read-only mode for up to 1 hour.

The company plans to configure two Amazon Route 53 record sets, one for each of the regions. How should the company complete the configuration to meet its requirements while providing the lowest latency for the application end users? (Select TWO)

A) Use failover routing and configure the us-east-1 record set as primary and the eu-west-1 record set as secondary. Configure an HTTP health check for the web application in us-east-1, and associate it with the us-east-1 record set.

B) Use weighted routing and configure each record set with a weight of 50. Configure an HTTP health check for each region, and attach it to the record set for that region.

C) Use latency-based routing for both record sets. Configure a health check for each region, and attach it to the record set for that region.

D) Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the read replica in eu-west-1.

E) Configure Amazon RDS event notifications to react to the failure of the database in us-east-1 by invoking an AWS Lambda function that promotes the read replica in eu-west-1.

A

A) Use failover routing and configure the us-east-1 record set as primary and the eu-west-1 record set as secondary. Configure an HTTP health check for the web application in us-east-1, and associate it with the us-east-1 record set.

D) Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the read replica in eu-west-1.
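A minimal sketch of the Lambda function that the alarm would invoke, assuming a boto3 Lambda handler and a hypothetical cluster identifier for the Aurora cross-region replica in eu-west-1:

```python
import boto3

# The replica cluster lives in eu-west-1, so the RDS client must target that region.
rds = boto3.client("rds", region_name="eu-west-1")


def handler(event, context):
    """Invoked by the CloudWatch alarm when the us-east-1 health check fails."""
    response = rds.promote_read_replica_db_cluster(
        DBClusterIdentifier="app-aurora-replica-eu-west-1"  # hypothetical identifier
    )
    status = response["DBCluster"]["Status"]
    print(f"Promotion requested, cluster status: {status}")
    return status
```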

12
Q

Q61. A company runs a Windows Server host in a public subnet that is configured to allow a team of administrators to connect over RDP to troubleshoot issues with hosts in a private subnet. The host must be available at all times outside of a scheduled maintenance window and needs to receive the latest operating system updates within 3 days of release. What should be done to manage the host with the LEAST amount of administrative effort?

A) Run the host in a single-instance AWS Elastic Beanstalk environment. Configure the environment with a custom AMI to use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.

B) Run the host on Amazon WorkSpaces. Use Amazon WorkSpaces Application Manager (WAM) to harden the host. Configure Windows automatic updates to occur every 3 days.

C) Run the host in an Auto Scaling group with a minimum and maximum instance count of 1. Use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.

D) Run the host in AWS OpsWorks Stacks. Use a Chef recipe to harden the AMI during instance launch. Use an AWS Lambda scheduled event to run the Upgrade Operating System stack command to apply system updates.

A

C) Run the host in an Auto Scaling group with a minimum and maximum instance count of 1. Use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.

13
Q

Q62. A company plans to move regulated and security-sensitive businesses to AWS. The Security team is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for teams to provision resources. Which strategies should a Solutions Architect use to meet the business requirements and continuously assess, audit, and monitor the configurations of AWS resources? (Select TWO)

A) Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls.

B) Use the Amazon CloudWatch Logs agent to collect all the AWS SDK logs. Search the log data using predefined patterns that match mutating API calls. Send notifications using Amazon CloudWatch alarms when unintended changes are performed. Archive log data by using a batch export to Amazon S3 and then Amazon Glacier for long-term retention and auditability.

C) Use AWS CloudTrail events to assess management activities of all AWS accounts. Ensure that CloudTrail is enabled in all accounts and for available AWS services. Enable trails, encrypt CloudTrail event log files with an AWS KMS key, and monitor recorded activities with CloudWatch Logs.

D) Use the Amazon CloudWatch Events near-real-time capabilities to monitor system event patterns, and trigger AWS Lambda functions to automatically revert non-authorized changes in AWS resources. Also, target Amazon SNS topics to enable notifications and improve the response time of incident responses.

E) Use CloudTrail integration with Amazon SNS to automatically notify on unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and for available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources.

A

A) Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls.

E) Use CloudTrail integration with Amazon SNS to automatically notify on unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and for available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources.

14
Q

Q63. A company has a large on-premises Apache Hadoop cluster with a 20 PB HDFS database. The cluster is growing every quarter by roughly 200 instances and 1 PB.

The company's goals are to enable resiliency for its Hadoop data, limit the impact of losing cluster nodes, and significantly reduce costs.

The current cluster runs 24/7 and supports a variety of analysis workloads, including interactive queries and batch processing. Which solution would meet these requirements with the LEAST expense and downtime?

A) Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster, and store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.

B) Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of a similar size and configuration to the current cluster, and store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each quarter, purchase additional Reserved Instances and add to the cluster.

C) Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster, and store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.

D) Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster, and store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.

A

A) Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster, and store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.

15
Q

Q64. A company is running a large application on premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache Cassandra for the database.

The company wants to migrate this application to AWS to improve service reliability. The IT team also wants to reduce the time it spends on capacity management and maintenance of this infrastructure.

The development team is willing and available to make code changes to support the migration.

Which design is the LEAST complex to manage after the migration?

A) Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in Multi-AZ mode.

B) Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ configuration.

C) Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.

D) Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.

A

C) Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.

16
Q

Q65. A company has a requirement that only allows specially hardened AMIs to be launched into public subnets in a VPC, and for the AMIs to be associated with a specific security group. Allowing non-compliant instances to launch into the public subnet could present a significant security risk if they are allowed to operate.

A mapping of approved AMIs to subnets to security groups exists in an Amazon DynamoDB table in the same AWS account. The company created an AWS Lambda function that, when invoked, will terminate a given Amazon EC2 instance if the combination of AMI, subnet, and security group is not approved in the DynamoDB table. What should the Solutions Architect do to MOST quickly mitigate the risk of compliance deviations?

A) Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched using one of the allowed AMIs, and associate it with the Lambda function as the target.

B) For the Amazon S3 bucket receiving the AWS CloudTrail logs, create an S3 event notification configuration with a filter to match when logs contain the ec2 RunInstances action, and associate it with the Lambda function as the target.

C) Enable AWS CloudTrail and configure it to stream to an Amazon CloudWatch Logs group. Create a metric filter in CloudWatch to match when the ec2 RunInstances action occurs, and trigger the Lambda function when the metric is greater than 0.

D) Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched and associate it with the Lambda function as the target.

A

D) Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched and associate it with the Lambda function as the target.
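A minimal boto3 sketch of that rule (the rule name and Lambda ARN are hypothetical); the event pattern matches EC2 instance state changes into the pending/running states, and the existing compliance Lambda is added as the target:

```python
import json
import boto3

events = boto3.client("events")

# Match every EC2 instance launch (state transitions into pending/running).
events.put_rule(
    Name="ec2-launch-compliance-check",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["pending", "running"]},
    }),
    State="ENABLED",
)

# Send matching events to the existing compliance Lambda function (hypothetical ARN).
events.put_targets(
    Rule="ec2-launch-compliance-check",
    Targets=[{
        "Id": "compliance-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:terminate-noncompliant",
    }],
)
```

In practice the Lambda function also needs a resource-based permission (lambda add-permission) so that CloudWatch Events is allowed to invoke it.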

17
Q

Q66. A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The Architect wants to upgrade to the latest version of the host operating system as part of the migration effort. Which is the FASTEST and MOST cost-effective way to perform the migration?

A) Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to Amazon S3.

B) Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static data to Amazon S3

C) Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3

D) Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance

A

C) Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3

18
Q

Q67. A company has an application that generates a weather forecast that is updated every 15 minutes, with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten with every update.

Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request. Which design meets the required request rate and response time?

A) Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set to 15 minutes.

B) Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout to 15 minutes in the CloudFront distribution.

C) Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create a Lambda@Edge function that caches the data locally at edge locations for 15 minutes.

D) Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout to 15 minutes in the CloudFront distribution.

A

A) Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set to 15 minutes.

19
Q

Q68. A company is using AWS CloudFormation to deploy its infrastructure. The company is concerned that, if a production CloudFormation stack is deleted, important data stored in Amazon RDS databases or Amazon EBS volumes might also be deleted. How can the company prevent users from accidentally deleting data in this way?

A) Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.

B) Configure a stack policy that disallows the deletion of RDS and EBS resources.

C) Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an "aws:cloudformation:stack-name" tag.

D) Use AWS Config rules to prevent deleting RDS and EBS resources.

A

A) Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.
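Since CloudFormation templates can be written as JSON, a minimal sketch of the DeletionPolicy idea expressed as a Python dict (the resource names and properties are hypothetical and trimmed): Snapshot preserves a copy of the data when the stack is deleted, while Retain would leave the resource in place entirely.

```python
# Minimal, trimmed template fragment illustrating DeletionPolicy on RDS and EBS resources.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",   # take a final snapshot instead of deleting data
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "100",
                "MasterUsername": "admin",
                "MasterUserPassword": "change-me",   # placeholder only
            },
        },
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Snapshot",   # keep an EBS snapshot on stack deletion
            "Properties": {"Size": 100, "AvailabilityZone": "us-east-1a"},
        },
    },
}
```

Passing json.dumps(template) as the TemplateBody of a create_stack or update_stack call would deploy it; DeletionPolicy also accepts Retain if a live resource should be kept rather than snapshotted.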

20
Q

Q69. A company would like to implement a serverless application by using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The company deployed a proof of concept and found that the average response time is greater than what its upstream services can accept. Amazon CloudWatch metrics did not indicate any issues with DynamoDB but showed that some Lambda functions were hitting their timeout.

Which of the following actions should the Solutions Architect consider to improve performance? (Select TWO)

A) Configure the AWS Lambda function to reuse containers to avoid unnecessary startup time.

B) Increase the amount of memory and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal memory and timeout configuration for the Lambda function.

C) Increase the amount of CPU, and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal CPU and timeout configuration for the Lambda function.

D) Create an Amazon ElastiCache cluster running Memcached, and configure the Lambda function for VPC integration with access to the Amazon ElastiCache cluster.

E) Enable API caching on the appropriate stage in Amazon API Gateway, and override the TTL for individual methods that require a lower TTL than the entire stage.

A

B) Increase the amount of memory and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal memory and timeout configuration for the Lambda function.

E) Enable API caching on the appropriate stage in Amazon API Gateway, and override the TTL for individual methods that require a lower TTL than the entire stage.
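Both recommendations can be applied with a couple of API calls. A minimal boto3 sketch (the function name, REST API ID, stage name, resource path, and chosen values are hypothetical; the right memory and timeout should come out of the performance testing mentioned above):

```python
import boto3

lambda_client = boto3.client("lambda")
apigateway = boto3.client("apigateway")

# Raise memory (which also raises the CPU share) and the timeout on the slow function.
lambda_client.update_function_configuration(
    FunctionName="orders-api-handler",   # hypothetical
    MemorySize=1024,                      # MB
    Timeout=30,                           # seconds
)

# Enable a 0.5 GB cache on the stage, then override the TTL for one method.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",              # hypothetical REST API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        # Per-method override: lower TTL for GET /orders than the stage default.
        {"op": "replace", "path": "/~1orders/GET/caching/ttlInSeconds", "value": "60"},
    ],
)
```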

21
Q

Q70. A company is using AWS to run an internet-facing production application written in Node.js. The Development team is responsible for pushing new versions of their software directly to production. The application software is updated multiple times a day. The team needs guidance from a Solutions Architect to help them deploy the software to the production fleet quickly and with the least amount of disruption to the service. Which option meets these requirements?

A) Prepackage the software into an AMI and then use Auto Scaling to deploy the production fleet. For software changes, update the AMI and allow Auto Scaling to automatically push the new AMI to production.

B) Use AWS CodeDeploy to push the prepackaged AMI to production. For software changes, reconfigure CodeDeploy with new AMI identification to push the new AMI to the production fleet.

C) Use AWS Elastic Beanstalk to host the production application. For software changes, upload the new application version to Elastic Beanstalk to push this to the production fleet using a blue/green deployment method.

D) Deploy the base AMI through Auto Scaling and bootstrap the software using user data. For software changes, SSH to each of the instances and replace the software with the new version.

A

C) Use AWS Elastic Beanstalk to host the production application. For software changes, upload the new application version to Elastic Beanstalk to push this to the production fleet using a blue/green deployment method.

22
Q

Q71. A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume. The company recently added a feature for bloggers to add a video to their posts, attracting 10 times the previous user traffic at peak times of the day.

Users report buffering and timeout issues while attempting to reach the site or watch videos. Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

A) Reconfigure Amazon EFS to enable maximum I/O

B) Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.

C) Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.

D) Set up an Amazon CloudFront distribution for all site contents and point the distribution at the ALB

A

C) Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.

23
Q

Q72. A company runs its containerized batch jobs on Amazon ECS. The jobs are scheduled by submitting a container image, a task definition, and the relevant data to an Amazon S3 bucket. Container images may be unique per job. Running the jobs as quickly as possible is of utmost importance, so submitting job artifacts to the S3 bucket triggers the job to run immediately. Sometimes there may be no jobs running at all.

However, jobs of any size can be submitted with no prior warning to the IT Operations team. Job definitions include CPU and memory resource requirements. What solution will allow the batch jobs to complete as quickly as possible after being scheduled?

A) Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.

B) Schedule the jobs directly on EC2 instances. Use Reserved Instances for the baseline minimum load, and use On-Demand Instances in an Auto Scaling group to scale up the platform based on demand.

C) Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.

D) Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Spot Instances in an Auto Scaling group to scale the platform based on demand. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.

A

A) Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.

24
Q

Q73. A company receives clickstream data files in Amazon S3 every five minutes. A Python script runs as a cron job once a day on an Amazon EC2 instance to process each file and load it into a database hosted on Amazon RDS. The cron job takes 15 to 30 minutes to process 24 hours of data. The data consumers ask for the data to be available as soon as possible. Which solution would accomplish the desired outcome?

A) Increase the size of the instance to speed up processing and update the schedule to run once an hour

B) Convert the cron job to an AWS Lambda function and trigger this new function using a cron job on an EC2 instance

C) Convert the cron job to an AWS Lambda function and schedule it to run once an hour using Amazon CloudWatch Events

D) Create an AWS Lambda function that runs when a file is delivered to Amazon S3 using S3 event notifications

A

D) Create an AWS Lambda function that runs when a file is delivered to Amazon S3 using S3 event notifications
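A minimal sketch of the event-driven version of the cron job (the bucket name, function ARN, and helper are hypothetical): the bucket notifies Lambda on each object creation, and the handler processes just the one file named in the event.

```python
import boto3

s3 = boto3.client("s3")

# One-time setup: wire the bucket to invoke the processing function for every new file.
s3.put_bucket_notification_configuration(
    Bucket="clickstream-ingest",                          # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111111111111:function:load-clickstream",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)


def handler(event, context):
    """Lambda entry point: process the single file named in the S3 event."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        load_into_rds(body)   # hypothetical helper reusing the existing per-file logic


def load_into_rds(data):
    # Placeholder for the existing Python load logic against Amazon RDS.
    pass
```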

25
Q

Q74. A media company has a 30-TB repository of digital news videos. These videos are stored on tape in an on-premises tape library and referenced by a Media Asset Management (MAM) system. The company wants to enrich the metadata for these videos in an automated fashion and put them into a searchable catalog by using a MAM feature. The company must be able to search based on information in the video, such as objects, scenery items, or people's faces. A catalog is available that contains faces of people who have appeared in the videos, including an image of each person. The company would like to migrate these videos to AWS. The company has a high-speed AWS Direct Connect connection with AWS and would like to move the MAM solution video content directly from its current file system.

*How can these requirements be met by using the LEAST amount of ongoing management overhead and causing MINIMAL disruption to the existing system?*

A) Set up an AWS Storage Gateway, file gateway appliance on-premises. Use the MAM solution to extract the videos from the current archive and push them into the file gateway. Use the catalog of faces to build a collection in Amazon Rekognition. Build an AWS Lambda function that invokes the Rekognition JavaScript SDK to have Rekognition pull the video from the Amazon S3 files backing the file gateway, retrieve the required metadata, and push the metadata into the MAM solution.

B) Set up an AWS Storage Gateway, tape gateway appliance on-premises. Use the MAM solution to extract the videos from the current archive and push them into the tape gateway. Use the catalog of faces to build a collection in Amazon Rekognition. Build an AWS Lambda function that invokes the Rekognition JavaScript SDK to have Amazon Rekognition process the video in the tape gateway, retrieve the required metadata, and push the metadata into the MAM solution.

C) Configure a video ingestion stream by using Amazon Kinesis Video Streams. Use the catalog of faces to build a collection in Amazon Rekognition. Stream the videos from the MAM solution into Kinesis Video Streams. Configure Amazon Rekognition to process the streaming videos. Then use a stream consumer to retrieve the required metadata, and push the metadata into the MAM solution. Configure the stream to store the videos in Amazon S3.

D) Set up an Amazon EC2 instance that runs the OpenCV libraries. Copy the videos, images, and face catalog from the on-premises library into an Amazon EBS volume mounted on this EC2 instance. Process the videos to retrieve the required metadata, and push the metadata into the MAM solution, while also copying the video files to an Amazon S3 bucket.

A

B) Set up an **AWS Storage Gateway, tape gateway appliance** on-premises. Use the MAM solution to extract the videos from the current archive and push them into the tape gateway. Use the catalog of faces to build a collection in Amazon Rekognition. Build an AWS Lambda function that invokes the Rekognition JavaScript SDK to have Amazon Rekognition process the video in the tape gateway, retrieve the required metadata, and push the metadata into the MAM solution.

26
Q

Q75. A company that is new to AWS reports it has exhausted its service limits across several accounts that are on the Basic Support plan. The company would like to prevent this from happening in the future.

*What is the MOST efficient way of monitoring and managing all service limits in the company's accounts?*

A) Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor. Provide notifications using Amazon SNS if the limits are close to exceeding the threshold.

B) Reach out to AWS Support to proactively increase the limits across all accounts. That way, the customer avoids creating and managing infrastructure just to raise the service limits.

C) Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor. Programmatically increase the limits that are close to exceeding the threshold.

D) Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and use Amazon SNS for notifications if a limit is close to exceeding the threshold. Ensure that the accounts are using the AWS Business Support plan at a minimum.

A

A) Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor. Provide notifications using **Amazon SNS if the limits** are close to exceeding the threshold.

27
Q

Q76. A company runs a memory-intensive analytics application using on-demand Amazon EC2 C5 compute-optimized instances. The application is used continuously, and application demand doubles during working hours. The application currently scales based on CPU usage. When scaling in occurs, a lifecycle hook is used because the instance requires 4 minutes to clean the application state before terminating. Because users reported poor performance during working hours, scheduled scaling actions were implemented so additional instances would be added during working hours.

**The Solutions Architect has been asked to reduce the cost of the application. Which solution is MOST cost-effective?**

A) Use the existing launch configuration that uses C5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Spot Instances to cover the increased demand during working hours.

B) Update the existing launch configuration to use R5 instances, and update the application AMI to include SSM Agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Spot Instances with On-Demand Instances to cover the increased demand during working hours.

C) Use the existing launch configuration that uses C5 instances, and update the application AMI to include SSM Agent. Leave the Auto Scaling policies to scale based on CPU utilization. Use Scheduled Reserved Instances for the number of instances required after working hours, and use Spot Instances to cover the increased demand during working hours.

D) Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Standard Reserved Instances with On-Demand Instances to cover the increased demand during working hours.

A

D) Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and **use Standard Reserved Instances** with On-Demand Instances to cover the increased demand during working hours.

28
Q

Q77. A company's CISO has asked a Solutions Architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its application can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly roll back a change in case of errors.

The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.

What CI/CD configuration meets all of the requirements?

A) Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code, and, if there are any issues, push another code update.

B) Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.

C) Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the newly deployed code, and, if there are any issues, push another code update.

D) Configure CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if there are any issues, push another code update.

A

B) Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for **blue/green** deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.

29
Q

Q78. A Solutions Architect is migrating a 10 TB PostgreSQL database to Amazon RDS for PostgreSQL. The company's internet link is 50 Mbps with a VPN in the Amazon VPC, and the Solutions Architect needs to migrate the data and synchronize the changes before the cutover. The cutover must take place within an 8-day period.

*What is the LEAST complex method of migrating the database securely and reliably?*

A) Order an AWS Snowball device and copy the database using AWS DMS. When the database is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.

B) Create an AWS DMS job to continuously replicate the data from on-premises to AWS. Cut over to Amazon RDS after the data is synchronized.

C) Order an AWS Snowball device and copy a database dump to the device. After the data has been copied to Amazon S3, import it to the Amazon RDS instance. Set up log shipping over a VPN to synchronize changes before the cutover.

D) Order an AWS Snowball device and copy the database by using the AWS Schema Conversion Tool. When the data is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.

A

A) Order an AWS Snowball device and copy the database using **AWS DMS**. When the database is available in **Amazon S3**, use **AWS DMS** to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.

30
Q

Q79. A company has a website that enables users to upload videos. Company policy states the uploaded videos must be analyzed for restricted content. An uploaded video is placed in Amazon S3, and a message is pushed to an Amazon SQS queue with the video's location. A backend application pulls this location from Amazon SQS and analyzes the video.

The video analysis is compute-intensive and occurs sporadically during the day. The website scales with demand. The video analysis application runs on a fixed number of instances. Peak demand occurs during the holidays, so the company must add instances to the application during this time. All instances used are currently on-demand Amazon EC2 T2 instances.

*The company wants to reduce the cost of the current solution. Which of the following solutions is MOST cost-effective?*

A) Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times, and use Spot Instances to cover them while using Reserved Instances to cover peak demand. Use Amazon EC2 R4 and Amazon EC2 R5 Reserved Instances in an Auto Scaling group for the video analysis application.

B) Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times, and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.

C) Migrate the website to AWS Elastic Beanstalk and Amazon EC2 C4 instances. Determine the minimum number of website instances required during off-peak times, and use On-Demand Instances to cover them while using Spot capacity to cover peak demand. Use Spot Fleet for the video analysis application comprised of C4 and Amazon EC2 C5 instances.

D) Migrate the website to AWS Elastic Beanstalk and Amazon EC2 R4 instances. Determine the minimum number of website instances required during off-peak times, and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of R4 and Amazon EC2 R5 instances.

A

B) Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times, and **use Reserved Instances to cover them** while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.

31
Q81. A photo sharing and publishing company receives 10,000 to 150,000 images daily. The company receives the images from multiple suppliers and users registered with the service. The company is moving to AWS and wants to enrich the existing metadata by adding data using Amazon Rekognition The following is an example of the additional data: list celebrities [name of the personality) wearing (color] looking [happy, sad) near [location example Eiffel Tower in Paris] *As part of the cloud migration program, the company uploaded existing image data to Amazon S3 and told users to upload images directly to Amazon S3 What should the Solutions Architect do to support these requirements?* A) Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition Use Amazon DynamoDB to store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES B) Use Amazon Kinesis to stream data based on an S3 event Use an application running in Amazon EC2 to extract metadata from the images. Then store the data on Amazon DynamoDB and Amazon CloudSearch and create an index Use a web front-end with search capabilities backed by CloudSearch C) Start an Amazon SOS queue based on S3 event notifications. Then have Amazon SOS send the metadata information to Amazon DynamoDB. An application running on Amazon EC2 extracts data from Amazon Rekognition using the API and adds data to DynamoDB and Amazon ES Use a web front-end to provide search capabilities backed by Amazon ES. D) Trigger AWS Lambda based on an event notification to create additional metadata using Amazon Rekognition. Use Amazon RDS MySQL Multi-AZ to store the metadata information and use Lambda to create an index. Use a web front end with search capabilities backed by Lambda
A) Trigger AWS Lambda based on an **S3** event notification to create additional metadata using Amazon Rekognition. Use Amazon DynamoDB to store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES.
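A rough sketch of the Lambda handler this answer describes. Table and attribute names are assumptions; the Amazon ES indexing step is only noted in a comment.

```python
# Illustrative Lambda handler: invoked by an S3 event notification, it calls
# Amazon Rekognition and stores the enriched metadata in DynamoDB.
import boto3

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table("image-metadata")   # hypothetical table

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        image = {"S3Object": {"Bucket": bucket, "Name": key}}

        celebs = rekognition.recognize_celebrities(Image=image)
        labels = rekognition.detect_labels(Image=image, MaxLabels=20)

        table.put_item(Item={
            "image_key": key,
            "celebrities": [c["Name"] for c in celebs["CelebrityFaces"]],
            "labels": [l["Name"] for l in labels["Labels"]],
        })
        # A follow-up step (e.g. a DynamoDB stream consumer) would index the
        # item into Amazon ES for the search front-end.
```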
Q82. A retail company is running an application that stores invoice files in an Amazon S3 bucket and metadata about the files in an Amazon DynamoDB table. The application software runs in both us-east-1 and eu-west-1. The S3 bucket and DynamoDB table are in us-east-1. *The company wants to protect itself from data corruption and loss of connectivity to either Region. Which option meets these requirements?* A) Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Enable versioning on the S3 bucket. B) Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1. C) Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket. D) Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.
D) Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. **Set up S3 cross-region replication from us-east-1 to eu-west-1.**
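A minimal sketch of the three pieces in this answer. Table, bucket, and role names are placeholders, and it assumes identically named tables with streams enabled already exist in both regions and that versioning is enabled on both buckets.

```python
# Hypothetical sketch: global table, point-in-time recovery, and S3 CRR.
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")
ddb.create_global_table(
    GlobalTableName="invoices",
    ReplicationGroup=[{"RegionName": "us-east-1"}, {"RegionName": "eu-west-1"}],
)
ddb.update_continuous_backups(
    TableName="invoices",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="invoice-files-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": "arn:aws:s3:::invoice-files-eu-west-1"},
        }],
    },
)
```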
Q83. A company wants to manage the costs associated with a group of 20 applications that are infrequently used but are still business-critical by migrating to AWS. The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology. Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours. Which is the MOST cost-effective solution? A) Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs. B) Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch. C) Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms. D) Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.
B) Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of **75%**. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
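One way to express the ECS task scaling part of this answer is target tracking on memory through Application Auto Scaling. Cluster and service names below are hypothetical.

```python
# Sketch: scale an ECS service's task count on average memory utilization.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/month-end-cluster/billing-report"   # placeholder

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="memory-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 75.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageMemoryUtilization"
        },
    },
)
```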
Q84. A bank is re-architecting its mainframe-based credit card approval processing application to a cloud-native application on the AWS Cloud. The new application will receive up to 1,000 requests per second at peak load. There are multiple steps to each transaction, and each step must receive the result of the previous step. The entire request must return an authorization response within less than 2 seconds with zero data loss. Every request must receive a response. The solution must be Payment Card Industry Data Security Standard (PCI DSS)-compliant. *Which option will meet all of the bank's objectives with the LEAST complexity and LOWEST cost while also meeting compliance requirements?* A) Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs multiple steps and returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application. B) Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process incoming requests. Use Auto Scaling to scale the cluster out/in based on average CPU utilization. Deploy a web service that processes all of the approval steps and returns a JSON object with the approval status. C) Deploy the application on Amazon EC2 on Dedicated Instances. Use an Elastic Load Balancer in front of a farm of application servers in an Auto Scaling group to handle incoming requests. Scale out/in based on a custom Amazon CloudWatch metric for the number of inbound requests per second after measuring the capacity of a single instance. D) Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input queue. As each step completes, it writes its result to the next step's queue. The final step returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application.
B) Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process incoming requests. Use Auto Scaling to scale the cluster **out/in** based on average CPU utilization. Deploy a web service that processes all of the approval steps and returns a JSON object with the approval status.
Q85. A Solutions Architect is designing a system that will collect and store data from 2,000 internet-connected sensors. Each sensor produces 1 KB of data every second. The data must be available for analysis within a few seconds of it being sent to the system and stored for analysis indefinitely. *Which is the MOST cost-effective solution for collecting and storing the data?* A) Put each record in Amazon Kinesis Data Streams. Use an AWS Lambda function to write each record to an object in Amazon S3 with a prefix that organizes the records by the hour and hashes the record's key. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3. B) Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them into objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3. C) Put each record into an Amazon DynamoDB table. Analyze the recent data by querying the table. Use an AWS Lambda function connected to a DynamoDB stream to group records together, write them into objects in Amazon S3, and then delete the record from the DynamoDB table. Analyze recent data from the DynamoDB table and historical data from Amazon S3. D) Put each record into an object in Amazon S3 with a prefix that organizes the records by the hour and hashes the record's key. Use S3 lifecycle management to transition objects to S3 Infrequent Access storage to reduce storage costs. Analyze recent and historical data by accessing the data in Amazon S3.
B) Put each record in **Amazon Kinesis Data Streams**. Set up **Amazon Kinesis Data Firehose** to read records from the stream and group them into objects in Amazon S3. Analyze recent data from **Kinesis Data Streams** and historical data from Amazon S3.
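A minimal sketch of this ingest path. Stream, bucket, role names, and buffering values are placeholder assumptions.

```python
# Sketch: sensors write to a Kinesis stream; Firehose batches records into S3.
import json
import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

# Firehose delivery stream that reads from the Kinesis stream and groups
# records into S3 objects.
firehose.create_delivery_stream(
    DeliveryStreamName="sensor-archive",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/sensor-data",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-read-kinesis",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-write-s3",
        "BucketARN": "arn:aws:s3:::sensor-archive-bucket",
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
    },
)

# Each ~1 KB reading is written to the stream with the sensor ID as partition key.
kinesis.put_record(
    StreamName="sensor-data",
    Data=json.dumps({"sensor_id": "sensor-0001", "reading": 23.4}).encode(),
    PartitionKey="sensor-0001",
)
```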
Q86. A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company's Node.js API servers on Amazon EC2 instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General Purpose SSD volume. The number of sensors the company has deployed in the field has increased over time and is expected to grow significantly. The API servers are consistently overloaded and RDS metrics show high write latency. *Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this platform cost-efficient? (Select TWO.)* A) Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS. B) Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas. C) Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data. D) Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load. E) Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.
D) Use **AWS X-Ray** to analyze and debug application issues and add more API servers to match the load. E) Re-architect the database tier to use **Amazon DynamoDB instead of an RDS MySQL DB instance**.
Q87. An auction website enables users to bid on collectible items. The auction rules require that each bid is processed only once and in the order it was received. The current implementation is based on a fleet of Amazon EC2 web servers that write bid records into Amazon Kinesis Data Streams. A single t2.large instance has a cron job that runs the bid processor, which reads incoming bids from Kinesis Data Streams and processes each bid. The auction site is growing in popularity, but users are complaining that some bids are not registering. Troubleshooting indicates that the bid processor is too slow during peak demand hours, sometimes crashes while processing, and occasionally loses track of which record is being processed. What changes should be made to make the bid processing more reliable? A) Refactor the web application to use the Amazon Kinesis Producer Library (KPL) when posting bids to Kinesis Data Streams. Refactor the bid processor to flag each record in Kinesis Data Streams as being unread, processing, and processed. At the start of each bid processing run, scan Kinesis Data Streams for unprocessed records. B) Refactor the web application to post each incoming bid to an Amazon SNS topic in place of Kinesis Data Streams. Configure the SNS topic to trigger an AWS Lambda function that processes each bid as soon as a user submits it. C) Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid processor to continuously consume the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum size of 1. D) Switch the EC2 instance type from t2.large to a larger general compute instance type. Put the bid processor EC2 instances in an Auto Scaling group that scales out the number of EC2 instances running the bid processor, based on the IncomingRecords metric in Kinesis Data Streams.
A) Refactor the web application to use the **Amazon Kinesis Producer Library (KPL)** when posting bids to Kinesis Data Streams. Refactor the bid processor to flag each record in Kinesis Data Streams as being unread, processing, and processed. At the start of each bid processing run, scan Kinesis Data Streams for unprocessed records
Q88. A company is moving a business-critical, multi-tier application to AWS. The architecture consists of a desktop client application and server infrastructure. The server infrastructure resides in an on-premises data center that frequently fails to maintain the application uptime SLA of 99.95%. A Solutions Architect must re-architect the application to ensure that it can meet or exceed the SLA. The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation layers are load-balanced between multiple virtual machines. Remote users complain about slow load times while using this latency-sensitive application. *Which of the following will meet the availability requirements with little change to the application while improving user experience and minimizing costs?* A) Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience. B) Migrate the database to an Amazon RDS Aurora PostgreSQL configuration. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience. C) Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience. D) Migrate the database to an Amazon Redshift cluster with at least two nodes. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.
C) Migrate the database to an **Amazon RDS PostgreSQL Multi-AZ** configuration. Host the application and presentation layers in automatically scaled **AWS Fargate** containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.
Q89. An on-premises application will be migrated to the cloud. The application consists of a single Elasticsearch virtual machine with data source feeds from local systems that will not be migrated, and a Java web application on Apache Tomcat running on three virtual machines. The Elasticsearch server currently uses 1 TB of storage out of 16 TB of available storage, and the web application is updated every 4 months. Multiple users access the web application from the internet. *There is a 10 Gbit AWS Direct Connect connection established, and the application can be migrated over a scheduled 48-hour change window. Which strategy will have the LEAST impact on the Operations staff after the migration?* A) Create an Elasticsearch server on Amazon EC2 right-sized with 2 TB of Amazon EBS and a public AWS Elastic Beanstalk environment for the web application. Pause the data source feeds, export the Elasticsearch index from on-premises, and import it into the EC2 Elasticsearch server. Move data source feeds to the new Elasticsearch server and move users to the web application. B) Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Use AWS DMS to replicate Elasticsearch data. When replication has finished, move data source feeds to the new Amazon ES cluster endpoint and move users to the new web application. C) Use AWS SMS to replicate the virtual machines into AWS. When the migration is complete, pause the data source feeds and start the migrated Elasticsearch and web application instances. Place the web application instances behind a public Elastic Load Balancer. Move the data source feeds to the new Elasticsearch server and move users to the new web Application Load Balancer. D) Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Pause the data source feeds, export the Elasticsearch index from on-premises, and import it into the Amazon ES cluster. Move the data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.
B) Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. **Use AWS DMS** to replicate Elasticsearch data. When replication has finished, move data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.
Q90. A Solutions Architect must update an application environment within AWS Elastic Beanstalk using a blue/green deployment methodology. The Solutions Architect creates an environment that is identical to the existing application environment and deploys the application to the new environment. What should be done next to complete the update? A) Redirect to the new environment using Amazon Route 53. B) Select the Swap Environment URLs option. C) Replace the Auto Scaling launch configuration. D) Update the DNS records to point to the green environment.
B) Select the **Swap Environment URLs** option.
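The same swap can be done programmatically with a single Elastic Beanstalk API call; the environment names below are placeholders.

```python
# Sketch: swap the CNAMEs of the blue and green environments.
import boto3

eb = boto3.client("elasticbeanstalk")
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)
```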
Q91. A company has a legacy application running on servers on-premises. To increase the application's reliability, the company wants to gain actionable insights using application logs. A Solutions Architect has been given the following requirements for the solution: * Aggregate logs using AWS. * Automate log analysis for errors. * Notify the Operations team when errors go beyond a specified threshold. What solution meets the requirements? A) Install the Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify errors, create an Amazon CloudWatch alarm to notify the Operations team of errors. B) Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to notify the Operations team of errors. C) Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team of errors. D) Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a CloudWatch alarm to notify the Operations team of errors.
D) Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use **metric filters** to identify errors, create a CloudWatch alarm to notify the Operations team of errors
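An illustrative sketch of the metric filter and alarm from this answer. Log group, namespace, threshold, and SNS topic ARN are placeholders.

```python
# Sketch: count ERROR lines in CloudWatch Logs and alarm above a threshold.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/onprem/legacy-app",
    filterName="application-errors",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "LegacyApp",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="legacy-app-error-rate",
    Namespace="LegacyApp",
    MetricName="ApplicationErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,                      # the team's specified error threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```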
Q92. An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic. Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs? A) Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer. B) Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2. C) Convert each application to a Docker image and deploy it to a small Amazon ECS cluster behind an Application Load Balancer. D) Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.
D) Use VM **Import/Export** to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a **custom image**.
Q93. A company wants to launch an online shopping website in multiple countries and must ensure that customers are protected against potential "man-in-the-middle" attacks. Which architecture will provide the MOST secure site access? A) Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager (ACM) to register TLS/SSL certificates for the shopping website and use Application Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication extension in all client requests to the site. B) Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS provider that uses the customer-managed keys for DNSSEC. Upload the keys to ACM and use ACM to automatically deploy the certificates for secure web services to an EC2 front-end web server fleet by using NGINX. Use the Server Name Indication extension in all client requests to the site. C) Use Route 53 for domain registration. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS service that supports DNSSEC for DNS requests that use the customer-managed keys. Import the customer-managed keys into ACM to deploy the certificates to Classic Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication extension in all client requests to the site. D) Use Route 53 for domain registration and host the company DNS root servers on Amazon EC2 instances running Bind. Enable DNSSEC for DNS requests. Use ACM to register TLS/SSL certificates for the shopping website and use Application Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication extension in all client requests to the site.
A) Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager (ACM) to register TLS/SSL certificates for the shopping website and use Application Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication extension in all client requests to the site. ## Footnote **No need to run Bind on EC2 (as in D), thus A.**
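A partial sketch of the ACM and Route 53 DNSSEC pieces of this answer. Domain, hosted zone ID, and KMS key ARN are placeholders, and the DNS validation records and ALB listener setup are omitted.

```python
# Sketch: request an ACM certificate and enable DNSSEC signing for the zone.
import boto3

acm = boto3.client("acm")
route53 = boto3.client("route53")

cert = acm.request_certificate(
    DomainName="shop.example.com",
    ValidationMethod="DNS",
)

# Key-signing key backed by an asymmetric KMS key, then enable signing.
route53.create_key_signing_key(
    CallerReference="shop-ksk-1",
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    KeyManagementServiceArn="arn:aws:kms:us-east-1:123456789012:key/1111-2222",
    Name="shop-ksk",
    Status="ACTIVE",
)
route53.enable_hosted_zone_dnssec(HostedZoneId="Z0123456789ABCDEFGHIJ")

print("Certificate ARN for the ALB HTTPS listener:", cert["CertificateArn"])
```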
Q94. A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently, the company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the ordering system: * Lambda failures while processing orders lead to queue backlogs. * The same orders have been processed multiple times. A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features: * Retain problematic orders for analysis. * Send notification if errors go beyond a threshold value. How should the Solutions Architect meet these requirements? A) Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification. B) Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a CloudWatch alarm on Lambda errors for notification. C) Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create CloudWatch events on Lambda errors for notification. D) Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an Amazon CloudWatch metric on Lambda errors for notification.
A) Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create **a dead letter queue for _messages_** that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification.
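A sketch of the dead letter queue and error alarm described in this answer. Queue names, function name, threshold, and topic ARN are placeholders.

```python
# Sketch: SQS DLQ via a redrive policy, plus an alarm on Lambda errors.
import json
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")

dlq = sqs.create_queue(QueueName="orders-dlq")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: visibility timeout longer than the Lambda timeout; messages that
# fail 5 times are retained in the DLQ for analysis.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "VisibilityTimeout": "300",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)

cloudwatch.put_metric_alarm(
    AlarmName="order-processor-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "process-orders"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```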
Q95. A company has an existing on-prem three-tier web application. The Linux web servers serve content from a centralized file share on a NAS server because the content is refreshed several times a day from various sources. The existing infrastructure is not optimized, and the company would like to move to AWS in order to gain the ability to scale resources up and down in response to load. On-premises and AWS resources are connected using AWS Direct Connect. How can the company migrate the web infrastructure to AWS without delaying the content refresh process? A) Create a cluster of web server Amazon EC2 instances behind a Classic Load Balancer on AWS. Share an Amazon EBS volume among all instances for the content. Schedule a periodic synchronization of this volume and the NAS server. B) Create an on-premises file gateway using AWS Storage Gateway to replace the NAS server and replicate content to AWS. On the AWS side, mount the same Storage Gateway bucket to each web server Amazon EC2 instance to serve the content. C) Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the webserver Amazon EC2 instances to serve the content D) Create web server Amazon EC2 instances on AWS in an Auto Scaling group Configure a nightly process where the webserver instances are updated from the NAS server.
C) Expose an **Amazon EFS** share to on-premises users to serve as the NAS server. Mount the same EFS share to the webserver Amazon EC2 instances to serve the content
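A minimal sketch of provisioning the shared EFS file system from this answer. Subnet and security group IDs are placeholders; the same NFS export is then mounted by the on-premises hosts (over Direct Connect) and by the EC2 web servers.

```python
# Sketch: create the shared EFS file system and a mount target.
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(CreationToken="web-content", PerformanceMode="generalPurpose")

efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0abc123def4567890",
    SecurityGroups=["sg-0abc123def4567890"],
)
# Example mount command on each host (mount target DNS name is hypothetical):
#   sudo mount -t nfs4 fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/content
```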
Q96. A Solutions Architect is designing a network solution for a company that has applications running in a data center in Northern Virginia. The applications in the company's data center require predictable performance to applications running in a virtual private cloud (VPC) located in us-east-1, and a secondary VPC in us-west-2 within the same account. The company data center is colocated in an AWS Direct Connect facility that serves the us-east-1 region. The company has already ordered an AWS Direct Connect connection and a cross-connect has been established. Which solution will meet the requirements at the LOWEST cost? A) Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway. B) Create private VIFs on the Direct Connect connection for each of the company's VPCs in the us-east-1 and us-west-2 regions. Configure the company's data center router to connect directly with the VPCs in those regions via the private VIFs. C) Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 region. Establish IPsec VPN tunnels between the transit routers and virtual private gateways (VGWs) located in the us-east-1 and us-west-2 regions, which are attached to the company's VPCs in those regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company's data center router. D) Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company's data center. Establish private VIFs on the Direct Connect connections for each of the company's VPCs in the respective regions. Configure the company's data center router to connect directly with the VPCs in those regions via the private VIFs.
A) Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the **Direct Connect gateway**.
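A sketch of the Direct Connect gateway wiring this answer describes. Connection ID, VGW IDs, VLAN, and ASN values are placeholders.

```python
# Sketch: DX gateway, VGW associations for both VPCs, and one private VIF.
import boto3

dx = boto3.client("directconnect")

gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)
gw_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# Attach the virtual private gateways of the us-east-1 and us-west-2 VPCs.
for vgw_id in ["vgw-0useast1example0", "vgw-0uswest2example0"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )

# Single private VIF on the existing connection, associated with the gateway.
dx.create_private_virtual_interface(
    connectionId="dxcon-fg0123456",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "corp-private-vif",
        "vlan": 101,
        "asn": 65000,
        "directConnectGatewayId": gw_id,
    },
)
```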
Q97. A Solutions Architect is designing the storage layer for a recently purchased application. The application will be running on Amazon EC2 instances and has the following layers and requirements: * Data layer: a POSIX file system shared across many systems. * Service layer: static file content that requires block storage with more than 100K IOPS. Which combination of AWS services will meet these needs? (Select TWO.) A) Data layer - Amazon S3. B) Data layer - Amazon EC2 Ephemeral Storage. C) Data layer - Amazon EFS. D) Service layer - Amazon EBS volumes with Provisioned IOPS. E) Service layer - Amazon EC2 Ephemeral Storage.
C) Data layer - Amazon **EFS**. D) Service layer - Amazon EBS volumes with Provisioned **IOPS**.
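A sketch of the service-layer volume from this answer. The size and IOPS values are illustrative only; sustaining more than 100K IOPS generally requires io2 Block Express on supported instance types.

```python
# Sketch: create a Provisioned IOPS (io2) EBS volume for the service layer.
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB, placeholder
    VolumeType="io2",
    Iops=120000,
)
print("Provisioned IOPS volume:", volume["VolumeId"])
```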
Q98. A company is using an Amazon CloudFront distribution to distribute both static and dynamic content from a web application running behind an Application Load Balancer. The web application requires user authorization and session tracking for dynamic content. The CloudFront distribution has a single cache behavior configured to forward the Authorization, Host, and User-Agent HTTP whitelist headers and a session cookie to the origin. All other cache behavior settings are set to their default value. A valid ACM certificate is applied to the CloudFront distribution with a matching CNAME in the distribution settings. The ACM certificate is also applied to the HTTPS listener for the Application Load Balancer. The CloudFront origin protocol policy is set to HTTPS only. Analysis of the cache statistics report shows that the miss rate for this distribution is very high. What can the Solutions Architect do to improve the cache hit rate for this distribution without causing the SSL/TLS handshake between CloudFront and the Application Load Balancer to fail? A) Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content. B) Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the cache behavior. Then update the cache behavior to use pre-signed cookies for authorization. C) Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use **Lambda@Edge** viewer request events for user authorization. D) Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
C) Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use **Lambda@Edge** viewer request events for user authorization.
Q99. A company is designing a new highly available web application on AWS. The application requires consistent and reliable connectivity from the application servers in AWS to a backend REST API hosted in the company's on-premises environment. The backend connection between AWS and on-premises will be routed over an AWS Direct Connect connection through a private virtual interface. Amazon Route 53 will be used to manage private DNS records for the application to resolve the IP address of the backend REST API. Which design would provide a reliable connection to the backend API? A) Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks to monitor the availability of each backend endpoint and perform DNS-level failover. B) Install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first Direct Connect connection. C) Install a second cross-connect for the same Direct Connect connection from the same network carrier, and join both connections to the same link aggregation group (LAG) on the same private virtual interface. D) Create an IPsec VPN connection routed over the public internet from the on-premises data center to AWS and attach it to the same virtual private gateway as the Direct Connect connection.
B) Install a **second Direct Connect connection** from a **different network carrier** and attach it to the same virtual private gateway as the first Direct Connect connection.
Q100. A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region, backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check. Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times. Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.) A) Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier. B) Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails. C) Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails. D) Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier. E) Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.
A) Configure read replicas for Amazon RDS MySQL and use the **single reader endpoint** in the web application to reduce the load on the backend database tier. C) Configure the target group health check to use a **TCP check** of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
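A partial sketch of the monitoring side of these answers: a Route 53 health check against the product page plus a CloudWatch alarm to notify administrators. Domain, path, and topic ARN are placeholders.

```python
# Sketch: full-stack health check via Route 53, with an alarm on its status.
import boto3

route53 = boto3.client("route53")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

check = route53.create_health_check(
    CallerReference="product-page-check-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",
        "ResourcePath": "/products",
        "Port": 443,
    },
)

# Route 53 publishes HealthCheckStatus to CloudWatch in us-east-1.
cloudwatch.put_metric_alarm(
    AlarmName="product-page-unhealthy",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",
    Dimensions=[{"Name": "HealthCheckId", "Value": check["HealthCheck"]["Id"]}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```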
Q101. A large company has increased its utilization of AWS over time in an unmanaged way. As such, it has a large number of independent AWS accounts across different business units, projects, and environments. The company has created a Cloud Center of Excellence team, which is responsible for managing all aspects of the AWS Cloud, including its AWS accounts. Which of the following should the Cloud Center of Excellence team do to BEST address its requirements in a centralized way? (Select TWO.) A) Control all AWS account root user credentials. Assign AWS IAM users in the account of each user who needs to access AWS resources. Follow the policy of least privilege in assigning permissions to each user. B) Tag all AWS resources with details about the business unit, project, and environment. Send all AWS Cost and Usage reports to a central Amazon S3 bucket, and use tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit. C) Use the AWS Marketplace to choose and deploy a Cost Management tool. Tag all AWS resources with details about the business unit, project, and environment. Send all AWS Cost and Usage reports for the AWS account to this tool for analysis. D) Set up AWS Organizations. Enable consolidated billing, and link all existing AWS accounts to a master billing account. Tag all AWS resources with details about the business unit, the project, and the environment. Analyze Cost and Usage reports using tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit. E) Using a master AWS account, create IAM users within the master account. Define IAM roles in the other AWS accounts, which cover each of the required functions in the account. Follow the policy of least privilege in assigning permissions to each role, then enable the IAM users to assume the roles that they need to use.
D) **Set up AWS Organizations.** Enable consolidated billing, and link all existing AWS accounts to a master billing account. Tag all AWS resources with details about the business unit, the project, and the environment. Analyze Cost and Usage reports using tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit. E) Using a **master AWS account, create IAM users** within the master account. Define IAM roles in the other AWS accounts, which cover each of the required functions in the account. Follow the **policy of least privilege** in assigning permissions to each role, then enable the IAM users to assume the roles that they need to use.
Q102. A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. The issues are caused by: * limits around concurrent executions. * the performance of Amazon DynamoDB when saving data. Which actions can be taken to increase the performance and reliability of the application? (Select TWO.) A) Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables. B) Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables. C) Add an Amazon ElastiCache layer to increase the performance of Lambda functions. D) Configure a dead letter queue that will reprocess failed or timed-out Lambda functions. E) Use S3 Transfer Acceleration to provide lower-latency access to end users.
B) Evaluate and adjust the **write capacity units (WCUs)** for the DynamoDB tables. D) Configure a **dead letter queue** that will reprocess failed or timed-out Lambda functions.
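A sketch of the two adjustments in these answers. Table, function, and queue names plus the capacity numbers are placeholders, and WCU tuning applies to provisioned-capacity tables, not on-demand ones.

```python
# Sketch: raise DynamoDB write capacity and attach an SQS dead letter queue
# to the processing Lambda function.
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

dynamodb.update_table(
    TableName="photo-metadata",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 500},
)

lambda_client.update_function_configuration(
    FunctionName="process-photo",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:photo-processing-dlq"
    },
)
```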
Q103. An online retailer needs to regularly process large product catalogs which are handled in batches. These are sent out to be processed by people using the Amazon Mechanical Turk service, but the retailer has asked its Solutions Architect to design a workflow orchestration system that allows it to handle multiple concurrent Mechanical Turk operations, deal with the result assessment process, and reprocess failures. Which of the following options gives the retailer the ability to interrogate the state of every workflow with the LEAST amount of implementation effort? A) Trigger Amazon CloudWatch alarms based upon message visibility in multiple Amazon SQS queues (one queue per workflow stage) and send messages via Amazon SNS to trigger AWS Lambda functions to process the next step. Use Amazon ES and Kibana to visualize Lambda processing logs to see the workflow states B) Hold workflow information in an Amazon RDS instance with AWS Lambda functions polling RDS for status changes. Worker Lambda functions then process the next workflow steps. Amazon QuickSight will visualize workflow states directly out of Amazon RDS. C) Build the workflow in AWS Step Functions, using it to orchestrate multiple concurrent workflows. The status of each workflow can be visualized in the AWS Management Console, and historical data can be written to Amazon S3 and visualized using Amazon QuickSight. D) Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through Mechanical Turk. Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states.
D) Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through **Mechanical Turk.** Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states.
Q104. A company has developed a web application that runs on Amazon EC2 instances in one AWS Region. The company has taken on new business in other countries and must deploy its application into other regions to meet low-latency requirements for its users. The regions can be segregated, and an application running in one region does not need to communicate with instances in other regions. How should the company's Solutions Architect automate the deployment of the application so that it can be MOST efficiently deployed into multiple regions? A) Write a bash script that uses the AWS CLI to query the current state in one region and output a JSON representation. Pass the JSON representation to the AWS CLI, specifying the --region parameter to deploy the application to other regions. B) Write a bash script that uses the AWS CLI to query the current state in one region and output an AWS CloudFormation template. Create a CloudFormation stack from the template by using the AWS CLI, specifying the --region parameter to deploy the application to other regions. C) Write a CloudFormation template describing the application's infrastructure in the Resources section. Create a CloudFormation stack from the template by using the AWS CLI, specifying the --region parameter for each region to deploy the application. D) Write a CloudFormation template describing the application's infrastructure in the Resources section. Use a CloudFormation stack set from an administrator account to launch stack instances that deploy the application to other regions.
C) Write a CloudFormation template describing the application's infrastructure in the Resources section. Create a CloudFormation **stack from the template** by using the AWS CLI, specifying the --region parameter for each region to deploy the application.
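In the spirit of this answer, a sketch of deploying one template to several regions using per-region clients. The template file name, stack name, and region list are placeholders.

```python
# Sketch: create the same CloudFormation stack in each target region.
import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]

with open("app-infrastructure.yaml") as f:
    template_body = f.read()

for region in REGIONS:
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="web-app",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    print(f"Stack creation started in {region}")
```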